id
stringlengths 9
16
| title
stringlengths 4
278
| categories
stringlengths 5
104
| abstract
stringlengths 6
4.09k
|
|---|---|---|---|
1207.5184
|
Lossy Compression of Quality Values via Rate Distortion Theory
|
q-bio.GN cs.IT math.IT q-bio.QM
|
Motivation: Next Generation Sequencing technologies revolutionized many
fields in biology by enabling the fast and cheap sequencing of large amounts of
genomic data. The ever increasing sequencing capacities enabled by current
sequencing machines hold a lot of promise as for the future applications of
these technologies, but also create increasing computational challenges related
to the analysis and storage of these data. A typical sequencing data file may
occupy tens or even hundreds of gigabytes of disk space, prohibitively large
for many users. Raw sequencing data consists of both the DNA sequences (reads)
and per-base quality values that indicate the level of confidence in the
readout of these sequences. Quality values account for about half of the
required disk space in the commonly used FASTQ format and therefore their
compression can significantly reduce storage requirements and speed up analysis
and transmission of these data.
Results: In this paper we present a framework for the lossy compression of
the quality value sequences of genomic read files. Numerical experiments with
reference based alignment using these quality values suggest that we can
achieve significant compression with little compromise in performance for
several downstream applications of interest, as is consistent with our
theoretical analysis. Our framework also allows compression in a regime - below
one bit per quality value - for which there are no existing compressors.
|
1207.5191
|
Schrodinger equation and wave equation on finite graphs
|
math.AP cs.IT math.DG math.DS math.IT
|
In this paper, we study the schrodinger equation and wave equation with the
Dirichlet boundary condition on a connected finite graph. The explicit
expressions for solutions are given and the energy conservations are derived.
Applications to the corresponding nonlinear problems are indicated.
|
1207.5206
|
Transmit Optimization with Improper Gaussian Signaling for Interference
Channels
|
cs.IT math.IT
|
This paper studies the achievable rates of Gaussian interference channels
with additive white Gaussian noise (AWGN), when improper or circularly
asymmetric complex Gaussian signaling is applied. For the Gaussian
multiple-input multiple-output interference channel (MIMO-IC) with the
interference treated as Gaussian noise, we show that the user's achievable rate
can be expressed as a summation of the rate achievable by the conventional
proper or circularly symmetric complex Gaussian signaling in terms of the
users' transmit covariance matrices, and an additional term, which is a
function of both the users' transmit covariance and pseudo-covariance matrices.
The additional degrees of freedom in the pseudo-covariance matrix, which is
conventionally set to be zero for the case of proper Gaussian signaling,
provide an opportunity to further improve the achievable rates of Gaussian
MIMO-ICs by employing improper Gaussian signaling. To this end, this paper
proposes widely linear precoding, which efficiently maps proper
information-bearing signals to improper transmitted signals at each transmitter
for any given pair of transmit covariance and pseudo-covariance matrices. In
particular, for the case of two-user Gaussian single-input single-output
interference channel (SISO-IC), we propose a joint covariance and
pseudo-covariance optimization algorithm with improper Gaussian signaling to
achieve the Pareto-optimal rates. By utilizing the separable structure of the
achievable rate expression, an alternative algorithm with separate covariance
and pseudo-covariance optimization is also proposed, which guarantees the rate
improvement over conventional proper Gaussian signaling.
|
1207.5208
|
Meta-Learning of Exploration/Exploitation Strategies: The Multi-Armed
Bandit Case
|
cs.AI cs.LG stat.ML
|
The exploration/exploitation (E/E) dilemma arises naturally in many subfields
of Science. Multi-armed bandit problems formalize this dilemma in its canonical
form. Most current research in this field focuses on generic solutions that can
be applied to a wide range of problems. However, in practice, it is often the
case that a form of prior information is available about the specific class of
target problems. Prior knowledge is rarely used in current solutions due to the
lack of a systematic approach to incorporate it into the E/E strategy.
To address a specific class of E/E problems, we propose to proceed in three
steps: (i) model prior knowledge in the form of a probability distribution over
the target class of E/E problems; (ii) choose a large hypothesis space of
candidate E/E strategies; and (iii), solve an optimization problem to find a
candidate E/E strategy of maximal average performance over a sample of problems
drawn from the prior distribution.
We illustrate this meta-learning approach with two different hypothesis
spaces: one where E/E strategies are numerically parameterized and another
where E/E strategies are represented as small symbolic formulas. We propose
appropriate optimization algorithms for both cases. Our experiments, with
two-armed Bernoulli bandit problems and various playing budgets, show that the
meta-learnt E/E strategies outperform generic strategies of the literature
(UCB1, UCB1-Tuned, UCB-v, KL-UCB and epsilon greedy); they also evaluate the
robustness of the learnt E/E strategies, by tests carried out on arms whose
rewards follow a truncated Gaussian distribution.
|
1207.5216
|
A colouring protocol for the generalized Russian cards problem
|
cs.IT math.IT
|
In the generalized Russian cards problem, Alice, Bob and Cath draw $a$, $b$
and $c$ cards, respectively, from a deck of size $a+b+c$. Alice and Bob must
then communicate their entire hand to each other, without Cath learning the
owner of a single card she does not hold. Unlike many traditional problems in
cryptography, however, they are not allowed to encode or hide the messages they
exchange from Cath. The problem is then to find methods through which they can
achieve this. We propose a general four-step solution based on finite vector
spaces, and call it the "colouring protocol", as it involves colourings of
lines.
Our main results show that the colouring protocol may be used to solve the
generalized Russian cards problem in cases where $a$ is a power of a prime,
$c=O(a^2)$ and $b=O(c^2)$. This improves substantially on the set of parameters
for which solutions are known to exist; in particular, it had not been shown
previously that the problem could be solved in cases where the eavesdropper has
more cards than one of the communicating players.
|
1207.5226
|
On the Relative Trust between Inconsistent Data and Inaccurate
Constraints
|
cs.DB
|
Functional dependencies (FDs) specify the intended data semantics while
violations of FDs indicate deviation from these semantics. In this paper, we
study a data cleaning problem in which the FDs may not be completely correct,
e.g., due to data evolution or incomplete knowledge of the data semantics. We
argue that the notion of relative trust is a crucial aspect of this problem: if
the FDs are outdated, we should modify them to fit the data, but if we suspect
that there are problems with the data, we should modify the data to fit the
FDs. In practice, it is usually unclear how much to trust the data versus the
FDs. To address this problem, we propose an algorithm for generating
non-redundant solutions (i.e., simultaneous modifications of the data and the
FDs) corresponding to various levels of relative trust. This can help users
determine the best way to modify their data and/or FDs to achieve consistency.
|
1207.5232
|
Peer-to-Peer and Mass Communication Effect on Revolution Dynamics
|
physics.soc-ph cs.SI
|
Revolution dynamics is studied through a minimal Ising model with three main
influences (fields): personal conservatism (power-law distributed),
inter-personal and group pressure, and a global field incorporating
peer-to-peer and mass communications, which is generated bottom-up from the
revolutionary faction. A rich phase diagram appears separating possible
terminal stages of the revolution, characterizing failure phases by the
features of the individuals who had joined the revolution. An exhaustive
solution of the model is produced, allowing predictions to be made on the
revolution's outcome.
|
1207.5259
|
Optimal discovery with probabilistic expert advice: finite time analysis
and macroscopic optimality
|
cs.LG stat.ML
|
We consider an original problem that arises from the issue of security
analysis of a power system and that we name optimal discovery with
probabilistic expert advice. We address it with an algorithm based on the
optimistic paradigm and on the Good-Turing missing mass estimator. We prove two
different regret bounds on the performance of this algorithm under weak
assumptions on the probabilistic experts. Under more restrictive hypotheses, we
also prove a macroscopic optimality result, comparing the algorithm both with
an oracle strategy and with uniform sampling. Finally, we provide numerical
experiments illustrating these theoretical findings.
|
1207.5261
|
Modelling Epistemic Systems
|
physics.soc-ph cs.SI
|
In this Chapter, I will explore the use of modeling in order to understand
how Science works. I will discuss the modeling of scientific communities,
providing a general, non-comprehensive overview of existing models, with a
focus on the use of the tools of Agent-Based Modeling and Opinion Dynamics. A
special attention will be paid to models inspired by a Bayesian formalism of
Opinion Dynamics. The objective of this exploration is to better understand the
effect that different conditions might have on the reliability of the opinions
of a scientific community. We will see that, by using artificial worlds as
exploring grounds, we can prevent some epistemological problems with the
definition of truth and obtain insights on the conditions that might cause the
quest for more reliable knowledge to fail.
|
1207.5265
|
Hidden information and regularities of information dynamics IR
|
nlin.AO cs.IT math.IT
|
The introduced entropy functional's (EF) information measure of random
process integrates multiple information contributions along the process
trajectories, evaluating both the states' and between states' bound information
connections. This measure reveals information that is hidden by traditional
information measures, which commonly use Shannon's entropy function for each
selected stationary states of the process. The hidden information is important
for evaluation of missing connections, disclosing the process' meaningful
information, which enables producing logic of the information. The presentation
consists of three Parts. In Part 1R-revised we analyze mechanism of arising
information regularities from a stochastic process, measured by EF,
independently of the process' specific source and origin. Uncovering the
process' regularities leads us to an information law, based on extracting
maximal information from its minimum, which could create these regularities.
The solved variation problem (VP) determines a dynamic process, measured by
information path functional (IPF), and information dynamic model, approximating
the EF measured stochastic process with a maximal functional probability on
trajectories. In Part 2, we study the cooperative processes, arising at the
consolidation, as a result of the VP-EF-IPF approach, which is able to produce
multiple cooperative structures, concurrently assembling in hierarchical
information network (IN) and generating the IN's digital genetic code. In Part
3 we study the evolutionary information processes and regularities of evolution
dynamics, evaluated by the entropy functional (EF) of random field and
informational path functional of a dynamic space-time process. The information
law and the regularities determine unified functional informational mechanisms
of evolution dynamics.
|
1207.5272
|
Information spreading on dynamic social networks
|
physics.soc-ph cs.SI
|
Nowadays, information spreading on social networks has triggered an explosive
attention in various disciplines. Most of previous works in this area mainly
focus on discussing the effects of spreading probability or immunization
strategy on static networks. However, in real systems, the peer-to-peer network
structure changes constantly according to frequently social activities of
users. In order to capture this dynamical property and study its impact on
information spreading, in this paper, a link rewiring strategy based on the
Fermi function is introduced. In the present model, the informed individuals
tend to break old links and reconnect to their second-order friends with more
uninformed neighbors. Simulation results on the susceptible-infected-recovered
(\textit{SIR}) model with fixed recovery time $T=1$ indicate that the
information would spread more faster and broader with the proposed rewiring
strategy. Extensive analyses of the information cascade size distribution show
that the spreading process of the initial steps plays a very important role,
that is to say, the information will spread out if it is still survival at the
beginning time. The proposed model may shed some light on the in-depth
understanding of information spreading on dynamical social networks.
|
1207.5293
|
Probability Bracket Notation, Multivariable Systems and Static Bayesian
Networks
|
cs.AI math.PR
|
Probability Bracket Notation (PBN) is applied to systems of multiple random
variables for preliminary study of static Bayesian Networks (BN) and
Probabilistic Graphic Models (PGM). The famous Student BN Example is explored
to show the local independences and reasoning power of a BN. Software package
Elvira is used to graphically display the student BN. Our investigation shows
that PBN provides a consistent and convenient alternative to manipulate many
expressions related to joint, marginal and conditional probability
distributions in static BN.
|
1207.5319
|
On the Capacity of the Two-user Gaussian Causal Cognitive Interference
Channel
|
cs.IT math.IT
|
This paper considers the two-user Gaussian Causal Cognitive Interference
Channel (GCCIC), which consists of two source-destination pairs that share the
same channel and where one full-duplex cognitive source can causally learn the
message of the primary source through a noisy link. The GCCIC is an
interference channel with unilateral source cooperation that better models
practical cognitive radio networks than the commonly used model which assumes
that one source has perfect non-causal knowledge of the other source's message.
First the sum-capacity of the symmetric GCCIC is determined to within a
constant gap. Then, the insights gained from the derivation of the symmetric
sum-capacity are extended to characterize the whole capacity region to within a
constant gap for more general cases. In particular, the capacity is determined
(a) to within 2 bits for the fully connected GCCIC when, roughly speaking, the
interference is not weak at both receivers, (b) to within 2 bits for the
Z-channel, i.e., when there is no interference from the primary user, and (c)
to within 2 bits for the S-channel, i.e., when there is no interference from
the secondary user. The parameter regimes where the GCCIC is equivalent, in
terms of generalized degrees-of-freedom, to the noncooperative interference
channel (i.e., unilateral causal cooperation is not useful), to the non-causal
cognitive interference channel (i.e., causal cooperation attains the ultimate
limit of cognitive radio technology), and to bilateral source cooperation are
identified. These comparisons shed lights into the parameter regimes and
network topologies that in practice might provide an unbounded throughput gain
compared to currently available (non cognitive) technologies.
|
1207.5326
|
Guarantees of Augmented Trace Norm Models in Tensor Recovery
|
cs.IT cs.CV math.IT
|
This paper studies the recovery guarantees of the models of minimizing
$\|\mathcal{X}\|_*+\frac{1}{2\alpha}\|\mathcal{X}\|_F^2$ where $\mathcal{X}$ is
a tensor and $\|\mathcal{X}\|_*$ and $\|\mathcal{X}\|_F$ are the trace and
Frobenius norm of respectively. We show that they can efficiently recover
low-rank tensors. In particular, they enjoy exact guarantees similar to those
known for minimizing $\|\mathcal{X}\|_*$ under the conditions on the sensing
operator such as its null-space property, restricted isometry property, or
spherical section property. To recover a low-rank tensor $\mathcal{X}^0$,
minimizing $\|\mathcal{X}\|_*+\frac{1}{2\alpha}\|\mathcal{X}\|_F^2$ returns the
same solution as minimizing $\|\mathcal{X}\|_*$ almost whenever
$\alpha\geq10\mathop {\max}\limits_{i}\|X^0_{(i)}\|_2$.
|
1207.5328
|
A prototype for projecting HPSG syntactic lexica towards LMF
|
cs.CL
|
The comparative evaluation of Arabic HPSG grammar lexica requires a deep
study of their linguistic coverage. The complexity of this task results mainly
from the heterogeneity of the descriptive components within those lexica
(underlying linguistic resources and different data categories, for example).
It is therefore essential to define more homogeneous representations, which in
turn will enable us to compare them and eventually merge them. In this context,
we present a method for comparing HPSG lexica based on a rule system. This
method is implemented within a prototype for the projection from Arabic HPSG to
a normalised pivot language compliant with LMF (ISO 24613 - Lexical Markup
Framework) and serialised using a TEI (Text Encoding Initiative) based
representation. The design of this system is based on an initial study of the
HPSG formalism looking at its adequacy for the representation of Arabic, and
from this, we identify the appropriate feature structures corresponding to each
Arabic lexical category and their possible LMF counterparts.
|
1207.5342
|
A Robust Signal Classification Scheme for Cognitive Radio
|
cs.IT cs.LG cs.NI math.IT
|
This paper presents a robust signal classification scheme for achieving
comprehensive spectrum sensing of multiple coexisting wireless systems. It is
built upon a group of feature-based signal detection algorithms enhanced by the
proposed dimension cancelation (DIC) method for mitigating the noise
uncertainty problem. The classification scheme is implemented on our testbed
consisting real-world wireless devices. The simulation and experimental
performances agree with each other well and shows the effectiveness and
robustness of the proposed scheme.
|
1207.5343
|
Social and strategic imitation: the way to consensus
|
physics.soc-ph cs.SI
|
Humans do not always make rational choices, a fact that experimental
economics is putting on solid grounds. The social context plays an important
role in determining our actions, and often we imitate friends or acquaintances
without any strategic consideration. We explore here the interplay between
strategic and social imitative behaviors in a coordination problem on a social
network. We observe that for interactions in 1D and 2D lattices any amount of
social imitation prevents the freezing of the network in domains with different
conventions, thus leading to global consensus. For interactions in complex
networks, the interplay of social and strategic imitation also drives the
system towards global consensus while neither dynamics alone does. We find an
optimum value for the combination of imitative behaviors to reach consensus in
a minimum time, and two different dynamical regimes to approach it: exponential
when social imitation predominates, and power-law when strategic considerations
dominate.
|
1207.5371
|
Towards a theory of statistical tree-shape analysis
|
stat.ME cs.CV math.MG
|
In order to develop statistical methods for shapes with a tree-structure, we
construct a shape space framework for tree-like shapes and study metrics on the
shape space. This shape space has singularities, corresponding to topological
transitions in the represented trees. We study two closely related metrics on
the shape space, TED and QED. QED is a quotient Euclidean distance arising
naturally from the shape space formulation, while TED is the classical tree
edit distance. Using Gromov's metric geometry we gain new insight into the
geometries defined by TED and QED. We show that the new metric QED has nice
geometric properties which facilitate statistical analysis, such as existence
and local uniqueness of geodesics and averages. TED, on the other hand, does
not share the geometric advantages of QED, but has nice algorithmic properties.
We provide a theoretical framework and experimental results on synthetic data
trees as well as airway trees from pulmonary CT scans. This way, we effectively
illustrate that our framework has both the theoretical and qualitative
properties necessary to build a theory of statistical tree-shape analysis.
|
1207.5409
|
FST Based Morphological Analyzer for Hindi Language
|
cs.CL cs.IR
|
Hindi being a highly inflectional language, FST (Finite State Transducer)
based approach is most efficient for developing a morphological analyzer for
this language. The work presented in this paper uses the SFST (Stuttgart Finite
State Transducer) tool for generating the FST. A lexicon of root words is
created. Rules are then added for generating inflectional and derivational
words from these root words. The Morph Analyzer developed was used in a Part Of
Speech (POS) Tagger based on Stanford POS Tagger. The system was first trained
using a manually tagged corpus and MAXENT (Maximum Entropy) approach of
Stanford POS tagger was then used for tagging input sentences. The
morphological analyzer gives approximately 97% correct results. POS tagger
gives an accuracy of approximately 87% for the sentences that have the words
known to the trained model file, and 80% accuracy for the sentences that have
the words unknown to the trained model file.
|
1207.5425
|
Ranked Document Retrieval in (Almost) No Space
|
cs.IR cs.DB
|
Ranked document retrieval is a fundamental task in search engines. Such
queries are solved with inverted indexes that require additional 45%-80% of the
compressed text space, and take tens to hundreds of microseconds per query. In
this paper we show how ranked document retrieval queries can be solved within
tens of milliseconds using essentially no extra space over an in-memory
compressed representation of the document collection. More precisely, we
enhance wavelet trees on bytecodes (WTBCs), a data structure that rearranges
the bytes of the compressed collection, so that they support ranked conjunctive
and disjunctive queries, using just 6%-18% of the compressed text space.
|
1207.5434
|
sSCADA: Securing SCADA Infrastructure Communications
|
cs.IT cs.NI math.IT
|
Distributed control systems (DCS) and supervisory control and data
acquisition (SCADA) systems were developed to reduce labour costs, and to allow
system-wide monitoring and remote control from a central location. Control
systems are widely used in critical infrastructures such as electric grid,
natural gas, water and wastewater industries. While control systems can be
vulnerable to a variety of types of cyber attacks that could have devastating
consequences, little research has been done to secure the control systems.
American Gas Association (AGA), IEC TC57 WG15, IEEE, NIST and National SCADA
Test Bed Program have been actively designing cryptographic standard to protect
SCADA systems. American Gas Association (AGA) had originally been designing
cryptographic standard to protect SCADA communication links and finished the
report AGA 12 part 1. The AGA 12 part 2 has been transferred to IEEE P1711.
This paper presents an attack on the protocols in the first draft of AGA
standard (Wright et al., 2004). This attack shows that the security mechanisms
in the first version of the AGA standard protocol could be easily defeated. We
then propose a suite of security protocols optimised for SCADA/DCS systems
which include: point-to-point secure channels, authenticated broadcast
channels, authenticated emergency channels, and revised authenticated emergency
channels. These protocols are designed to address the specific challenges that
SCADA systems have.
|
1207.5437
|
Generalization Bounds for Metric and Similarity Learning
|
cs.LG stat.ML
|
Recently, metric learning and similarity learning have attracted a large
amount of interest. Many models and optimisation algorithms have been proposed.
However, there is relatively little work on the generalization analysis of such
methods. In this paper, we derive novel generalization bounds of metric and
similarity learning. In particular, we first show that the generalization
analysis reduces to the estimation of the Rademacher average over
"sums-of-i.i.d." sample-blocks related to the specific matrix norm. Then, we
derive generalization bounds for metric/similarity learning with different
matrix-norm regularisers by estimating their specific Rademacher complexities.
Our analysis indicates that sparse metric/similarity learning with $L^1$-norm
regularisation could lead to significantly better bounds than those with
Frobenius-norm regularisation. Our novel generalization analysis develops and
refines the techniques of U-statistics and Rademacher complexity analysis.
|
1207.5439
|
Edge-Colored Graphs with Applications To Homogeneous Faults
|
cs.DM cs.IT math.CO math.IT
|
In this paper, we use the concept of colored edge graphs to model homogeneous
faults in networks. We then use this model to study the minimum connectivity
(and design) requirements of networks for being robust against homogeneous
faults within certain thresholds. In particular, necessary and sufficient
conditions for most interesting cases are obtained. For example, we will study
the following cases: (1) the number of colors (or the number of non-homogeneous
network device types) is one more than the homogeneous fault threshold; (2)
there is only one homogeneous fault (i.e., only one color could fail); and (3)
the number of non-homogeneous network device types is less than five.
|
1207.5458
|
On the Non-robustness of Essentially Conditional Information
Inequalities
|
cs.IT math.IT math.PR
|
We show that two essentially conditional linear inequalities for Shannon's
entropies (including the Zhang-Yeung'97 conditional inequality) do not hold for
asymptotically entropic points. This means that these inequalities are
non-robust in a very strong sense. This result raises the question of the
meaning of these inequalities and the validity of their use in
practice-oriented applications.
|
1207.5466
|
Approximate Inverse Frequent Itemset Mining: Privacy, Complexity, and
Approximation
|
cs.DB
|
In order to generate synthetic basket data sets for better benchmark testing,
it is important to integrate characteristics from real-life databases into the
synthetic basket data sets. The characteristics that could be used for this
purpose include the frequent itemsets and association rules. The problem of
generating synthetic basket data sets from frequent itemsets is generally
referred to as inverse frequent itemset mining. In this paper, we show that the
problem of approximate inverse frequent itemset mining is {\bf NP}-complete.
Then we propose and analyze an approximate algorithm for approximate inverse
frequent itemset mining, and discuss privacy issues related to the synthetic
basket data set. In particular, we propose an approximate algorithm to
determine the privacy leakage in a synthetic basket data set.
|
1207.5483
|
Exact Cramer-Rao Bounds for Semi-blind Channel Estimation in
Amplify-and-Forward Two-Way Relay Networks
|
cs.IT math.IT
|
In this paper, we derive for the first time the exact Cramer-Rao bounds
(CRBs) on semi-blind channel estimation for amplify-and-forward two-way relay
networks. The bounds cover a wide range of modulation schemes that satisfy a
certain symmetry condition. In particular, the important classes of PSK and
square QAM are covered. For the case square QAM, we also provide simplified
expressions that lend themselves more easily to numerical implementation. The
derived bounds are used to show that the semi-blind approach, which exploits
both the transmitted pilots and the transmitted data symbols, can provide
substantial improvements in estimation accuracy over the training-based
approach which only uses pilot symbols to estimate the channel parameters. We
also derive the more tractable modified CRB which accurately approximates the
exact CRB at high SNR for low modulation orders.
|
1207.5528
|
On the Conjecture on APN Functions
|
cs.IT math.AG math.CO math.IT
|
An almost perfect nonlinear (APN) function (necessarily a polynomial
function) on a finite field $\mathbb{F}$ is called exceptional APN, if it is
also APN on infinitely many extensions of $\mathbb{F}$. In this article we
consider the most studied case of $\mathbb{F}=\mathbb{F}_{2^n}$.
A conjecture of Janwa-Wilson and McGuire-Janwa-Wilson (1993/1996), settled in
2011, was that the only exceptional monomial APN functions are the monomials
$x^n$, where $n=2^i+1$ or $n={2^{2i}-2^i+1}$ (the Gold or the Kasami exponents
respectively). A subsequent conjecture states that any exceptional APN function
is one of the monomials just described. One of our result is that all functions
of the form $f(x)=x^{2^k+1}+h(x)$ (for any odd degree $h(x)$, with a mild
condition in few cases), are not exceptional APN, extending substantially
several recent results towards the resolution of the stated conjecture.
|
1207.5536
|
MCTS Based on Simple Regret
|
cs.AI cs.LG
|
UCT, a state-of-the art algorithm for Monte Carlo tree search (MCTS) in games
and Markov decision processes, is based on UCB, a sampling policy for the
Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However,
search differs from MAB in that in MCTS it is usually only the final "arm pull"
(the actual move selection) that collects a reward, rather than all "arm
pulls". Therefore, it makes more sense to minimize the simple regret, as
opposed to the cumulative regret. We begin by introducing policies for
multi-armed bandits with lower finite-time and asymptotic simple regret than
UCB, using it to develop a two-stage scheme (SR+CR) for MCTS which outperforms
UCT empirically.
Optimizing the sampling process is itself a metareasoning problem, a solution
of which can use value of information (VOI) techniques. Although the theory of
VOI for search exists, applying it to MCTS is non-trivial, as typical myopic
assumptions fail. Lacking a complete working VOI theory for MCTS, we
nevertheless propose a sampling scheme that is "aware" of VOI, achieving an
algorithm that in empirical evaluation outperforms both UCT and the other
proposed algorithms.
|
1207.5542
|
LT Codes For Efficient and Reliable Distributed Storage Systems
Revisited
|
cs.IT math.IT
|
LT codes and digital fountain techniques have received significant attention
from both academics and industry in the past few years. There have also been
extensive interests in applying LT code techniques to distributed storage
systems such as cloud data storage in recent years. However, Plank and
Thomason's experimental results show that LDPC code performs well only
asymptotically when the number of data fragments increases and it has the worst
performance for small number of data fragments (e.g., less than 100). In their
INFOCOM 2012 paper, Cao, Yu, Yang, Lou, and Hou proposed to use exhaustive
search approach to find a deterministic LT code that could be used to decode
the original data content correctly in distributed storage systems. However, by
Plank and Thomason's experimental results, it is not clear whether the
exhaustive search approach will work efficiently or even correctly. This paper
carries out the theoretical analysis on the feasibility and performance issues
for applying LT codes to distributed storage systems. By employing the
underlying ideas of efficient Belief Propagation (BP) decoding process in LT
codes, this paper introduces two classes of codes called flat BP-XOR codes and
array BP-XOR codes (which can be considered as a deterministic version of LT
codes). We will show the equivalence between the edge-colored graph model and
degree-one-and-two encoding symbols based array BP-XOR codes. Using this
equivalence result, we are able to design general array BP-XOR codes using
graph based results. Similarly, based on this equivalence result, we are able
to get new results for edge-colored graph models using results from array
BP-XOR codes.
|
1207.5554
|
Bellman Error Based Feature Generation using Random Projections on
Sparse Spaces
|
cs.LG stat.ML
|
We address the problem of automatic generation of features for value function
approximation. Bellman Error Basis Functions (BEBFs) have been shown to improve
the error of policy evaluation with function approximation, with a convergence
rate similar to that of value iteration. We propose a simple, fast and robust
algorithm based on random projections to generate BEBFs for sparse feature
spaces. We provide a finite sample analysis of the proposed method, and prove
that projections logarithmic in the dimension of the original space are enough
to guarantee contraction in the error. Empirical results demonstrate the
strength of this method.
|
1207.5555
|
A Simplified Min-Sum Decoding Algorithm for Non-Binary LDPC Codes
|
cs.IT math.IT
|
Non-binary low-density parity-check codes are robust to various channel
impairments. However, based on the existing decoding algorithms, the decoder
implementations are expensive because of their excessive computational
complexity and memory usage. Based on the combinatorial optimization, we
present an approximation method for the check node processing. The simulation
results demonstrate that our scheme has small performance loss over the
additive white Gaussian noise channel and independent Rayleigh fading channel.
Furthermore, the proposed reduced-complexity realization provides significant
savings on hardware, so it yields a good performance-complexity tradeoff and
can be efficiently implemented.
|
1207.5558
|
Fast directional spatially localized spherical harmonic transform
|
cs.IT astro-ph.IM math.IT
|
We propose a transform for signals defined on the sphere that reveals their
localized directional content in the spatio-spectral domain when used in
conjunction with an asymmetric window function. We call this transform the
directional spatially localized spherical harmonic transform (directional
SLSHT) which extends the SLSHT from the literature whose usefulness is limited
to symmetric windows. We present an inversion relation to synthesize the
original signal from its directional-SLSHT distribution for an arbitrary window
function. As an example of an asymmetric window, the most concentrated
band-limited eigenfunction in an elliptical region on the sphere is proposed
for directional spatio-spectral analysis and its effectiveness is illustrated
on the synthetic and Mars topographic data-sets. Finally, since such typical
data-sets on the sphere are of considerable size and the directional SLSHT is
intrinsically computationally demanding depending on the band-limits of the
signal and window, a fast algorithm for the efficient computation of the
transform is developed. The floating point precision numerical accuracy of the
fast algorithm is demonstrated and a full numerical complexity analysis is
presented.
|
1207.5560
|
Evolving Musical Counterpoint: The Chronopoint Musical Evolution System
|
cs.SD cs.AI cs.NE
|
Musical counterpoint, a musical technique in which two or more independent
melodies are played simultaneously with the goal of creating harmony, has been
around since the baroque era. However, to our knowledge computational
generation of aesthetically pleasing linear counterpoint based on subjective
fitness assessment has not been explored by the evolutionary computation
community (although generation using objective fitness has been attempted in
quite a few cases). The independence of contrapuntal melodies and the
subjective nature of musical aesthetics provide an excellent platform for the
application of genetic algorithms. In this paper, a genetic algorithm approach
to generating contrapuntal melodies is explained, with a description of the
various musical heuristics used and of how variable-length chromosome strings
are used to avoid generating "jerky" rhythms and melodic phrases, as well as
how subjectivity is incorporated into the algorithm's fitness measures. Next,
results from empirical testing of the algorithm are presented, with a focus on
how a user's musical sophistication influences their experience. Lastly,
further musical and compositional applications of the algorithm are discussed
along with planned future work on the algorithm.
|
1207.5589
|
VOI-aware MCTS
|
cs.AI cs.LG
|
UCT, a state-of-the art algorithm for Monte Carlo tree search (MCTS) in games
and Markov decision processes, is based on UCB1, a sampling policy for the
Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However,
search differs from MAB in that in MCTS it is usually only the final "arm pull"
(the actual move selection) that collects a reward, rather than all "arm
pulls". In this paper, an MCTS sampling policy based on Value of Information
(VOI) estimates of rollouts is suggested. Empirical evaluation of the policy
and comparison to UCB1 and UCT is performed on random MAB instances as well as
on Computer Go.
|
1207.5640
|
Enabling Wireless Power Transfer in Cellular Networks: Architecture,
Modeling and Deployment
|
cs.IT math.IT
|
Microwave power transfer (MPT) delivers energy wirelessly from stations
called power beacons (PBs) to mobile devices by microwave radiation. This
provides mobiles practically infinite battery lives and eliminates the need of
power cords and chargers. To enable MPT for mobile charging, this paper
proposes a new network architecture that overlays an uplink cellular network
with randomly deployed PBs for powering mobiles, called a hybrid network. The
deployment of the hybrid network under an outage constraint on data links is
investigated based on a stochastic-geometry model where single-antenna base
stations (BSs) and PBs form independent homogeneous Poisson point processes
(PPPs) and single-antenna mobiles are uniformly distributed in Voronoi cells
generated by BSs. In this model, mobiles and PBs fix their transmission power
at p and q, respectively; a PB either radiates isotropically, called isotropic
MPT, or directs energy towards target mobiles by beamforming, called directed
MPT. The model is applied to derive the tradeoffs between the network
parameters including p, q, and the BS/PB densities under the outage constraint.
First, consider the deployment of the cellular network. It is proved that the
outage constraint is satisfied so long as the product the BS density decreases
with increasing p following a power law where the exponent is proportional to
the path-loss exponent. Next, consider the deployment of the hybrid network
assuming infinite energy storage at mobiles. It is shown that for isotropic
MPT, the product between q, the PB density, and the BS density raised to a
power proportional to the path-loss exponent has to be above a given threshold
so that PBs are sufficiently dense; for directed MPT, a similar result is
obtained with the aforementioned product increased by the array gain. Last,
similar results are derived for the case of mobiles having small energy
storage.
|
1207.5660
|
Achieving the Capacity of the N-Relay Gaussian Diamond Network Within
log N Bits
|
cs.IT math.IT
|
We consider the N-relay Gaussian diamond network where a source node
communicates to a destination node via N parallel relays through a cascade of a
Gaussian broadcast (BC) and a multiple access (MAC) channel. Introduced in 2000
by Schein and Gallager, the capacity of this relay network is unknown in
general. The best currently available capacity approximation, independent of
the coefficients and the SNR's of the constituent channels, is within an
additive gap of 1.3 N bits, which follows from the recent capacity
approximations for general Gaussian relay networks with arbitrary topology.
In this paper, we approximate the capacity of this network within 2 log N
bits. We show that two strategies can be used to achieve the
information-theoretic cutset upper bound on the capacity of the network up to
an additive gap of O(log N) bits, independent of the channel configurations and
the SNR's. The first of these strategies is simple partial decode-and-forward.
Here, the source node uses a superposition codebook to broadcast independent
messages to the relays at appropriately chosen rates; each relay decodes its
intended message and then forwards it to the destination over the MAC channel.
A similar performance can be also achieved with compress-and-forward type
strategies (such as quantize-map-and-forward and noisy network coding) that
provide the 1.3 N-bit approximation for general Gaussian networks, but only if
the relays quantize their observed signals at a resolution inversely
proportional to the number of relay nodes N. This suggest that the
rule-of-thumb to quantize the received signals at the noise level in the
current literature can be highly suboptimal.
|
1207.5661
|
A Framework of Algorithms: Computing the Bias and Prestige of Nodes in
Trust Networks
|
cs.SI physics.soc-ph
|
A trust network is a social network in which edges represent the trust
relationship between two nodes in the network. In a trust network, a
fundamental question is how to assess and compute the bias and prestige of the
nodes, where the bias of a node measures the trustworthiness of a node and the
prestige of a node measures the importance of the node. The larger bias of a
node implies the lower trustworthiness of the node, and the larger prestige of
a node implies the higher importance of the node. In this paper, we define a
vector-valued contractive function to characterize the bias vector which
results in a rich family of bias measurements, and we propose a framework of
algorithms for computing the bias and prestige of nodes in trust networks.
Based on our framework, we develop four algorithms that can calculate the bias
and prestige of nodes effectively and robustly. The time and space complexities
of all our algorithms are linear w.r.t. the size of the graph, thus our
algorithms are scalable to handle large datasets. We evaluate our algorithms
using five real datasets. The experimental results demonstrate the
effectiveness, robustness, and scalability of our algorithms.
|
1207.5663
|
Groupwise information sharing promotes ingroup favoritism in indirect
reciprocity
|
physics.soc-ph cs.SI q-bio.PE
|
Indirect reciprocity is a mechanism for cooperation in social dilemma
situations, in which an individual is motivated to help another to acquire a
good reputation and receive help from others afterwards. Ingroup favoritism is
another aspect of human cooperation, whereby individuals help members in their
own group more often than those in other groups. Ingroup favoritism is a puzzle
for the theory of cooperation because it is not easily evolutionarily stable.
In the context of indirect reciprocity, ingroup favoritism has been shown to be
a consequence of employing a double standard when assigning reputations to
ingroup and outgroup members; e.g., helping an ingroup member is regarded as
good, whereas the same action toward an outgroup member is regarded as bad. We
analyze a model of indirect reciprocity in which information sharing is
conducted groupwise. In our model, individuals play social dilemma games within
and across groups, and the information about their reputations is shared within
each group. We show that evolutionarily stable ingroup favoritism emerges even
if all the players use the same reputation assignment rule regardless of group
(i.e., a single standard). Two reputation assignment rules called simple
standing and stern judging yield ingroup favoritism. Stern judging induces much
stronger ingroup favoritism than does simple standing. Simple standing and
stern judging are evolutionarily stable against each other when groups
employing different assignment rules compete and the number of groups is
sufficiently large. In addition, we analytically show as a limiting case that
homogeneous populations of reciprocators that use reputations are unstable when
individuals independently infer reputations of individuals, which is consistent
with previously reported numerical results.
|
1207.5721
|
Cognitive network structure: an experimental study
|
physics.soc-ph cs.SI
|
In this paper we present first experimental results about a small group of
people exchanging private and public messages in a virtual community. Our goal
is the study of the cognitive network that emerges during a chat seance. We
used the Derrida coefficient and the triangle structure under the working
assumption that moods and perceived mutual affinity can produce results
complementary to a full semantic analysis. The most outstanding outcome is the
difference between the network obtained considering publicly exchanged messages
and the one considering only privately exchanged messages: in the former case,
the network is very homogeneous, in the sense that each individual interacts in
the same way with all the participants, whilst in the latter the interactions
among different agents are very heterogeneous, and are based on "the enemy of
my enemy is my friend" strategy. Finally a recent characterization of the
triangular cliques has been considered in order to describe the intimate
structure of the network. Experimental results confirm recent theoretical
studies indicating that certain 3-vertex structures can be used as indicators
for the network aging and some relevant dynamical features.
|
1207.5742
|
Conditional Information Inequalities for Entropic and Almost Entropic
Points
|
cs.IT cs.DM math.IT math.PR
|
We study conditional linear information inequalities, i.e., linear
inequalities for Shannon entropy that hold for distributions whose entropies
meet some linear constraints. We prove that some conditional information
inequalities cannot be extended to any unconditional linear inequalities. Some
of these conditional inequalities hold for almost entropic points, while others
do not. We also discuss some counterparts of conditional information
inequalities for Kolmogorov complexity.
|
1207.5745
|
Semantic Information Retrieval Using Ontology In University Domain
|
cs.IR
|
Today's conventional search engines hardly do provide the essential content
relevant to the user's search query. This is because the context and semantics
of the request made by the user is not analyzed to the full extent. So here the
need for a semantic web search arises. SWS is upcoming in the area of web
search which combines Natural Language Processing and Artificial Intelligence.
The objective of the work done here is to design, develop and implement a
semantic search engine- SIEU(Semantic Information Extraction in University
Domain) confined to the university domain. SIEU uses ontology as a knowledge
base for the information retrieval process. It is not just a mere keyword
search. It is one layer above what Google or any other search engines retrieve
by analyzing just the keywords. Here the query is analyzed both syntactically
and semantically. The developed system retrieves the web results more relevant
to the user query through keyword expansion. The results obtained here will be
accurate enough to satisfy the request made by the user. The level of accuracy
will be enhanced since the query is analyzed semantically. The system will be
of great use to the developers and researchers who work on web. The Google
results are re-ranked and optimized for providing the relevant links. For
ranking an algorithm has been applied which fetches more apt results for the
user query.
|
1207.5746
|
Delay Stability Regions of the Max-Weight Policy under Heavy-Tailed
Traffic
|
cs.SY cs.NI math.PR
|
We carry out a delay stability analysis (i.e., determine conditions under
which expected steady-state delays at a queue are finite) for a simple 3-queue
system operated under the Max-Weight scheduling policy, for the case where one
of the queues is fed by heavy-tailed traffic (i.e, when the number of arrivals
at each time slot has infinite second moment). This particular system
exemplifies an intricate phenomenon whereby heavy-tailed traffic at one queue
may or may not result in the delay instability of another queue, depending on
the arrival rates.
While the ordinary stability region (in the sense of convergence to a
steady-state distribution) is straightforward to determine, the determination
of the delay stability region is more involved: (i) we use "fluid-type" sample
path arguments, combined with renewal theory, to prove delay instability
outside a certain region; (ii) we use a piecewise linear Lyapunov function to
prove delay stability in the interior of that same region; (iii) as an
intermediate step in establishing delay stability, we show that the expected
workload of a stable M/GI/1 queue scales with time as
$\mathcal{O}(t^{1/(1+\gamma)})$, assuming that service times have a finite
$1+\gamma$ moment, where $\gamma \in (0,1)$.
|
1207.5774
|
A New Training Algorithm for Kanerva's Sparse Distributed Memory
|
cs.CV cs.LG cs.NE
|
The Sparse Distributed Memory proposed by Pentii Kanerva (SDM in short) was
thought to be a model of human long term memory. The architecture of the SDM
permits to store binary patterns and to retrieve them using partially matching
patterns. However Kanerva's model is especially efficient only in handling
random data. The purpose of this article is to introduce a new approach of
training Kanerva's SDM that can handle efficiently non-random data, and to
provide it the capability to recognize inverted patterns. This approach uses a
signal model which is different from the one proposed for different purposes by
Hely, Willshaw and Hayes in [4]. This article additionally suggests a different
way of creating hard locations in the memory despite the Kanerva's static
model.
|
1207.5777
|
Efficient Snapshot Retrieval over Historical Graph Data
|
cs.DB cs.SI physics.soc-ph
|
We address the problem of managing historical data for large evolving
information networks like social networks or citation networks, with the goal
to enable temporal and evolutionary queries and analysis. We present the design
and architecture of a distributed graph database system that stores the entire
history of a network and provides support for efficient retrieval of multiple
graphs from arbitrary time points in the past, in addition to maintaining the
current state for ongoing updates. Our system exposes a general programmatic
API to process and analyze the retrieved snapshots. We introduce DeltaGraph, a
novel, extensible, highly tunable, and distributed hierarchical index structure
that enables compactly recording the historical information, and that supports
efficient retrieval of historical graph snapshots for single-site or parallel
processing. Along with the original graph data, DeltaGraph can also maintain
and index auxiliary information; this functionality can be used to extend the
structure to efficiently execute queries like subgraph pattern matching over
historical data. We develop analytical models for both the storage space needed
and the snapshot retrieval times to aid in choosing the right parameters for a
specific scenario. In addition, we present strategies for materializing
portions of the historical graph state in memory to further speed up the
retrieval process. Secondly, we present an in-memory graph data structure
called GraphPool that can maintain hundreds of historical graph instances in
main memory in a non-redundant manner. We present a comprehensive experimental
evaluation that illustrates the effectiveness of our proposed techniques at
managing historical graph information.
|
1207.5781
|
Confidence-based Optimization for the Newsvendor Problem
|
math.OC cs.SY stat.OT
|
We introduce a novel strategy to address the issue of demand estimation in
single-item single-period stochastic inventory optimisation problems. Our
strategy analytically combines confidence interval analysis and inventory
optimisation. We assume that the decision maker is given a set of past demand
samples and we employ confidence interval analysis in order to identify a range
of candidate order quantities that, with prescribed confidence probability,
includes the real optimal order quantity for the underlying stochastic demand
process with unknown stationary parameter(s). In addition, for each candidate
order quantity that is identified, our approach can produce an upper and a
lower bound for the associated cost. We apply our novel approach to three
demand distribution in the exponential family: binomial, Poisson, and
exponential. For two of these distributions we also discuss the extension to
the case of unobserved lost sales. Numerical examples are presented in which we
show how our approach complements existing frequentist - e.g. based on maximum
likelihood estimators - or Bayesian strategies.
|
1207.5810
|
Ordering dynamics of the multi-state voter model
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
The voter model is a paradigm of ordering dynamics. At each time step, a
random node is selected and copies the state of one of its neighbors.
Traditionally, this state has been considered as a binary variable. Here, we
relax this assumption and address the case in which the number of states is a
parameter that can assume any value, from 2 to \infty, in the thermodynamic
limit. We derive mean-field analytical expressions for the exit probability and
the consensus time for the case of an arbitrary number of states. We then
perform a numerical study of the model in low dimensional lattices, comparing
the case of multiple states with the usual binary voter model. Our work
generalizes the well-known results for the voter model, and sheds light on the
role of the so far almost neglected parameter accounting for the number of
states.
|
1207.5844
|
SODEXO: A System Framework for Deployment and Exploitation of Deceptive
Honeybots in Social Networks
|
cs.SI cs.CR cs.GT
|
As social networking sites such as Facebook and Twitter are becoming
increasingly popular, a growing number of malicious attacks, such as phishing
and malware, are exploiting them. Among these attacks, social botnets have
sophisticated infrastructure that leverages compromised users accounts, known
as bots, to automate the creation of new social networking accounts for
spamming and malware propagation. Traditional defense mechanisms are often
passive and reactive to non-zero-day attacks. In this paper, we adopt a
proactive approach for enhancing security in social networks by infiltrating
botnets with honeybots. We propose an integrated system named SODEXO which can
be interfaced with social networking sites for creating deceptive honeybots and
leveraging them for gaining information from botnets. We establish a
Stackelberg game framework to capture strategic interactions between honeybots
and botnets, and use quantitative methods to understand the tradeoffs of
honeybots for their deployment and exploitation in social networks. We design a
protection and alert system that integrates both microscopic and macroscopic
models of honeybots and optimally determines the security strategies for
honeybots. We corroborate the proposed mechanism with extensive simulations and
comparisons with passive defenses.
|
1207.5847
|
Growing a Network on a Given Substrate
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Conventional studies of network growth models mainly look at the steady state
degree distribution of the graph. Often long time behavior is considered, hence
the initial condition is ignored. In this contribution, the time evolution of
the degree distribution is the center of attention. We consider two specific
growth models; incoming nodes with uniform and preferential attachment, and the
degree distribution of the graph for arbitrary initial condition is obtained as
a function of time. This allows us to characterize the transient behavior of
the degree distribution, as well as to quantify the rate of convergence to the
steady-state limit.
|
1207.5849
|
Migration in a Small World: A Network Approach to Modeling Immigration
Processes
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Existing theories of migration either focus on micro- or macroscopic behavior
of populations; that is, either the average behavior of entire population is
modeled directly, or decisions of individuals are modeled directly. In this
work, we seek to bridge these two perspectives by modeling individual agents
decisions to migrate while accounting for the social network structure that
binds individuals into a population. Pecuniary considerations combined with the
decisions of peers are the primary elements of the model, being the main
driving forces of migration. People of the home country are modeled as nodes on
a small-world network. A dichotomous state is associated with each node,
indicating whether it emigrates to the destination country or it stays in the
home country. We characterize the emigration rate in terms of the relative
welfare and population of the home and destination countries. The time
evolution and the steady-state fraction of emigrants are also derived.
|
1207.5850
|
Performance of the Bounded Distance Decoder on the AWGN Channel
|
cs.IT math.IT
|
In contrast to a maximum-likelihood decoder, it is often desirable to use an
incomplete decoder that can detect its decoding errors with high probability.
One common choice is the bounded distance decoder. Bounds are derived for the
total word error rate, Pw, and the undetected error rate, Pu. Excellent
agreement is found with simulation results for a small code, and the bounds are
shown to be tractable for a larger code.
|
1207.5853
|
Spectrum Coordination in Energy Efficient Cognitive Radio Networks
|
cs.GT cs.IT math.IT math.OC
|
Device coordination in open spectrum systems is a challenging problem,
particularly since users experience varying spectrum availability over time and
location. In this paper, we propose a game theoretical approach that allows
cognitive radio pairs, namely the primary user (PU) and the secondary user
(SU), to update their transmission powers and frequencies simultaneously.
Specifically, we address a Stackelberg game model in which individual users
attempt to hierarchically access to the wireless spectrum while maximizing
their energy efficiency. A thorough analysis of the existence, uniqueness and
characterization of the Stackelberg equilibrium is conducted. In particular, we
show that a spectrum coordination naturally occurs when both actors in the
system decide sequentially about their powers and their transmitting carriers.
As a result, spectrum sensing in such a situation turns out to be a simple
detection of the presence/absence of a transmission on each sub-band. We also
show that when users experience very different channel gains on their two
carriers, they may choose to transmit on the same carrier at the Stackelberg
equilibrium as this contributes enough energy efficiency to outweigh the
interference degradation caused by the mutual transmission. Then, we provide an
algorithmic analysis on how the PU and the SU can reach such a spectrum
coordination using an appropriate learning process. We validate our results
through extensive simulations and compare the proposed algorithm to some
typical scenarios including the non-cooperative case and the
throughput-based-utility systems. Typically, it is shown that the proposed
Stackelberg decision approach optimizes the energy efficiency while still
maximizing the throughput at the equilibrium.
|
1207.5857
|
Distance Distributions in Regular Polygons
|
cs.IT math.IT
|
This paper derives the exact cumulative density function of the distance
between a randomly located node and any arbitrary reference point inside a
regular $\el$-sided polygon. Using this result, we obtain the closed-form
probability density function (PDF) of the Euclidean distance between any
arbitrary reference point and its $n$-th neighbour node, when $N$ nodes are
uniformly and independently distributed inside a regular $\ell$-sided polygon.
First, we exploit the rotational symmetry of the regular polygons and quantify
the effect of polygon sides and vertices on the distance distributions. Then we
propose an algorithm to determine the distance distributions given any
arbitrary location of the reference point inside the polygon. For the special
case when the arbitrary reference point is located at the center of the
polygon, our framework reproduces the existing result in the literature.
|
1207.5871
|
Optimal Sampling Points in Reproducing Kernel Hilbert Spaces
|
cs.IT math.IT stat.ML
|
The recent developments of basis pursuit and compressed sensing seek to
extract information from as few samples as possible. In such applications,
since the number of samples is restricted, one should deploy the sampling
points wisely. We are motivated to study the optimal distribution of finite
sampling points. Formulation under the framework of optimal reconstruction
yields a minimization problem. In the discrete case, we estimate the distance
between the optimal subspace resulting from a general Karhunen-Loeve transform
and the kernel space to obtain another algorithm that is computationally
favorable. Numerical experiments are then presented to illustrate the
performance of the algorithms for the searching of optimal sampling points.
|
1207.5879
|
Selecting Computations: Theory and Applications
|
cs.AI
|
Sequential decision problems are often approximately solvable by simulating
possible future action sequences. {\em Metalevel} decision procedures have been
developed for selecting {\em which} action sequences to simulate, based on
estimating the expected improvement in decision quality that would result from
any particular simulation; an example is the recent work on using bandit
algorithms to control Monte Carlo tree search in the game of Go. In this paper
we develop a theoretical basis for metalevel decisions in the statistical
framework of Bayesian {\em selection problems}, arguing (as others have done)
that this is more appropriate than the bandit framework. We derive a number of
basic results applicable to Monte Carlo selection problems, including the first
finite sampling bounds for optimal policies in certain cases; we also provide a
simple counterexample to the intuitive conjecture that an optimal policy will
necessarily reach a decision in all cases. We then derive heuristic
approximations in both Bayesian and distribution-free settings and demonstrate
their superiority to bandit-based heuristics in one-shot decision problems and
in Go.
|
1207.5926
|
Redundant Sudoku Rules
|
cs.AI
|
The rules of Sudoku are often specified using twenty seven
\texttt{all\_different} constraints, referred to as the {\em big} \mrules.
Using graphical proofs and exploratory logic programming, the following main
and new result is obtained: many subsets of six of these big \mrules are
redundant (i.e., they are entailed by the remaining twenty one \mrules), and
six is maximal (i.e., removing more than six \mrules is not possible while
maintaining equivalence). The corresponding result for binary inequality
constraints, referred to as the {\em small} \mrules, is stated as a conjecture.
|
1207.5990
|
File system on CRDT
|
cs.DC cs.DB
|
In this report we show how to manage a distributed hierarchical structure
representing a file system. This structure is optimistically replicated, each
user work on his local replica, and updates are sent to other replica. The
different replicas eventually observe same view of file systems. At this stage,
conflicts between updates are very common. We claim that conflict resolution
should rely as little as possible on users. In this report we propose a simple
and modular solution to resolve these problems and maintain data consistency.
|
1207.6033
|
Effective Retrieval of Resources in Folksonomies Using a New Tag
Similarity Measure
|
cs.IR cs.SI
|
Social (or folksonomic) tagging has become a very popular way to describe
content within Web 2.0 websites. However, as tags are informally defined,
continually changing, and ungoverned, it has often been criticised for
lowering, rather than increasing, the efficiency of searching. To address this
issue, a variety of approaches have been proposed that recommend users what
tags to use, both when labeling and when looking for resources. These
techniques work well in dense folksonomies, but they fail to do so when tag
usage exhibits a power law distribution, as it often happens in real-life
folksonomies. To tackle this issue, we propose an approach that induces the
creation of a dense folksonomy, in a fully automatic and transparent way: when
users label resources, an innovative tag similarity metric is deployed, so to
enrich the chosen tag set with related tags already present in the folksonomy.
The proposed metric, which represents the core of our approach, is based on the
mutual reinforcement principle. Our experimental evaluation proves that the
accuracy and coverage of searches guaranteed by our metric are higher than
those achieved by applying classical metrics.
|
1207.6037
|
Measuring Similarity in Large-scale Folksonomies
|
cs.IR cs.SI
|
Social (or folksonomic) tagging has become a very popular way to describe
content within Web 2.0 websites. Unlike taxonomies, which overimpose a
hierarchical categorisation of content, folksonomies enable end-users to freely
create and choose the categories (in this case, tags) that best describe some
content. However, as tags are informally defined, continually changing, and
ungoverned, social tagging has often been criticised for lowering, rather than
increasing, the efficiency of searching, due to the number of synonyms,
homonyms, polysemy, as well as the heterogeneity of users and the noise they
introduce. To address this issue, a variety of approaches have been proposed
that recommend users what tags to use, both when labelling and when looking for
resources.
As we illustrate in this paper, real world folksonomies are characterized by
power law distributions of tags, over which commonly used similarity metrics,
including the Jaccard coefficient and the cosine similarity, fail to compute.
We thus propose a novel metric, specifically developed to capture similarity in
large-scale folksonomies, that is based on a mutual reinforcement principle:
that is, two tags are deemed similar if they have been associated to similar
resources, and vice-versa two resources are deemed similar if they have been
labelled by similar tags. We offer an efficient realisation of this similarity
metric, and assess its quality experimentally, by comparing it against cosine
similarity, on three large-scale datasets, namely Bibsonomy, MovieLens and
CiteULike.
|
1207.6051
|
Composition of Modular Telemetry System with Interval Multiset Estimates
|
cs.SY cs.AI math.OC
|
The paper describes combinatorial synthesis approach with interval multset
estimates of system elements for modeling, analysis, design, and improvement of
a modular telemetry system. Morphological (modular) system design and
improvement are considered as composition of the telemetry system elements
(components) configuration. The solving process is based on Hierarchical
Morphological Multicriteria Design (HMMD): (i) multicriteria selection of
alternatives for system components, (ii) synthesis of the selected alternatives
into a resultant combination (while taking into account quality of the
alternatives above and their compatibility). Interval multiset estimates are
used for assessment of design alternatives for telemetry system elements. Two
additional systems problems are examined: (a) improvement of the obtained
solutions, (b) aggregation of the obtained solutions into a resultant system
configuration. The improvement and aggregation processes are based on multiple
choice problem with interval multiset estimates. Numerical examples for an
on-board telemetry subsystem illustrate the design and improvement processes.
|
1207.6052
|
Coding Delay Analysis of Dense and Chunked Network Codes over Line
Networks
|
cs.IT math.IT
|
In this paper, we analyze the coding delay and the average coding delay of
random linear network codes (a.k.a. dense codes) and chunked codes (CC), which
are an attractive alternative to dense codes due to their lower complexity,
over line networks with Bernoulli losses and deterministic regular or Poisson
transmissions. Our results, which include upper bounds on the delay and the
average delay, are (i) for dense codes, in some cases more general, and in some
other cases tighter, than the existing bounds, and provide a more clear picture
of the speed of convergence of dense codes to the (min-cut) capacity of line
networks; and (ii) the first of their kind for CC over networks with such
probabilistic traffics. In particular, these results demonstrate that a
stand-alone CC or a precoded CC provide a better tradeoff between the
computational complexity and the convergence speed to the network capacity over
the probabilistic traffics compared to arbitrary deterministic traffics which
have previously been studied in the literature.
|
1207.6053
|
Compressed Sensing off the Grid
|
cs.IT math.IT
|
We consider the problem of estimating the frequency components of a mixture
of s complex sinusoids from a random subset of n regularly spaced samples.
Unlike previous work in compressed sensing, the frequencies are not assumed to
lie on a grid, but can assume any values in the normalized frequency domain
[0,1]. We propose an atomic norm minimization approach to exactly recover the
unobserved samples. We reformulate this atomic norm minimization as an exact
semidefinite program. Even with this continuous dictionary, we show that most
sampling sets of size O(s log s log n) are sufficient to guarantee the exact
frequency estimation with high probability, provided the frequencies are well
separated. Numerical experiments are performed to illustrate the effectiveness
of the proposed method.
|
1207.6076
|
Equivalence of distance-based and RKHS-based statistics in hypothesis
testing
|
stat.ME cs.LG math.ST stat.ML stat.TH
|
We provide a unifying framework linking two classes of statistics used in
two-sample and independence testing: on the one hand, the energy distances and
distance covariances from the statistics literature; on the other, maximum mean
discrepancies (MMD), that is, distances between embeddings of distributions to
reproducing kernel Hilbert spaces (RKHS), as established in machine learning.
In the case where the energy distance is computed with a semimetric of negative
type, a positive definite kernel, termed distance kernel, may be defined such
that the MMD corresponds exactly to the energy distance. Conversely, for any
positive definite kernel, we can interpret the MMD as energy distance with
respect to some negative-type semimetric. This equivalence readily extends to
distance covariance using kernels on the product space. We determine the class
of probability distributions for which the test statistics are consistent
against all alternatives. Finally, we investigate the performance of the family
of distance kernels in two-sample and independence tests: we show in particular
that the energy distance most commonly employed in statistics is just one
member of a parametric family of kernels, and that other choices from this
family can yield more powerful tests.
|
1207.6083
|
Determinantal point processes for machine learning
|
stat.ML cs.IR cs.LG
|
Determinantal point processes (DPPs) are elegant probabilistic models of
repulsion that arise in quantum physics and random matrix theory. In contrast
to traditional structured models like Markov random fields, which become
intractable and hard to approximate in the presence of negative correlations,
DPPs offer efficient and exact algorithms for sampling, marginalization,
conditioning, and other inference tasks. We provide a gentle introduction to
DPPs, focusing on the intuitions, algorithms, and extensions that are most
relevant to the machine learning community, and show how DPPs can be applied to
real-world applications like finding diverse sets of high-quality search
results, building informative summaries by selecting diverse sentences from
documents, modeling non-overlapping human poses in images or video, and
automatically building timelines of important news stories.
|
1207.6084
|
Information Embedding on Actions
|
cs.IT math.IT
|
The problem of optimal actuation for channel and source coding was recently
formulated and solved in a number of relevant scenarios. In this class of
models, actions are taken at encoders or decoders, either to acquire side
information in an efficient way or to control or probe effectively the channel
state. In this paper, the problem of embedding information on the actions is
studied for both the source and the channel coding set-ups. In both cases, a
decoder is present that observes only a function of the actions taken by an
encoder or a decoder of an action-dependent point-to-point link. For the source
coding model, this decoder wishes to reconstruct a lossy version of the source
being transmitted over the point-to-point link, while for the channel coding
problem the decoder wishes to retrieve a portion of the message conveyed over
the link. For the problem of source coding with actions taken at the decoder, a
single letter characterization of the set of all achievable tuples of rate,
distortions at the two decoders and action cost is derived, under the
assumption that the mentioned decoder observes a function of the actions
non-causally, strictly causally or causally. A special case of the problem in
which the actions are taken by the encoder is also solved. A single-letter
characterization of the achievable capacity-cost region is then obtained for
the channel coding set-up with actions. Examples are provided that shed light
into the effect of information embedding on the actions for the
action-dependent source and channel coding problems.
|
1207.6087
|
Automated Dynamic Offset Applied to Cell Association
|
cs.GT cs.IT math.IT
|
In this paper, we develop a hierarchical Bayesian game framework for
automated dynamic offset selection. Users compete to maximize their throughput
by picking the best locally serving radio access network (RAN) with respect to
their own measurement, their demand and a partial statistical channel state
information (CSI) of other users. In particular, we investigate the properties
of a Stackelberg game, in which the base station is a player on its own. We
derive analytically the utilities related to the channel quality perceived by
users to obtain the equilibria. We study the Price of Anarchy (PoA) of such
system, where the PoA is the ratio of the social welfare attained when a
network planner chooses policies to maximize social welfare versus the social
welfare attained in Nash/Stackeleberg equilibrium when users choose their
policies strategically. We show by means of a Stackelberg formulation, how the
operator, by sending appropriate information about the state of the channel,
can configure a dynamic offset that optimizes its global utility while users
maximize their individual utilities. The proposed hierarchical decision
approach for wireless networks can reach a good trade-off between the global
network performance at the equilibrium and the requested amount of signaling.
Typically, it is shown that when the network goal is orthogonal to user's goal,
this can lead the users to a misleading association problem.
|
1207.6096
|
Accurate and Efficient Private Release of Datacubes and Contingency
Tables
|
cs.DB
|
A central problem in releasing aggregate information about sensitive data is
to do so accurately while providing a privacy guarantee on the output. Recent
work focuses on the class of linear queries, which include basic counting
queries, data cubes, and contingency tables. The goal is to maximize the
utility of their output, while giving a rigorous privacy guarantee. Most
results follow a common template: pick a "strategy" set of linear queries to
apply to the data, then use the noisy answers to these queries to reconstruct
the queries of interest. This entails either picking a strategy set that is
hoped to be good for the queries, or performing a costly search over the space
of all possible strategies.
In this paper, we propose a new approach that balances accuracy and
efficiency: we show how to improve the accuracy of a given query set by
answering some strategy queries more accurately than others. This leads to an
efficient optimal noise allocation for many popular strategies, including
wavelets, hierarchies, Fourier coefficients and more. For the important case of
marginal queries we show that this strictly improves on previous methods, both
analytically and empirically. Our results also extend to ensuring that the
returned query answers are consistent with an (unknown) data set at minimal
extra cost in terms of time and noise.
|
1207.6137
|
Degrees of Freedom of MIMO X Networks: Spatial Scale Invariance,
One-Sided Decomposability and Linear Feasibility
|
cs.IT math.IT
|
We show that an M X N user MIMO X network with A antennas at each node has
AMN/(M+N-1) degrees of freedom (DoF), thus resolving in this case a discrepancy
between the spatial scale invariance conjecture (scaling the number of antennas
at each node by a constant factor will scale the total DoF by the same factor)
and a decomposability property of overconstrained wireless networks. While the
best previously-known general DoF outer bound is consistent with the spatial
invariance conjecture, the best previously-known general DoF inner bound,
inspired by the K user MIMO interference channel, was based on the
decomposition of every transmitter and receiver into multiple single antenna
nodes, transforming the network into an AM X AN user SISO X network. While such
a decomposition is DoF optimal for the K user MIMO interference channel, a gap
remained between the best inner and outer bound for the MIMO X channel. Here we
close this gap with the new insight that the MIMO X network is only one-sided
decomposable, i.e., either all the transmitters or all the receivers (but not
both) can be decomposed by splitting multiple antenna nodes into multiple
single antenna nodes without loss of DoF. The result is extended to SIMO and
MISO X networks as well and in each case the DoF results satisfy the spatial
scale invariance property. In addition, the feasibility of linear interference
alignment is investigated based only on spatial beamforming without symbol
extensions. Similar to MIMO interference networks, we show that when the
problem is improper, it is infeasible.
|
1207.6146
|
Systematic DFT Frames: Principle, Eigenvalues Structure, and
Applications
|
cs.IT math.IT
|
Motivated by a host of recent applications requiring some amount of
redundancy, frames are becoming a standard tool in the signal processing
toolbox. In this paper, we study a specific class of frames, known as discrete
Fourier transform (DFT) codes, and introduce the notion of systematic frames
for this class. This is encouraged by a new application of frames, namely,
distributed source coding that uses DFT codes for compression. Studying their
extreme eigenvalues, we show that, unlike DFT frames, systematic DFT frames are
not necessarily tight. Then, we come up with conditions for which these frames
can be tight. In either case, the best and worst systematic frames are
established in the minimum mean-squared reconstruction error sense. Eigenvalues
of DFT frames and their subframes play a pivotal role in this work.
Particularly, we derive some bounds on the extreme eigenvalues DFT subframes
which are used to prove most of the results; these bounds are valuable
independently.
|
1207.6164
|
Customer Empowerment in Healthcare Organisations Through CRM 2.0: Survey
Results from Brunei Tracking a Future Path in E-Health Research
|
cs.CY cs.SI
|
Customer Relationship Management (CRM) with the Web technology provides
healthcare organizations the ability to broaden services beyond its usual
practices, and thus provides a particular advantageous environment to achieve
complex e-health goals. This paper discusses and demonstrates how a new
approach in CRM based on Web 2.0 namely CRM 2.0 will help customers to have
greater control in the sense of controlling the process of interaction
(empowerment) between healthcare organizations with its customers, and among
customers themselves. A survey was conducted to gather preliminary requirements
and expectations on empowerment in Brunei. The survey revealed that there is a
high demand for empowering customers in Brunei through the Web. Regardless of
the limitations of the survey, the general public has responded with a great
support for the capabilities of empowerment listed from the questionnaires. The
data were analyzed to provide initial ideas and recommendation to a future
direction on research for customers' empowerment in e-health services.
|
1207.6174
|
A++ Random Access for Two-way Relaying in Wireless Networks
|
cs.IT math.IT
|
Two-way relaying can significantly improve performance of next generation
wireless networks. However, due to its dependence on multi-node cooperation and
transmission coordination, applying this technique to a wireless network in an
effective and scalable manner poses a challenging problem. To tackle this
problem without relying on complicated scheduling or network optimization
algorithms, we propose a scalable random access scheme that takes measures in
both the physical layer and the medium access control layer. Specifically, we
propose a two-way relaying technique that supports fully asynchronous
transmission and is modulation-independent. It also assumes no priori knowledge
of channel conditions. On the top of this new physical layer technique, a
random access MAC protocol is designed to dynamically form two-way relaying
cooperation in a wireless network. To evaluate the scalable random access
scheme, both theoretical analysis and simulations are carried out. Performance
results illustrate that our scheme has achieved the goal of scalable two-way
relaying in a wireless network and significantly outperforms CSMA/CA protocol.
|
1207.6178
|
A Biased Review of Sociophysics
|
physics.soc-ph cs.SI
|
Various aspects of recent sociophysics research are shortly reviewed:
Schelling model as an example for lack of interdisciplinary cooperation,
opinion dynamics, combat, and citation statistics as an example for strong
interdisciplinarity.
|
1207.6180
|
A Unified Approach of Observability Analysis for Airborne SLAM
|
cs.RO
|
Observability is a key aspect of the state estimation problem of SLAM,
However, the dimension and variables of SLAM system might be changed with new
features, to which little attention is paid in the previous work. In this
paper, a unified approach of observability analysis for SLAM system is
provided, whether the dimension and variables of SLAM system are changed or
not, we can use this approach to analyze the local or total observability of
the SLAM system.
|
1207.6199
|
Achieving Approximate Soft Clustering in Data Streams
|
cs.DS cs.AI
|
In recent years, data streaming has gained prominence due to advances in
technologies that enable many applications to generate continuous flows of
data. This increases the need to develop algorithms that are able to
efficiently process data streams. Additionally, real-time requirements and
evolving nature of data streams make stream mining problems, including
clustering, challenging research problems.
In this paper, we propose a one-pass streaming soft clustering (membership of
a point in a cluster is described by a distribution) algorithm which
approximates the "soft" version of the k-means objective function. Soft
clustering has applications in various aspects of databases and machine
learning including density estimation and learning mixture models. We first
achieve a simple pseudo-approximation in terms of the "hard" k-means algorithm,
where the algorithm is allowed to output more than $k$ centers. We convert this
batch algorithm to a streaming one (using an extension of the k-means++
algorithm recently proposed) in the "cash register" model. We also extend this
algorithm when the clustering is done over a moving window in the data stream.
|
1207.6202
|
Sum-Rate Optimization in a Two-Way Relay Network with Buffering
|
cs.IT math.IT
|
A Relay Station (RS) uses a buffer to store and process the received data
packets before forwarding them. Recently, the buffer has been exploited in
one-way relaying to opportunistically schedule the two different links
according to their channel quality. The intuition is that, if the channel to
the destination is poor, then RS stores more data from the source, in order to
use it when the channel to the destination is good. We apply this intuition to
the case of half-duplex two-way relaying, where the interactions among the
buffers and the links become more complex. We investigate the sum-rate
maximization problem in the Time Division Broadcast (TDBC): the users send
signals to the RS in different time slots, the RS decodes and stores messages
in the buffers. For downlink transmission, the RS re-encodes and sends using
the optimal broadcast strategy. The operation in each time slot is not
determined in advance, but depends on the channel state information (CSI). We
derive the decision function for adaptive link selection with respect to CSI
using the Karush-Kuhn-Tucker (KKT) conditions. The thresholds of the decision
function are obtained under Rayleigh fading channel conditions. The numerical
results show that the sum-rate of the adaptive link selection protocol with
buffering is significantly larger compared to the reference protocol with fixed
transmission schedule.
|
1207.6224
|
Evolving knowledge through negotiation
|
cs.AI cs.HC
|
Semantic web information is at the extremities of long pipelines held by
human beings. They are at the origin of information and they will consume it
either explicitly because the information will be delivered to them in a
readable way, or implicitly because the computer processes consuming this
information will affect them. Computers are particularly capable of dealing
with information the way it is provided to them. However, people may assign to
the information they provide a narrower meaning than semantic technologies may
consider. This is typically what happens when people do not think their
assertions as ambiguous. Model theory, used to provide semantics to the
information on the semantic web, is particularly apt at preserving ambiguity
and delivering it to the other side of the pipeline. Indeed, it preserves as
much interpretations as possible. This quality for reasoning efficiency,
becomes a deficiency for accurate communication and meaning preservation.
Overcoming it may require either interactive feedback or preservation of the
source context. Work from social science and humanities may help solving this
particular problem.
|
1207.6231
|
Touchalytics: On the Applicability of Touchscreen Input as a Behavioral
Biometric for Continuous Authentication
|
cs.CR cs.LG
|
We investigate whether a classifier can continuously authenticate users based
on the way they interact with the touchscreen of a smart phone. We propose a
set of 30 behavioral touch features that can be extracted from raw touchscreen
logs and demonstrate that different users populate distinct subspaces of this
feature space. In a systematic experiment designed to test how this behavioral
pattern exhibits consistency over time, we collected touch data from users
interacting with a smart phone using basic navigation maneuvers, i.e., up-down
and left-right scrolling. We propose a classification framework that learns the
touch behavior of a user during an enrollment phase and is able to accept or
reject the current user by monitoring interaction with the touch screen. The
classifier achieves a median equal error rate of 0% for intra-session
authentication, 2%-3% for inter-session authentication and below 4% when the
authentication test was carried out one week after the enrollment phase. While
our experimental findings disqualify this method as a standalone authentication
mechanism for long-term authentication, it could be implemented as a means to
extend screen-lock time or as a part of a multi-modal biometric authentication
system.
|
1207.6253
|
On When and How to use SAT to Mine Frequent Itemsets
|
cs.AI cs.DB cs.LG
|
A new stream of research was born in the last decade with the goal of mining
itemsets of interest using Constraint Programming (CP). This has promoted a
natural way to combine complex constraints in a highly flexible manner.
Although CP state-of-the-art solutions formulate the task using Boolean
variables, the few attempts to adopt propositional Satisfiability (SAT)
provided an unsatisfactory performance. This work deepens the study on when and
how to use SAT for the frequent itemset mining (FIM) problem by defining
different encodings with multiple task-driven enumeration options and search
strategies. Although for the majority of the scenarios SAT-based solutions
appear to be non-competitive with CP peers, results show a variety of
interesting cases where SAT encodings are the best option.
|
1207.6255
|
Power Control for Two User Cooperative OFDMA Channels
|
cs.IT math.IT
|
For a two user cooperative orthogonal frequency division multiple access
(OFDMA) system with full channel state information (CSI), we obtain the optimal
power allocation (PA) policies which maximize the rate region achievable by a
channel adaptive implementation of inter-subchannel block Markov superposition
encoding (BMSE), used in conjunction with backwards decoding. We provide the
optimality conditions that need to be satisfied by the powers associated with
the users' codewords and derive the closed form expressions for the optimal
powers. We propose two algorithms that can be used to optimize the powers to
achieve any desired rate pair on the rate region boundary: a projected
subgradient algorithm, and an iterative waterfilling-like algorithm based on
Karush-Kuhn-Tucker (KKT) conditions for optimality, which operates one user at
a time and converges much faster. We observe that, utilization of power control
to take advantage of the diversity offered by the cooperative OFDMA system, not
only leads to a remarkable improvement in achievable rates, but also may help
determine how the subchannels have to be instantaneously allocated to various
tasks in cooperation.
|
1207.6269
|
Shaping Communities out of Triangles
|
cs.SI physics.soc-ph
|
Community detection has arisen as one of the most relevant topics in the
field of graph data mining due to its importance in many fields such as
biology, social networks or network traffic analysis. The metrics proposed to
shape communities are generic and follow two approaches: maximizing the
internal density of such communities or reducing the connectivity of the
internal vertices with those outside the community. However, these metrics take
the edges as a set and do not consider the internal layout of the edges in the
community. We define a set of properties oriented to social networks that
ensure that communities are cohesive, structured and well defined. Then, we
propose the Weighted Community Clustering (WCC), which is a community metric
based on triangles. We proof that analyzing communities by triangles gives
communities that fulfill the listed set of properties, in contrast to previous
metrics. Finally, we experimentally show that WCC correctly captures the
concept of community in social networks using real and syntethic datasets, and
compare statistically some of the most relevant community detection algorithms
in the state of the art.
|
1207.6282
|
Using Community Structure for Complex Network Layout
|
physics.soc-ph cs.SI
|
We present a new layout algorithm for complex networks that combines a
multi-scale approach for community detection with a standard force-directed
design. Since community detection is computationally cheap, we can exploit the
multi-scale approach to generate network configurations with close-to-minimal
energy very fast. As a further asset, we can use the knowledge of the community
structure to facilitate the interpretation of large networks, for example the
network defined by protein-protein interactions.
|
1207.6313
|
A CLT on the SNR of Diagonally Loaded MVDR Filters
|
cs.IT math.IT math.ST stat.TH
|
This paper studies the fluctuations of the signal-to-noise ratio (SNR) of
minimum variance distorsionless response (MVDR) filters implementing diagonal
loading in the estimation of the covariance matrix. Previous results in the
signal processing literature are generalized and extended by considering both
spatially as well as temporarily correlated samples. Specifically, a central
limit theorem (CLT) is established for the fluctuations of the SNR of the
diagonally loaded MVDR filter, under both supervised and unsupervised training
settings in adaptive filtering applications. Our second-order analysis is based
on the Nash-Poincar\'e inequality and the integration by parts formula for
Gaussian functionals, as well as classical tools from statistical asymptotic
theory. Numerical evaluations validating the accuracy of the CLT confirm the
asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
|
1207.6329
|
Computing optimal k-regret minimizing sets with top-k depth contours
|
cs.DB cs.CG
|
Regret minimizing sets are a very recent approach to representing a dataset D
with a small subset S of representative tuples. The set S is chosen such that
executing any top-1 query on S rather than D is minimally perceptible to any
user. To discover an optimal regret minimizing set of a predetermined
cardinality is conjectured to be a hard problem. In this paper, we generalize
the problem to that of finding an optimal k$regret minimizing set, wherein the
difference is computed over top-k queries, rather than top-1 queries.
We adapt known geometric ideas of top-k depth contours and the reverse top-k
problem. We show that the depth contours themselves offer a means of comparing
the optimality of regret minimizing sets using L2 distance. We design an
O(cn^2) plane sweep algorithm for two dimensions to compute an optimal regret
minimizing set of cardinality c. For higher dimensions, we introduce a greedy
algorithm that progresses towards increasingly optimal solutions by exploiting
the transitivity of L2 distance.
|
1207.6353
|
PETRELS: Parallel Subspace Estimation and Tracking by Recursive Least
Squares from Partial Observations
|
stat.ME cs.IT math.IT
|
Many real world data sets exhibit an embedding of low-dimensional structure
in a high-dimensional manifold. Examples include images, videos and internet
traffic data. It is of great significance to reduce the storage requirements
and computational complexity when the data dimension is high. Therefore we
consider the problem of reconstructing a data stream from a small subset of its
entries, where the data is assumed to lie in a low-dimensional linear subspace,
possibly corrupted by noise. We further consider tracking the change of the
underlying subspace, which can be applied to applications such as video
denoising, network monitoring and anomaly detection. Our problem can be viewed
as a sequential low-rank matrix completion problem in which the subspace is
learned in an on-line fashion. The proposed algorithm, dubbed Parallel
Estimation and Tracking by REcursive Least Squares (PETRELS), first identifies
the underlying low-dimensional subspace via a recursive procedure for each row
of the subspace matrix in parallel with discounting for previous observations,
and then reconstructs the missing entries via least-squares estimation if
required. Numerical examples are provided for direction-of-arrival estimation
and matrix completion, comparing PETRELS with state of the art batch
algorithms.
|
1207.6355
|
The Entropy Power Inequality and Mrs. Gerber's Lemma for Abelian Groups
of Order 2^n
|
cs.IT math.CO math.GR math.IT math.PR
|
Shannon's Entropy Power Inequality can be viewed as characterizing the
minimum differential entropy achievable by the sum of two independent random
variables with fixed differential entropies. The entropy power inequality has
played a key role in resolving a number of problems in information theory. It
is therefore interesting to examine the existence of a similar inequality for
discrete random variables. In this paper we obtain an entropy power inequality
for random variables taking values in an abelian group of order 2^n, i.e. for
such a group G we explicitly characterize the function f_G(x,y) giving the
minimum entropy of the sum of two independent G-valued random variables with
respective entropies x and y. Random variables achieving the extremum in this
inequality are thus the analogs of Gaussians in this case, and these are also
determined. It turns out that f_G(x,y) is convex in x for fixed y and, by
symmetry, convex in y for fixed x. This is a generalization to abelian groups
of order 2^n of the result known as Mrs. Gerber's Lemma.
|
1207.6379
|
Identifying Users From Their Rating Patterns
|
cs.IR cs.LG stat.ML
|
This paper reports on our analysis of the 2011 CAMRa Challenge dataset (Track
2) for context-aware movie recommendation systems. The train dataset comprises
4,536,891 ratings provided by 171,670 users on 23,974$ movies, as well as the
household groupings of a subset of the users. The test dataset comprises 5,450
ratings for which the user label is missing, but the household label is
provided. The challenge required to identify the user labels for the ratings in
the test set. Our main finding is that temporal information (time labels of the
ratings) is significantly more useful for achieving this objective than the
user preferences (the actual ratings). Using a model that leverages on this
fact, we are able to identify users within a known household with an accuracy
of approximately 96% (i.e. misclassification rate around 4%).
|
1207.6380
|
About the Linear Complexity of Ding-Hellesth Generalized Cyclotomic
Binary Sequences of Any Period
|
cs.IT cs.CR math.IT
|
We defined sufficient conditions for designing Ding-Helleseth sequences with
arbitrary period and high linear complexity for generalized cyclotomies. Also
we discuss the method of computing the linear complexity of Ding-Helleseth
sequences in the general case.
|
1207.6416
|
The Social Climbing Game
|
physics.soc-ph cs.SI
|
The structure of a society depends, to some extent, on the incentives of the
individuals they are composed of. We study a stylized model of this interplay,
that suggests that the more individuals aim at climbing the social hierarchy,
the more society's hierarchy gets strong. Such a dependence is sharp, in the
sense that a persistent hierarchical order emerges abruptly when the preference
for social status gets larger than a threshold. This phase transition has its
origin in the fact that the presence of a well defined hierarchy allows agents
to climb it, thus reinforcing it, whereas in a "disordered" society it is
harder for agents to find out whom they should connect to in order to become
more central. Interestingly, a social order emerges when agents strive harder
to climb society and it results in a state of reduced social mobility, as a
consequence of ergodicity breaking, where climbing is more difficult.
|
1207.6430
|
Optimal Data Collection For Informative Rankings Expose Well-Connected
Graphs
|
stat.ML cs.LG stat.AP
|
Given a graph where vertices represent alternatives and arcs represent
pairwise comparison data, the statistical ranking problem is to find a
potential function, defined on the vertices, such that the gradient of the
potential function agrees with the pairwise comparisons. Our goal in this paper
is to develop a method for collecting data for which the least squares
estimator for the ranking problem has maximal Fisher information. Our approach,
based on experimental design, is to view data collection as a bi-level
optimization problem where the inner problem is the ranking problem and the
outer problem is to identify data which maximizes the informativeness of the
ranking. Under certain assumptions, the data collection problem decouples,
reducing to a problem of finding multigraphs with large algebraic connectivity.
This reduction of the data collection problem to graph-theoretic questions is
one of the primary contributions of this work. As an application, we study the
Yahoo! Movie user rating dataset and demonstrate that the addition of a small
number of well-chosen pairwise comparisons can significantly increase the
Fisher informativeness of the ranking. As another application, we study the
2011-12 NCAA football schedule and propose schedules with the same number of
games which are significantly more informative. Using spectral clustering
methods to identify highly-connected communities within the division, we argue
that the NCAA could improve its notoriously poor rankings by simply scheduling
more out-of-conference games.
|
1207.6435
|
Capacity of optical reading, Part 1: Reading boundless error-free bits
using a single photon
|
quant-ph cs.IT math.IT
|
We show that nature imposes no fundamental upper limit to the number of
information bits per expended photon that can, in principle, be read reliably
when classical data is encoded in a medium that can only passively modulate the
amplitude and phase of the probe light. We show that with a coherent-state
(laser) source, an on-off (amplitude-modulation) pixel encoding, and
shot-noise-limited direct detection (an overly-optimistic model for commercial
CD/DVD drives), the highest photon information efficiency achievable in
principle is about 0.5 bit per transmitted photon. We then show that a
coherent-state probe can read unlimited bits per photon when the receiver is
allowed to make joint (inseparable) measurements on the reflected light from a
large block of phase-modulated memory pixels. Finally, we show an example of a
spatially-entangled non-classical light probe and a receiver
design---constructable using a single-photon source, beam splitters, and
single-photon detectors---that can in principle read any number of error-free
bits of information. The probe is a single photon prepared in a uniform
coherent superposition of multiple orthogonal spatial modes, i.e., a W-state.
The code, target, and joint-detection receiver complexity required by a
coherent-state transmitter to achieve comparable photon efficiency performance
is shown to be much higher in comparison to that required by the W-state
transceiver.
|
1207.6438
|
Product Superposition for MIMO Broadcast Channels
|
cs.IT math.IT
|
This paper considers the multiantenna broadcast channel without transmit-side
channel state information (CSIT). For this channel, it has been known that when
all receivers have channel state information (CSIR), the degrees of freedom
(DoF) cannot be improved beyond what is available via TDMA. The same is true if
none of the receivers possess CSIR. This paper shows that an entirely new
scenario emerges when receivers have unequal CSIR. In particular, orthogonal
transmission is no longer DoF-optimal when one receiver has CSIR and the other
does not. A multiplicative superposition is proposed for this scenario and
shown to attain the optimal degrees of freedom under a wide set of antenna
configurations and coherence lengths. Two signaling schemes are constructed
based on the multiplicative superposition. In the first method, the messages of
the two receivers are carried in the row and column spaces of a matrix,
respectively. This method works better than orthogonal transmission while
reception at each receiver is still interference free. The second method uses
coherent signaling for the receiver with CSIR, and Grassmannian signaling for
the receiver without CSIR. This second method requires interference
cancellation at the receiver with CSIR, but achieves higher DoF than the first
method.
|
1207.6445
|
Profit Incentive In A Secondary Spectrum Market: A Contract Design
Approach
|
cs.CE cs.GT
|
In this paper we formulate a contract design problem where a primary license
holder wishes to profit from its excess spectrum capacity by selling it to
potential secondary users/buyers. It needs to determine how to optimally price
the excess spectrum so as to maximize its profit, knowing that this excess
capacity is stochastic in nature, does not come with exclusive access, and
cannot provide deterministic service guarantees to a buyer. At the same time,
buyers are of different {\em types}, characterized by different communication
needs, tolerance for the channel uncertainty, and so on, all of which a buyer's
private information. The license holder must then try to design different
contracts catered to different types of buyers in order to maximize its profit.
We address this problem by adopting as a reference a traditional spectrum
market where the buyer can purchase exclusive access with fixed/deterministic
guarantees. We fully characterize the optimal solution in the cases where there
is a single buyer type, and when multiple types of buyers share the same, known
channel condition as a result of the primary user activity. In the most general
case we construct an algorithm that generates a set of contracts in a
computationally efficient manner, and show that this set is optimal when the
buyer types satisfy a monotonicity condition.
|
1207.6448
|
Query Optimization Over Web Services Using A Mixed Approach
|
cs.DB cs.IR
|
A Web Service Management System (WSMS) can be well-thought-out as a
consistent and a secure way of managing the web services. Web Service has
become a quintessential part of the web world, managing and sharing the
resources of the business it is associated with. In this paper, we focus on the
query optimization aspect of handling the "natural language" query, queried to
the WSMS. The map-select-composite operations are piloted to select specific
web services. The main aftermath of our research is ensued in an algorithm
which uses cost-based as well as heuristic based approach for query
optimization. Query plan is formed after cost-based evaluation and using Greedy
algorithm. The heuristic based approach further optimizes the evaluation plan.
This scheme not only guarantees an optimal solution, which has a minimum
diversion from the ideal solution, but also saves time which is otherwise
utilized in generating various query plans using many mathematical models and
then evaluating each one.
|
1207.6465
|
Sketch \star-metric: Comparing Data Streams via Sketching
|
cs.DS cs.DM cs.IT math.IT
|
In this paper, we consider the problem of estimating the distance between any
two large data streams in small- space constraint. This problem is of utmost
importance in data intensive monitoring applications where input streams are
generated rapidly. These streams need to be processed on the fly and accurately
to quickly determine any deviance from nominal behavior. We present a new
metric, the Sketch \star-metric, which allows to define a distance between
updatable summaries (or sketches) of large data streams. An important feature
of the Sketch \star-metric is that, given a measure on the entire initial data
streams, the Sketch \star-metric preserves the axioms of the latter measure on
the sketch (such as the non-negativity, the identity, the symmetry, the
triangle inequality but also specific properties of the f-divergence).
Extensive experiments conducted on both synthetic traces and real data allow us
to validate the robustness and accuracy of the Sketch \star-metric.
|
1207.6475
|
Distributed team formation in multi-agent systems: stability and
approximation
|
cs.MA cs.SI
|
We consider a scenario in which leaders are required to recruit teams of
followers. Each leader cannot recruit all followers, but interaction is
constrained according to a bipartite network. The objective for each leader is
to reach a state of local stability in which it controls a team whose size is
equal to a given constraint. We focus on distributed strategies, in which
agents have only local information of the network topology and propose a
distributed algorithm in which leaders and followers act according to simple
local rules. The performance of the algorithm is analyzed with respect to the
convergence to a stable solution.
Our results are as follows. For any network, the proposed algorithm is shown
to converge to an approximate stable solution in polynomial time, namely the
leaders quickly form teams in which the total number of additional followers
required to satisfy all team size constraints is an arbitrarily small fraction
of the entire population. In contrast, for general graphs there can be an
exponential time gap between convergence to an approximate solution and to a
stable solution.
|
1207.6512
|
CSS-like Constructions of Asymmetric Quantum Codes
|
cs.IT math.IT
|
Asymmetric quantum error-correcting codes (AQCs) may offer some advantage
over their symmetric counterparts by providing better error-correction for the
more frequent error types. The well-known CSS construction of $q$-ary AQCs is
extended by removing the $\F_{q}$-linearity requirement as well as the
limitation on the type of inner product used. The proposed constructions are
called CSS-like constructions and utilize pairs of nested subfield linear codes
under one of the Euclidean, trace Euclidean, Hermitian, and trace Hermitian
inner products.
After establishing some theoretical foundations, best-performing CSS-like
AQCs are constructed. Combining some constructions of nested pairs of classical
codes and linear programming, many optimal and good pure $q$-ary CSS-like codes
for $q \in {2,3,4,5,7,8,9}$ up to reasonable lengths are found. In many
instances, removing the $\F_{q}$-linearity and using alternative inner products
give us pure AQCs with improved parameters than relying solely on the standard
CSS construction.
|
1207.6514
|
Earthquake Scenario Reduction by Symmetry Reasoning
|
cs.AI
|
A recently identified problem is that of finding an optimal investment plan
for a transportation network, given that a disaster such as an earthquake may
destroy links in the network. The aim is to strengthen key links to preserve
the expected network connectivity. A network based on the Istanbul highway
system has thirty links and therefore a billion scenarios, but it has been
estimated that sampling a million scenarios gives reasonable accuracy. In this
paper we use symmetry reasoning to reduce the number of scenarios to a much
smaller number, making sampling unnecessary. This result can be used to
facilitate metaheuristic and exact approaches to the problem.
|
1207.6540
|
Achieving Net Feedback Gain in the Butterfly Network with a Full-Duplex
Bidirectional Relay
|
cs.IT math.IT
|
A symmetric butterfly network (BFN) with a full-duplex relay operating in a
bi-directional fashion for feedback is considered. This network is relevant for
a variety of wireless networks, including cellular systems dealing with
cell-edge users. Upper bounds on the capacity region of the general memoryless
BFN with feedback are derived based on cut-set and cooperation arguments and
then specialized to the linear deterministic BFN with really-source feedback.
It is shown that the upper bounds are achievable using combinations of the
compute-forward strategy and the classical decode-and-forward strategy, thus
fully characterizing the capacity region. It is shown that net rate gains are
possible in certain parameter regimes.
|
1207.6560
|
Covering Rough Sets From a Topological Point of View
|
cs.DB
|
Covering-based rough set theory is an extension to classical rough set. The
main purpose of this paper is to study covering rough sets from a topological
point of view. The relationship among upper approximations based on topological
spaces are explored.
|
1207.6563
|
Hidden information and regularities of information dynamics IIR
|
nlin.AO cs.IT math.IT
|
Part 1 has studied the conversion of observed random process with its hidden
information to related dynamic process, applying entropy functional measure
(EF) of the random process and path functional information measure (IPF) of the
dynamic conversion process. The variation principle, satisfying the EF-IPF
equivalence along shortest path-trajectory, leads to information dual
complementary maxmin-minimax law, which creates mechanism of arising
information regularities from stochastic process(Lerner 2012). This Part 2
studies mechanism of cooperation of the observed multiple hidden information
process, which follows from the law and produces cooperative structures,
concurrently assembling in hierarchical information network (IN) and generating
the IN digital genetic code. We analyze the interactive information
contributions, information quality, inner time scale, information geometry of
the cooperative structures, evaluate curvature of these geometrical forms and
their cooperative information complexities. The law information mechanisms
operate in information observer. The observer, acting according the law,
selects random information, converts it in information dynamics, builds the IN
cooperatives, which generate the genetic code.
|
1207.6588
|
Dynamical Models Explaining Social Balance and Evolution of Cooperation
|
physics.soc-ph cs.SI nlin.AO
|
Social networks with positive and negative links often split into two
antagonistic factions. Examples of such a split abound: revolutionaries versus
an old regime, Republicans versus Democrats, Axis versus Allies during the
second world war, or the Western versus the Eastern bloc during the Cold War.
Although this structure, known as social balance, is well understood, it is not
clear how such factions emerge. An earlier model could explain the formation of
such factions if reputations were assumed to be symmetric. We show this is not
the case for non-symmetric reputations, and propose an alternative model which
(almost) always leads to social balance, thereby explaining the tendency of
social networks to split into two factions. In addition, the alternative model
may lead to cooperation when faced with defectors, contrary to the earlier
model. The difference between the two models may be understood in terms of the
underlying gossiping mechanism: whereas the earlier model assumed that an
individual adjusts his opinion about somebody by gossiping about that person
with everybody in the network, we assume instead that the individual gossips
with that person about everybody. It turns out that the alternative model is
able to lead to cooperative behaviour, unlike the previous model.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.