| id | title | categories | abstract |
|---|---|---|---|
1112.0674
|
Analytical Evaluation of Fractional Frequency Reuse for Heterogeneous
Cellular Networks
|
cs.IT cs.NI math.IT
|
Interference management techniques are critical to the performance of
heterogeneous cellular networks, which will have dense and overlapping coverage
areas, and experience high levels of interference. Fractional frequency reuse
(FFR) is an attractive interference management technique due to its low
complexity and overhead, and significant coverage improvement for
low-percentile (cell-edge) users. Instead of relying on system simulations
based on deterministic access point locations, this paper instead proposes an
analytical model for evaluating Strict FFR and Soft Frequency Reuse (SFR)
deployments based on the spatial Poisson point process. Our results both
capture the non-uniformity of heterogeneous deployments and produce tractable
expressions which can be used for system design with Strict FFR and SFR. We
observe that the use of Strict FFR bands reserved for the users of each tier
with the lowest average SINR provides the highest gains in terms of coverage
and rate, while the use of SFR allows for more efficient use of shared spectrum
between the tiers, while still mitigating much of the interference.
Additionally, in the context of multi-tier networks with closed access in some
tiers, the proposed framework shows the impact of cross-tier interference on
closed access FFR, and informs the selection of key FFR parameters in open
access.
|
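The abstract above replaces deterministic system simulations with a spatial Poisson point process (PPP) model of base station locations. As a concrete illustration of the baseline model class being analyzed, the sketch below Monte Carlo-estimates the downlink coverage probability of a typical user under a homogeneous PPP with nearest-base-station association. All function and parameter names are my own illustrative choices; this is not the paper's closed-form FFR/SFR analysis, only a numerical check of the kind of quantity it derives.

```python
import math
import random

def coverage_probability(lam=1e-4, sinr_thresh_db=0.0, alpha=4.0,
                         noise=0.0, region=2000.0, trials=500, seed=1):
    """Monte Carlo coverage estimate for a typical user under a homogeneous PPP.

    Base stations form a PPP of intensity `lam` (points per m^2) in a square
    of side `region`; the user at the origin attaches to the nearest station.
    Path loss is r^(-alpha) with no fading -- a deliberate simplification.
    """
    rng = random.Random(seed)
    thresh = 10 ** (sinr_thresh_db / 10.0)
    mean_pts = lam * region * region
    covered = 0
    for _ in range(trials):
        # Poisson point count approximated by a rounded Gaussian for speed.
        n = max(1, int(rng.gauss(mean_pts, math.sqrt(mean_pts))))
        pts = [(rng.uniform(-region / 2, region / 2),
                rng.uniform(-region / 2, region / 2)) for _ in range(n)]
        d = sorted(math.hypot(x, y) for x, y in pts)
        serving = max(d[0], 1.0)                 # nearest BS serves the user
        signal = serving ** (-alpha)
        interference = sum(max(r, 1.0) ** (-alpha) for r in d[1:])
        if signal / (interference + noise) >= thresh:
            covered += 1
    return covered / trials

p = coverage_probability()
```

With a 0 dB SINR threshold and alpha = 4, the estimate lands near the well-known interference-limited coverage level for nearest-BS association; frequency-reuse schemes such as Strict FFR improve on this baseline for cell-edge users.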
1112.0698
|
Machine Learning with Operational Costs
|
stat.ML cs.AI math.OC
|
This work proposes a way to align statistical modeling with decision making.
We provide a method that propagates the uncertainty in predictive modeling to
the uncertainty in operational cost, where operational cost is the amount spent
by the practitioner in solving the problem. The method allows us to explore the
range of operational costs associated with the set of reasonable statistical
models, so as to provide a useful way for practitioners to understand
uncertainty. To do this, the operational cost is cast as a regularization term
in a learning algorithm's objective function, allowing either an optimistic or
pessimistic view of possible costs, depending on the regularization parameter.
From another perspective, if we have prior knowledge about the operational
cost, for instance that it should be low, this knowledge can help to restrict
the hypothesis space, and can help with generalization. We provide a
theoretical generalization bound for this scenario. We also show that learning
with operational costs is related to robust optimization.
|
1112.0708
|
Information-Theoretically Optimal Compressed Sensing via Spatial
Coupling and Approximate Message Passing
|
cs.IT cond-mat.stat-mech math.IT math.ST stat.TH
|
We study the compressed sensing reconstruction problem for a broad class of
random, band-diagonal sensing matrices. This construction is inspired by the
idea of spatial coupling in coding theory. As demonstrated heuristically and
numerically by Krzakala et al. \cite{KrzakalaEtAl}, message passing algorithms
can effectively solve the reconstruction problem for spatially coupled
measurements with undersampling rates close to the fraction of non-zero
coordinates.
We use an approximate message passing (AMP) algorithm and analyze it through
the state evolution method. We give a rigorous proof that this approach is
successful as soon as the undersampling rate $\delta$ exceeds the (upper)
R\'enyi information dimension of the signal, $\uRenyi(p_X)$. More precisely,
for a sequence of signals of diverging dimension $n$ whose empirical
distribution converges to $p_X$, reconstruction is with high probability
successful from $\uRenyi(p_X)\, n+o(n)$ measurements taken according to a band
diagonal matrix.
For sparse signals, i.e., sequences of dimension $n$ and $k(n)$ non-zero
entries, this implies reconstruction from $k(n)+o(n)$ measurements. For
`discrete' signals, i.e., signals whose coordinates take a fixed finite set of
values, this implies reconstruction from $o(n)$ measurements. The result is
robust with respect to noise, does not apply uniquely to random signals, but
requires the knowledge of the empirical distribution of the signal $p_X$.
|
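To make the algorithm class in the abstract concrete, here is a minimal AMP iteration with a soft-threshold denoiser on a plain dense Gaussian sensing matrix. This is only a sketch: the paper's rigorous guarantees concern band-diagonal, spatially coupled matrices analyzed through state evolution, and the threshold rule below is an illustrative choice of mine, not the paper's construction.

```python
import numpy as np

def amp_soft_threshold(A, y, iters=30):
    """Approximate message passing (AMP) with soft thresholding for y = A x."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = np.linalg.norm(z) / np.sqrt(m)            # empirical noise level
        r = x + A.T @ z                                  # pseudo-data
        x_new = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)  # soft threshold
        # Onsager correction: (||x||_0 / m) * previous residual.
        z = y - A @ x_new + z * (np.count_nonzero(x_new) / m)
        x = x_new
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 100, 10                 # undersampling rate delta = m/n = 0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[:k] = 3.0 * rng.standard_normal(k)  # k-sparse signal
y = A @ x0
x_hat = amp_soft_threshold(A, y)
```

At this sparsity level (k/m = 0.1, well inside the recoverable regime at delta = 0.5), the iteration converges to the sparse solution; spatial coupling is what pushes the achievable undersampling rate down to the information dimension.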
1112.0711
|
Quantization and Bit Allocation for Channel State Feedback for
Relay-Assisted Wireless Networks
|
cs.IT math.IT
|
This paper investigates quantization of channel state information (CSI) and
bit allocation across wireless links in a multi-source, single-relay
cooperative cellular network. Our goal is to minimize the loss in performance,
measured as the achievable sum rate, due to limited-rate quantization of CSI.
We develop both a channel quantization scheme and allocation of limited
feedback bits to the various wireless links. We assume that the quantized CSI
is reported to a central node responsible for optimal resource allocation. We
first derive tight lower and upper bounds on the difference in rates between
the perfect CSI and quantized CSI scenarios. These bounds are then used to
derive an effective quantizer for arbitrary channel distributions. Next, we use
these bounds to optimize the allocation of bits across the links subject to a
budget on total available quantization bits. In particular, we show that the
optimal bit allocation algorithm allocates more bits to those links in the
network that contribute the most to the sum-rate. Finally, the paper
investigates the choice of the central node; we show that this choice plays a
significant role in CSI bits required to achieve a target performance level.
|
1112.0721
|
Performance Analysis of Hybrid Relay Selection in Cooperative Wireless
Systems
|
cs.IT math.IT
|
The hybrid relay selection (HRS) scheme, which adaptively chooses
amplify-and-forward (AF) and decode-and-forward (DF) protocols, is very
effective to achieve robust performance in wireless networks. This paper
analyzes the frame error rate (FER) of the HRS scheme in general cooperative
wireless networks without and with utilizing error control coding at the source
node. We first develop an improved signal-to-noise ratio (SNR) threshold-based
FER approximation model. Then, we derive an analytical average FER expression
as well as an asymptotic expression at high SNR for the HRS scheme and
generalize to other relaying schemes. Simulation results are in excellent
agreement with the theoretical analysis, which validates the derived FER
expressions.
|
1112.0725
|
Approximate ML Decision Feedback Block Equalizer for Doubly Selective
Fading Channels
|
cs.IT math.IT
|
In order to effectively suppress intersymbol interference (ISI) at low
complexity, we propose in this paper an approximate maximum likelihood (ML)
decision feedback block equalizer (A-ML-DFBE) for doubly selective
(frequency-selective, time-selective) fading channels. The proposed equalizer
design makes efficient use of the special time-domain representation of the
multipath channels through a matched filter, a sliding window, a Gaussian
approximation, and a decision feedback. The A-ML-DFBE has the following
features: 1) It achieves performance close to maximum likelihood sequence
estimation (MLSE), and significantly outperforms the minimum mean square error
(MMSE) based detectors; 2) It has substantially lower complexity than the
conventional equalizers; 3) It easily realizes the complexity and performance
tradeoff by adjusting the length of the sliding window; 4) It has a simple and
fixed-length feedback filter. The symbol error rate (SER) is derived to
characterize the behaviour of the A-ML-DFBE, and it can also be used to find
the key parameters of the proposed equalizer. In addition, we further prove
that the A-ML-DFBE obtains full multipath diversity.
|
1112.0736
|
Measurement-induced nonlocality based on the relative entropy
|
quant-ph cs.IT math.IT
|
We quantify the measurement-induced nonlocality [Luo and Fu, Phys. Rev. Lett.
106, 120401 (2011)] from the perspective of the relative entropy. This
quantification leads to an operational interpretation for the
measurement-induced nonlocality, namely, it is the maximal entropy increase
after the locally invariant measurements. The relative entropy of nonlocality
is upper bounded by the entropy of the measured subsystem. We establish a
relationship between the relative entropy of nonlocality and the geometric
nonlocality based on the Hilbert-Schmidt norm, and show that it is equal to
the maximal distillable entanglement. Several trade-off relations are obtained
for tripartite pure states. We also give explicit expressions for the relative
entropy of nonlocality for Bell-diagonal states.
|
1112.0765
|
Spectral Design of Dynamic Networks via Local Operations
|
math.OC cs.DM cs.MA cs.SI physics.soc-ph
|
Motivated by the relationship between the eigenvalue spectrum of the
Laplacian matrix of a network and the behavior of dynamical processes evolving
in it, we propose a distributed iterative algorithm in which a group of $n$
autonomous agents self-organize the structure of their communication network in
order to control the network's eigenvalue spectrum. In our algorithm, we assume
that each agent has access only to a local (myopic) view of the network around
it. In each iteration, agents in the network perform a decentralized decision
process to determine the edge addition/deletion that minimizes a distance
function defined in the space of eigenvalue spectra. This spectral distance
presents interesting theoretical properties that allow an efficient distributed
implementation of the decision process. Our iterative algorithm is stable by
construction, i.e., locally optimizes the network's eigenvalue spectrum, and is
shown to perform extremely well in practice. We illustrate our results with
nontrivial simulations in which we design networks matching the spectral
properties of complex networks, such as small-world and power-law networks.
|
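The decision step described above, choosing the edge addition or deletion that minimizes a distance between Laplacian eigenvalue spectra, can be sketched in a few lines. The version below is a centralized, global search over all single-edge toggles, standing in for the paper's decentralized myopic process; the helper names are mine.

```python
import itertools
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the graph Laplacian L = D - A, sorted ascending."""
    deg = np.diag(adj.sum(axis=1))
    return np.sort(np.linalg.eigvalsh(deg - adj))

def best_local_edit(adj, target_spectrum):
    """Try every single edge toggle; return (distance, edit) minimizing the
    Euclidean distance between Laplacian spectra.  Illustrative and
    centralized -- the paper's agents use only local (myopic) views."""
    n = adj.shape[0]
    best = (float(np.linalg.norm(laplacian_spectrum(adj) - target_spectrum)),
            None)
    for i, j in itertools.combinations(range(n), 2):
        trial = adj.copy()
        trial[i, j] = trial[j, i] = 1 - trial[i, j]   # toggle edge (i, j)
        d = float(np.linalg.norm(laplacian_spectrum(trial) - target_spectrum))
        if d < best[0]:
            best = (d, (i, j))
    return best

# Example: nudge a 4-node path graph toward the complete graph's spectrum.
path = np.zeros((4, 4))
for a in range(3):
    path[a, a + 1] = path[a + 1, a] = 1.0
target = laplacian_spectrum(np.ones((4, 4)) - np.eye(4))
dist, edit = best_local_edit(path, target)
```

Iterating this greedy step yields a spectrum-shaping dynamic; each iteration cannot increase the spectral distance, which is the stability-by-construction property the abstract mentions.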
1112.0767
|
Revenue Prediction of Local Event using Mathematical Model of Hit
Phenomena
|
physics.soc-ph cs.SI
|
A theoretical approach to investigating human-human interaction in society is
presented, using a many-body theory that includes human-human interaction.
Advertisement is treated as an external force. The word-of-mouth (WOM) effect
is included as a two-body interaction between humans, and the rumor effect as a
three-body interaction. The parameters defining the strength of the human
interactions are assumed to be constant. The calculated results explain well
two local events in Japan, the "Mizuki-Shigeru Road" in Sakaiminato and the
sculpture festival at Tottori.
|
1112.0789
|
On the error of estimating the sparsest solution of underdetermined
linear systems
|
cs.IT math.IT
|
Let A be an n by m matrix with m>n, and suppose that the underdetermined
linear system As=x admits a sparse solution s0 for which ||s0||_0 < 1/2
spark(A). Such a sparse solution is unique due to a well-known uniqueness
theorem. Suppose now that we somehow have a solution s_hat as an estimate of
s0, and suppose that s_hat is only `approximately sparse', that is, many of its
components are very small and nearly zero, but not mathematically equal to
zero. Is such a solution necessarily close to the true sparsest solution? More
generally, is it possible to construct an upper bound on the estimation error
||s_hat-s0||_2 without knowing s0? The answer is positive, and in this paper we
construct such a bound based on minimal singular values of submatrices of A. We
will also state a tight bound, which is more complicated, but besides being
tight, enables us to study the case of random dictionaries and obtain
probabilistic upper bounds. We will also study the noisy case, that is, where
x=As+n. Moreover, we will see that as ||s0||_0 grows, s_hat must be
approximately sparse to a better accuracy in order to obtain a predetermined
guarantee on the maximum of ||s_hat-s0||_2. This can be seen as an explanation
of the fact that the estimation quality of sparse recovery algorithms degrades
as ||s0||_0 grows.
|
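The simpler of the two bounds sketched in the abstract, built from minimal singular values of column submatrices of A, can be checked numerically. If the error e = s_hat - s0 has at most 2k nonzeros, then ||A e||_2 >= sigma * ||e||_2, where sigma is the smallest singular value over all 2k-column submatrices, giving the computable bound below. Brute force over submatrices is feasible only for tiny dictionaries; this is an illustration of the bound's structure, not the paper's tighter result.

```python
import itertools
import numpy as np

def min_sigma(A, size):
    """Smallest singular value over all `size`-column submatrices of A."""
    return min(np.linalg.svd(A[:, list(S)], compute_uv=False)[-1]
               for S in itertools.combinations(range(A.shape[1]), size))

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 10))           # n = 6, m = 10 dictionary
s0 = np.zeros(10)
s0[[1, 4]] = [2.0, -1.5]                   # 2-sparse ground truth (k = 2)
x = A @ s0

s_hat = s0.copy()
s_hat[[1, 4]] += [0.05, -0.02]             # 'approximately sparse' estimate
s_hat[7] = 0.01                            # one small spurious coordinate

# supp(s_hat - s0) has <= 2k entries, so the bound below is valid:
sigma = min_sigma(A, 4)                    # 2k = 4 columns
bound = np.linalg.norm(A @ s_hat - x) / sigma
```

Note the bound is computed without knowing s0: only the residual A s_hat - x and the dictionary A enter, which is exactly the point of the abstract's question.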
1112.0791
|
Strong Equivalence of Qualitative Optimization Problems
|
cs.LO cs.AI
|
We introduce the framework of qualitative optimization problems (or, simply,
optimization problems) to represent preference theories. The formalism uses
separate modules to describe the space of outcomes to be compared (the
generator) and the preferences on outcomes (the selector). We consider two
types of optimization problems. They differ in the way the generator, which we
model by a propositional theory, is interpreted: by the standard propositional
logic semantics, and by the equilibrium-model (answer-set) semantics. Under the
latter interpretation of generators, optimization problems directly generalize
answer-set optimization programs proposed previously. We study strong
equivalence of optimization problems, which guarantees their interchangeability
within any larger context. We characterize several versions of strong
equivalence obtained by restricting the class of optimization problems that can
be used as extensions and establish the complexity of associated reasoning
tasks. Understanding strong equivalence is essential for modular representation
of optimization problems and rewriting techniques to simplify them without
changing their inherent properties.
|
1112.0805
|
Constellation Mapping for Physical-Layer Network Coding with M-QAM
Modulation
|
cs.IT cs.NI cs.SY math.IT
|
The denoise-and-forward (DNF) method of physical-layer network coding (PNC)
is a promising approach for wireless relaying networks. In this paper, we
consider DNF-based PNC with M-ary quadrature amplitude modulation (M-QAM) and
propose a mapping scheme that maps the superposed M-QAM signal to coded
symbols. The mapping scheme supports both square and non-square M-QAM
modulations, with various original constellation mappings (e.g. binary-coded or
Gray-coded). Subsequently, we evaluate the symbol error rate and bit error rate
(BER) of M-QAM modulated PNC that uses the proposed mapping scheme. Afterwards,
as an application, a rate adaptation scheme for the DNF method of PNC is
proposed. Simulation results show that the rate-adaptive PNC is advantageous in
various scenarios.
|
1112.0826
|
Clustering under Perturbation Resilience
|
cs.LG cs.DS
|
Motivated by the fact that distances between data points in many real-world
clustering instances are often based on heuristic measures, Bilu and
Linial~\cite{BL} proposed analyzing objective based clustering problems under
the assumption that the optimum clustering to the objective is preserved under
small multiplicative perturbations to distances between points. The hope is
that by exploiting the structure in such instances, one can overcome worst case
hardness results.
In this paper, we provide several results within this framework. For
center-based objectives, we present an algorithm that can optimally cluster
instances resilient to perturbations of factor $(1 + \sqrt{2})$, solving an
open problem of Awasthi et al.~\cite{ABS10}. For $k$-median, a center-based
objective of special interest, we additionally give algorithms for a more
relaxed assumption in which we allow the optimal solution to change in a small
$\epsilon$ fraction of the points after perturbation. We give the first bounds
known for $k$-median under this more realistic and more general assumption. We
also provide positive results for min-sum clustering which is typically a
harder objective than center-based objectives from an approximability
standpoint.
Our algorithms are based on new linkage criteria that may be of independent
interest.
Additionally, we give sublinear-time algorithms, showing algorithms that can
return an implicit clustering from only access to a small random sample.
|
1112.0857
|
I/O efficient bisimulation partitioning on very large directed acyclic
graphs
|
cs.DS cs.DB
|
In this paper we introduce the first efficient external-memory algorithm to
compute the bisimilarity equivalence classes of a directed acyclic graph (DAG).
DAGs are commonly used to model data in a wide variety of practical
applications, ranging from XML documents and data provenance models, to web
taxonomies and scientific workflows. In the study of efficient reasoning over
massive graphs, the notion of node bisimilarity plays a central role. For
example, grouping together bisimilar nodes in an XML data set is the first step
in many sophisticated approaches to building indexing data structures for
efficient XPath query evaluation. To date, however, only internal-memory
bisimulation algorithms have been investigated. As the size of real-world DAG
data sets often exceeds available main memory, storage in external memory
becomes necessary. Hence, there is a practical need for an efficient approach
to computing bisimulation in external memory.
Our general algorithm has a worst-case IO-complexity of O(Sort(|N| + |E|)),
where |N| and |E| are the numbers of nodes and edges, resp., in the data graph
and Sort(n) is the number of accesses to external memory needed to sort an
input of size n. We also study specializations of this algorithm to common
variations of bisimulation for tree-structured XML data sets. We empirically
verify efficient performance of the algorithms on graphs and XML documents
having billions of nodes and edges, and find that the algorithms can process
such graphs efficiently even when very limited internal memory is available.
The proposed algorithms are simple enough for practical implementation and use,
and open the door for further study of external-memory bisimulation algorithms.
To this end, the full open-source C++ implementation has been made freely
available.
|
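For readers unfamiliar with bisimulation partitioning, the internal-memory baseline that the paper's external-memory algorithm improves upon is short: repeatedly split nodes by (current class, set of successor classes) until the partition stabilizes. The sketch below is that naive in-memory version on a toy DAG; the paper's contribution is doing the same computation in O(Sort(|N| + |E|)) I/Os when the graph exceeds main memory.

```python
def bisimulation_classes(succ, label):
    """Naive in-memory bisimulation partitioning by signature refinement."""
    def blocks(cls):
        groups = {}
        for u, c in cls.items():
            groups.setdefault(c, set()).add(u)
        return set(frozenset(g) for g in groups.values())

    cls = {u: label[u] for u in succ}        # start: partition by node label
    while True:
        # Signature = (own class, set of classes of successors).
        sig = {u: (cls[u], frozenset(cls[v] for v in succ[u])) for u in succ}
        ids, new = {}, {}
        for u in sorted(succ):
            new[u] = ids.setdefault(sig[u], len(ids))
        if blocks(new) == blocks(cls):       # partition unchanged: fixpoint
            return new
        cls = new

# Toy DAG: nodes 1 and 2 are bisimilar inner nodes, 3 and 4 bisimilar leaves.
succ = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
label = {u: "x" for u in succ}
classes = bisimulation_classes(succ, label)
```

Each refinement round only splits blocks, so the loop terminates after at most |N| rounds; it is the random-access pattern of this refinement, not its asymptotic work, that makes the external-memory setting hard.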
1112.0896
|
On the Existence of Perfect Codes for Asymmetric Limited-Magnitude
Errors
|
cs.IT math.IT
|
Block codes, which correct asymmetric errors with limited-magnitude, are
studied. These codes have been applied recently for error correction in flash
memories. The codes will be represented by lattices and the constructions will
be based on a generalization of Sidon sequences. In particular we will consider
perfect codes for these types of errors.
|
1112.0922
|
Extending Object-Oriented Languages by Declarative Specifications of
Complex Objects using Answer-Set Programming
|
cs.PL cs.AI
|
Many applications require complexly structured data objects. Developing new
or adapting existing algorithmic solutions for creating such objects can be a
non-trivial and costly task if the considered objects are subject to different
application-specific constraints. Often, however, it is comparatively easy to
declaratively describe the required objects. In this paper, we propose to use
answer-set programming (ASP)---a well-established declarative programming
paradigm from the area of artificial intelligence---for instantiating objects
in standard object-oriented programming languages. In particular, we extend
Java with declarative specifications from which the required objects can be
automatically generated using available ASP solver technology.
|
1112.0945
|
Interleaved Product LDPC Codes
|
cs.IT math.IT
|
Product LDPC codes take advantage of LDPC decoding algorithms and the high
minimum distance of product codes. We propose to add suitable interleavers to
improve the waterfall performance of LDPC decoding. Interleaving also reduces
the number of low-weight codewords, which gives a further advantage in the error
floor region.
|
1112.0974
|
Optimality Bounds for a Variational Relaxation of the Image Partitioning
Problem
|
cs.CV math.CO math.FA math.OC
|
We consider a variational convex relaxation of a class of optimal
partitioning and multiclass labeling problems, which has recently proven quite
successful and can be seen as a continuous analogue of Linear Programming (LP)
relaxation methods for finite-dimensional problems. While for the latter case
several optimality bounds are known, to our knowledge no such bounds exist in
the continuous setting. We provide such a bound by analyzing a probabilistic
rounding method, showing that it is possible to obtain an integral solution of
the original partitioning problem from a solution of the relaxed problem with
an a priori upper bound on the objective, ensuring the quality of the result
from the viewpoint of optimization. The approach has a natural interpretation
as an approximate, multiclass variant of the celebrated coarea formula.
|
1112.0983
|
The averaged control system of fast oscillating control systems
|
math.OC cs.SY
|
For control systems that either have a fast explicit periodic dependence on
time and bounded controls or have periodic solutions and small controls, we
define an average control system that takes into account all possible
variations of the control, and prove that its solutions approximate all
solutions of the oscillating system as oscillations go faster. The dimension of
its velocity set is characterized geometrically. When it is maximal, the average
system defines a Finsler metric that is, in general, not twice differentiable.
minimum time control, this average system allows one to give a rigorous proof
that averaging the Hamiltonian given by the maximum principle is a valid
approximation.
|
1112.0992
|
The Web economy: goods, users, models and policies
|
cs.CY cs.SI
|
The Web emerged as an antidote to the rapidly increasing quantity of
accumulated knowledge and became successful because it facilitates massive
participation and communication with minimal costs. Today, its enormous impact,
scale and dynamism in time and space make it very difficult (and sometimes
impossible) to measure and anticipate its effects on human society. In
addition, we demand that the Web be fast, secure, reliable, all-inclusive and
trustworthy in any transaction. The scope of the present article is to review
the part of the Web economy literature that helps us identify its major
participants and their functions. The goal is to understand how the Web economy
differs from the traditional setting and what implications these differences
have. Secondarily, we attempt to establish a minimal common understanding of
the incentives and properties of the Web economy. In this direction, the
concept of Web Goods and a new classification of Web Users are introduced and
analyzed. This article is not, by any means, a thorough review of the economic
literature related to the Web. We focus only on the relevant part that models
the Web as a standalone economic artifact with native functionality and
processes.
|
1112.1010
|
Twitter reciprocal reply networks exhibit assortativity with respect to
happiness
|
cs.SI physics.soc-ph
|
The advent of social media has provided an extraordinary, if imperfect, 'big
data' window into the form and evolution of social networks. Based on nearly 40
million message pairs posted to Twitter between September 2008 and February
2009, we construct and examine the revealed social network structure and
dynamics over the time scales of days, weeks, and months. At the level of user
behavior, we employ our recently developed hedonometric analysis methods to
investigate patterns of sentiment expression. We find users' average happiness
scores to be positively and significantly correlated with those of users one,
two, and three links away. We strengthen our analysis by proposing and using a
null model to test the effect of network topology on the assortativity of
happiness. We also find evidence that more well connected users write happier
status updates, with a transition occurring around Dunbar's number. More
generally, our work provides evidence of a social sub-network structure within
Twitter and raises several methodological points of interest with regard to
social network reconstructions.
|
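The assortativity measurement at the heart of this abstract is, in its simplest form, a Pearson correlation of a node attribute (here, a happiness score) across the two endpoints of each edge. The sketch below computes that statistic on toy data; the function name and example network are mine, not the paper's Twitter reply corpus or its null-model test.

```python
import math

def attribute_correlation(edges, score):
    """Pearson correlation of a node attribute across edge endpoints.

    Edges are treated as undirected by symmetrizing the endpoint lists;
    positive values mean connected nodes tend to have similar scores.
    """
    xs = [score[u] for u, v in edges] + [score[v] for u, v in edges]
    ys = [score[v] for u, v in edges] + [score[u] for u, v in edges]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two tight clusters: happy users linked to happy users, unhappy to unhappy.
edges = [(0, 1), (1, 2), (3, 4), (4, 5)]
score = {0: 0.9, 1: 0.8, 2: 0.85, 3: 0.2, 4: 0.1, 5: 0.15}
r = attribute_correlation(edges, score)
```

A strongly positive r on the toy data mirrors the paper's finding that happiness scores correlate across links; the paper's null model then checks how much of that correlation is explained by topology alone.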
1112.1051
|
Predicting Financial Markets: Comparing Survey, News, Twitter and Search
Engine Data
|
q-fin.ST cs.CE physics.soc-ph
|
Financial market prediction on the basis of online sentiment tracking has
drawn a lot of attention recently. However, most results in this emerging
domain rely on a unique, particular combination of data sets and sentiment
tracking tools. This makes it difficult to disambiguate measurement and
instrument effects from factors that are actually involved in the apparent
relation between online sentiment and market values. In this paper, we survey a
range of online data sets (Twitter feeds, news headlines, and volumes of Google
search queries) and sentiment tracking methods (Twitter Investor Sentiment,
Negative News Sentiment and Tweet & Google Search volumes of financial terms),
and compare their value for financial prediction of market indices such as the
Dow Jones Industrial Average, trading volumes, and market volatility (VIX), as
well as gold prices. We also compare the predictive power of traditional
investor sentiment survey data, i.e. Investor Intelligence and Daily Sentiment
Index, against those of the mentioned set of online sentiment indicators. Our
results show that traditional surveys of Investor Intelligence are lagging
indicators of the financial markets. However, weekly Google Insight Search
volumes on financial search queries do have predictive value. An indicator of
Twitter Investor Sentiment and the frequency of occurrence of financial terms
on Twitter in the previous 1-2 days are also found to be very statistically
significant predictors of daily market log return. Survey sentiment indicators
are however found not to be statistically significant predictors of financial
market values, once we control for all other mood indicators as well as the
VIX.
|
1112.1115
|
On the Interplay between Social and Topical Structure
|
cs.SI physics.soc-ph
|
People's interests and people's social relationships are intuitively
connected, but understanding their interplay and whether they can help predict
each other has remained an open question. We examine the interface of two
decisive structures forming the backbone of online social media: the graph
structure of social networks - who connects with whom - and the set structure
of topical affiliations - who is interested in what. In studying this
interface, we identify key relationships whereby each of these structures can
be understood in terms of the other. The context for our analysis is Twitter, a
complex social network of both follower relationships and communication
relationships. On Twitter, "hashtags" are used to label conversation topics,
and we examine hashtag usage alongside these social structures.
We find that the hashtags that users adopt can predict their social
relationships, and also that the social relationships between the initial
adopters of a hashtag can predict the future popularity of that hashtag. By
studying weighted social relationships, we observe that while strong
reciprocated ties are the easiest to predict from hashtag structure, they are
also much less useful than weak directed ties for predicting hashtag
popularity. Importantly, we show that computationally simple structural
determinants can provide remarkable performance in both tasks. While our
analyses focus on Twitter, we view our findings as broadly applicable to
topical affiliations and social relationships in a host of diverse contexts,
including the movies people watch, the brands people like, or the locations
people frequent.
|
1112.1117
|
Finding Heavy Paths in Graphs: A Rank Join Approach
|
cs.DB
|
Graphs have been commonly used to model many applications. A natural problem
which abstracts applications such as itinerary planning, playlist
recommendation, and flow analysis in information networks is that of finding
the heaviest path(s) in a graph. More precisely, we can model these
applications as a graph with non-negative edge weights, along with a monotone
function such as sum, which aggregates edge weights into a path weight,
capturing some notion of quality. We are then interested in finding the top-k
heaviest simple paths, i.e., the $k$ simple (cycle-free) paths with the
greatest weight, whose length equals a given parameter $\ell$. We call this the
\emph{Heavy Path Problem} (HPP). It is easy to show that the problem is
NP-Hard.
In this work, we develop a practical approach to solve the Heavy Path problem
by leveraging a strong connection with the well-known Rank Join paradigm. We
first present an algorithm by adapting the Rank Join algorithm. We identify its
limitations and develop a new exact algorithm called HeavyPath and a scalable
heuristic algorithm. We conduct a comprehensive set of experiments on three
real data sets and show that HeavyPath outperforms the baseline algorithms
significantly, with respect to both $\ell$ and $k$. Further, our heuristic
algorithm scales to longer lengths, finding paths that are empirically within
50% of the optimum solution or better under various settings, and takes only a
fraction of the running time compared to the exact algorithm.
|
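To pin down the problem definition, the sketch below is the exhaustive baseline for the Heavy Path Problem: enumerate every simple directed path with exactly `length` edges by DFS and keep the k heaviest under sum aggregation. This brute force is exponential in general (HPP is NP-hard), which is precisely why the paper develops the Rank Join adaptation and the HeavyPath algorithm instead; the code here only makes the objective concrete.

```python
def top_k_heavy_paths(edges, length, k):
    """Brute-force top-k heaviest simple paths with `length` edges."""
    adj = {}
    nodes = set()
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        nodes.update((u, v))
    results = []

    def dfs(path, weight):
        if len(path) == length + 1:            # `length` edges traversed
            results.append((weight, tuple(path)))
            return
        for v, w in adj.get(path[-1], ()):
            if v not in path:                  # simple (cycle-free) paths only
                dfs(path + [v], weight + w)

    for start in nodes:
        dfs([start], 0)
    return sorted(results, reverse=True)[:k]

edges = [("a", "b", 5), ("b", "c", 4), ("a", "c", 1), ("c", "d", 2)]
best = top_k_heavy_paths(edges, length=2, k=2)
```

On the toy graph the two heaviest 2-edge paths are a-b-c (weight 9) and b-c-d (weight 6); the paper's algorithms retrieve such top-k answers without materializing the full path enumeration.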
1112.1120
|
Classification with Invariant Scattering Representations
|
cs.CV math.FA stat.ML
|
A scattering transform defines a signal representation which is invariant to
translations and Lipschitz continuous relatively to deformations. It is
implemented with a non-linear convolution network that iterates over wavelet
and modulus operators. Lipschitz continuity locally linearizes deformations.
Complex classes of signals and textures can be modeled with low-dimensional
affine spaces, computed with a PCA in the scattering domain. Classification is
performed with a penalized model selection. State of the art results are
obtained for handwritten digit recognition over small training sets, and for
texture classification.
|
1112.1125
|
Learning in embodied action-perception loops through exploration
|
cs.LG
|
Although exploratory behaviors are ubiquitous in the animal kingdom, their
computational underpinnings are still largely unknown. Behavioral Psychology
has identified learning as a primary drive underlying many exploratory
behaviors. Exploration is seen as a means for an animal to gather sensory data
useful for reducing its ignorance about the environment. While related problems
have been addressed in Data Mining and Reinforcement Learning, the
computational modeling of learning-driven exploration by embodied agents is
largely unrepresented.
Here, we propose a computational theory for learning-driven exploration based
on the concept of missing information that allows an agent to identify
informative actions using Bayesian inference. We demonstrate that when
embodiment constraints are high, agents must actively coordinate their actions
to learn efficiently. Compared to earlier approaches, our exploration policy
yields more efficient learning across a range of worlds with diverse
structures. The improved learning in turn affords greater success in general
tasks including navigation and reward gathering. We conclude by discussing how
the proposed theory relates to previous information-theoretic objectives of
behavior, such as predictive information and the free energy principle, and how
it might contribute to a general theory of exploratory behavior.
|
1112.1133
|
Multi-timescale Nexting in a Reinforcement Learning Robot
|
cs.LG cs.RO
|
The term "nexting" has been used by psychologists to refer to the propensity
of people and many other animals to continually predict what will happen next
in an immediate, local, and personal sense. The ability to "next" constitutes a
basic kind of awareness and knowledge of one's environment. In this paper we
present results with a robot that learns to next in real time, predicting
thousands of features of the world's state, including all sensory inputs, at
timescales from 0.1 to 8 seconds. This was achieved by treating each state
feature as a reward-like target and applying temporal-difference methods to
learn a corresponding value function with a discount rate corresponding to the
timescale. We show that two thousand predictions, each dependent on six
thousand state features, can be learned and updated online at better than 10Hz
on a laptop computer, using the standard TD(lambda) algorithm with linear
function approximation. We show that this approach is efficient enough to be
practical, with most of the learning complete within 30 minutes. We also show
that a single tile-coded feature representation suffices to accurately predict
many different signals at a significant range of timescales. Finally, we show
that the accuracy of our learned predictions compares favorably with the
optimal off-line solution.
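The multi-timescale prediction scheme described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' robot code: the feature handling, step size, trace decay and the `discount_for_timescale` mapping are hypothetical choices, shown only to make the "one value function per timescale" idea concrete.

```python
import numpy as np

def discount_for_timescale(tau, dt=0.1):
    # A discount gamma whose time constant (in dt-sized steps) matches tau seconds.
    return 1.0 - dt / tau

class MultiTimescaleTD:
    """Linear TD(lambda) learners sharing one feature vector, one per timescale."""

    def __init__(self, n_features, timescales, alpha=0.1, lam=0.9):
        self.gammas = np.array([discount_for_timescale(t) for t in timescales])
        self.w = np.zeros((len(timescales), n_features))  # one weight vector per gamma
        self.z = np.zeros_like(self.w)                    # eligibility traces
        self.alpha, self.lam = alpha, lam

    def update(self, phi, phi_next, pseudo_reward):
        # Each sensory signal is treated as a reward-like target; standard
        # TD(lambda) with linear function approximation runs per discount rate.
        for i, g in enumerate(self.gammas):
            delta = pseudo_reward + g * self.w[i] @ phi_next - self.w[i] @ phi
            self.z[i] = g * self.lam * self.z[i] + phi
            self.w[i] += self.alpha * delta * self.z[i]

    def predict(self, phi):
        return self.w @ phi
```

With a constant pseudo-reward of 1, each learner's prediction converges to 1/(1 - gamma), i.e. the number of dt-sized steps in its timescale, which is the sanity check one would expect from the discounted-sum interpretation.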
|
1112.1143
|
Mathematical model for hit phenomena as stochastic process of
interactions of human dynamics
|
physics.soc-ph cs.SI
|
A mathematical model for hit phenomena in entertainment in society is
presented as a stochastic process of interactions in human dynamics. The model
uses only the time distribution of the advertisement budget as input, while
word-of-mouth (WOM) postings in a social network system are used as the data to
compare with the calculated results. The unit of time is one day. The WOM
distribution in time is found to be very close to the revenue distribution in
time. The calculations for the Japanese motion picture market based on the
mathematical model agree very well with the actual revenue distribution in
time.
|
1112.1156
|
Looking for grass-root sources of systemic risk: the case of
"cheques-as-collateral" network
|
q-fin.RM cs.SI q-fin.CP
|
The global financial system has become highly connected and complex. It has
been proven in practice that existing models, measures and reports of financial
risk fail to capture some important systemic dimensions. Only lately have
advisory boards been established at a high level and regulations been directly
targeted at systemic risk. In the same direction, a growing number of
researchers employ network analysis to model systemic risk in financial
networks. Current approaches concentrate on interbank payment network
flows at the national and international level. This work builds on existing
approaches to account for systemic risk assessment at the micro level.
Particularly, we introduce the analysis of intra-bank financial risk
interconnections, by examining the real case of "cheques-as-collateral" network
for a major Greek bank. Our model offers useful information about the negative
spillovers of disruption to a financial entity in a bank's lending network and
could complement existing credit scoring models that account only for
idiosyncratic customer's financial profile. Most importantly, the proposed
methodology can be employed in many segments of the entire financial system,
providing a useful tool in the hands of regulatory authorities in assessing
more accurate estimates of systemic risk.
|
1112.1181
|
On the Stability Region of Multi-Queue Multi-Server Queueing Systems
with Stationary Channel Distribution
|
cs.IT cs.SY math.IT
|
In this paper, we characterize the stability region of multi-queue
multi-server (MQMS) queueing systems with stationary channel and packet arrival
processes. Toward this, the necessary and sufficient conditions for the
stability of the system are derived under general arrival processes with finite
first and second moments. We show that when the arrival processes are
stationary, the stability region is a polytope, and we explicitly find the
coefficients of the linear inequalities that characterize this polytope.
|
1112.1187
|
Meaningful Matches in Stereovision
|
cs.CV stat.AP
|
This paper introduces a statistical method to decide whether two blocks in a
pair of images match reliably. The method ensures that the selected block
matches are unlikely to have occurred "just by chance." The new approach is
based on the definition of a simple but faithful statistical "background model"
for image blocks learned from the image itself. A theorem guarantees that under
this model not more than a fixed number of wrong matches occurs (on average)
for the whole image. This fixed number (the number of false alarms) is the only
method parameter. Furthermore, the number of false alarms associated with each
match measures its reliability. This "a contrario" block-matching method,
however, cannot rule out false matches due to the presence of periodic objects
in the images. But it is successfully complemented by a parameterless
"self-similarity threshold." Experimental evidence shows that the proposed
method also detects occlusions and incoherent motions due to vehicles and
pedestrians in non-simultaneous stereo.
|
1112.1200
|
A multi-feature tracking algorithm enabling adaptation to context
variations
|
cs.CV
|
We propose in this paper a tracking algorithm which is able to adapt itself
to different scene contexts. A feature pool is used to compute the matching
score between two detected objects. This feature pool includes 2D, 3D
displacement distances, 2D sizes, color histogram, histogram of oriented
gradient (HOG), color covariance and dominant color. An offline learning
process is proposed to search for useful features and to estimate their weights
for each context. In the online tracking process, a temporal window is defined
to establish the links between the detected objects. This enables finding the
object trajectories even if the objects are misdetected in some frames. A
trajectory filter is proposed to remove noisy trajectories. Experimentation on
different contexts is shown. The proposed tracker has been tested in videos
belonging to three public datasets and to the Caretaker European project. The
experimental results prove the effect of the proposed feature weight learning,
and the robustness of the proposed tracker compared to some methods in the
state of the art. The contributions of our approach over the state of the art
trackers are: (i) a robust tracking algorithm based on a feature pool, (ii) a
supervised learning scheme to learn feature weights for each context, (iii) a
new method to quantify the reliability of HOG descriptor, (iv) a combination of
color covariance and dominant color features with spatial pyramid distance to
manage the case of object occlusion.
|
1112.1217
|
Entropy Search for Information-Efficient Global Optimization
|
stat.ML cs.AI
|
Contemporary global optimization algorithms are based on local measures of
utility, rather than a probability measure over location and value of the
optimum. They thus attempt to collect low function values, not to learn about
the optimum. The reason for the absence of probabilistic global optimizers is
that the corresponding inference problem is intractable in several ways. This
paper develops desiderata for probabilistic optimization algorithms, then
presents a concrete algorithm which addresses each of the computational
intractabilities with a sequence of approximations and explicitly addresses the
decision problem of maximizing information gain from each evaluation.
|
1112.1220
|
Understanding mobility in a social petri dish
|
physics.soc-ph cs.SI
|
Despite the recent availability of large data sets on human movements, a full
understanding of the rules governing motion within social systems is still
missing, due to incomplete information on the socio-economic factors and to
often limited spatio-temporal resolutions. Here we study an entire society of
individuals, the players of an online-game, with complete information on their
movements in a network-shaped universe and on their social and economic
interactions. Such a "socio-economic laboratory" allows us to unveil the
intricate interplay of spatial constraints, social and economic factors, and
patterns of mobility. We find that the motion of individuals is not only
constrained by
physical distances, but also strongly shaped by the presence of socio-economic
areas. These regions can be recovered perfectly by community detection methods
solely based on the measured human dynamics. Moreover, we uncover that
long-term memory in the time-order of visited locations is the essential
ingredient for modeling the trajectories.
|
1112.1224
|
Information dynamics algorithm for detecting communities in networks
|
physics.soc-ph cs.SI
|
The problem of community detection is relevant in many scientific
disciplines, from social science to statistical physics. Given the impact of
community detection in many areas, such as psychology and social sciences, we
have addressed the issue of modifying existing well performing algorithms by
incorporating elements of the domain application fields, i.e. domain-inspired.
We have focused on a psychology and social network - inspired approach which
may be useful for further strengthening the link between social network studies
and the mathematics of community detection. Here we introduce a
community-detection algorithm derived from van Dongen's Markov Cluster
algorithm (MCL) by considering the nodes of a network as agents capable of
taking decisions. In this framework we have introduced a memory factor to mimic
a typical human behavior, namely the oblivion effect. The method is based on
information diffusion and includes a non-linear processing phase. We test our
method on two classical community benchmarks and on computer-generated networks
with known community structure. Our approach has three important features: the
capacity to detect overlapping communities, the capability to identify
communities from an individual point of view, and the fine-tuning of community
detectability with respect to prior knowledge of the data. Finally, we discuss
how to use a Shannon entropy measure for parameter estimation in complex
networks.
|
1112.1229
|
On the Optimal Scheduling of Independent, Symmetric and Time-Sensitive
Tasks
|
math.OC cs.DS cs.SY
|
Consider a discrete-time system in which a centralized controller (CC) is
tasked with assigning at each time interval (or slot) K resources (or servers)
to K out of M>=K nodes. When assigned a server, a node can execute a task. The
tasks are independently generated at each node by stochastically symmetric and
memoryless random processes and stored in a finite-capacity task queue.
Moreover, they are time-sensitive in the sense that within each slot there is a
non-zero probability that a task expires before being scheduled. The scheduling
problem is tackled with the aim of maximizing the number of tasks completed
over time (or the task-throughput) under the assumption that the CC has no
direct access to the state of the task queues. The scheduling decisions at the
CC are based on the outcomes of previous scheduling commands, and on the known
statistical properties of the task generation and expiration processes. Based
on a Markovian modeling of the task generation and expiration processes, the CC
scheduling problem is formulated as a partially observable Markov decision
process (POMDP) that can be cast into the framework of restless multi-armed
bandit (RMAB) problems. When the task queues are of capacity one, the
optimality of a myopic (or greedy) policy (MP) is proved. It is also
demonstrated that the MP coincides with the Whittle index policy. For task
queues of arbitrary capacity instead, the myopic policy is generally
suboptimal, and its
performance is compared with an upper bound obtained through a relaxation of
the original problem. Overall, the settings in this paper provide a rare
example where a RMAB problem can be explicitly solved, and in which the Whittle
index policy is proved to be optimal.
|
1112.1238
|
Cyclic Orbit Codes
|
cs.IT math.IT
|
In network coding a constant dimension code consists of a set of
k-dimensional subspaces of F_q^n. Orbit codes are constant dimension codes
which are defined as orbits of a subgroup of the general linear group, acting
on the set of all subspaces of F_q^n. If the acting group is cyclic, the
corresponding orbit codes are called cyclic orbit codes. In this paper we give
a classification of cyclic orbit codes and propose a decoding procedure for a
particular subclass of cyclic orbit codes.
|
1112.1313
|
The Target Set Selection Problem on Cycle Permutation Graphs,
Generalized Petersen Graphs and Torus Cordalis
|
math.CO cs.DM cs.DS cs.SI
|
In this paper we consider a fundamental problem in the area of viral
marketing, called T{\scriptsize ARGET} S{\scriptsize ET} S{\scriptsize
ELECTION} problem.
In a viral marketing setting, social networks are modeled by graphs with
potential customers of a new product as vertices and friend relationships as
edges, where each vertex $v$ is assigned a threshold value $\theta(v)$. The
thresholds represent the different latent tendencies of customers (vertices) to
buy the new product when their friends (neighbors) do.
Consider a repetitive process on social network $(G,\theta)$ where each
vertex $v$ is associated with two states, active and inactive, which indicate
whether $v$ is persuaded into buying the new product. Suppose we are given a
target set $S\subseteq V(G)$. Initially, all vertices in $G$ are inactive. At
time step 0, we choose all vertices in $S$ to become active.
Then, at every time step $t>0$, all vertices that were active in time step
$t-1$ remain active, and we activate any vertex $v$ if at least $\theta(v)$ of
its neighbors were active at time step $t-1$. The activation process terminates
when no more vertices can get activated. We are interested in the following
optimization problem, called T{\scriptsize ARGET} S{\scriptsize ET}
S{\scriptsize ELECTION}: Finding a target set $S$ of smallest possible size
that activates all vertices of $G$. There is an important and well-studied
threshold called strict majority threshold, where for every vertex $v$ in $G$
we have $\theta(v)=\lceil{(d(v) +1)/2}\rceil$ and $d(v)$ is the degree of $v$
in $G$. In this paper, we consider the T{\scriptsize ARGET} S{\scriptsize ET}
S{\scriptsize ELECTION} problem under strict majority thresholds and focus on
three popular regular network structures: cycle permutation graphs, generalized
Petersen graphs and torus cordalis.
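The activation process defined in this abstract can be sketched directly. The following is an illustrative implementation (not code from the paper); the helper names and dict-of-neighbors graph representation are our own choices. Since activation is monotone, iterating to a fixed point yields the same final active set as the step-by-step definition.

```python
import math

def strict_majority_thresholds(graph):
    # theta(v) = ceil((d(v) + 1) / 2), the strict majority threshold.
    return {v: math.ceil((len(nbrs) + 1) / 2) for v, nbrs in graph.items()}

def activate(graph, seed, theta):
    """Run the threshold activation process from a target set `seed`.

    graph: dict mapping each vertex to a list of its neighbors.
    Returns the set of vertices that are active when the process terminates.
    """
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v in graph:
            if v not in active:
                if sum(1 for u in graph[v] if u in active) >= theta[v]:
                    active.add(v)
                    changed = True
    return active
```

On a 4-cycle, for instance, every vertex has degree 2 and threshold 2, so seeding two opposite vertices activates the whole cycle, while a single seed activates nothing further. Target Set Selection asks for the smallest seed for which `activate` returns all of $V(G)$.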
|
1112.1314
|
On Optimal Link Activation with Interference Cancellation in Wireless
Networking
|
cs.IT cs.NI math.IT
|
A fundamental aspect in performance engineering of wireless networks is
optimizing the set of links that can be concurrently activated to meet given
signal-to-interference-and-noise ratio (SINR) thresholds. The solution of this
combinatorial problem is the key element in scheduling and cross-layer resource
management. Previous works on link activation assume single-user decoding
receivers, that treat interference in the same way as noise. In this paper, we
assume multiuser decoding receivers, which can cancel strongly interfering
signals. As a result, in contrast to classical spatial reuse, links that are
close to each other are more likely to be active simultaneously. Our goal here
is to
deliver a comprehensive theoretical and numerical study on optimal link
activation under this novel setup, in order to provide insight into the gains
from adopting interference cancellation. We therefore consider the optimal
problem setting of successive interference cancellation (SIC), as well as the
simpler, yet instructive, case of parallel interference cancellation (PIC). We
prove that both problems are NP-hard and develop compact integer linear
programming formulations that enable us to approach the global optimum
solutions. We provide an extensive numerical performance evaluation, indicating
that for low to medium SINR thresholds the improvement is quite substantial,
especially with SIC, whereas for high SINR thresholds the improvement
diminishes and both schemes perform equally well.
|
1112.1330
|
Emotional control - conditio sine qua non for advanced artificial
intelligences?
|
q-bio.NC cs.AI
|
Humans possess two intertwined information processing pathways: cognitive
information processing via neural firing patterns, and diffusive volume control
via neuromodulation. While cognitive information processing in the brain is
traditionally considered to be the prime neural correlate of human
intelligence, clinical studies indicate that human emotions intrinsically
correlate with the activation of the neuromodulatory system.
We examine here the question: Why do humans possess the diffusive emotional
control system? Is this a coincidence, a caprice of nature, perhaps a leftover
of our genetic heritage, or a necessary aspect of any advanced intelligence, be
it biological or synthetic? We argue here that emotional
control is necessary to solve the motivational problem, viz the selection of
short-term utility functions, in the context of an environment where
information, computing power and time constitute scarce resources.
|
1112.1333
|
Reaching an Optimal Consensus: Dynamical Systems that Compute
Intersections of Convex Sets
|
cs.MA
|
In this paper, multi-agent systems minimizing a sum of objective functions,
where each component is only known to a particular node, are considered for
continuous-time dynamics with time-varying interconnection topologies. Assuming
that each node can observe a convex solution set of its optimization component,
and the intersection of all such sets is nonempty, the considered optimization
problem is converted to an intersection computation problem. By a simple
distributed control rule, the considered multi-agent system with
continuous-time dynamics achieves not only a consensus, but also an optimal
agreement within the optimal solution set of the overall optimization
objective. Directed and bidirectional communications are studied, respectively,
and connectivity conditions are given to ensure a global optimal consensus. In
this way, the corresponding intersection computation problem is solved by the
proposed decentralized continuous-time algorithm. We establish several
important properties of the distance functions with respect to the global
optimal solution set and a class of invariant sets with the help of convex and
non-smooth analysis.
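A discrete-time, complete-graph sketch of the consensus-plus-projection idea can make the abstract concrete. The following is our illustrative analogue of the paper's continuous-time rule, not the authors' algorithm: intervals on the real line stand in for the convex sets, the interaction topology is a fixed complete graph, and the step size is a hypothetical choice.

```python
def project(x, interval):
    # Euclidean projection of a scalar onto a closed interval.
    lo, hi = interval
    return min(max(x, lo), hi)

def optimal_consensus(intervals, steps=200, step_size=0.5):
    """Each agent i observes one interval; all converge into the intersection.

    Update: x_i <- x_i + h * [ (avg - x_i) + (P_i(x_i) - x_i) ],
    a consensus term plus a pull toward the agent's own convex set.
    """
    x = [float(i) for i in range(len(intervals))]  # arbitrary initial states
    for _ in range(steps):
        avg = sum(x) / len(x)  # complete-graph consensus term
        x = [xi + step_size * ((avg - xi) + (project(xi, I) - xi))
             for xi, I in zip(x, intervals)]
    return x
```

With a nonempty intersection, the fixed points of this map are exactly consensus states inside the intersection, mirroring the paper's claim that the agents reach agreement on the intersection computation problem.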
|
1112.1335
|
Connectivity and Set Tracking of Multi-agent Systems Guided by Multiple
Moving Leaders
|
cs.MA
|
In this paper, we investigate distributed multi-agent tracking of a convex
set specified by multiple moving leaders with unmeasurable velocities. Various
jointly-connected interaction topologies of the follower agents with
uncertainties are considered in the study of set tracking. Based on the
connectivity of the time-varying multi-agent system, necessary and sufficient
conditions are obtained for set input-to-state stability and set integral
input-to-state stability for a nonlinear neighbor-based coordination rule with
switching directed topologies. Conditions for asymptotic set tracking are also
proposed with respect to the polytope spanned by the leaders.
|
1112.1338
|
The Role of Persistent Graphs in the Agreement Seeking of Social
Networks
|
cs.MA
|
This paper investigates the role persistent arcs play for a social network to
reach a global belief agreement under discrete-time or continuous-time
evolution. Each (directed) arc in the underlying communication graph is assumed
to be associated with a time-dependent weight function which describes the
strength of the information flow from one node to another. An arc is said to be
persistent if its weight function has infinite $\mathscr{L}_1$ or $\ell_1$ norm
for continuous-time or discrete-time belief evolutions, respectively. The graph
that consists of all persistent arcs is called the persistent graph of the
underlying network. Three necessary and sufficient conditions on agreement or
$\epsilon$-agreement are established, by which we prove that the persistent
graph fully determines the convergence to a common opinion in social networks.
It is shown how the convergence rates explicitly depend on the diameter of the
persistent graph. The results add to the understanding of the fundamentals
behind global agreement, as it is only the persistent arcs that contribute to
the convergence.
|
1112.1344
|
Enhanced Inter-cell Interference Coordination for Heterogeneous Networks
in LTE-Advanced: A Survey
|
cs.IT math.IT
|
Heterogeneous networks (het-nets) - comprising conventional macrocell base
stations overlaid with femtocells, picocells and wireless relays - offer
cellular operators a way to meet burgeoning traffic demands through
cell-splitting gains obtained by bringing users closer to their access points.
However, the often random and unplanned location of these access points can
cause severe near-far
problems, typically solved by coordinating base-station transmissions to
minimize interference. Toward this end, the 3rd Generation Partnership Project
Long Term Evolution-Advanced (3GPP LTE-Advanced, or Rel-10) standard introduces
time-domain inter-cell interference coordination (ICIC) for facilitating a
seamless deployment of a het-net overlay. This article surveys the key features
encompassing the physical layer, network layer and back-hauling aspects of
time-domain ICIC in Rel-10.
|
1112.1390
|
An Identity for Kernel Ridge Regression
|
cs.LG
|
This paper derives an identity connecting the square loss of ridge regression
in on-line mode with the loss of the retrospectively best regressor. Some
corollaries about the properties of the cumulative loss of on-line ridge
regression are also obtained.
|
1112.1484
|
POCS Based Super-Resolution Image Reconstruction Using an Adaptive
Regularization Parameter
|
cs.CV
|
Crucial information barely visible to the human eye is often embedded in a
series of low-resolution images taken of the same scene. Super-resolution
enables the extraction of this information by reconstructing a single image at
a higher resolution than is present in any of the individual images. This is
particularly useful in forensic imaging, where the extraction of minute details
in an image can help to solve a crime. Super-resolution image restoration has
been one of the most important research areas in recent years; it aims to
obtain a high-resolution (HR) image from several low-resolution (LR) blurred,
noisy, undersampled and displaced images. The relation between the HR image and
the LR
images can be modeled by a linear system using a transformation matrix and
additive noise. However, a unique solution may not be available because of the
singularity of the transformation matrix. To overcome this problem, the POCS
(projections onto convex sets) method has been used. However, its performance
is limited because the effect of noise energy has been ignored. In this paper,
we propose an adaptive regularization
approach based on the fact that the regularization parameter should be a linear
function of noise variance. The performance of the proposed approach has been
tested on several images and the obtained results demonstrate the superiority
of our approach compared with existing methods.
|
1112.1489
|
Multi-granular Perspectives on Covering
|
cs.AI
|
The covering model provides a general framework for granular computing in
which overlapping among granules is almost indispensable. For any given
covering, both the intersection and the union of the covering blocks containing
an element are exploited as granules to form granular worlds at different
abstraction levels, and transformations among these different granular worlds
are also discussed. As an application of the presented multi-granular
perspective on covering, relational interpretations and axiomatizations of four
types of covering-based rough upper approximation operators are investigated,
which can be dually applied to the lower ones.
|
1112.1496
|
Re-initialization Free Level Set Evolution via Reaction Diffusion
|
cs.CV
|
This paper presents a novel reaction-diffusion (RD) method for implicit
active contours, which is completely free of the costly re-initialization
procedure in level set evolution (LSE). A diffusion term is introduced into
LSE, resulting in a RD-LSE equation, to which a piecewise constant solution can
be derived. In order to have a stable numerical solution of the RD based LSE,
we propose a two-step splitting method (TSSM) to iteratively solve the RD-LSE
equation: first iterating the LSE equation, and then solving the diffusion
equation. The second step regularizes the level set function obtained in the
first step to ensure stability, and thus the complex and costly
re-initialization procedure is completely eliminated from LSE. By successfully
applying diffusion to LSE, the RD-LSE model is stable by means of the simple
finite difference method, which is very easy to implement. The proposed RD
method can be generalized to solve the LSE for both variational level set
method and PDE-based level set method. The RD-LSE method shows very good
performance on boundary anti-leakage, and it can be readily extended to high
dimensional level set method. The extensive and promising experimental results
on synthetic and real images validate the effectiveness of the proposed RD-LSE
approach.
|
1112.1497
|
A unified graphical approach to random coding for multi-terminal
networks
|
cs.IT math.IT
|
A unified graphical approach to random coding for any memoryless, single-hop,
K-user channel with or without common information is defined through two steps.
The first step is user virtualization: each user is divided into multiple
virtual sub-users according to a chosen rate-splitting strategy. This results
in an enhanced channel with a possibly larger number of users for which more
coding possibilities are available and for which common messages to any subset
of users can be encoded. Following user virtualization, the message of each
user in the enhanced model is coded using a chosen combination of coded
time-sharing, superposition coding and joint binning. A graph is used to
represent the chosen coding strategies: nodes in the graph represent codewords
while edges represent coding operations. This graph is used to construct a
graphical Markov model which illustrates the statistical dependency among
codewords that can be introduced by the superposition coding or joint binning.
Using this statistical representation of the overall codebook distribution, the
error probability of the code is shown to vanish via a unified analysis. The
rate bounds that define the achievable rate region are obtained by linking the
error analysis to the properties of the graphical Markov model. This proposed
framework makes it possible to numerically obtain an achievable rate region by
specifying a user virtualization strategy and describing a set of coding
operations. The union of these rate regions defines the maximum achievable rate
region of our unified coding strategy.
|
1112.1517
|
Pure Strategy or Mixed Strategy?
|
cs.NE
|
Mixed strategy EAs aim to integrate several mutation operators into a single
algorithm. However, little theoretical analysis has been done to answer the
question of whether and when the performance of mixed strategy EAs is better
than that of pure strategy EAs. In theory, the performance of EAs can be
measured by the asymptotic convergence rate and the asymptotic hitting time. In
this paper, it is proven that, given a mixed strategy (1+1) EA consisting of
several mutation operators, its performance (asymptotic convergence rate and
asymptotic hitting time) is no worse than that of the worst pure strategy (1+1)
EA using one of these mutation operators; furthermore, if these mutation
operators are mutually complementary, then it is possible to design a mixed
strategy (1+1) EA whose performance is better than that of any pure strategy
(1+1) EA using one mutation operator.
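The notion of a mixed strategy (1+1) EA can be sketched in a few lines. This is an illustrative example, not the paper's formal setup: the two mutation operators, the OneMax fitness function and the 50/50 strategy distribution are hypothetical choices showing how one operator is drawn per generation.

```python
import random

def onemax(x):
    # Toy fitness: number of ones in the bitstring.
    return sum(x)

def bitwise_mutation(x, rng):
    # Flip each bit independently with probability 1/n.
    n = len(x)
    return [b ^ (rng.random() < 1.0 / n) for b in x]

def one_bit_mutation(x, rng):
    # Flip exactly one uniformly chosen bit.
    y = list(x)
    i = rng.randrange(len(y))
    y[i] ^= 1
    return y

def mixed_strategy_ea(n=20, steps=2000, weights=(0.5, 0.5), seed=0):
    """(1+1) EA that draws a mutation operator from a strategy distribution
    at each generation; the offspring replaces the parent if it is no worse."""
    rng = random.Random(seed)
    operators = [bitwise_mutation, one_bit_mutation]
    x = [rng.randrange(2) for _ in range(n)]
    for _ in range(steps):
        op = rng.choices(operators, weights=weights)[0]
        y = op(x, rng)
        if onemax(y) >= onemax(x):  # elitist acceptance
            x = y
    return x
```

A pure strategy EA is recovered by putting all weight on a single operator; the paper's question is when a nondegenerate mixture beats every such pure strategy.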
|
1112.1520
|
Cooperative Game-Theoretic Approach to Spectrum Sharing in Cognitive
Radios
|
cs.GT cs.IT cs.NI math.IT
|
In this paper, a novel framework for normative modeling of the spectrum
sensing and sharing problem in cognitive radios (CRs) as a transferable utility
(TU) cooperative game is proposed. Secondary users (SUs) jointly sense the
spectrum and cooperatively detect the primary user (PU) activity for
identifying and accessing unoccupied spectrum bands. The games are designed to
be balanced and super-additive so that resource allocation is possible and
provides SUs with an incentive to cooperate and form the grand coalition. The
characteristic function of the game is derived based on the worths of SUs,
calculated according to the amount of work done for the coalition in terms of
reduction in uncertainty about PU activity. According to its worth in the
coalition, each SU gets a pay-off that is computed using various one-point
solutions such as Shapley value, \tau-value and Nucleolus. Depending upon their
data rate requirements for transmission, SUs use the earned pay-off to bid for
idle channels through a socially optimal Vickrey-Clarke-Groves (VCG) auction
mechanism. Simulation results show that, in comparison with other resource
allocation models, the proposed cooperative game-theoretic model provides the
best balance between fairness, cooperation and performance in terms of data
rates achieved by each SU.
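Of the one-point solutions mentioned above, the Shapley value has the shortest closed form: average each player's marginal contribution over all join orders. The sketch below is illustrative only; the characteristic function used in the test is a toy symmetric game, not the paper's sensing-based worth, and the \tau-value and Nucleolus would need separate computations.

```python
import math
from itertools import permutations

def shapley_value(players, v):
    """Exact Shapley value of a small TU game.

    players: sequence of player labels.
    v: characteristic function mapping a frozenset of players to its worth.
    """
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p when joining in this order.
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    n_orders = math.factorial(len(players))
    return {p: total / n_orders for p, total in phi.items()}
```

By construction the pay-offs sum to the worth of the grand coalition (efficiency), which is what lets SUs spend their earned pay-off in the subsequent VCG auction.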
|
1112.1528
|
Chargaff's "Grammar of Biology": New Fractal-like Rules
|
q-bio.GN cs.CE cs.DM
|
Chargaff once said that "I saw before me in dark contours the beginning of a
grammar of Biology". In linguistics, "grammar" is the set of natural language
rules, but we do not know for sure what Chargaff meant by "grammar" of Biology.
Nevertheless, assuming the metaphor, Chargaff himself started a "grammar of
Biology" discovering the so called Chargaff's rules. In this work, we further
develop his grammar. Using new concepts, we were able to discover new genomic
rules that seem to be invariant across a large set of organisms and show a
fractal-like property: no matter the scale, the same pattern is observed
(self-similarity). We hope that these new invariant genomic rules may be used
in different contexts, from short-read data bias detection to genome assembly
quality assessment.
|
1112.1556
|
Active Learning of Halfspaces under a Margin Assumption
|
cs.LG stat.ML
|
We derive and analyze a new, efficient, pool-based active learning algorithm
for halfspaces, called ALuMA. Most previous algorithms show exponential
improvement in the label complexity assuming that the distribution over the
instance space is close to uniform. This assumption rarely holds in practical
applications. Instead, we study the label complexity under a large-margin
assumption -- a much more realistic condition, as evident by the success of
margin-based algorithms such as SVM. Our algorithm is computationally efficient
and comes with formal guarantees on its label complexity. It also naturally
extends to the non-separable case and to non-linear kernels. Experiments
illustrate the clear advantage of ALuMA over other active learning algorithms.
|
1112.1584
|
Wireless Network-Coded Three-Way Relaying Using Latin Cubes
|
cs.IT math.IT
|
The design of modulation schemes for the physical layer network-coded
three-way wireless relaying scenario is considered. The protocol employs two
phases: Multiple Access (MA) phase and Broadcast (BC) phase with each phase
utilizing one channel use. For the two-way relaying scenario, it was observed
by Koike-Akino et al. \cite{KPT}, that adaptively changing the network coding
map used at the relay according to the channel conditions greatly reduces the
impact of multiple access interference which occurs at the relay during the MA
phase and all these network coding maps should satisfy a requirement called
\textit{exclusive law}. This paper does the equivalent for the three-way
relaying scenario. We show that when the three users transmit points from the
same 4-PSK constellation, every such network coding map that satisfies the
exclusive law can be represented by a Latin Cube of Second Order. The network
code map used by the relay for the BC phase is explicitly obtained and is aimed
at reducing the effect of interference at the MA stage.
|
1112.1593
|
Low-delay, High-rate Non-square Complex Orthogonal Designs
|
cs.IT math.IT
|
The maximal rate of a non-square complex orthogonal design for $n$ transmit
antennas is $1/2+\frac{1}{n}$ if $n$ is even and $1/2+\frac{1}{n+1}$ if $n$ is
odd and the codes have been constructed for all $n$ by Liang (IEEE Trans.
Inform. Theory, 2003) and Lu et al. (IEEE Trans. Inform. Theory, 2005) to
achieve this rate. A lower bound on the decoding delay of maximal-rate complex
orthogonal designs has been obtained by Adams et al. (IEEE Trans. Inform.
Theory, 2007) and it is observed that Liang's construction achieves the bound
on delay for $n$ equal to 1 and 3 modulo 4 while Lu et al.'s construction
achieves the bound for $n=0,1,3$ mod 4. For $n=2$ mod 4, Adams et al. (IEEE
Trans. Inform. Theory, 2010) have shown that the minimal decoding delay is
twice the lower bound, in which case both Liang's and Lu et al.'s constructions
achieve the minimum decoding delay. For large values of $n$, the rate is close
to half and the decoding delay is very large. A class of rate-1/2 codes with
low decoding delay for all $n$ has been constructed by Tarokh et al. (IEEE
Trans. Inform. Theory, 1999). In
this paper, another class of rate-1/2 codes is constructed for all $n$ in which
case the decoding delay is half the decoding delay of the rate-1/2 codes given
by Tarokh et al. This is achieved by giving first a general construction of
square real orthogonal designs which includes as special cases the well-known
constructions of Adams, Lax and Phillips and the construction of Geramita and
Pullman, and then making use of it to obtain the desired rate-1/2 codes. For
the case of 9 transmit antennas, the proposed rate-1/2 code is shown to be of
minimal-delay.
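As a quick companion to the rates quoted above, the following minimal sketch (ours, not from the paper; the function name is an assumption) computes the maximal rate of a non-square complex orthogonal design for $n$ transmit antennas:

```python
from fractions import Fraction

def maximal_cod_rate(n):
    """Maximal rate of a non-square complex orthogonal design for n
    transmit antennas: 1/2 + 1/n if n is even, 1/2 + 1/(n+1) if n is odd."""
    if n % 2 == 0:
        return Fraction(1, 2) + Fraction(1, n)
    return Fraction(1, 2) + Fraction(1, n + 1)

# The rate approaches 1/2 as n grows, matching the observation above.
for n in (2, 3, 8, 9):
    print(n, maximal_cod_rate(n))
```

For $n = 2$ this recovers rate 1; as $n$ grows the rate tends to $1/2$, which is why low-delay rate-1/2 constructions become attractive for large $n$.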
|
1112.1597
|
Enhanced Inter-Cell Interference Coordination Challenges in
Heterogeneous Networks
|
cs.NI cs.IT math.IT
|
3GPP LTE-Advanced has started a new study item to investigate Heterogeneous
Network (HetNet) deployments as a cost effective way to deal with the
unrelenting traffic demand. HetNets consist of a mix of macrocells, remote
radio heads, and low-power nodes such as picocells, femtocells, and relays.
Leveraging network topology, increasing the proximity between the access
network and the end-users, has the potential to provide the next significant
performance leap in wireless networks, improving spatial spectrum reuse and
enhancing indoor coverage. Nevertheless, deployment of a large number of small
cells overlaying the macrocells is not without new technical challenges. In
this article, we present the concept of heterogeneous networks and also
describe the major technical challenges associated with such network
architecture. We focus in particular on the standardization activities within
the 3GPP related to enhanced inter-cell interference coordination.
|
1112.1615
|
SLA Establishment with Guaranteed QoS in the Interdomain Network: A
Stock Model
|
cs.NI cs.LG
|
The new model that we present in this paper is introduced in the context of
guaranteed QoS and resource management in the inter-domain routing framework.
This model, called the stock model, is based on a reverse cascade approach and
is applied in a distributed context. So transit providers have to learn the
right capacities to buy and to stock and, therefore learning theory is applied
through an iterative process. We show that transit providers manage to learn
how to strategically choose their capacities on each route in order to maximize
their benefits, despite the very incomplete information. Finally, we provide
and analyse some simulation results given by the application of the model in a
simple case where the model quickly converges to a stable state.
|
1112.1639
|
A novel method for computation of the discrete Fourier transform over
characteristic two finite field of even extension degree
|
cs.IT math.IT
|
A novel method for computation of the discrete Fourier transform over a
finite field with reduced multiplicative complexity is described. If the number
of multiplications is to be minimized, the novel method is the best known
approach to discrete Fourier transform computation over finite fields of even
extension degree. A constructive method for building a cyclic convolution over
a finite field is also introduced.
|
1112.1668
|
Data Mining and Electronic Health Records: Selecting Optimal Clinical
Treatments in Practice
|
cs.DB
|
Electronic health records (EHRs) are only a first step in capturing and
utilizing health-related data - the problem is turning that data into useful
information. Models produced via data mining and predictive analysis profile
inherited risks and environmental/behavioral factors associated with patient
disorders, which can be utilized to generate predictions about treatment
outcomes. This can form the backbone of clinical decision support systems
driven by live data from the actual population; the advantage of such an
approach is that it is "adaptive". Here, we
evaluate the predictive capacity of a clinical EHR of a large mental healthcare
provider (~75,000 distinct clients a year) to provide decision support
information in a real-world clinical setting. Initial research has achieved a
70% success rate in predicting treatment outcomes using these methods.
|
1112.1670
|
Data Mining Session-Based Patient Reported Outcomes (PROs) in a Mental
Health Setting: Toward Data-Driven Clinical Decision Support and Personalized
Treatment
|
cs.AI cs.GL
|
The CDOI outcome measure - a patient-reported outcome (PRO) instrument
utilizing direct client feedback - was implemented in a large, real-world
behavioral healthcare setting in order to evaluate previous findings from
smaller controlled studies. PROs provide an alternative window into treatment
effectiveness based on client perception and facilitate detection of
problems/symptoms for which there is no discernible measure (e.g. pain). The
principal focus of the study was to evaluate the utility of the CDOI for
predictive modeling of outcomes in a live clinical setting. Implementation
factors were also addressed within the framework of the Theory of Planned
Behavior by linking adoption rates to implementation practices and clinician
perceptions. The results showed that the CDOI does contain significant capacity
to predict outcome delta over time based on baseline and early change scores in
a large, real-world clinical setting, as suggested in previous research. The
implementation analysis revealed a number of critical factors affecting
successful implementation and adoption of the CDOI outcome measure, though
there was a notable disconnect between clinician intentions and actual
behavior. Most importantly, the predictive capacity of the CDOI underscores the
utility of direct client feedback measures such as PROs and their potential use
as the basis for next generation clinical decision support tools and
personalized treatment approaches.
|
1112.1680
|
Quantifying synergistic information remains an unsolved problem
|
cs.IT math.IT
|
This paper has been withdrawn by the author. This paper is now obsolete. For
a solution, please see arXiv:1205.4265.
|
1112.1687
|
Non-asymptotic information theoretic bound for some multi-party
scenarios
|
cs.IT math.IT quant-ph
|
In the last few years, there has been great interest in extending the
information-theoretic scenario to the non-asymptotic or one-shot case, i.e.,
where the channel is used only once. We provide the one-shot rate region for
the distributed source-coding (Slepian-Wolf) and the multiple-access channel.
Our results are based on defining a novel one-shot typical set based on smooth
entropies that yields the one-shot achievable rate regions while leveraging the
results from the asymptotic analysis. Our results are asymptotically optimal,
i.e., for the distributed source coding they yield the same rate region as the
Slepian-Wolf in the limit of unlimited independent and identically distributed
(i.i.d.) copies. Similarly for the multiple-access channel the asymptotic
analysis of our approach yields the rate region which is equal to the rate
region of the memoryless multiple-access channel in the limit of large number
of channel uses.
|
1112.1715
|
Optimal Merging Algorithms for Lossless Codes with Generalized Criteria
|
cs.IT math.IT
|
This paper presents lossless prefix codes optimized with respect to a pay-off
criterion consisting of a convex combination of maximum codeword length and
average codeword length. The optimal codeword lengths obtained are based on a
new coding algorithm which transforms the initial source probability vector
into a new probability vector according to a merging rule. The coding algorithm
is equivalent to a partition of the source alphabet into disjoint sets on which
a new transformed probability vector is defined as a function of the initial
source probability vector and a scalar parameter. The pay-off criterion
considered encompasses a trade-off between maximum and average codeword length;
it is related to a pay-off criterion consisting of a convex combination of
average codeword length and average of an exponential function of the codeword
length, and to an average codeword length pay-off criterion subject to a
limited length constraint. A special case of the first related pay-off is
connected to coding problems involving source probability uncertainty and
codeword overflow probability, while the second related pay-off compliments
limited length Huffman coding algorithms.
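The pay-off criterion described above can be written down directly; the sketch below (helper name and example code lengths are ours, purely illustrative) evaluates the convex combination of maximum and average codeword length for a given prefix code:

```python
def payoff(lengths, probs, a):
    """Convex combination of maximum and average codeword length:
    a * max_i l_i + (1 - a) * sum_i p_i * l_i, with a in [0, 1]."""
    avg = sum(p * l for p, l in zip(probs, lengths))
    return a * max(lengths) + (1 - a) * avg

# Hypothetical 4-symbol prefix code (e.g. lengths of a Huffman code for
# a dyadic source with probabilities 1/2, 1/4, 1/8, 1/8).
lengths = [1, 2, 3, 3]
probs = [0.5, 0.25, 0.125, 0.125]
print(payoff(lengths, probs, 0.0))  # a = 0: average length only, 1.75
print(payoff(lengths, probs, 1.0))  # a = 1: maximum length only, 3.0
```

Sweeping `a` from 0 to 1 traces the trade-off between average and maximum codeword length that the merging-based coding algorithm optimizes.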
|
1112.1728
|
Small-world spectra in mean field theory
|
physics.soc-ph cond-mat.dis-nn cs.SI math-ph math.MP
|
Collective dynamics on small-world networks emerge in a broad range of
systems with their spectra characterizing fundamental asymptotic features. Here
we derive analytic mean field predictions for the spectra of small-world models
that systematically interpolate between regular and random topologies by
varying their randomness. These theoretical predictions agree well with the
actual spectra (obtained by numerical diagonalization) for undirected and
directed networks and from fully regular to strongly random topologies. These
results may provide analytical insights to empirically found features of
dynamics on small-world networks from various research fields, including
biology, physics, engineering and social science.
|
1112.1730
|
Quality-Of-Service Provisioning in Decentralized Networks: A
Satisfaction Equilibrium Approach
|
cs.IT cs.GT math.IT
|
This paper introduces a particular game formulation and its corresponding
notion of equilibrium, namely the satisfaction form (SF) and the satisfaction
equilibrium (SE). A game in SF models the case where players are uniquely
interested in the satisfaction of some individual performance constraints,
instead of individual performance optimization. Under this formulation, the
notion of equilibrium corresponds to the situation where all players can
simultaneously satisfy their individual constraints. The notion of SE models
the problem of QoS provisioning in decentralized self-configuring networks.
Here, radio devices are satisfied if they are able to provide the requested
QoS. Within this framework, the concept of SE is formalized for both pure and
mixed strategies considering finite sets of players and actions. In both cases,
sufficient conditions for the existence and uniqueness of the SE are presented.
When multiple SE exist, we introduce the idea of effort or cost of satisfaction
and we propose a refinement of the SE, namely the efficient SE (ESE). At the
ESE, all players adopt the action which requires the lowest effort for
satisfaction. A learning method that allows radio devices to achieve a SE in
pure strategies in finite time and requiring only one-bit feedback is also
presented. Finally, a power control game in the interference channel is used to
highlight the advantages of modeling QoS problems following the notion of SE
rather than other equilibrium concepts, e.g., generalized Nash equilibrium.
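The SE condition, all players simultaneously satisfying their individual constraints, can be sketched as a simple predicate check. The interface and the toy power-control constraints below are hypothetical, not from the paper:

```python
def is_satisfaction_equilibrium(profile, constraints):
    """A profile of actions is a satisfaction equilibrium (SE) when every
    player's individual constraint predicate holds simultaneously."""
    return all(constraint(profile) for constraint in constraints)

# Made-up 2-player power-control example: each player needs its own power
# at least 1 (its QoS target) while total power stays within a budget of 3.
constraints = [
    lambda p: p[0] >= 1 and p[0] + p[1] <= 3,
    lambda p: p[1] >= 1 and p[0] + p[1] <= 3,
]
print(is_satisfaction_equilibrium((1, 2), constraints))  # True: both satisfied
print(is_satisfaction_equilibrium((2, 2), constraints))  # False: budget exceeded
```

Among several such SEs, the efficient SE (ESE) singles out the one reached with the lowest effort, e.g. the lowest transmit power.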
|
1112.1734
|
Using Taxonomies to Facilitate the Analysis of the Association Rules
|
cs.DB cs.LG
|
The Data Mining process enables the end users to analyze, understand and use
the extracted knowledge in an intelligent system or to support in the
decision-making processes. However, many algorithms used in the process
generate large quantities of patterns, complicating their analysis. This
occurs with association rules, a Data Mining technique that
tries to identify intrinsic patterns in large data sets. A method that can help
the analysis of the association rules is the use of taxonomies in the step of
post-processing knowledge. In this paper, the GART algorithm is proposed, which
uses taxonomies to generalize association rules, and the RulEE-GAR
computational module, that enables the analysis of the generalized rules.
|
1112.1757
|
Recovery of a Sparse Integer Solution to an Underdetermined System of
Linear Equations
|
cs.IT cs.DM cs.LG math.IT
|
We consider a system of m linear equations in n variables Ax=b where A is a
given m x n matrix and b is a given m-vector known to be equal to Ax' for some
unknown solution x' that is integer and k-sparse: x' in {0,1}^n and exactly k
entries of x' are 1. We give necessary and sufficient conditions for recovering
the solution x' exactly using an LP relaxation that minimizes the l1 norm of x. When
A is drawn from a distribution that has exchangeable columns, we show an
interesting connection between the recovery probability and a well known
problem in geometry, namely the k-set problem. To the best of our knowledge,
this connection appears to be new in the compressive sensing literature. We
empirically show that for large n if the elements of A are drawn i.i.d. from
the normal distribution then the performance of the recovery LP exhibits a
phase transition, i.e., for each k there exists a value m' of m such that the
recovery always succeeds if m > m' and always fails if m < m'. Using the
empirical data we conjecture that m' = nH(k/n)/2, where H(x) = -x log_2(x) -
(1-x) log_2(1-x) is the binary entropy function.
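The conjectured phase-transition point can be evaluated numerically; a minimal sketch (function names ours) using the binary entropy function defined above:

```python
import math

def binary_entropy(x):
    """H(x) = -x log2(x) - (1 - x) log2(1 - x), with H(0) = H(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def conjectured_threshold(n, k):
    """Conjectured number of equations m' = n * H(k/n) / 2 at which the
    l1-minimizing LP switches from always failing to always succeeding."""
    return n * binary_entropy(k / n) / 2

# e.g. n = 1000 variables with a k = 50-sparse binary solution
print(conjectured_threshold(1000, 50))
```

For a fixed sparsity fraction k/n the threshold grows linearly in n, and it is largest at k = n/2, where H(1/2) = 1 gives m' = n/2.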
|
1112.1762
|
Heegard-Berger and Cascade Source Coding Problems with Common
Reconstruction Constraints
|
cs.IT math.IT
|
For the Heegard-Berger (HB) problem with the common reconstruction (CR) constraint, the rate-distortion function is
derived under the assumption that the side information sequences are
(stochastically) degraded. The rate-distortion function is also calculated
explicitly for three examples, namely Gaussian source and side information with
quadratic distortion metric, and binary source and side information with
erasure and Hamming distortion metrics. The rate-distortion function is then
characterized for the HB problem with cooperating decoders and (physically)
degraded side information. For the cascade problem with the CR constraint, the
rate-distortion region is obtained under the assumption that side information
at the final node is physically degraded with respect to that at the
intermediate node. For the latter two cases, it is worth emphasizing that the
corresponding problem without the CR constraint is still open. Outer and inner
bounds on the rate-distortion region are also obtained for the cascade problem
under the assumption that the side information at the intermediate node is
physically degraded with respect to that at the final node. For the three
examples mentioned above, the bounds are shown to coincide. Finally, for the HB
problem, the rate-distortion function is obtained under the more general
requirement of constrained reconstruction, whereby the decoder's estimate must
be recovered at the encoder only within some distortion.
|
1112.1768
|
The Extended UCB Policies for Frequentist Multi-armed Bandit Problems
|
cs.LG math.PR math.ST stat.TH
|
The multi-armed bandit (MAB) problem is a widely studied model in the field
of operations research for sequential decision making and reinforcement
learning. This paper mainly considers the classical MAB model with
heavy-tailed reward distributions. We introduce the extended robust UCB policy,
which is an extension of the pioneering UCB policies proposed by Bubeck et al.
[5] and Lattimore [21]. The previous UCB policies require the knowledge of an
upper bound on specific moments of reward distributions or a particular moment
to exist, which can be hard to acquire or guarantee in practical scenarios. Our
extended robust UCB generalizes Lattimore's seminal work (for moments of
orders $p=4$ and $q=2$) to arbitrarily chosen $p$ and $q$ as long as the two
moments have a known controlled relationship, while still achieving the optimal
regret growth order O(log T), thus broadening the application area of UCB
policies for heavy-tailed reward distributions.
|
1112.1770
|
Polar codes for the m-user multiple access channels
|
cs.IT math.IT
|
Polar codes are constructed for m-user multiple access channels (MAC) whose
input alphabet size is a prime number. The block error probability under
successive cancellation decoding decays exponentially with the square root of
the block length. Although the sum capacity is achieved by this coding scheme,
some points in the symmetric capacity region may not be achieved. In the case
where the channel is a combination of linear channels, we provide a necessary
and sufficient condition characterizing the channels whose symmetric capacity
region is preserved upon the polarization process. We also provide a sufficient
condition for having a total loss in the dominant face.
|
1112.1831
|
Finding Overlapping Communities in Social Networks: Toward a Rigorous
Approach
|
cs.SI cs.DS physics.soc-ph
|
A "community" in a social network is usually understood to be a group of
nodes more densely connected with each other than with the rest of the network.
This is an important concept in most domains where networks arise: social,
technological, biological, etc. For many years algorithms for finding
communities implicitly assumed communities are nonoverlapping (leading to use
of clustering-based approaches) but there is increasing interest in finding
overlapping communities. A barrier to finding communities is that the solution
concept is often defined in terms of an NP-complete problem such as Clique or
Hierarchical Clustering.
This paper seeks to initiate a rigorous approach to the problem of finding
overlapping communities, where "rigorous" means that we clearly state the
following: (a) the object sought by our algorithm (b) the assumptions about the
underlying network (c) the (worst-case) running time.
Our assumptions about the network lie between worst-case and average-case. An
average case analysis would require a precise probabilistic model of the
network, on which there is currently no consensus. However, some plausible
assumptions about network parameters can be gleaned from a long body of work in
the sociology community spanning five decades focusing on the study of
individual communities and ego-centric networks. Thus our assumptions are
somewhat "local" in nature. Nevertheless they suffice to permit a rigorous
analysis of running time of algorithms that recover global structure.
Our algorithms use random sampling similar to that in property testing and
algorithms for dense graphs. However, our networks are not necessarily dense
graphs, not even in local neighborhoods.
Our algorithms explore a local-global relationship between ego-centric and
socio-centric networks that we hope will provide a fruitful framework for
future work both in computer science and sociology.
|
1112.1863
|
Delay Optimal Server Assignment to Symmetric Parallel Queues with Random
Connectivities
|
math.OC cs.IT cs.SY math.IT
|
In this paper, we investigate the problem of assignment of $K$ identical
servers to a set of $N$ parallel queues in a time slotted queueing system. The
connectivity of each queue to each server is randomly changing with time; each
server can serve at most one queue and each queue can be served by at most one
server per time slot. Such queueing systems have been widely applied to model the
scheduling (or resource allocation) problem in wireless networks. It has been
previously proven that Maximum Weighted Matching (MWM) is a throughput optimal
server assignment policy for such queueing systems. In this paper, we prove
that for a symmetric system with i.i.d. Bernoulli packet arrivals and
connectivities, MWM minimizes, in stochastic ordering sense, a broad range of
cost functions of the queue lengths including total queue occupancy (or
equivalently average queueing delay).
|
1112.1872
|
The multicovering radius problem for some types of discrete structures
|
math.CO cs.IT math.IT
|
The covering radius problem is a question in coding theory concerned with
finding the minimum radius $r$ such that, given a code that is a subset of an
underlying metric space, balls of radius $r$ over its code words cover the
entire metric space. Klapper introduced a code parameter, called the
multicovering radius, which is a generalization of the covering radius. In this
paper, we introduce an analogue of the multicovering radius for permutation
codes (cf. Keevash and Ku, 2006) and for codes of perfect matchings (cf. Aw and
Ku, 2012). We apply probabilistic tools to give some lower bounds on the
multicovering radii of these codes. In the process of obtaining these results,
we also correct an error in the proof of the lower bound of the covering radius
that appeared in Keevash and Ku (2006). We conclude with a discussion of the
multicovering radius problem in an even more general context, which offers room
for further research.
|
1112.1937
|
Bootstrapping Intrinsically Motivated Learning with Human Demonstrations
|
cs.LG cs.AI cs.RO
|
This paper studies the coupling of internally guided learning and social
interaction, and more specifically the improvement owing to demonstrations of
the learning by intrinsic motivation. We present Socially Guided Intrinsic
Motivation by Demonstration (SGIM-D), an algorithm for learning in continuous,
unbounded and non-preset environments. After introducing social learning and
intrinsic motivation, we describe the design of our algorithm, before showing
through a fishing experiment that SGIM-D efficiently combines the advantages of
social learning and intrinsic motivation to gain a wide repertoire while being
specialised in specific subspaces.
|
1112.1966
|
Bipartite ranking algorithm for classification and survival analysis
|
cs.LG
|
Unsupervised aggregation of independently built univariate predictors is
explored as an alternative regularization approach for noisy, sparse datasets.
The bipartite ranking algorithm Smooth Rank, which implements this approach, is
introduced. The advantages of this algorithm are demonstrated on two types of
problems. First, Smooth Rank is applied to two-class problems from the
biomedical field, where ranking is often preferable to classification. In
comparison against SVMs with radial and linear kernels, Smooth Rank had the
best performance on 8 out of 12 benchmark datasets. The second area of
application is survival analysis, which is reduced here to bipartite ranking
in a way that allows one to use commonly accepted measures of method
performance. In
comparison of Smooth Rank with Cox PH regression and CoxPath methods, Smooth
Rank proved to be the best on 9 out of 10 benchmark datasets.
|
1112.1968
|
Concentration of Measure Inequalities for Toeplitz Matrices with
Applications
|
cs.IT math.IT
|
We derive Concentration of Measure (CoM) inequalities for randomized Toeplitz
matrices. These inequalities show that the norm of a high-dimensional signal
mapped by a Toeplitz matrix to a low-dimensional space concentrates around its
mean with a tail probability bound that decays exponentially in the dimension
of the range space divided by a quantity which is a function of the signal. For
the class of sparse signals, the introduced quantity is bounded by the sparsity
level of the signal. However, we observe that this bound is highly pessimistic
for most sparse signals and we show that if a random distribution is imposed on
the non-zero entries of the signal, the typical value of the quantity is
bounded by a term that scales logarithmically in the ambient dimension. As an
application of the CoM inequalities, we consider Compressive Binary Detection
(CBD).
|
1112.1989
|
Coded Single-Tone Signaling and Its Application to Resource Coordination
and Interference Management in Femtocell Networks
|
cs.IT math.IT
|
Resource coordination and interference management is the key to achieving the
benefits of femtocell networks. Over-the-air signaling is one of the most
effective means for distributed dynamic resource coordination and interference
management. However, the design of this type of signal is challenging. In this
paper, we address the challenges and propose an effective solution, referred to
as coded single-tone signaling (STS). The proposed coded STS scheme possesses
certain highly desirable properties, such as no dedicated resource requirement
(no overhead), no near-far effect, no inter-signal interference (no
multi-user interference), and low peak-to-average power ratio (deep coverage). In
addition, the proposed coded STS can fully exploit frequency diversity and
provides a means for high quality wideband channel estimation. The coded STS
design is demonstrated through a concrete numerical example. Performance of the
proposed coded STS and its effect on cochannel traffic channels are evaluated
through simulations.
|
1112.1990
|
Efficient Neighbor Discovery for Proximity-Aware Networks
|
cs.IT math.IT
|
In this work, we propose a fast and energy-efficient neighbor discovery
scheme for proximity-aware networks such as wireless ad hoc networks. Discovery
efficiency is accomplished by the use of a special discovery signal that
provides random multiple access with low transmit power consumption and low
synchronization requirement.
|
1112.1994
|
List Decoding Barnes-Wall Lattices
|
cs.IT cs.CC cs.DS math.IT
|
The question of list decoding error-correcting codes over finite fields
(under the Hamming metric) has been widely studied in recent years. Motivated
by the similar discrete structure of linear codes and point lattices in R^N,
and their many shared applications across complexity theory, cryptography, and
coding theory, we initiate the study of list decoding for lattices. Namely: for
a lattice L in R^N, given a target vector r in R^N and a distance parameter d,
output the set of all lattice points w in L that are within distance d of r.
In this work we focus on combinatorial and algorithmic questions related to
list decoding for the well-studied family of Barnes-Wall lattices. Our main
contributions are twofold:
1) We give tight (up to polynomials) combinatorial bounds on the worst-case
list size, showing it to be polynomial in the lattice dimension for any error
radius bounded away from the lattice's minimum distance (in the Euclidean
norm).
2) Building on the unique decoding algorithm of Micciancio and Nicolosi (ISIT
'08), we give a list-decoding algorithm that runs in time polynomial in the
lattice dimension and worst-case list size, for any error radius. Moreover, our
algorithm is highly parallelizable, and with sufficiently many processors can
run in parallel time only poly-logarithmic in the lattice dimension.
In particular, our results imply a polynomial-time list-decoding algorithm
for any error radius bounded away from the minimum distance, thus beating a
typical barrier for error-correcting codes posed by the Johnson radius.
|
1112.1996
|
KL-learning: Online solution of Kullback-Leibler control problems
|
math.OC cs.AI
|
We introduce a stochastic approximation method for the solution of an ergodic
Kullback-Leibler control problem. A Kullback-Leibler control problem is a
Markov decision process on a finite state space in which the control cost is
proportional to a Kullback-Leibler divergence of the controlled transition
probabilities with respect to the uncontrolled transition probabilities. The
algorithm discussed in this work allows for a sound theoretical analysis using
the ODE method. In a numerical experiment the algorithm is shown to be
comparable to the power method and the related Z-learning algorithm in terms of
convergence speed. It may be used as the basis of a reinforcement learning
style algorithm for Markov decision problems.
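The control cost described above can be sketched per state as the KL divergence between the controlled and uncontrolled next-state distributions. This is an illustrative sketch, not the authors' code; the two-state distributions are made up:

```python
import math

def kl_control_cost(controlled, uncontrolled):
    """Per-state control cost: KL(controlled || uncontrolled) over
    next-state distributions, given as dicts on the same state set."""
    cost = 0.0
    for state, p in controlled.items():
        if p > 0.0:
            cost += p * math.log(p / uncontrolled[state])
    return cost

# Pushing probability mass toward state "a" incurs a positive cost;
# leaving the uncontrolled dynamics untouched costs nothing.
passive = {"a": 0.5, "b": 0.5}
active = {"a": 0.9, "b": 0.1}
print(kl_control_cost(active, passive))
print(kl_control_cost(passive, passive))  # 0.0
```

In a KL control problem, the controller balances this divergence penalty against the state costs accumulated along the controlled trajectory.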
|
1112.2015
|
A Framework for Picture Extraction on Search Engine Improved and
Meaningful Result
|
cs.IR
|
Searching is an important tool for information gathering; when information is
in the form of pictures, it plays a major role in enabling quick action and
easy memorization, as humans tend to retain pictures better than text. The
complexity and variety of queries can cause variation in results, leading
users either to learn something new or to become confused. This paper presents
the development of a framework that focuses on resource identification for
users, so that they can get faster access with accurate and concise results on
time, and analyzes the change that becomes evident as the scenario shifts from
text to picture retrieval. The paper also provides a glimpse of how to obtain
accurate picture information in advance, along with an extended framework of
searching technologies. New challenges and design techniques for picture
retrieval systems are also suggested.
|
1112.2020
|
Differentially Private Trajectory Data Publication
|
cs.DB
|
With the increasing prevalence of location-aware devices, trajectory data has
been generated and collected in various application domains. Trajectory data
carries rich information that is useful for many data analysis tasks. Yet,
improper publishing and use of trajectory data could jeopardize individual
privacy. However, it has been shown that existing privacy-preserving trajectory
data publishing methods derived from partition-based privacy models, for
example k-anonymity, are unable to provide sufficient privacy protection.
In this paper, motivated by the data publishing scenario at the Societe de
transport de Montreal (STM), the public transit agency in Montreal area, we
study the problem of publishing trajectory data under the rigorous differential
privacy model. We propose an efficient data-dependent yet differentially
private sanitization algorithm, which is applicable to different types of
trajectory data. The efficiency of our approach comes from adaptively narrowing
down the output domain by building a noisy prefix tree based on the underlying
data. Moreover, as a post-processing step, we make use of the inherent
constraints of a prefix tree to conduct constrained inferences, which lead to
better utility. This is the first paper to introduce a practical solution for
publishing large volume of trajectory data under differential privacy. We
examine the utility of sanitized data in terms of count queries and frequent
sequential pattern mining. Extensive experiments on real-life trajectory data
from the STM demonstrate that our approach maintains high utility and is
scalable to large trajectory datasets.
|
1112.2026
|
Future Robotics Database Management System along with Cloud TPS
|
cs.DB cs.RO
|
This paper deals with memory management issues in robotics. In our proposal,
we address one of the major issues in creating humanoids: database design,
which is a complicated part of a robotics schema. We suggest a new concept,
the NoSQL database, for effective data retrieval, so that humanoid robots gain
massive thinking ability when searching for items using chained instructions.
Query transactions in robotics require effective transaction consistency, so
we use a recent technology called CloudTPS, which guarantees full ACID
properties; the robot can then issue its queries as multi-item transactions,
and we obtain data consistency in retrieval. In addition, we include MapReduce
concepts, which split a job among workers so that the data can be processed in
parallel.
|
1112.2028
|
Document Classification Using Expectation Maximization with Semi
Supervised Learning
|
cs.IR
|
As the number of online documents increases, the demand for document
classification to aid the analysis and management of documents is increasing.
Text is cheap, but information, in the form of knowing what classes a document
belongs to, is expensive. The main purpose of this paper is to explain the
expectation maximization technique of data mining for classifying documents and
to show how accuracy improves under a semi-supervised approach. The expectation
maximization algorithm is applied with both supervised and semi-supervised
approaches, and the semi-supervised approach is found to be more accurate and
effective. The main advantage of the semi-supervised approach is the dynamic
generation of new classes. The algorithm first trains a classifier using the
labeled documents and then probabilistically classifies the unlabeled
documents. The car dataset used for evaluation is taken from the UCI
repository, with some modifications made on our side.
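The supervised-then-semi-supervised loop described above can be sketched with a multinomial Naive Bayes model trained by EM. This is a generic illustration under assumed notation (count matrices `X_l`, `X_u`, labels `y_l`), not the paper's implementation:

```python
import numpy as np

def em_naive_bayes(X_l, y_l, X_u, n_classes, n_iter=20):
    """Semi-supervised EM with a multinomial Naive Bayes model (a sketch).

    E-step: assign soft class responsibilities to unlabeled docs.
    M-step: refit priors and word distributions on labeled + soft-labeled data.
    """
    R_l = np.eye(n_classes)[y_l]                       # hard labels, one-hot
    R_u = np.full((X_u.shape[0], n_classes), 1.0 / n_classes)
    for _ in range(n_iter):
        R = np.vstack([R_l, R_u])
        X = np.vstack([X_l, X_u])
        prior = R.sum(axis=0) / R.sum()                # M-step: class priors
        word = R.T @ X + 1.0                           # Laplace smoothing
        word /= word.sum(axis=1, keepdims=True)        # P(word | class)
        log_p = np.log(prior) + X_u @ np.log(word).T   # E-step on unlabeled
        log_p -= log_p.max(axis=1, keepdims=True)
        R_u = np.exp(log_p)
        R_u /= R_u.sum(axis=1, keepdims=True)
    return prior, word, R_u
```

The labeled documents keep their hard labels throughout; only the unlabeled responsibilities are re-estimated each round.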
|
1112.2031
|
Learning Context for Text Categorization
|
cs.IR
|
This paper describes our work on discovering context for text document
categorization. The document categorization approach is derived from a
combination of a learning paradigm known as relation extraction and a technique
known as context discovery. We demonstrate the effectiveness of our
categorization approach using the Reuters-21578 dataset and synthetic
real-world data from the sports domain. Our experimental results indicate that
the learned context greatly improves categorization performance compared to
traditional categorization approaches.
|
1112.2038
|
Fast DOA estimation using wavelet denoising on MIMO fading channel
|
cs.NI cs.IT math.IT
|
This paper presents a tool for the analysis and simulation of
direction-of-arrival (DOA) estimation in wireless mobile communication systems
over fading channels. It reviews two DOA estimation algorithms. The standard
Multiple Signal Classification (MUSIC) algorithm belongs to the subspace-based
methods. An improved MUSIC procedure, Cyclic MUSIC, can automatically classify
signals as desired or undesired based on known spectral correlation properties
and estimate only the desired signal's DOA. In this paper, a DOA estimation
algorithm using de-noising pre-processing based on time-frequency analysis is
proposed and its performance analyzed, with a focus on improving DOA estimation
in low-SNR and high-interference environments. The paper provides a fairly
complete picture of the performance and statistical efficiency of the two
methods above with QPSK signals.
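For reference, the standard (non-cyclic) MUSIC pseudo-spectrum for a half-wavelength uniform linear array can be sketched as follows; this is the textbook subspace method, not the de-noising variant proposed in the paper:

```python
import numpy as np

def music_spectrum(R, n_sources, n_angles=181):
    """MUSIC pseudo-spectrum from the sample covariance R of array
    snapshots (uniform linear array, half-wavelength spacing)."""
    M = R.shape[0]
    _, vecs = np.linalg.eigh(R)               # eigenvalues ascending
    En = vecs[:, : M - n_sources]             # noise subspace
    angles = np.linspace(-90, 90, n_angles)
    spectrum = np.empty(n_angles)
    for i, th in enumerate(angles):
        # steering vector for arrival angle th (degrees)
        a = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.radians(th)))
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum[i] = 1.0 / denom             # peaks where a ⟂ noise subspace
    return angles, spectrum
```

Peaks of the spectrum mark the estimated DOAs; the de-noising pre-processing the paper proposes would be applied to the snapshots before forming R.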
|
1112.2040
|
Recent Trends and Research Issues in Video Association Mining
|
cs.MM cs.DB
|
With ever-growing digital libraries and video databases, it is increasingly
important to understand and mine knowledge from video databases automatically.
Discovering association rules between items in a large video database plays a
considerable role in video data mining research. Based on research and
development in past years, applications of association rule mining are growing
in different domains such as surveillance, meetings, broadcast news, sports,
archives, movies, medical data, as well as personal and online media
collections. The purpose of this paper is to provide a general framework for
mining association rules from video databases. The article also presents the
research issues in video association mining, followed by recent trends.
|
1112.2067
|
Ontology-Based Emergency Management System in a Social Cloud
|
cs.SI
|
The need for Emergency Management continually grows as the population and
exposure to catastrophic failures increase. The ability to offer appropriate
services at these emergency situations can be tackled through group
communication mechanisms. The entities involved in the group communication
include people, organizations, events, locations and essential services. Cloud
computing is an "as a service" style of computing that enables on-demand network
access to a shared pool of resources. So this work focuses on proposing a
social cloud constituting group communication entities using an open source
platform, Eucalyptus. The services are exposed as semantic web services, since
the availability of machine-readable metadata (Ontology) will enable the access
of these services more intelligently. The objective of this paper is to propose
an Ontology-based Emergency Management System in a social cloud and demonstrate
the same using emergency healthcare domain.
|
1112.2071
|
Thematic Analysis and Visualization of Textual Corpus
|
cs.IR
|
The semantic analysis of documents is a domain of intense research at
present. The works in this domain can take several directions and touch several
levels of granularity. In the present work we are specifically interested in
the thematic analysis of textual documents. In our approach, we suggest
studying the variation of theme relevance within a text to identify the major
theme and all the minor themes evoked in the text. This allows us, at a second
level of analysis, to identify relations of thematic association in a textual
corpus. Through the identification and analysis of these association relations,
we suggest generating thematic paths allowing users, within the framework of an
information search system, to explore the corpus according to their themes of
interest and to discover new knowledge by navigating the thematic association
relations.
|
1112.2095
|
Real-time face swapping as a tool for understanding infant
self-recognition
|
cs.AI cs.CV
|
To study the preference of infants for contingency of movements and
familiarity of faces during self-recognition tasks, we built, as an accurate
and instantaneous imitator, a real-time face-swapper for videos. We present a
constraint-free face-swapper based on 3D visual tracking that achieves
real-time performance through parallel computing. Our imitator system is
particularly suited for experiments involving children with Autistic Spectrum
Disorder, who are often strongly disturbed by the constraints of other methods.
|
1112.2112
|
Extreme events and event size fluctuations in biased random walks on
networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
Random walks on discrete lattice models are important for understanding
various types of transport processes. Extreme events, defined as exceedances of
the flux of walkers above a prescribed threshold, have been studied recently in the
context of complex networks. This was motivated by the occurrence of rare
events such as traffic jams, floods, and power black-outs which take place on
networks. In this work, we study extreme events in a generalized random walk
model in which the walk is preferentially biased by the network topology. The
walkers preferentially choose to hop toward the hubs or small degree nodes. In
this setting, we show that extremely large fluctuations in event-sizes are
possible on small degree nodes when the walkers are biased toward the hubs. In
particular, we obtain the distribution of event-sizes on the network. Further,
the probability for the occurrence of extreme events on any node in the network
depends on its 'generalized strength', a measure of the ability of a node to
attract walkers. The 'generalized strength' is a function of the degree of the
node and that of its nearest neighbors. We obtain analytical and simulation
results for the probability of occurrence of extreme events on the nodes of a
network using a generalized random walk model. The result reveals that the
nodes with a larger value of 'generalized strength', on average, display lower
probability for the occurrence of extreme events compared to the nodes with
lower values of 'generalized strength'.
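A toy version of the topology-biased walk can be sketched as follows, where a walker at node u hops to neighbor v with probability proportional to deg(v)**alpha (alpha > 0 biases toward hubs, alpha < 0 toward small-degree nodes); the function and parameter names are illustrative, and the per-step occupancy counts are the "flux" whose exceedances define extreme events:

```python
import random
from collections import Counter

def biased_walk_flux(adj, n_walkers, steps, alpha, rng):
    """Simulate degree-biased walkers and record per-step node occupancy.

    adj must list every node as a key, with its neighbor list as value.
    """
    deg = {u: len(vs) for u, vs in adj.items()}
    pos = [rng.choice(list(adj)) for _ in range(n_walkers)]
    flux = [Counter(pos)]                      # occupancy at step 0
    for _ in range(steps):
        new = []
        for u in pos:
            nbrs = adj[u]
            w = [deg[v] ** alpha for v in nbrs]  # bias by neighbor degree
            new.append(rng.choices(nbrs, weights=w)[0])
        pos = new
        flux.append(Counter(pos))
    return flux
```

Thresholding the occupancy of a node across the recorded steps gives an empirical event-size distribution for that node.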
|
1112.2113
|
Incremental Slow Feature Analysis: Adaptive and Episodic Learning from
High-Dimensional Input Streams
|
cs.AI
|
Slow Feature Analysis (SFA) extracts features representing the underlying
causes of changes within a temporally coherent high-dimensional raw sensory
input signal. Our novel incremental version of SFA (IncSFA) combines
incremental Principal Components Analysis and Minor Components Analysis. Unlike
standard batch-based SFA, IncSFA adapts along with non-stationary environments,
is amenable to episodic training, is not corrupted by outliers, and is
covariance-free. These properties make IncSFA a generally useful unsupervised
preprocessor for autonomous learning agents and robots. In IncSFA, the CCIPCA
and MCA updates take the form of Hebbian and anti-Hebbian updating, extending
the biological plausibility of SFA. In both single node and deep network
versions, IncSFA learns to encode its input streams (such as high-dimensional
video) by informative slow features representing meaningful abstract
environmental properties. It can handle cases where batch SFA fails.
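For contrast with the incremental algorithm, plain batch linear SFA can be sketched in a few lines: whiten the input, then take the directions in which the time derivative has least variance. This is the standard batch method the abstract compares against, not IncSFA itself:

```python
import numpy as np

def linear_sfa(X):
    """Batch linear SFA sketch: rows of X are time-ordered samples.

    Returns features ordered slowest-first (unit variance by whitening).
    """
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(Xc)
    d, U = np.linalg.eigh(C)
    W = U / np.sqrt(d)                  # whitening matrix (assumes full rank)
    Z = Xc @ W                          # whitened signal, identity covariance
    dZ = np.diff(Z, axis=0)             # discrete time derivative
    Cd = dZ.T @ dZ / len(dZ)
    _, V = np.linalg.eigh(Cd)           # ascending eigenvalues: slowest first
    return Z @ V
```

IncSFA replaces the two batch eigendecompositions with incremental PCA (for whitening) and minor components analysis (for the slowest direction), which is what makes it adaptive and covariance-free.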
|
1112.2137
|
Compact Weighted Class Association Rule Mining using Information Gain
|
cs.DB
|
Weighted association rule mining reflects the semantic significance of an item
by considering its weight. Classification constructs a classifier and predicts
the class of new data instances. This paper proposes a compact weighted class
association rule mining method, which applies weighted association rule mining
to classification and constructs an efficient weighted associative classifier.
The proposed associative classification algorithm chooses one non-class
informative attribute from the dataset, and all weighted class association
rules are generated based on that attribute. The weight of an item is
considered as one of the parameters in generating the weighted class
association rules, and the proposed algorithm calculates the weights using the
HITS model. Experimental results show that the proposed system generates fewer,
higher-quality rules, which improves classification accuracy.
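The HITS weighting the abstract mentions can be illustrated with the plain HITS power iteration on a small directed graph; how the paper maps transactions and items to graph nodes is not specified here, so this is only the generic iteration:

```python
def hits(adj, n_iter=50):
    """Plain HITS: hub and authority scores on a directed graph given as
    {node: [nodes it points to]}. Scores are L2-normalized each round."""
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    hub = {u: 1.0 for u in nodes}
    auth = {u: 1.0 for u in nodes}
    for _ in range(n_iter):
        # authority: sum of hub scores of nodes pointing at u
        auth = {u: sum(hub[v] for v in nodes if u in adj.get(v, []))
                for u in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {u: a / norm for u, a in auth.items()}
        # hub: sum of authority scores of nodes u points at
        hub = {u: sum(auth[v] for v in adj.get(u, [])) for u in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {u: h / norm for u, h in hub.items()}
    return hub, auth
```

In an item-weighting scheme of this kind, the converged authority scores would serve as the item weights fed into rule generation.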
|
1112.2144
|
An Information Theoretic Analysis of Decision in Computer Chess
|
cs.AI cs.IT math.IT
|
The basis of the method proposed in this article is the idea that information
is one of the most important factors in strategic decisions, including
decisions in computer chess and other strategy games. The model proposed in
this article and the algorithm described are based on the idea of an
information-theoretic basis for decisions in strategy games. The model
generalizes and provides a mathematical justification for one of the most
popular search algorithms used in leading computer chess programs, the
fractional ply scheme. However, despite its success in leading computer chess
applications, little has been published about this method until now. The
article grounds the method in the axioms of information theory, then derives
the principles used in programming the search and describes the form of the
coefficients mathematically. One of the most important parameters of the
fractional ply search is derived from fundamental principles; until now this
coefficient has usually been handcrafted or determined from intuition or data
mining. There is a deep information-theoretic justification for such a
parameter. In one sense the method proposed is a generalization of previous
methods. More importantly, it shows why the fractional depth ply scheme is so
powerful: the algorithm navigates along the lines where the highest information
gain is possible. A working, original implementation of this algorithm has been
written and tested, and is provided in the appendix. The article is essentially
self-contained and gives the necessary background knowledge and references. The
assumptions are intuitive and in the direction expected and described
intuitively by great chess champions.
|
1112.2149
|
Information and Search in Computer Chess
|
cs.AI cs.IT math.IT
|
The article describes a model of chess based on information theory. A
mathematical model of the partial depth scheme is outlined and a formula for
the partial depth added for each ply is calculated from the principles of the
model. An implementation of alpha-beta with partial depth is given. The method
is tested using an experimental strategy having as objective to show the effect
of allocation of a higher amount of search resources on areas of the search
tree with higher information. The search proceeds in the direction of lines
with higher information gain. The effects on search performance of allocating
higher search resources on lines with higher information gain are tested
experimentaly and conclusive results are obtained. In order to isolate the
effects of the partial depth scheme no other heuristic is used.
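A generic sketch of negamax alpha-beta with a move-dependent fractional ply cost (so that some lines are searched deeper than others) might look as follows; the callback names are placeholders, not the article's implementation:

```python
def alphabeta_partial(state, depth, alpha, beta,
                      moves, apply_move, evaluate, ply_cost):
    """Negamax alpha-beta where each move consumes ply_cost(state, move)
    plies instead of a fixed 1.0, so 'informative' (e.g. forcing) moves
    can be assigned a smaller cost and their lines searched deeper."""
    ms = moves(state)
    if depth <= 0 or not ms:
        return evaluate(state)            # score from the side to move
    best = alpha
    for m in ms:
        score = -alphabeta_partial(apply_move(state, m),
                                   depth - ply_cost(state, m),
                                   -beta, -best,
                                   moves, apply_move, evaluate, ply_cost)
        if score > best:
            best = score
            if best >= beta:              # beta cutoff
                break
    return best
```

With a constant cost of 1.0 this reduces to ordinary fixed-depth alpha-beta; a cost below 1.0 on a move extends its subtree relative to the nominal depth budget.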
|
1112.2155
|
A Concurrency Control Method Based on Commitment Ordering in Mobile
Databases
|
cs.DB
|
Disconnection of mobile clients from the server, at unpredictable times and
for unknown durations, due to the mobility of mobile clients, is the most
important challenge for concurrency control in mobile databases with a
client-server model. Applying common classic pessimistic concurrency control
methods (like 2PL) in mobile databases leads to long-duration blocking and
increased transaction waiting times. Because of the high rate of aborted
transactions, purely optimistic methods are also inappropriate in mobile
databases. In this article, the OPCOT concurrency control algorithm, based on
the optimistic concurrency control method, is introduced. Reducing
communication between mobile client and server, decreasing the blocking and
deadlock rates of transactions, and increasing the degree of concurrency are
the most important motivations for using an optimistic method as the basis of
the OPCOT algorithm. To reduce the transaction abort rate, a timestamp is
assigned to each transaction's operations at execution time. To check the
commitment ordering property of the scheduler, the assigned timestamps are used
by the server at commit time. In this article, the serializability of the OPCOT
scheduler is proved using a serializability graph. Simulation results show that
the OPCOT algorithm decreases the abort rate and waiting time of transactions
compared to 2PL and optimistic algorithms.
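A toy illustration of optimistic timestamp validation at commit time (in the spirit of, but not identical to, OPCOT) might look like this; all class and method names are illustrative:

```python
class OptimisticValidator:
    """Each data access records a timestamp; at commit, the server aborts
    the transaction if any item it accessed was committed by another
    transaction after that access (i.e. commitment order would conflict
    with access order)."""

    def __init__(self):
        self.committed_at = {}               # item -> last commit timestamp
        self.clock = 0                       # server logical clock

    def access(self, txn, item):
        self.clock += 1
        txn.setdefault(item, self.clock)     # remember first-access time
        return self.clock

    def try_commit(self, txn):
        if any(self.committed_at.get(item, 0) > ts for item, ts in txn.items()):
            return False                     # conflicting commit intervened
        self.clock += 1
        for item in txn:
            self.committed_at[item] = self.clock
        return True
```

Validation happens entirely at the server, so a disconnected client blocks nobody; it simply risks an abort when it reconnects and tries to commit.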
|
1112.2183
|
The Expert System Designed to Improve Customer Satisfaction
|
cs.NE
|
Customer Relationship Management becomes a leading business strategy in
highly competitive business environment. It aims to enhance the performance of
the businesses by improving the customer satisfaction and loyalty. The
objective of this paper is to improve customer satisfaction on product's colors
and design with the help of the expert system developed by using Artificial
Neural Networks. The expert system's role is to capture the knowledge of the
experts and the data from the customer requirements, and then, process the
collected data and form the appropriate rules for choosing product's colors and
design. In order to identify the hidden pattern of the customer's needs, the
Artificial Neural Networks technique has been applied to classify the colors
and design based upon a list of selected information. Moreover, the expert
system has the capability to make decisions in ranking the scores of the colors
and design presented in the selection. In addition, the expert system has been
validated with different customer types.
|
1112.2187
|
Chinese Restaurant Game - Part II: Applications to Wireless Networking,
Cloud Computing, and Online Social Networking
|
cs.SI cs.LG
|
In Part I of this two-part paper [1], we proposed a new game, called Chinese
restaurant game, to analyze the social learning problem with negative network
externality. The best responses of agents in the Chinese restaurant game with
imperfect signals are constructed through a recursive method, and the influence
of both learning and network externality on the utilities of agents is studied.
In Part II of this two-part paper, we illustrate three applications of Chinese
restaurant game in wireless networking, cloud computing, and online social
networking. For each application, we formulate the corresponding problem as a
Chinese restaurant game and analyze how agents learn and make strategic
decisions in the problem. The proposed method is compared with four
common-sense methods in terms of agents' utilities and the overall system
performance through simulations. We find that the proposed Chinese restaurant
game theoretic approach indeed helps agents make better decisions and improves
the overall system performance. Furthermore, agents with different decision
orders have different advantages in terms of their utilities, which also
verifies the conclusions drawn in Part I of this two-part paper.
|
1112.2188
|
Chinese Restaurant Game - Part I: Theory of Learning with Negative
Network Externality
|
cs.SI cs.LG
|
In a social network, agents are intelligent and have the capability to make
decisions to maximize their utilities. They can either make wise decisions by
taking advantage of other agents' experiences through learning, or make
decisions earlier to avoid competition from huge crowds. Both of these
effects, social learning and negative network externality, play important roles
in the decision process of an agent. While there are existing works on either
social learning or negative network externality, a general study considering
both of these two contradictory effects is still limited. We find that the Chinese
restaurant process, a popular random process, provides a well-defined structure
to model the decision process of an agent under these two effects. By
introducing the strategic behavior into the non-strategic Chinese restaurant
process, in Part I of this two-part paper, we propose a new game, called
Chinese Restaurant Game, to formulate the social learning problem with negative
network externality. Through analyzing the proposed Chinese restaurant game, we
derive the optimal strategy of each agent and provide a recursive method to
achieve the optimal strategy. How social learning and negative network
externality influence each other under various settings is also studied through
simulations.
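The non-strategic Chinese restaurant process that the game builds on can be sketched directly; note that in the plain CRP the crowding effect is positive (rich-get-richer), whereas the game introduces strategic agents facing a negative externality:

```python
import random

def chinese_restaurant_process(n_customers, alpha, rng):
    """Non-strategic CRP: customer i joins occupied table t with
    probability proportional to its occupancy, or opens a new table
    with probability proportional to alpha."""
    tables = []       # occupancy count per table
    seating = []      # table index chosen by each customer
    for _ in range(n_customers):
        weights = tables + [alpha]
        choice = rng.choices(range(len(weights)), weights=weights)[0]
        if choice == len(tables):
            tables.append(1)              # open a new table
        else:
            tables[choice] += 1
        seating.append(choice)
    return tables, seating
```

In the game-theoretic version, each arriving agent would instead compute a best response that trades the signal learned from earlier choices against the utility loss from sharing a table.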
|
1112.2239
|
Absence of influential spreaders in rumor dynamics
|
physics.soc-ph cs.SI
|
Recent research [1] has suggested that coreness, and not degree, constitutes
a better topological descriptor for identifying influential spreaders in complex
networks. This hypothesis has been verified in the context of disease
spreading. Here, we instead focus on rumor spreading models, which are more
suited for social contagion and information propagation. To this end, we
perform extensive computer simulations on top of several real-world networks
and find opposite results. Namely, we show that the spreading capabilities of
the nodes do not depend on their $k$-core index, which instead determines
whether or not a given node prevents the diffusion of a rumor to a system-wide
scale. Our findings are relevant both for sociological studies of contagious
dynamics and for the design of efficient commercial viral processes.
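The k-core index ("coreness") that the abstract tests against degree can be computed by repeatedly peeling minimum-degree nodes; a simple sketch suitable for small graphs:

```python
def core_numbers(adj):
    """k-core index of each node, by iterative minimum-degree peeling.

    adj maps each node to its neighbor list (undirected graph).
    """
    deg = {u: len(vs) for u, vs in adj.items()}
    alive = set(adj)
    core = {}
    k = 0
    while alive:
        u = min(alive, key=deg.get)       # current minimum-degree node
        k = max(k, deg[u])                # core level never decreases
        core[u] = k
        alive.remove(u)
        for v in adj[u]:
            if v in alive:
                deg[v] -= 1               # peeling u lowers neighbor degrees
    return core
```

With these indices in hand, one can seed a rumor model at nodes of different coreness and compare final outbreak sizes, as in the simulations the abstract describes.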
|