| id | title | categories | abstract |
|---|---|---|---|
1202.2709
|
Potential Theory for Directed Networks
|
physics.data-an cs.IR cs.SI physics.soc-ph
|
Uncovering the factors underlying network formation is a long-standing
challenge for data mining and network analysis. In particular, the microscopic
organizing principles of directed networks are less understood than those of
undirected networks. This article proposes a hypothesis named potential theory,
which assumes that every directed link corresponds to a decrease of a unit
potential and subgraphs with definable potential values for all nodes are
preferred. Combining the potential theory with the clustering and homophily
mechanisms, it is deduced that the Bi-fan structure consisting of 4 nodes and 4
directed links is the most favored local structure in directed networks. Our
hypothesis receives strong positive support from extensive experiments on 15
directed networks drawn from disparate fields, as indicated by the most
accurate and robust performance of the Bi-fan predictor within the link
prediction framework. In summary, our main contribution is twofold: (i) We
propose a new mechanism for the local organization of directed networks; (ii)
We design the corresponding link prediction algorithm, which not only tests our
hypothesis, but also finds direct applications in missing link prediction
and friendship recommendation.
|
1202.2745
|
Multi-column Deep Neural Networks for Image Classification
|
cs.CV cs.AI
|
Traditional methods of computer vision and machine learning cannot match
human performance on tasks such as the recognition of handwritten digits or
traffic signs. Our biologically plausible deep artificial neural network
architectures can. Small (often minimal) receptive fields of convolutional
winner-take-all neurons yield large network depth, resulting in roughly as many
sparsely connected neural layers as found in mammals between retina and visual
cortex. Only winner neurons are trained. Several deep neural columns become
experts on inputs preprocessed in different ways; their predictions are
averaged. Graphics cards allow for fast training. On the very competitive MNIST
handwriting benchmark, our method is the first to achieve near-human
performance. On a traffic sign recognition benchmark it outperforms humans by a
factor of two. We also improve the state-of-the-art on a plethora of common
image classification benchmarks.
|
1202.2759
|
Iterative Reconstruction of Rank-One Matrices in Noise
|
cs.IT math.IT
|
We consider the problem of estimating a rank-one matrix in Gaussian noise
under a probabilistic model for the left and right factors of the matrix. The
probabilistic model can impose constraints on the factors including sparsity
and positivity that arise commonly in learning problems. We propose a family of
algorithms that reduce the problem to a sequence of scalar estimation
computations. These algorithms are similar to approximate message passing
techniques based on Gaussian approximations of loopy belief propagation that
have been used recently in compressed sensing. Leveraging analysis methods by
Bayati and Montanari, we show that the asymptotic behavior of the algorithm is
described by a simple scalar equivalent model, where the distribution of the
estimates at each iteration is identical to certain scalar estimates of the
variables in Gaussian noise. Moreover, the effective Gaussian noise level is
described by a set of state evolution equations. The proposed approach to
deriving algorithms thus provides a computationally simple and general method
for rank-one estimation problems with a precise analysis in certain
high-dimensional settings.
|
1202.2770
|
Multi-Level Error-Resilient Neural Networks with Learning
|
cs.NE cs.AI cs.IT math.IT
|
The problem of neural network association is to retrieve a previously
memorized pattern from its noisy version using a network of neurons. An ideal
neural network should include three components simultaneously: a learning
algorithm, a large pattern retrieval capacity and resilience against noise.
Prior works in this area usually improve one or two aspects at the cost of the
third.
Our work takes a step forward in closing this gap. More specifically, we show
that by forcing natural constraints on the set of learning patterns, we can
drastically improve the retrieval capacity of our neural network. Moreover, we
devise a learning algorithm whose role is to learn those patterns satisfying
the above mentioned constraints. Finally we show that our neural network can
cope with a fair amount of noise.
|
1202.2771
|
Multi-Scale Matrix Sampling and Sublinear-Time PageRank Computation
|
cs.DS cs.SI
|
A fundamental problem arising in many applications in Web science and social
network analysis is, given an arbitrary approximation factor $c>1$, to output a
set $S$ of nodes that with high probability contains all nodes of PageRank at
least $\Delta$, and no node of PageRank smaller than $\Delta/c$. We call this
problem {\sc SignificantPageRanks}. We develop a nearly optimal, local
algorithm for the problem with runtime complexity $\tilde{O}(n/\Delta)$ on
networks with $n$ nodes. We show that any algorithm for solving this problem
must have runtime of ${\Omega}(n/\Delta)$, rendering our algorithm optimal up
to logarithmic factors.
Our algorithm comes with two main technical contributions. The first is a
multi-scale sampling scheme for a basic matrix problem that could be of
interest on its own. In the abstract matrix problem it is assumed that one can
access an unknown {\em right-stochastic matrix} by querying its rows, where the
cost of a query and the accuracy of the answers depend on a precision parameter
$\epsilon$. At a cost proportional to $1/\epsilon$, the query will return a
list of $O(1/\epsilon)$ entries and their indices that provide an
$\epsilon$-precision approximation of the row. Our task is to find a set that
contains all columns whose sum is at least $\Delta$, and omits any column whose
sum is less than $\Delta/c$. Our multi-scale sampling scheme solves this
problem with cost $\tilde{O}(n/\Delta)$, while traditional sampling algorithms
would take time $\Theta((n/\Delta)^2)$.
Our second main technical contribution is a new local algorithm for
approximating personalized PageRank, which is more robust than the earlier ones
developed in \cite{JehW03,AndersenCL06} and is highly efficient particularly
for networks with large in-degrees or out-degrees. Together with our
multi-scale sampling scheme, we are able to optimally solve the {\sc
SignificantPageRanks} problem.
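The {\sc SignificantPageRanks} task itself is easy to state in code. The sketch
below is a hypothetical baseline of my own, not the paper's local algorithm: it
computes PageRank exactly by power iteration (time proportional to the number
of edges per iteration, far above the $\tilde{O}(n/\Delta)$ target) and then
thresholds at $\Delta/c$, which trivially satisfies both requirements of the
problem.

```python
def pagerank(adj, alpha=0.85, iters=100):
    """Power iteration for PageRank on a dict-of-lists adjacency structure."""
    n = len(adj)
    pr = {v: 1.0 / n for v in adj}
    for _ in range(iters):
        nxt = {v: (1 - alpha) / n for v in adj}
        for v, outs in adj.items():
            if outs:
                share = alpha * pr[v] / len(outs)
                for w in outs:
                    nxt[w] += share
            else:  # dangling node: spread its mass uniformly
                for w in adj:
                    nxt[w] += alpha * pr[v] / n
        pr = nxt
    return pr

def significant_pageranks(adj, delta, c=2.0):
    """Return all nodes of PageRank >= delta/c; this set contains every node
    of PageRank >= delta and omits every node below delta/c, as required."""
    pr = pagerank(adj)
    return {v for v, p in pr.items() if p >= delta / c}
```

Because the baseline thresholds the exact scores at $\Delta/c$, both guarantees
hold with certainty; the interesting part of the paper is achieving them in
sublinear time.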
|
1202.2773
|
Decentralized Multi-agent Plan Repair in Dynamic Environments
|
cs.AI cs.MA
|
Achieving joint objectives by teams of cooperative planning agents requires
significant coordination and communication efforts. For a single-agent system
facing a plan failure in a dynamic environment, arguably, attempts to repair
the failed plan in general do not straightforwardly bring any benefit in terms
of time complexity. However, in multi-agent settings the communication
complexity might be of much higher importance; indeed, a high communication
overhead might even be prohibitive in certain domains. We hypothesize that in
decentralized systems, where coordination is enforced to achieve joint
objectives, attempts to repair failed multi-agent plans should lead to lower
communication overhead than replanning from scratch.
The contribution of the presented paper is threefold. Firstly, we formally
introduce the multi-agent plan repair problem and formally present the core
hypothesis underlying our work. Secondly, we propose three algorithms for
multi-agent plan repair reducing the problem to specialized instances of the
multi-agent planning problem. Finally, we present results of experimental
validation confirming the core hypothesis of the paper.
|
1202.2774
|
Beyond the Bethe Free Energy of LDPC Codes via Polymer Expansions
|
cs.IT cond-mat.stat-mech math-ph math.IT math.MP
|
The loop series provides a formal way to write down corrections to the Bethe
entropy (and/or free energy) of graphical models. We provide methods to
rigorously control such expansions for low-density parity-check codes used over
a highly noisy binary symmetric channel. We prove that in the asymptotic limit
of large size, with high probability, the Bethe expression gives an exact
formula for the entropy (per bit) of the input word conditioned on the output
of the channel. Our methods also apply to more general models.
|
1202.2778
|
Polymer Expansions for Cycle LDPC Codes
|
cs.IT cond-mat.stat-mech math-ph math.IT math.MP
|
We prove that the Bethe expression for the conditional input-output entropy
of cycle LDPC codes on binary symmetric channels above the MAP threshold is
exact in the large block length limit. The analysis relies on methods from
statistical physics. The finite size corrections to the Bethe expression are
expressed through a polymer expansion which is controlled thanks to expander
and counting arguments.
|
1202.2794
|
Query Matrices for Retrieving Binary Vectors Based on the Hamming
Distance Oracle
|
cs.DM cs.IR cs.IT math.IT
|
The Hamming oracle returns the Hamming distance between an unknown binary
$n$-vector $x$ and a binary query $n$-vector $y$. The objective is to determine
$x$ uniquely using a sequence of $m$ queries. What is the minimum number of
queries required in the worst case? We consider the query ratio $m/n$ to be our
figure of merit and derive upper bounds on the query ratio by explicitly
constructing $(m,n)$ query matrices. We show that our recursive and algebraic
construction results in query ratios arbitrarily close to zero. Our
construction is based on codes of constant weight. A decoding algorithm for
recovering the unknown binary vector is also described.
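As a point of comparison for the query-ratio figure of merit, here is a minimal
sketch (my own illustration, not the paper's constant-weight code construction)
that achieves query ratio $(n+1)/n$: one all-zeros query to learn the weight of
$x$, then one unit-vector query per coordinate, using the identity
$d_i = w + 1 - 2x_i$.

```python
def hamming_oracle(x):
    """Return a Hamming-distance oracle for an unknown binary vector x."""
    def oracle(y):
        return sum(xi != yi for xi, yi in zip(x, y))
    return oracle

def recover(oracle, n):
    """Recover x with m = n + 1 queries: the zero vector plus each unit vector."""
    w = oracle([0] * n)       # weight of x
    x = []
    for i in range(n):
        e = [0] * n
        e[i] = 1
        d = oracle(e)         # d = w + 1 - 2*x_i
        x.append((w + 1 - d) // 2)
    return x
```

The paper's contribution is to drive this ratio arbitrarily close to zero,
which the naive scheme above cannot do.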
|
1202.2803
|
Efficient Relay Selection Scheme for Delay-Limited Non-Orthogonal
Hybrid-ARQ Relay Channels
|
cs.IT math.IT
|
We consider a half-duplex wireless relay network with hybrid-automatic
retransmission request (HARQ) and Rayleigh fading channels. In this paper, we
analyze the outage probability of the multi-relay delay-limited HARQ system
with opportunistic relaying scheme in decode-and-forward mode, in which the
\emph{best} relay is selected to transmit the source's regenerated signal. A
simple and distributed relay selection strategy is proposed for multi-relay
HARQ channels. Then, we utilize the non-orthogonal cooperative transmission
between the source and the selected relay for retransmission of the source data
toward the destination if needed, using space-time codes or beamforming
techniques. We analyze the performance of the system. We first derive the
cumulative distribution function (CDF) and probability density function (PDF) of the
selected relay HARQ channels. Then, the CDF and PDF are used to determine the
outage probability in the $l$-th round of HARQ. The outage probability is
required to compute the throughput-delay performance of this half-duplex
opportunistic relaying protocol. The packet delay constraint is represented by
$L$, the maximum number of HARQ rounds. An outage is declared if the packet is
unsuccessful after $L$ HARQ rounds. Furthermore, closed-form upper-bounds on
outage probability are derived and subsequently are used to investigate the
diversity order of the system. Based on the derived upper-bound expressions, it
is shown that the proposed schemes achieve the full spatial diversity order of
$N+1$, where $N$ is the number of potential relays. Our analytical results are
confirmed by simulation results.
|
1202.2826
|
Error Floor Approximation for LDPC Codes in the AWGN Channel
|
cs.IT math.IT
|
This paper addresses the prediction of error floors of low-density
parity-check (LDPC) codes with variable nodes of constant degree in the
additive white Gaussian noise (AWGN) channel. Specifically, we focus on the
performance of the sum-product algorithm (SPA) decoder formulated in the
log-likelihood ratio (LLR) domain. We hypothesize that several published error
floor levels are due to the manner in which decoder implementations handled the
LLRs at high SNRs. We employ an LLR-domain SPA decoder that does not saturate
near-certain messages and find the error rates of our decoder to be lower by at
least several orders of magnitude. We study the behavior of trapping sets (or
near-codewords) that are the dominant cause of the reported error floors.
We develop a refined linear model, based on the work of Sun and others, that
accurately predicts error floors caused by elementary trapping sets for
saturating decoders. Performance results of several codes at several levels of
decoder saturation are presented.
|
1202.2875
|
Uplink Performance Analysis of Multicell MU-MIMO Systems with ZF
Receivers
|
cs.IT math.IT
|
We consider the uplink of a multicell multiuser multiple-input
multiple-output system where the channel experiences both small and large-scale
fading. The data detection is done by using the linear zero-forcing technique,
assuming the base station (BS) has perfect channel state information. We derive
new, exact closed-form expressions for the uplink rate, symbol error rate, and
outage probability per user, as well as a lower bound on the achievable rate.
This bound is very tight and becomes exact in the large-number-of-antennas
limit. We further study the asymptotic system performance in the regimes of
high signal-to-noise ratio (SNR), large number of antennas, and large number of
users per cell. We show that at high SNRs, the system is interference-limited
and hence, we cannot improve the system performance by increasing the transmit
power of each user. Instead, by increasing the number of BS antennas, the
effects of interference and noise can be reduced, thereby improving the system
performance. We demonstrate that, with very large antenna arrays at the BS, the
transmit power of each user can be made inversely proportional to the number of
BS antennas while maintaining a desired quality-of-service. Numerical results
are presented to verify our analysis.
|
1202.2880
|
Approximate Recall Confidence Intervals
|
cs.IR
|
Recall, the proportion of relevant documents retrieved, is an important
measure of effectiveness in information retrieval, particularly in the legal,
patent, and medical domains. Where document sets are too large for exhaustive
relevance assessment, recall can be estimated by assessing a random sample of
documents; but an indication of the reliability of this estimate is also
required. In this article, we examine several methods for estimating two-tailed
recall confidence intervals. We find that the normal approximation in current
use provides poor coverage in many circumstances, even when adjusted to correct
its inappropriate symmetry. Analytic and Bayesian methods based on the ratio of
binomials are generally more accurate, but are inaccurate on small populations.
The method we recommend derives beta-binomial posteriors on retrieved and
unretrieved yield, with fixed hyperparameters, and a Monte Carlo estimate of
the posterior distribution of recall. We demonstrate that this method gives
mean coverage at or near the nominal level, across several scenarios, while
being balanced and stable. We offer advice on sampling design, including the
allocation of assessments to the retrieved and unretrieved segments, and
compare the proposed beta-binomial with the officially reported normal
intervals for recent TREC Legal Track iterations.
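The recommended procedure can be sketched as follows. This is a simplified
reconstruction under assumed uniform Beta(1,1) hyperparameters, and it
approximates the beta-binomial posterior on each segment's yield by a beta
posterior on that segment's relevance rate; the paper's exact hyperparameter
and posterior choices may differ.

```python
import random

def recall_interval(n_ret, r_ret, N_ret, n_un, r_un, N_un,
                    level=0.95, draws=10000, seed=0):
    """Monte Carlo interval for recall from random samples of the retrieved
    and unretrieved segments (n_* assessed, r_* relevant, N_* segment sizes),
    using Beta(1,1)-prior posteriors on each segment's relevance rate."""
    rng = random.Random(seed)
    rec = []
    for _ in range(draws):
        p_ret = rng.betavariate(r_ret + 1, n_ret - r_ret + 1)
        p_un = rng.betavariate(r_un + 1, n_un - r_un + 1)
        yield_ret = p_ret * N_ret      # posterior draw of relevant retrieved
        yield_un = p_un * N_un         # posterior draw of relevant missed
        rec.append(yield_ret / (yield_ret + yield_un))
    rec.sort()
    lo = rec[int((1 - level) / 2 * draws)]
    hi = rec[int((1 + level) / 2 * draws) - 1]
    return lo, hi
```

For example, assessing 100 documents from each segment with 80 and 5 relevant
respectively yields a two-tailed interval around the point estimate of recall.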
|
1202.2887
|
Semi-Quantitative Group Testing
|
cs.IT math.IT
|
We consider a novel group testing procedure, termed semi-quantitative group
testing, motivated by a class of problems arising in genome sequence
processing. Semi-quantitative group testing (SQGT) is a non-binary pooling
scheme that may be viewed as a combination of an adder model followed by a
quantizer. For the new testing scheme we define the capacity and evaluate it
for some special choices of parameters using information-theoretic methods. We
also define a new class of disjunct codes suitable for SQGT, termed SQ-disjunct
codes, and provide both explicit and probabilistic code construction methods
for SQGT with simple decoding algorithms.
|
1202.2888
|
Exploiting the `Web of Trust' to improve efficiency in collaborative
networks
|
cs.SI
|
Maintaining high quality content is one of the foremost objectives of any
web-based collaborative service that depends on a large number of users. In
such systems, it is nearly impossible for automated scripts to judge semantics,
just as it is unrealistic to expect all editors to review the content. This
catalyzes the need for trust-based mechanisms to ensure the quality of an
article immediately after an
edit. In this paper, we build on previous work and develop a framework based on
the `web of trust' concept to calculate satisfaction scores for all users
without the need for perusing the article. We derive some bounds for systems
based on our mechanism and show that the optimization problem of selecting the
best users to review an article is NP-Hard. Extensive simulations validate our
model and results, and show that trust-based mechanisms are essential to
improve efficiency in any online collaborative editing platform.
|
1202.2892
|
Recommender System Based on Algorithm of Bicluster Analysis RecBi
|
cs.AI cs.IR stat.ML
|
In this paper we propose two new algorithms based on biclustering analysis,
which can be used as the basis of a recommender system for the educational
orientation of Russian school graduates. The first algorithm was designed to
help students make a choice between different university faculties when some of
their preferences are known. The second algorithm was developed for the special
situation when nothing is known about their preferences. The final version of
this recommender system will be used by the Higher School of Economics.
|
1202.2895
|
Concept Relation Discovery and Innovation Enabling Technology (CORDIET)
|
cs.AI cs.IR stat.ML
|
Concept Relation Discovery and Innovation Enabling Technology (CORDIET) is a
toolbox for gaining new knowledge from unstructured text data. At the core of
CORDIET is the C-K theory which captures the essential elements of innovation.
The tool uses Formal Concept Analysis (FCA), Emergent Self Organizing Maps
(ESOM) and Hidden Markov Models (HMM) as main artifacts in the analysis
process. The user can define temporal, text mining and compound attributes. The
text mining attributes are used to analyze the unstructured text in documents,
the temporal attributes use these documents' timestamps for analysis. The
compound attributes are XML rules based on text mining and temporal attributes.
The user can cluster objects with object-cluster rules and can chop the data in
pieces with segmentation rules. The artifacts are optimized for efficient data
analysis; object labels in the FCA lattice and ESOM map contain a URL on which
the user can click to open the selected document.
|
1202.2903
|
Scaling Laws in Human Language
|
physics.data-an cs.IR physics.soc-ph
|
Zipf's law on word frequency is observed in English, French, Spanish,
Italian, and so on, yet it does not hold for Chinese, Japanese or Korean
characters. A model for the writing process is proposed to explain this
difference, which takes into account the effects of finite vocabulary size.
Experiments, simulations and an analytical solution agree well with each other.
The results show that the frequency distribution follows a power law with
exponent equal to 1, at which the corresponding Zipf exponent diverges. In
fact, the distribution obeys an exponential form in the Zipf plot. Deviating
from Heaps' law, the number of distinct words grows with the text length in
three stages: it grows linearly in the beginning, then turns to a logarithmic
form, and eventually saturates. This work refines the previous understanding
of Zipf's law and Heaps' law in language systems.
|
1202.2907
|
The Weight Enumerator of Some Irreducible Cyclic Codes
|
cs.CR cs.IT math.IT
|
Irreducible cyclic codes are one of the largest known classes of block codes
which have been investigated for a long time. However, their weight
distributions are known only for a few cases. In this paper, a class of
irreducible cyclic codes is studied and their weight distributions are
determined. Moreover, all codewords of some irreducible cyclic codes are
obtained through programming in order to explain their distributions. The
number of distinct nonzero weights in the codes dealt with in this paper
varies among 1, 2, 3, 6, and 8.
|
1202.2926
|
Detection of Calendar-Based Periodicities of Interval-Based Temporal
Patterns
|
cs.DB
|
We present a novel technique to identify calendar-based (annual, monthly and
daily) periodicities of an interval-based temporal pattern. An interval-based
temporal pattern is a pattern that occurs across a time-interval, then
disappears for some time, recurs across another time-interval, and so on.
Given the sequence of time-intervals in which an interval-based
temporal pattern has occurred, we propose a method for identifying the extent
to which the pattern is periodic with respect to a calendar cycle. In
comparison to previous work, our method is asymptotically faster. We also show
an interesting relationship between periodicities across different levels of
any hierarchical timestamp (year/month/day, hour/minute/second etc.).
|
1202.2928
|
The Diffusion of Networking Technologies
|
cs.SI cs.DS cs.NI physics.soc-ph
|
There has been significant interest in the networking community on the impact
of cascade effects on the diffusion of networking technology upgrades in the
Internet. Thinking of the global Internet as a graph, where each node
represents an economically-motivated Internet Service Provider (ISP), a key
problem is to determine the smallest set of nodes that can trigger a cascade
that causes every other node in the graph to adopt the protocol. We design the
first approximation algorithm with a provable performance guarantee for this
problem, in a model that captures the following key issue: a node's decision to
upgrade should be influenced by the decisions of the remote nodes it wishes to
communicate with.
Given an internetwork $G(V,E)$ and a threshold function $\theta$, we assume
that node $u$ activates (upgrades to the new technology) when it is adjacent to
a connected component of active nodes in $G$ of size exceeding node $u$'s
threshold $\theta(u)$. Our objective is to choose the smallest set of nodes
that can cause
the rest of the graph to activate. Our main contribution is an approximation
algorithm based on linear programming, which we complement with computational
hardness results and a near-optimum integrality gap. Our algorithm, which does
not rely on submodular optimization techniques, also highlights the substantial
algorithmic difference between our problem and similar questions studied in the
context of social networks.
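Under these definitions, the cascade closure of a given seed set (though not
the seed-selection problem itself, which the paper attacks with linear
programming) can be sketched directly; the code below is an illustration of
the activation model, not of the paper's algorithm.

```python
from collections import deque

def active_components(adj, active):
    """Connected components of the subgraph induced by the active nodes."""
    comps, seen = [], set()
    for v in active:
        if v in seen:
            continue
        comp, q = {v}, deque([v])
        seen.add(v)
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w in active and w not in seen:
                    seen.add(w)
                    comp.add(w)
                    q.append(w)
        comps.append(comp)
    return comps

def cascade(adj, theta, seeds):
    """Close a seed set under the rule: an inactive node activates when it is
    adjacent to an active connected component of size exceeding its threshold."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        comps = active_components(adj, active)
        for v in adj:
            if v in active:
                continue
            for comp in comps:
                if len(comp) > theta[v] and any(w in comp for w in adj[v]):
                    active.add(v)
                    changed = True
                    break
    return active
```

On a path 0-1-2-3 with all thresholds equal to 1, seeding {0, 1} activates the
whole graph, while seeding {0} alone activates nothing further.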
|
1202.2944
|
Diversity Analysis, Code Design and Tight Error Rate Lower Bound for
Binary Joint Network-Channel Coding
|
cs.IT math.IT
|
Joint network-channel codes (JNCC) can improve the performance of
communication in wireless networks, by combining, at the physical layer, the
channel codes and the network code as an overall error-correcting code. JNCC is
increasingly proposed as an alternative to a standard layered construction,
such as the OSI-model. The main performance metrics for JNCCs are scalability
to larger networks and error rate. The diversity order is one of the most
important parameters determining the error rate. The literature on JNCC is
growing, but a rigorous diversity analysis is lacking, mainly because of the
many degrees of freedom in wireless networks, which makes it very hard to prove
general statements on the diversity order. In this paper, we consider a network
with slowly varying fading point-to-point links, where all sources also act as
relay and additional non-source relays may be present. We propose a general
structure for JNCCs to be applied in such a network. In the relay phase, each
relay transmits a linear transform of a set of source codewords. Our main
contributions are the proposition of an upper and lower bound on the diversity
order, a scalable code design and a new lower bound on the word error rate to
assess the performance of the network code. The lower bound on the diversity
order is only valid for JNCCs where the relays transform only two source
codewords. We then validate this analysis with an example which compares the
JNCC performance to that of a standard layered construction. Our numerical
results suggest that as networks grow, it is difficult to perform significantly
better than a standard layered construction, both on a fundamental level,
expressed by the outage probability, and on a practical level, expressed by the
word error rate.
|
1202.2963
|
Maximum Multiflow in Wireless Network Coding
|
cs.IT math.IT
|
In a multihop wireless network, wireless interference is crucial to the
maximum multiflow (MMF) problem, which studies the maximum throughput between
multiple pairs of sources and sinks. In this paper, we observe that network
coding could help to decrease the impacts of wireless interference, and propose
a framework to study the MMF problem for multihop wireless networks with
network coding. Firstly, a network model is set up to describe the new conflict
relations modified by network coding. Then, we formulate a linear programming
problem to compute the maximum throughput and show its superiority over that of
networks without coding. Finally, the MMF problem in wireless network coding is
shown to be NP-hard and a polynomial approximation algorithm is proposed.
|
1202.2998
|
Fast Adaptive S-ALOHA Scheme for Event-driven Machine-to-Machine
Communications
|
cs.IT math.IT
|
Machine-to-Machine (M2M) communication is now playing a market-changing role
in a wide range of business sectors. However, in event-driven M2M communications,
a large number of devices activate within a short period of time, which in turn
causes high radio congestions and severe access delay. To address this issue,
we propose a Fast Adaptive S-ALOHA (FASA) scheme for M2M communication systems
with bursty traffic. The statistics of consecutive idle and collision slots,
rather than the observation in a single slot, are used in FASA to accelerate
the tracking process of network status. Furthermore, the fast convergence
property of FASA is guaranteed by using drift analysis. Simulation results
demonstrate that the proposed FASA scheme achieves near-optimal performance in
reducing access delay, outperforming traditional additive schemes such as
PB-ALOHA. Moreover, compared to multiplicative schemes, FASA shows its
robustness even under heavy traffic load in addition to better delay
performance.
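For context, a plain backlog-tracking controller for slotted ALOHA, in the
spirit of the baselines the paper compares against, can be simulated in a few
lines. The FASA update rule itself is not reproduced here; the function name
and the pseudo-Bayesian update constants below are illustrative assumptions.

```python
import math
import random

def slotted_aloha(arrivals, slots, seed=0):
    """Simulate slotted ALOHA with a pseudo-Bayesian backlog estimate n_hat:
    each backlogged node transmits with probability min(1, 1/n_hat), and n_hat
    is updated from the observed slot outcome (idle/success/collision)."""
    rng = random.Random(seed)
    backlog = arrivals               # packets waiting at the burst's start
    n_hat = max(1.0, float(arrivals))
    served = 0
    for _ in range(slots):
        p = min(1.0, 1.0 / n_hat)
        tx = sum(1 for _ in range(backlog) if rng.random() < p)
        if tx == 1:                  # success: one packet departs
            backlog -= 1
            served += 1
            n_hat = max(1.0, n_hat - 1)
        elif tx == 0:                # idle: fewer backlogged than estimated
            n_hat = max(1.0, n_hat - 1)
        else:                        # collision: raise the estimate
            n_hat += 1.0 / (math.e - 2)
        if backlog == 0:
            break
    return served
```

With 20 simultaneously activated devices, such a controller clears the burst
in a number of slots on the order of the burst size divided by the ALOHA
throughput of roughly $1/e$.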
|
1202.3021
|
No-reference image quality assessment through the von Mises distribution
|
cs.CV
|
An innovative way of calculating the von Mises distribution (VMD) of image
entropy is introduced in this paper. The VMD's concentration parameter and some
fitness parameter that will be later defined, have been analyzed in the
experimental part for determining their suitability as a image quality
assessment measure in some particular distortions such as Gaussian blur or
additive Gaussian noise. To achieve such measure, the local R\'{e}nyi entropy
is calculated in four equally spaced orientations and used to determine the
parameters of the von Mises distribution of the image entropy. Considering
contextual images, experimental results after applying this model show that the
best-in-focus noise-free images are associated with the highest values for the
von Mises distribution concentration parameter and the highest approximation of
image data to the von Mises distribution model. Our von Mises fitness
parameter also appears experimentally to be a suitable no-reference image
quality assessment indicator for non-contextual images.
|
1202.3046
|
Segmentation of Offline Handwritten Bengali Script
|
cs.CV cs.AI
|
Character segmentation has long been one of the most critical areas of
optical character recognition process. Through this operation, an image of a
sequence of characters, which may be connected in some cases, is decomposed
into sub-images of individual alphabetic symbols. In this paper, segmentation
of the cursive handwritten script of the world's fourth most popular language,
Bengali, is considered. Unlike English script, Bengali handwritten characters
and their components often encircle the main character, making the conventional
segmentation methodologies inapplicable. Experimental results, using the
proposed segmentation technique, on sample cursive handwritten data containing
218 ideal segmentation points show a success rate of 97.7%. Further
feature-analysis on these segments may lead to actual recognition of
handwritten cursive Bengali script.
|
1202.3059
|
Synchronization in Scale-Free Networks with Degree Correlation
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
In this paper we study a model of a synchronization process on scale-free
networks with degree-degree correlations. This model was already studied on
this kind of network without correlations by Pastore y Piontti {\it et al.},
Phys. Rev. E {\bf 76}, 046117 (2007). Here, we study the effects of the
degree-degree correlation on the behavior of the load fluctuations $W_s$ in the
steady state. We find that for assortative networks there exists a specific
correlation at which the system is optimally synchronized. In addition, we find
that close to this optimal value the fluctuations do not depend on the system
size, and therefore the system becomes fully scalable. This result could be
very important for some technological applications. On the other hand, far from
the optimal correlation, $W_s$ scales logarithmically with the system size.
|
1202.3062
|
Correlated dynamics in egocentric communication networks
|
physics.soc-ph cs.SI
|
We investigate the communication sequences of millions of people through two
different channels and analyze the fine grained temporal structure of
correlated event trains induced by single individuals. By focusing on
correlations between the heterogeneous dynamics and the topology of egocentric
networks we find that the bursty trains usually evolve for pairs of individuals
rather than for the ego and his/her several neighbors; thus, burstiness is a
property of the links rather than of the nodes. We compare the directional
balance of calls and short messages within bursty trains to the average on the
actual link and show that for the trains of voice calls the imbalance is
significantly enhanced, while for short messages the balance within the trains
increases. These effects can be partly traced back to technological
constraints (for short messages) and partly to human behavioral features
(voice calls). We define a model that is able to reproduce the empirical
results and may help us to understand better the mechanisms driving technology
mediated human communication dynamics.
|
1202.3074
|
Conedy: a scientific tool to investigate Complex Network Dynamics
|
physics.comp-ph cs.SI physics.soc-ph
|
We present Conedy, a performant scientific tool for numerically investigating
dynamics on complex networks. Conedy allows the user to create networks and
provides automatic code generation and compilation to ensure performant
treatment of arbitrary node dynamics. Conedy can be interfaced via an internal
script interpreter or via a Python module.
|
1202.3079
|
Towards minimax policies for online linear optimization with bandit
feedback
|
cs.LG stat.ML
|
We address the online linear optimization problem with bandit feedback. Our
contribution is twofold. First, we provide an algorithm (based on exponential
weights) with a regret of order $\sqrt{d n \log N}$ for any finite action set
with $N$ actions, under the assumption that the instantaneous loss is bounded
by 1. This shaves off an extraneous $\sqrt{d}$ factor compared to previous
works, and gives a regret bound of order $d \sqrt{n \log n}$ for any compact
set of actions. Without further assumptions on the action set, this last bound
is minimax optimal up to a logarithmic factor. Interestingly, our result also
shows that the minimax regret for bandit linear optimization with expert advice
in $d$ dimension is the same as for the basic $d$-armed bandit with expert
advice. Our second contribution is to show how to use the Mirror Descent
algorithm to obtain computationally efficient strategies with minimax optimal
regret bounds in specific examples. More precisely we study two canonical
action sets: the hypercube and the Euclidean ball. In the former case, we
obtain the first computationally efficient algorithm with a $d \sqrt{n}$
regret, thus improving by a factor $\sqrt{d \log n}$ over the best known result
for a computationally efficient algorithm. In the latter case, our approach
gives the first algorithm with a $\sqrt{d n \log n}$ regret, again shaving off
an extraneous $\sqrt{d}$ compared to previous works.
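The abstract states the first algorithm is based on exponential weights; the full-information version of that forecaster can be sketched as follows (the learning rate and loss sequence are illustrative, and the paper's bandit variant additionally builds loss estimates from partial feedback):

```python
import math

def exp_weights(losses, eta):
    """Exponential-weights forecaster over a finite action set.
    losses: list of per-round loss vectors with values in [0, 1].
    Returns the cumulative expected loss of the forecaster."""
    n_actions = len(losses[0])
    weights = [1.0] * n_actions
    total = 0.0
    for loss in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        total += sum(p * l for p, l in zip(probs, loss))
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, loss)]
    return total

# Two actions: action 0 always incurs loss 1, action 1 always loss 0,
# so the forecaster's regret against the best action stays bounded.
losses = [[1.0, 0.0]] * 100
regret = exp_weights(losses, eta=0.5)  # best action's total loss is 0
print(round(regret, 3))
```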
|
1202.3102
|
Evolution of Zipf's Law for Indian Urban Agglomerations vis-\`{a}-vis
Chinese Urban Agglomerations
|
physics.soc-ph cs.SI
|
We investigate the rank-size distributions of urban agglomerations in India
between 1981 and 2011. The incidence of a power-law tail is prominent. A
relevant question persists regarding the evolution of the power-law tail
coefficient. We have developed a methodology to meaningfully track the power
law coefficient over time when a country experiences population growth. A
relevant dynamic law, Gibrat's law, is empirically tested in this connection.
We argue that these empirical findings for India stand in contrast with the
findings for China, another country with population growth but a monolithic
political system.
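A power-law tail coefficient of a rank-size distribution is often estimated, roughly, by ordinary least squares on log rank versus log size; a sketch with synthetic city populations (the data and the estimation details are illustrative, not the paper's methodology):

```python
import math

def zipf_coefficient(sizes):
    """Estimate the rank-size power-law exponent b by least squares on
    log(rank) vs log(size), assuming size ~ C * rank^(-b)."""
    sizes = sorted(sizes, reverse=True)
    xs = [math.log(r + 1) for r in range(len(sizes))]
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

# Exact Zipf data: the size of rank r is 1e6 / r, so the exponent is 1.
pops = [1_000_000 / r for r in range(1, 101)]
print(round(zipf_coefficient(pops), 6))  # 1.0
```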
|
1202.3162
|
Social Contagion: An Empirical Study of Information Spread on Digg and
Twitter Follower Graphs
|
cs.SI cs.CY physics.data-an physics.soc-ph
|
Social networks have emerged as a critical factor in information
dissemination, search, marketing, expertise and influence discovery, and
potentially an important tool for mobilizing people. Social media has made
social networks ubiquitous, and also given researchers access to massive
quantities of data for empirical analysis. These data sets offer a rich source
of evidence for studying dynamics of individual and group behavior, the
structure of networks and global patterns of the flow of information on them.
However, in most previous studies, the structure of the underlying networks was
not directly visible but had to be inferred from the flow of information from
one individual to another. As a result, we do not yet understand dynamics of
information spread on networks or how the structure of the network affects it.
We address this gap by analyzing data from two popular social news sites.
Specifically, we extract follower graphs of active Digg and Twitter users and
track how interest in news stories cascades through the graph. We compare and
contrast properties of information cascades on both sites and elucidate what
they tell us about dynamics of information flow on networks.
|
1202.3179
|
Randomization Resilient To Sensitive Reconstruction
|
cs.DB
|
With the randomization approach, sensitive data items of records are
randomized to protect privacy of individuals while allowing the distribution
information to be reconstructed for data analysis. In this paper, we
distinguish between reconstruction that has potential privacy risk, called
micro reconstruction, and reconstruction that does not, called aggregate
reconstruction. We show that the former could disclose sensitive information
about a target individual, whereas the latter is more useful for data analysis
than for privacy breaches. To limit the privacy risk of micro reconstruction,
we propose a privacy definition, called (epsilon,delta)-reconstruction-privacy.
Intuitively, this privacy notion requires that micro reconstruction has a large
error with a large probability. The promise of this approach is that micro
reconstruction is more sensitive to the number of independent trials in the
randomization process than aggregate reconstruction is; therefore, reducing the
number of independent trials helps achieve
(epsilon,delta)-reconstruction-privacy while preserving the accuracy of
aggregate reconstruction. We present an algorithm based on this idea and
evaluate the effectiveness of this approach using real life data sets.
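The split between aggregate and micro reconstruction can be illustrated with classical randomized response on a binary attribute: each individual report is noisy, yet the aggregate distribution is recoverable by inverting the randomization. A minimal sketch (the retention probability and data are illustrative, not the paper's mechanism):

```python
import random

def randomize(values, p):
    """Report each true bit with probability p, otherwise flip it."""
    return [v if random.random() < p else 1 - v for v in values]

def reconstruct_fraction(reported, p):
    """Invert the randomization in aggregate: if f is the observed
    fraction of 1s, the true fraction is (f - (1 - p)) / (2p - 1)."""
    f = sum(reported) / len(reported)
    return (f - (1 - p)) / (2 * p - 1)

random.seed(0)
true = [1] * 3000 + [0] * 7000        # true fraction of 1s is 0.3
noisy = randomize(true, p=0.8)
print(round(reconstruct_fraction(noisy, p=0.8), 2))
```

Each noisy bit alone reveals little about its owner, while the aggregate estimate stays close to 0.3; limiting the number of independent trials, as the paper proposes, degrades micro-level inference faster than this aggregate reconstruction.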
|
1202.3184
|
Asymptotic Behavior of the Maximum and Minimum Singular Value of Random
Vandermonde Matrices
|
math.PR cs.IT math.IT
|
This work examines various statistical distributions in connection with
random Vandermonde matrices and their extension to $d$--dimensional phase
distributions. Upper and lower bound asymptotics for the maximum singular value
are found to be $O(\log^{1/2}{N^{d}})$ and $\Omega((\log N^{d} /(\log \log
N^d))^{1/2})$ respectively where $N$ is the dimension of the matrix,
generalizing the results in \cite{TW}. We further study the behavior of the
minimum singular value of these random matrices. In particular, we prove that
the minimum singular value is at most $N\exp(-C\sqrt{N})$ with high
probability, where $C$ is a constant independent of $N$. Furthermore, the value
of the constant $C$ is determined explicitly. The main result is obtained in
two different ways. One approach uses techniques from stochastic processes and
in particular, a construction related to the Brownian bridge. The other one is
a more direct analytical approach involving combinatorics and complex analysis.
As a consequence, we obtain a lower bound for the maximum absolute value of a
random complex polynomial on the unit circle, which may be of independent
mathematical interest. Lastly, for each sequence of positive integers
${k_p}_{p=1}^{\infty}$ we present a generalized version of the previously
discussed matrices. The classical random Vandermonde matrix corresponds to the
sequence $k_{p}=p-1$. We find a combinatorial formula for their moments and we
show that the limit eigenvalue distribution converges to a probability measure
supported on $[0,\infty)$. Finally, we show that for the sequence $k_p=2^{p}$
the limit eigenvalue distribution is the famous Marchenko--Pastur distribution.
|
1202.3185
|
Improving News Ranking by Community Tweets
|
cs.IR cs.SI
|
Users frequently express their information needs by means of short and
general queries that are difficult for ranking algorithms to interpret
correctly. However, users' social contexts can offer important additional
information about their information needs which can be leveraged by ranking
algorithms to provide augmented, personalized results. Existing methods mostly
rely on users' individual behavioral data such as clickstream and log data, but
as a result suffer from data sparsity and privacy issues. Here, we propose a
Community Tweets Voting Model (CTVM) to re-rank Google and Yahoo news search
results on the basis of open, large-scale Twitter community data. Experimental
results show that CTVM outperforms baseline rankings from Google and Yahoo for
certain online communities. We propose an application scenario of CTVM and
provide an agenda for further research.
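The abstract does not specify CTVM's scoring function; a generic vote-based re-ranking of baseline results by community tweet counts might look as follows (all names and the tie-breaking rule are hypothetical):

```python
def rerank(results, tweet_votes):
    """Re-rank a baseline result list by the number of community tweets
    mentioning each result, breaking ties by the original order."""
    return sorted(results,
                  key=lambda url: (-tweet_votes.get(url, 0),
                                   results.index(url)))

baseline = ["a.com", "b.com", "c.com"]
votes = {"c.com": 12, "a.com": 3}
print(rerank(baseline, votes))  # ['c.com', 'a.com', 'b.com']
```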
|
1202.3188
|
Clustering assortativity, communities and functional modules in
real-world networks
|
physics.soc-ph cs.SI physics.data-an
|
Complex networks of real-world systems are believed to be controlled by
common phenomena, producing structures far from regular or random. Clustering,
community structure and assortative mixing by degree are perhaps among most
prominent examples of the latter. Although generally accepted for social
networks, these properties only partially explain the structure of other
networks. We first show that degree-corrected clustering is, in contrast to
the standard definition, highly assortative. While interesting on its own, we
further note that non-social networks contain connected regions with very low
clustering. Hence, the structure of real-world networks goes beyond
communities. We here investigate the concept of functional modules---groups of
regularly equivalent nodes---and show that such structures could explain the
properties observed in non-social networks. Real-world networks might be
composed of functional modules that are overlaid by communities. We support
the latter by proposing a simple network model that generates scale-free
small-world networks with tunable clustering and degree mixing. The model has
a natural interpretation in many real-world networks, while it also gives
insights into an adequate community extraction framework. We also present an
algorithm for the detection of arbitrary structural modules without any prior
knowledge. The algorithm is shown to be superior to the state of the art,
while application to real-world networks reveals well-supported composites of
different structural modules that are consistent with the underlying systems.
Clear functional modules are identified in all types of networks, including
social ones. Our findings thus expose functional modules as another key
ingredient of complex real-world networks.
|
1202.3215
|
Data quality measurement on categorical data using genetic algorithm
|
cs.DB
|
Data quality on categorical attributes is a difficult problem that has not
received as much attention as its numerical counterpart. Our basic idea is to
employ association rules for the purpose of data quality measurement. Strong
rule generation is an important area of data mining, and association rule
mining can be considered a multi-objective rather than a single-objective
problem. Our main focus is on the rules generated by association rule mining
using a genetic algorithm. The advantage of using a genetic algorithm to
discover high-level prediction rules is that it performs a global search and
copes better with attribute interaction than the greedy rule induction
algorithms often used in data mining. The genetic-algorithm-based approach
utilizes the linkage between association rules and feature selection. In this
paper, we put forward a multi-objective genetic algorithm approach for data
quality on categorical attributes. The results show that our approach performs
well on objectives such as accuracy, completeness, comprehensibility, and
interestingness.
|
1202.3253
|
Small Count Privacy and Large Count Utility in Data Publishing
|
cs.DB
|
While the introduction of differential privacy has been a major breakthrough
in the study of privacy preserving data publication, some recent work has
pointed out a number of cases where it is not possible to limit inference about
individuals. The dilemma that is intrinsic in the problem is the simultaneous
requirement of data utility in the published data. Differential privacy does
not aim to protect information about an individual that can be uncovered even
without the participation of the individual. However, this lack of coverage may
violate the principle of individual privacy. Here we propose a solution by
providing protection to sensitive information, by which we refer to the answers
for aggregate queries with small counts. Previous works based on
$\ell$-diversity can be seen as providing a special form of this kind of
protection. Our method is developed with the further goal of providing a
differential privacy guarantee, and for that we introduce a more refined form
of differential privacy to deal with certain practical issues. Our empirical
studies show that our method can preserve better utilities than a number of
state-of-the-art methods although these methods do not provide the protections
that we provide.
|
1202.3255
|
Scalability of Data Binding in ASP.NET Web Applications
|
cs.DB cs.SE
|
ASP.NET web applications typically employ server controls to provide dynamic
web pages, and data-bound server controls to display and maintain database
data. Most developers use default properties of ASP.NET server controls when
developing web applications, which allows for rapid development of workable
applications. However, creating a high-performance, multi-user, and scalable
web application requires enhancement of server controls using custom-made code.
In this empirical study we evaluate the impact of various technical approaches
for paging and sorting functionality in data-driven ASP.NET web applications:
automatic data paging and sorting in web server controls on web server; paging
and sorting on database server; indexed and non-indexed database columns;
clustered vs. non-clustered indices. We observed significant performance
improvements when custom paging based on an SQL stored procedure and a
clustered index is used.
|
1202.3258
|
Stiffness matrix of manipulators with passive joints: computational
aspects
|
cs.RO
|
The paper focuses on stiffness matrix computation for manipulators with
passive joints, compliant actuators and flexible links. It proposes both
explicit analytical expressions and an efficient recursive procedure that are
applicable in the general case and allow obtaining the desired matrix either in
analytical or numerical form. Advantages of the developed technique and its
ability to produce both singular and non-singular stiffness matrices are
illustrated by application examples that deal with stiffness modeling of two
Stewart-Gough platforms.
|
1202.3261
|
Quick Detection of Nodes with Large Degrees
|
cs.DS cs.SI physics.soc-ph
|
Our goal is to quickly find top $k$ lists of nodes with the largest degrees
in large complex networks. If the adjacency list of the network is known (not
often the case in complex networks), a deterministic algorithm to find a node
with the largest degree requires an average complexity of O(n), where $n$ is
the number of nodes in the network. Even this modest complexity can be very
high for large complex networks. We propose to use a random-walk-based
method. We show theoretically and by numerical experiments that for large
networks the random walk method finds good-quality top lists of nodes with high
probability and with computational savings of orders of magnitude. We also
propose stopping criteria for the random walk method which require very little
knowledge about the structure of the network.
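A random-walk method of this flavor exploits the fact that a simple random walk visits each node with stationary probability proportional to its degree, so the most frequently visited nodes are good top-k candidates. A minimal sketch (the graph, walk length, and seed are illustrative):

```python
import random
from collections import Counter

def random_walk_top_k(adj, k, steps, seed=0):
    """Approximate the top-k highest-degree nodes by counting visits of
    a simple random walk, whose stationary distribution is proportional
    to node degree."""
    rng = random.Random(seed)
    node = rng.choice(list(adj))
    visits = Counter()
    for _ in range(steps):
        node = rng.choice(adj[node])
        visits[node] += 1
    return [n for n, _ in visits.most_common(k)]

# Small graph where the hub node 0 has the largest degree (4).
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
print(random_walk_top_k(adj, k=1, steps=10000))
```

The computational savings come from the walk touching only a small fraction of a large network before the top list stabilizes.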
|
1202.3323
|
Mirror Descent Meets Fixed Share (and feels no regret)
|
cs.LG stat.ML
|
Mirror descent with an entropic regularizer is known to achieve shifting
regret bounds that are logarithmic in the dimension. This is done using either
a carefully designed projection or by a weight sharing technique. Via a novel
unified analysis, we show that these two approaches deliver essentially
equivalent bounds on a notion of regret generalizing shifting, adaptive,
discounted, and other related regrets. Our analysis also captures and extends
the generalized weight sharing technique of Bousquet and Warmuth, and can be
refined in several ways, including improvements for small losses and adaptive
tuning of parameters.
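The weight-sharing side of the comparison is the Fixed Share update named in the title: an exponential-weights step followed by mixing a small fraction of the mass uniformly over experts, which keeps every expert's weight bounded away from zero and enables tracking of shifting best experts. A sketch (the parameters and loss sequence are illustrative):

```python
import math

def fixed_share_step(weights, loss, eta, alpha):
    """One round of Fixed Share: an exponential-weights update followed
    by redistributing a fraction alpha of the mass uniformly."""
    n = len(weights)
    w = [wi * math.exp(-eta * li) for wi, li in zip(weights, loss)]
    z = sum(w)
    w = [wi / z for wi in w]
    return [(1 - alpha) * wi + alpha / n for wi in w]

w = [0.5, 0.5]
for loss in [[1.0, 0.0]] * 20:   # expert 1 is better during this phase
    w = fixed_share_step(w, loss, eta=1.0, alpha=0.1)
print([round(x, 3) for x in w])
```

Note that expert 0 retains at least alpha/n = 0.05 of the mass, so the forecaster can recover quickly if the best expert shifts.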
|
1202.3335
|
An efficient high-quality hierarchical clustering algorithm for
automatic inference of software architecture from the source code of a
software system
|
cs.AI cs.LG cs.SE
|
We present a high-quality algorithm for hierarchical clustering of large
software source code. It effectively breaks down the complexity of tens of
millions of lines of source code, so that a human software engineer can
comprehend a software system at a high level by looking at its architectural
diagram, reconstructed automatically from the source code. The architectural
diagram shows a tree of subsystems with OOP classes in its leaves (in other
words, a nested software decomposition). The tool reconstructs missing
(inconsistent, incomplete, or nonexistent) architectural documentation for a
software system from its source code. This facilitates software maintenance:
change requests can be performed substantially faster. Simply put, this tool
lifts the comprehensible grain of object-oriented software systems from the
OOP class level to the subsystem level. It is estimated that a commercial
tool developed on the basis of this work would reduce software maintenance
expenses to a tenth of current levels, and would allow the implementation of
next-generation software systems that are currently too complex to be within
the range of human comprehension and therefore cannot yet be designed or
implemented. A prototype is implemented in open source:
http://sourceforge.net/p/insoar/code-0/1/tree/
|
1202.3338
|
New constructions of CSS codes obtained by moving to higher alphabets
|
quant-ph cs.IT math.IT
|
We generalize a construction of non-binary quantum LDPC codes over $\F_{2^m}$
due to \cite{KHIS11a} and apply it in particular to toric codes. In this way
we obtain not only codes with better rates than toric codes, but also a
dramatic improvement in the performance of standard iterative decoding.
Moreover, the new codes obtained in this fashion inherit the distance
properties of the underlying toric codes and therefore have a minimum distance
which grows as the square root of the code length for fixed $m$.
|
1202.3399
|
Optimal error of query sets under the differentially-private matrix
mechanism
|
cs.DB cs.CR
|
A common goal of privacy research is to release synthetic data that satisfies
a formal privacy guarantee and can be used by an analyst in place of the
original data. To achieve reasonable accuracy, a synthetic data set must be
tuned to support a specified set of queries accurately, sacrificing fidelity
for other queries.
This work considers methods for producing synthetic data under differential
privacy and investigates what makes a set of queries "easy" or "hard" to
answer. We consider answering sets of linear counting queries using the matrix
mechanism, a recent differentially-private mechanism that can reduce error by
adding complex correlated noise adapted to a specified workload.
Our main result is a novel lower bound on the minimum total error required to
simultaneously release answers to a set of workload queries. The bound reveals
that the hardness of a query workload is related to the spectral properties of
the workload when it is represented in matrix form. The bound is most
informative for $(\epsilon,\delta)$-differential privacy but also applies to
$\epsilon$-differential privacy.
|
1202.3405
|
On the Feasibility of Precoding-Based Network Alignment for Three
Unicast Sessions
|
cs.IT math.IT
|
We consider the problem of network coding across three unicast sessions over
a directed acyclic graph, when each session has min-cut one. Previous work by
Das et al. adapted a precoding-based interference alignment technique,
originally developed for the wireless interference channel, specifically to
this problem. We refer to this approach as precoding-based network alignment
(PBNA). Similar to the wireless setting, PBNA asymptotically achieves half the
minimum cut; different from the wireless setting, its feasibility depends on
the graph structure. Das et al. provided a set of feasibility conditions for
PBNA with respect to a particular precoding matrix. However, the set consisted
of an infinite number of conditions, which is impossible to check in practice.
Furthermore, the conditions were purely algebraic, without interpretation with
regards to the graph structure. In this paper, we first prove that the set of
conditions provided by Das et al. is also necessary for the feasibility of
PBNA with respect to any precoding matrix. Then, using two graph-related
properties and a degree-counting technique, we reduce the set to just four
conditions. This reduction enables an efficient algorithm for checking the
feasibility of PBNA on a given graph.
|
1202.3451
|
The Future of Search and Discovery in Big Data Analytics: Ultrametric
Information Spaces
|
cs.IR stat.ML
|
Consider observation data, comprised of n observation vectors with values on
a set of attributes. This gives us n points in attribute space. Having data
structured as a tree, implied by having our observations embedded in an
ultrametric topology, offers great advantage for proximity searching. If we
have preprocessed data through such an embedding, then an observation's nearest
neighbor is found in constant computational time, i.e. O(1) time. A further
powerful approach is discussed in this work: the inducing of a hierarchy, and
hence a tree, in linear computational time, i.e. O(n) time for n observations.
It is with such a basis for proximity search and best match that we can address
the burgeoning problems of processing very large, and possibly also very high
dimensional, data sets.
|
1202.3461
|
Adaptively Sharing Time-Series with Differential Privacy
|
cs.DB
|
Sharing real-time aggregate statistics of private data is of great value to
the public to perform data mining for understanding important phenomena, such
as influenza outbreaks and traffic congestion. However, releasing time-series
data with a standard differential privacy mechanism has limited utility due to
the high correlation between data values. We propose FAST, a novel framework to
release real-time aggregate statistics under differential privacy based on
filtering and adaptive sampling. To minimize the overall privacy cost, FAST
adaptively samples long time-series according to the detected data dynamics. To
improve the accuracy of data release per time stamp, FAST predicts data values
at non-sampling points and corrects noisy observations at sampling points. Our
experiments with real-world as well as synthetic data sets confirm that FAST
improves the accuracy of released aggregates even under small privacy cost and
can be used to enable a wide range of monitoring applications.
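A much-simplified, non-adaptive version of the sample-then-predict idea can be sketched as follows: add Laplace noise at sampled timestamps and carry the last released value forward elsewhere. Last-value prediction stands in for FAST's filtering and adaptive sampling, and all parameters are illustrative:

```python
import math
import random

def laplace(scale, rng):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def release_series(series, sample_every, epsilon, rng):
    """Release a time series: Laplace noise at sampled points, and the
    last released value at non-sampled points (a simple stand-in for
    the paper's prediction/correction filter)."""
    scale = 1.0 / epsilon          # sensitivity 1 per sampled count
    released, last = [], 0.0
    for t, x in enumerate(series):
        if t % sample_every == 0:
            last = x + laplace(scale, rng)
        released.append(last)
    return released

rng = random.Random(1)
true = [float(50 + (t % 5)) for t in range(100)]
out = release_series(true, sample_every=4, epsilon=1.0, rng=rng)
err = sum(abs(a - b) for a, b in zip(out, true)) / len(true)
print(round(err, 2))
```

Sampling fewer points spends less privacy budget per release; FAST's contribution is choosing those points adaptively and correcting the noisy observations.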
|
1202.3467
|
Joint source-channel coding for a quantum multiple access channel
|
quant-ph cs.IT math.IT
|
Suppose that two senders each obtain one share of the output of a classical,
bivariate, correlated information source. They would like to transmit the
correlated source to a receiver using a quantum multiple access channel. In
prior work, Cover, El Gamal, and Salehi provided a combined source-channel
coding strategy for a classical multiple access channel which outperforms the
simpler "separation" strategy where separate codebooks are used for the source
coding and the channel coding tasks. In the present paper, we prove that a
coding strategy similar to the Cover-El Gamal-Salehi strategy and a
corresponding quantum simultaneous decoder allow for the reliable transmission
of a source over a quantum multiple access channel, as long as a set of
information inequalities involving the Holevo quantity hold.
|
1202.3468
|
Partially-blind Estimation of Reciprocal Channels for AF Two-Way Relay
Networks Employing M-PSK Modulation
|
cs.IT math.IT stat.OT
|
We consider the problem of channel estimation for amplify-and-forward two-way
relays assuming channel reciprocity and M-PSK modulation. In an earlier work, a
partially-blind maximum-likelihood estimator was derived by treating the data
as deterministic unknowns. We prove that this estimator approaches the true
channel with high probability at high signal-to-noise ratio (SNR) but is not
consistent. We then propose an alternative estimator which is consistent and
has similarly favorable high SNR performance. We also derive the Cramer-Rao
bound on the variance of unbiased estimators.
|
1202.3471
|
Quantum Navigation and Ranking in Complex Networks
|
quant-ph cond-mat.stat-mech cs.SI physics.soc-ph
|
Complex networks are formal frameworks capturing the interdependencies
between the elements of large systems and databases. This formalism allows
the use of network navigation methods to rank the importance that each
constituent has for the global organization of the system. A key example is
PageRank navigation, which is at the core of the most used search engine of
the World Wide Web. Inspired by this classical algorithm, we define a quantum
navigation method
providing a unique ranking of the elements of a network. We analyze the
convergence of quantum navigation to the stationary rank of networks and show
that quantumness decreases the number of navigation steps before convergence.
In addition, we show that quantum navigation can resolve degeneracies found
in classical ranks. By implementing the quantum algorithm in real networks, we
confirm these improvements and show that quantum coherence unveils new
hierarchical features about the global organization of complex systems.
|
1202.3473
|
Are we there yet? When to stop a Markov chain while generating random
graphs
|
cs.SI physics.data-an physics.soc-ph
|
Markov chains are a convenient means of generating realizations of networks,
since they require little more than a procedure for rewiring edges. If a
rewiring procedure exists for generating new graphs with specified statistical
properties, then a Markov chain sampler can generate an ensemble of graphs with
prescribed characteristics. However, successive graphs in a Markov chain cannot
be used when one desires independent draws from the distribution of graphs; the
realizations are correlated. Consequently, one runs a Markov chain for N
iterations before accepting the realization as an independent sample. In this
work, we devise two methods for calculating N. They are both based on the
binary "time-series" denoting the occurrence/non-occurrence of edge (u, v)
between vertices u and v in the Markov chain of graphs generated by the
sampler. They differ in their underlying assumptions. We test them on the
generation of graphs with a prescribed joint degree distribution. We find that
N is proportional to |E|, where |E| is the number of edges in the graph. The two
methods are compared by sampling on real, sparse graphs with 10^3 - 10^4
vertices.
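The rewiring procedure in such samplers is typically a degree-preserving double edge swap, and the proposed stopping criteria work with the binary occurrence series of a chosen edge along the chain. A minimal sketch (the graph, chain length, and conservative rejection rule are illustrative):

```python
import random
from collections import Counter

def double_edge_swap(edges, rng):
    """One degree-preserving rewiring step: pick edges (a,b),(c,d) and
    propose (a,d),(c,b); reject if that creates a loop or multi-edge."""
    i, j = rng.sample(range(len(edges)), 2)
    (a, b), (c, d) = edges[i], edges[j]
    present = {frozenset(e) for e in edges}
    if (a == d or c == b
            or frozenset((a, d)) in present or frozenset((c, b)) in present):
        return edges                    # rejected: keep the current graph
    edges = list(edges)
    edges[i], edges[j] = (a, d), (c, b)
    return edges

rng = random.Random(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
series = []                             # occurrence series of edge {0,1}
for _ in range(200):
    edges = double_edge_swap(edges, rng)
    series.append(1 if frozenset((0, 1)) in {frozenset(e) for e in edges}
                  else 0)

deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
print(sorted(deg.values()))  # [2, 2, 3, 3] -- degree sequence preserved
```

The autocorrelation of `series` is the kind of statistic from which a burn-in length N can be estimated.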
|
1202.3492
|
Why does attention to web articles fall with time?
|
cs.IR physics.soc-ph
|
We analyze access statistics of a hundred and fifty blog entries and news
articles, for periods of up to three years. Access rate falls as an inverse
power of time passed since publication. The power law holds for periods of up
to a thousand days. The exponents are different for different blogs and are
distributed between 0.6 and 3.2. We argue that the decay of attention to a web
article is caused by the link to it first dropping down the list of links on
the website's front page, then disappearing from the front page, and
subsequently moving further into the background. Other proposed explanations,
which invoke a novelty factor that decays with time or some intricate theory
of human dynamics, cannot explain all of the experimental observations.
|
1202.3504
|
They Know Where You Live!
|
cs.SI
|
In this paper, we demonstrate the possibility of predicting people's
hometowns by using their geotagged photos posted on the Flickr website. We
employ Kruskal's algorithm to cluster the photos taken by a user and thereby
predict the user's hometown. Our results show that using the social profiles
of photographers allows researchers to predict the locations of their photos
with higher accuracy. This in turn can improve previous methods that were
purely based on the visual features of photos \cite{Hays:im2gps}.
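Kruskal's algorithm yields single-linkage clustering when edges are added in order of increasing distance and merging stops at a distance threshold; a sketch on synthetic photo coordinates (the threshold and points are illustrative, not the paper's data):

```python
import math
from itertools import combinations

def cluster_points(points, max_dist):
    """Single-linkage clustering via Kruskal's algorithm with union-find:
    merge points joined by edges shorter than max_dist."""
    parent = list(range(len(points)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(len(points)), 2))
    for d, i, j in edges:
        if d > max_dist:
            break
        parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Two photo clusters: a dense "hometown" group and a distant trip.
photos = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5)]
groups = cluster_points(photos, max_dist=1.0)
print(sorted(len(g) for g in groups))  # [2, 3]
```

The centroid of the largest cluster would then serve as the hometown estimate.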
|
1202.3505
|
Near-optimal Coresets For Least-Squares Regression
|
cs.DS cs.LG
|
We study (constrained) least-squares regression as well as multiple response
least-squares regression and ask the question of whether a subset of the data,
a coreset, suffices to compute a good approximate solution to the regression.
We give deterministic, low order polynomial-time algorithms to construct such
coresets with approximation guarantees, together with lower bounds indicating
that there is not much room for improvement upon our results.
|
1202.3510
|
Energy Efficiency Optimization for MIMO Broadcast Channels
|
cs.IT math.IT
|
Characterizing the fundamental energy efficiency (EE) limits of MIMO
broadcast channels (BC) is significant for the development of green wireless
communications. We address the EE optimization problem for MIMO-BC in this
paper and consider a practical power model, i.e., one taking into account a
transmit-independent power related to the number of active transmit
antennas. Under this setup, we propose a new optimization approach, in which
the transmit covariance is optimized under fixed active transmit antenna sets,
and then active transmit antenna selection (ATAS) is utilized. During the
transmit covariance optimization, we propose a globally optimal energy
efficient iterative water-filling scheme through solving a series of concave
fractional programs based on the block-coordinate ascent algorithm. After that,
ATAS is employed to determine the active transmit antenna set. Since activating
more transmit antennas can achieve a higher sum-rate but at the cost of larger
transmit-independent power consumption, there exists a tradeoff between the
sum-rate gain and the power consumption. Here ATAS can exploit the best
tradeoff and thus further improve the EE. Optimal exhaustive search and
low-complexity norm based ATAS schemes are developed. Through simulations, we
discuss the effect of different parameters on the EE of the MIMO-BC.
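As a building block, classical water-filling allocates power p_i = max(0, mu - 1/g_i) with the water level mu set by bisection so that the total budget is met. This is the textbook single-user version, not the paper's energy-efficient iterative scheme; the gains and budget are illustrative:

```python
def water_filling(gains, power):
    """Classical water-filling power allocation: p_i = max(0, mu - 1/g_i)
    with the water level mu found by bisection so allocations sum to
    the power budget."""
    lo, hi = 0.0, power + max(1.0 / g for g in gains)
    for _ in range(100):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - 1.0 / g) for g in gains]

# Three channels with gains 2.0, 1.0, 0.5 and unit power budget:
# the weakest channel receives no power.
alloc = water_filling([2.0, 1.0, 0.5], power=1.0)
print([round(p, 3) for p in alloc])  # [0.75, 0.25, 0.0]
```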
|
1202.3514
|
A Note on Weight Distributions of Irreducible Cyclic Codes
|
cs.IT math.IT
|
Usually, it is difficult to determine the weight distribution of an
irreducible cyclic code. In this paper, we discuss the case when an irreducible
cyclic code has the maximal number of distinct nonzero weights and give a
necessary and sufficient condition. In this case, we also obtain a divisible
property for the weight of a codeword. Further, we present a necessary and
sufficient condition for an irreducible cyclic code with only one nonzero
weight. Finally, we determine the weight distribution of an irreducible cyclic
code for some cases.
|
1202.3531
|
Recovering Jointly Sparse Signals via Joint Basis Pursuit
|
cs.IT math.IT math.OC
|
This work considers recovery of signals that are sparse over two bases. For
instance, a signal might be sparse in both time and frequency, or a matrix can
be low rank and sparse simultaneously. To facilitate recovery, we consider
minimizing the sum of the $\ell_1$-norms that correspond to each basis, which
is a tractable convex approach. We find novel optimality conditions which
indicate a gain over traditional approaches where $\ell_1$ minimization is
done over only one basis. Next, we analyze these optimality conditions for the
particular case of time-frequency bases. Denoting sparsity in the first and
second bases by $k_1,k_2$ respectively, we show that, for a general class of
signals, using this approach one requires as few as
$O(\max\{k_1,k_2\}\log\log n)$ measurements for successful recovery, hence
overcoming the classical requirement of
$\Theta(\min\{k_1,k_2\}\log(\frac{n}{\min\{k_1,k_2\}}))$ for $\ell_1$
minimization when $k_1\approx k_2$. Extensive simulations show that our
analysis is approximately tight.
|
1202.3538
|
Refinement Modal Logic
|
cs.LO cs.AI
|
In this paper we present {\em refinement modal logic}. A refinement is like a
bisimulation, except that from the three relational requirements only `atoms'
and `back' need to be satisfied. Our logic contains a new operator 'all' in
addition to the standard modalities 'box' for each agent. The operator 'all'
acts as a quantifier over the set of all refinements of a given model. As a
variation on a bisimulation quantifier, this refinement operator or refinement
quantifier 'all' can be seen as quantifying over a variable not occurring in
the formula bound by it. The logic combines the simplicity of multi-agent modal
logic with some powers of monadic second-order quantification. We present a
sound and complete axiomatization of multi-agent refinement modal logic. We
also present an extension of the logic to the modal mu-calculus, and an
axiomatization for the single-agent version of this logic. Examples and
applications are also discussed: to software verification and design (the set
of agents can also be seen as a set of actions), and to dynamic epistemic
logic. We further give detailed results on the complexity of satisfiability,
and on succinctness.
|
1202.3572
|
Calculation of statistical entropic measures in a model of solids
|
nlin.AO cond-mat.other cs.IT math.IT
|
In this work, a one-dimensional model of crystalline solids based on the
Dirac comb limit of the Kronig-Penney model is considered. From the wave
functions of the valence electrons, we calculate a statistical measure of
complexity and the Fisher-Shannon information for the lower energy electronic
bands appearing in the system. All these magnitudes present an extremal value
for the case of solids having half-filled bands, a configuration where high
conductivity is generally attained in real solids, as happens with the
monovalent metals.
|
1202.3602
|
Towards quantitative measures in applied ontology
|
cs.AI q-bio.QM
|
Applied ontology is a relatively new field which aims to apply theories and
methods from diverse disciplines such as philosophy, cognitive science,
linguistics and formal logics to perform or improve domain-specific tasks. To
support the development of effective research methodologies for applied
ontology, we critically discuss the question of how its research results should be
evaluated. We propose that results in applied ontology must be evaluated within
their domain of application, based on some ontology-based task within the
domain, and discuss quantitative measures which would facilitate the objective
evaluation and comparison of research results in applied ontology.
|
1202.3625
|
From Linear Codes to Hyperplane Arrangements via Thomas Decomposition
|
cs.IT math.IT
|
We establish a connection between linear codes and hyperplane arrangements
using the Thomas decomposition of polynomial systems and the resulting counting
polynomial. This yields both a generalization and a refinement of the weight
enumerator of a linear code. In particular, one can deal with infinitely many
finite fields simultaneously by defining a weight enumerator for codes over
infinite fields.
|
1202.3639
|
Finding a most biased coin with fewest flips
|
cs.DS cs.LG
|
We study the problem of learning a most biased coin among a set of coins by
tossing the coins adaptively. The goal is to minimize the number of tosses
until we identify a coin i* whose posterior probability of being most biased is
at least 1-delta for a given delta. Under a particular probabilistic model, we
give an optimal algorithm, i.e., an algorithm that minimizes the expected
number of future tosses. The problem is closely related to finding the best arm
in the multi-armed bandit problem using adaptive strategies. Our algorithm
employs an optimal adaptive strategy -- a strategy that performs the best
possible action at each step after observing the outcomes of all previous coin
tosses. Consequently, our algorithm is also optimal for any starting history of
outcomes. To our knowledge, this is the first algorithm that employs an optimal
adaptive strategy under a Bayesian setting for this problem. Our proof of
optimality employs tools from the field of Markov games.
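The Bayesian setting described above can be illustrated with a simple sketch: maintain Beta posteriors over each coin's bias, toss adaptively, and stop once one coin's posterior probability of being most biased (estimated by sampling the posteriors) reaches 1-delta. The greedy toss rule here is a plain baseline heuristic, not the paper's optimal adaptive strategy.

```python
import random

def find_most_biased(true_ps, delta=0.05, samples=2000, rng=random.Random(0)):
    """Toss coins adaptively until one coin's posterior probability of
    being most biased exceeds 1 - delta. Greedy heuristic baseline, not
    the paper's provably optimal strategy."""
    n = len(true_ps)
    alpha = [1.0] * n  # Beta(1, 1) uniform priors
    beta = [1.0] * n
    tosses = 0
    while True:
        # Monte Carlo estimate of P(coin i is most biased) for each i
        wins = [0] * n
        for _ in range(samples):
            draws = [rng.betavariate(alpha[i], beta[i]) for i in range(n)]
            wins[max(range(n), key=lambda i: draws[i])] += 1
        best = max(range(n), key=lambda i: wins[i])
        if wins[best] / samples >= 1 - delta:
            return best, tosses
        # toss the coin with the highest posterior mean
        i = max(range(n), key=lambda j: alpha[j] / (alpha[j] + beta[j]))
        if rng.random() < true_ps[i]:
            alpha[i] += 1
        else:
            beta[i] += 1
        tosses += 1
```

The stopping rule matches the abstract's posterior criterion; only the query-selection rule is simplified.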
|
1202.3641
|
Control of Towing Kites for Seagoing Vessels
|
cs.SY
|
In this paper we present the basic features of the flight control of the
SkySails towing kite system. After introducing coordinate definitions and the
basic system dynamics, we propose a novel model used for controller design and
justify its main dynamics with results from system identification based on
numerous sea trials. We then present the controller design that we have used
successfully in operational flights for several years. Finally, we explain the
generation of dynamical flight patterns.
|
1202.3643
|
Dynamics of conflicts in Wikipedia
|
physics.soc-ph cs.SI physics.data-an
|
In this work we study the dynamical features of editorial wars in Wikipedia
(WP). Based on our previously established algorithm, we build up samples of
controversial and peaceful articles and analyze the temporal characteristics of
the activity in these samples. On short time scales, we show that there is a
clear correspondence between conflict and burstiness of activity patterns, and
that memory effects play an important role in controversies. On long time
scales, we identify three distinct developmental patterns for the overall
behavior of the articles. We are able to distinguish cases eventually leading
to consensus from those cases where a compromise is far from achievable.
Finally, we analyze discussion networks and conclude that edit wars are fought
mainly by a few editors.
|
1202.3653
|
Information Transmission using the Nonlinear Fourier Transform, Part I:
Mathematical Tools
|
cs.IT math.IT
|
The nonlinear Fourier transform (NFT), a powerful tool in soliton theory and
exactly solvable models, is a method for solving integrable partial
differential equations governing wave propagation in certain nonlinear media.
The NFT decorrelates signal degrees-of-freedom in such models, in much the same
way that the Fourier transform does for linear systems. In this three-part
series of papers, this observation is exploited for data transmission over
integrable channels such as optical fibers, where pulse propagation is governed
by the nonlinear Schr\"odinger equation. In this transmission scheme, which can
be viewed as a nonlinear analogue of orthogonal frequency-division multiplexing
commonly used in linear channels, information is encoded in the nonlinear
frequencies and their spectral amplitudes. Unlike most other fiber-optic
transmission schemes, this technique deals with both dispersion and
nonlinearity directly and unconditionally without the need for dispersion or
nonlinearity compensation methods. This first paper explains the mathematical
tools that underlie the method.
|
1202.3663
|
Guaranteed clustering and biclustering via semidefinite programming
|
math.OC cs.LG
|
Identifying clusters of similar objects in data plays a significant role in a
wide range of applications. As a model problem for clustering, we consider the
densest k-disjoint-clique problem, whose goal is to identify the collection of
k disjoint cliques of a given weighted complete graph maximizing the sum of the
densities of the complete subgraphs induced by these cliques. In this paper, we
establish conditions ensuring exact recovery of the densest k cliques of a
given graph from the optimal solution of a particular semidefinite program. In
particular, the semidefinite relaxation is exact for input graphs corresponding
to data consisting of k large, distinct clusters and a smaller number of
outliers. This approach also yields a semidefinite relaxation for the
biclustering problem with similar recovery guarantees. Given a set of objects
and a set of features exhibited by these objects, biclustering seeks to
simultaneously group the objects and features according to their expression
levels. This problem may be posed as partitioning the nodes of a weighted
bipartite complete graph such that the sum of the densities of the resulting
bipartite complete subgraphs is maximized. As in our analysis of the densest
k-disjoint-clique problem, we show that the correct partition of the objects
and features can be recovered from the optimal solution of a semidefinite
program in the case that the given data consists of several disjoint sets of
objects exhibiting similar features. Empirical evidence from numerical
experiments supporting these theoretical guarantees is also provided.
|
1202.3667
|
On Directly Mapping Relational Databases to RDF and OWL (Extended
Version)
|
cs.DB
|
Mapping relational databases to RDF is a fundamental problem for the
development of the Semantic Web. We present a solution, inspired by draft
methods defined by the W3C where relational databases are directly mapped to
RDF and OWL. Given a relational database schema and its integrity constraints,
this direct mapping produces an OWL ontology which provides the basis for
generating RDF instances. The semantics of this mapping is defined using
Datalog. Two fundamental properties are information preservation and query
preservation. We prove that our mapping satisfies both conditions, even for
relational databases that contain null values. We also consider two desirable
properties: monotonicity and semantics preservation. We prove that our mapping
is monotone, and also prove that no monotone mapping, including ours, is
semantics preserving. Since monotonicity is thus an obstacle to semantics
preservation, we also present a non-monotone direct mapping that is semantics
preserving.
|
1202.3684
|
Generalized Boundaries from Multiple Image Interpretations
|
cs.CV
|
Boundary detection is essential for a variety of computer vision tasks such
as segmentation and recognition. In this paper we propose a unified formulation
and a novel algorithm that are applicable to the detection of different types
of boundaries, such as intensity edges, occlusion boundaries or object category
specific boundaries. Our formulation leads to a simple method with
state-of-the-art performance and significantly lower computational cost than
existing methods. We evaluate our algorithm on different types of boundaries,
from low-level boundaries extracted in natural images, to occlusion boundaries
obtained using motion cues and RGB-D cameras, to boundaries from
soft-segmentation. We also propose a novel method for figure/ground
soft-segmentation that can be used in conjunction with our boundary detection
method and improve its accuracy at almost no extra computational cost.
|
1202.3686
|
Inferential or Differential: Privacy Laws Dictate
|
cs.DB
|
So far, privacy models follow two paradigms. The first paradigm, termed
inferential privacy in this paper, focuses on the risk due to statistical
inference of sensitive information about a target record from other records in
the database. The second paradigm, known as differential privacy, focuses on
the risk to an individual when included in, versus when not included in, the
database. The contribution of this paper consists of two parts. The first part
presents a critical analysis on differential privacy with two results: (i) the
differential privacy mechanism does not provide inferential privacy, (ii) the
impossibility result about achieving Dalenius's privacy goal [5] is based on an
adversary simulated by a Turing machine, but a human adversary may behave
differently; consequently, the practical implication of the impossibility
result remains unclear. The second part of this work is devoted to a solution
addressing three major drawbacks in previous approaches to inferential privacy:
lack of flexibility for handling variable sensitivity, poor utility, and
vulnerability to auxiliary information.
|
1202.3698
|
Extended Lifted Inference with Joint Formulas
|
cs.AI
|
The First-Order Variable Elimination (FOVE) algorithm allows exact inference
to be applied directly to probabilistic relational models, and has proven to be
vastly superior to the application of standard inference methods on a grounded
propositional model. Still, FOVE operators can be applied under restricted
conditions, often forcing one to resort to propositional inference. This paper
aims to extend the applicability of FOVE by providing two new model conversion
operators: the first and primary is joint formula conversion and the second is
just-different counting conversion. These new operations allow efficient
inference methods to be applied directly to relational models where no
existing efficient method could previously be applied. In addition, aided by
these capabilities, we show how to adapt FOVE to provide exact solutions to
Maximum Expected Utility (MEU) queries over relational models for decision
under uncertainty. Experimental evaluations show our algorithms to provide
significant speedup over the alternatives.
|
1202.3699
|
Learning is planning: near Bayes-optimal reinforcement learning via
Monte-Carlo tree search
|
cs.AI
|
Bayes-optimal behavior, while well-defined, is often difficult to achieve.
Recent advances in the use of Monte-Carlo tree search (MCTS) have shown that it
is possible to act near-optimally in Markov Decision Processes (MDPs) with very
large or infinite state spaces. Bayes-optimal behavior in an unknown MDP is
equivalent to optimal behavior in the known belief-space MDP, although the size
of this belief-space MDP grows exponentially with the amount of history
retained, and is potentially infinite. We show how an agent can use one
particular MCTS algorithm, Forward Search Sparse Sampling (FSSS), in an
efficient way to act nearly Bayes-optimally for all but a polynomial number of
steps, assuming that FSSS can be used to act efficiently in any possible
underlying MDP.
|
1202.3700
|
Solving Cooperative Reliability Games
|
cs.GT cs.AI
|
Cooperative games model the allocation of profit from joint actions,
following considerations such as stability and fairness. We propose the
reliability extension of such games, where agents may fail to participate in
the game. In the reliability extension, each agent only "survives" with a
certain probability, and a coalition's value is the probability that its
surviving members would be a winning coalition in the base game. We study
prominent solution concepts in such games, showing how to approximate the
Shapley value and how to compute the core in games with few agent types. We
also show that applying the reliability extension may stabilize the game,
making the core non-empty even when the base game has an empty core.
|
1202.3701
|
Active Diagnosis via AUC Maximization: An Efficient Approach for
Multiple Fault Identification in Large Scale, Noisy Networks
|
cs.LG cs.AI stat.ML
|
The problem of active diagnosis arises in several applications such as
disease diagnosis, and fault diagnosis in computer networks, where the goal is
to rapidly identify the binary states of a set of objects (e.g., faulty or
working) by sequentially selecting, and observing, (noisy) responses to binary
valued queries. Current algorithms in this area rely on loopy belief
propagation for active query selection. These algorithms have an exponential
time complexity, making them slow and even intractable in large networks. We
propose a rank-based greedy algorithm that sequentially chooses queries such
that the area under the ROC curve of the rank-based output is maximized. The
AUC criterion allows us to make a simplifying assumption that significantly
reduces the complexity of active query selection (from exponential to near
quadratic), with little or no compromise on the performance quality.
|
1202.3702
|
Semi-supervised Learning with Density Based Distances
|
cs.LG stat.ML
|
We present a simple, yet effective, approach to Semi-Supervised Learning. Our
approach is based on estimating density-based distances (DBD) using a shortest
path calculation on a graph. These Graph-DBD estimates can then be used in any
distance-based supervised learning method, such as Nearest Neighbor methods and
SVMs with RBF kernels. In order to apply the method to very large data sets, we
also present a novel algorithm which integrates nearest neighbor computations
into the shortest path search and can find exact shortest paths even in
extremely large dense graphs. Significant runtime improvement over the commonly
used Laplacian regularization method is then shown on a large scale dataset.
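A minimal sketch of the density-based-distance idea, assuming a common DBD surrogate: raise Euclidean edge lengths to a power q > 1 so that shortest paths prefer many short hops through dense regions, then run Dijkstra. The paper's scalable nearest-neighbor-integrated search is not reproduced here.

```python
import heapq
import math

def graph_dbd(points, q=2.0):
    """All-pairs density-based distances via Dijkstra on a complete graph
    whose edge weights are Euclidean distances raised to the power q.
    Powering up long edges makes paths through dense regions cheaper --
    a simplified surrogate for the paper's Graph-DBD estimator."""
    n = len(points)

    def w(i, j):
        return math.dist(points[i], points[j]) ** q

    dbd = []
    for src in range(n):
        dist = [math.inf] * n
        dist[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale queue entry
            for v in range(n):
                if v != u and d + w(u, v) < dist[v]:
                    dist[v] = d + w(u, v)
                    heapq.heappush(pq, (dist[v], v))
        dbd.append(dist)
    return dbd
```

For points 0, 1, 2, 4 on a line with q = 2, the distance from 0 to 2 is 2 (two unit hops) rather than the direct 4, showing the preference for dense chains.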
|
1202.3703
|
Factored Filtering of Continuous-Time Systems
|
cs.SY cs.AI
|
We consider filtering for a continuous-time, or asynchronous, stochastic
system where the full distribution over states is too large to be stored or
calculated. We assume that the rate matrix of the system can be compactly
represented and that the belief distribution is to be approximated as a product
of marginals. The essential computation is the matrix exponential. We look at
two different methods for its computation: ODE integration and uniformization
of the Taylor expansion. For both we consider approximations in which only a
factored belief state is maintained. For factored uniformization we demonstrate
that the KL-divergence of the filtering is bounded. Our experimental results
confirm that our factored uniformization performs better than previously
suggested uniformization methods and the mean field algorithm.
|
1202.3704
|
Near-Optimal Target Learning With Stochastic Binary Signals
|
cs.LG stat.ML
|
We study learning in a noisy bisection model: specifically, Bayesian
algorithms to learn a target value V given access only to noisy realizations
of whether V is less than or greater than a threshold theta. At step t = 0, 1,
2, ..., the learner sets threshold theta_t and observes a noisy realization of
sign(V - theta_t). After T steps, the goal is to output an estimate V^ which is
within an eta-tolerance of V. This problem has been studied predominantly in
environments with a fixed error probability q < 1/2 for the noisy realization
of sign(V - theta_t). In practice, q can often approach 1/2, especially as
theta_t -> V, and little is known about this regime. We
give a pseudo-Bayesian algorithm which provably converges to V. When the true
prior matches our algorithm's Gaussian prior, we show near-optimal expected
performance. Our methods extend to the general multiple-threshold setting where
the observation noisily indicates which of k >= 2 regions V belongs to.
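The fixed-q setting can be sketched with a discretized Bayesian bisection: keep a gridded posterior over V, query the posterior median as theta_t, and reweight by the known error probability. This illustrates the baseline model only, not the paper's pseudo-Bayesian algorithm for q approaching 1/2.

```python
import random

def noisy_bisect(V, q=0.3, steps=200, lo=0.0, hi=1.0, n=1000,
                 rng=random.Random(1)):
    """Bayesian bisection with a fixed flip probability q < 1/2.
    Posterior is kept on a uniform grid; each query is the posterior
    median. Simplified sketch of the noisy bisection model."""
    grid = [lo + (hi - lo) * (i + 0.5) / n for i in range(n)]
    post = [1.0 / n] * n
    for _ in range(steps):
        # posterior median as the next threshold theta_t
        acc, theta = 0.0, grid[-1]
        for g, p in zip(grid, post):
            acc += p
            if acc >= 0.5:
                theta = g
                break
        s = 1 if V > theta else -1
        if rng.random() < q:  # observation flipped with probability q
            s = -s
        # likelihood is 1-q on the side consistent with s, q otherwise
        post = [p * ((1 - q) if (1 if g > theta else -1) == s else q)
                for g, p in zip(grid, post)]
        z = sum(post)
        post = [p / z for p in post]
    # posterior-mean estimate of V
    return sum(g * p for g, p in zip(grid, post))
```

After a few hundred queries the posterior concentrates tightly around the true V despite 30% of observations being flipped.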
|
1202.3705
|
Filtered Fictitious Play for Perturbed Observation Potential Games and
Decentralised POMDPs
|
cs.GT cs.AI
|
Potential games and decentralised partially observable MDPs (Dec-POMDPs) are
two commonly used models of multi-agent interaction, for static optimisation
and sequential decision-making settings, respectively. In this paper we
introduce filtered fictitious play for solving repeated potential games in
which each player's observations of others' actions are perturbed by random
noise, and use this algorithm to construct an online learning method for
solving Dec-POMDPs. Specifically, we prove that noise in observations prevents
standard fictitious play from converging to Nash equilibrium in potential
games, which also makes fictitious play impractical for solving Dec-POMDPs. To
combat this, we derive filtered fictitious play, and provide conditions under
which it converges to a Nash equilibrium in potential games with noisy
observations. We then use filtered fictitious play to construct a solver for
Dec-POMDPs, and demonstrate our new algorithm's performance in a box pushing
problem. Our results show that we consistently outperform the state-of-the-art
Dec-POMDP solver by an average of 100% across the range of noise in the
observation function.
|
1202.3706
|
A Framework for Optimizing Paper Matching
|
cs.IR cs.AI
|
At the heart of many scientific conferences is the problem of matching
submitted papers to suitable reviewers. Arriving at a good assignment is a
major and important challenge for any conference organizer. In this paper we
propose a framework to optimize paper-to-reviewer assignments. Our framework
uses suitability scores to measure pairwise affinity between papers and
reviewers. We show how learning can be used to infer suitability scores from a
small set of provided scores, thereby reducing the burden on reviewers and
organizers. We frame the assignment problem as an integer program and propose
several variations for the paper-to-reviewer matching domain. We also explore
how learning and matching interact. Experiments on two conference data sets
examine the performance of several learning methods as well as the
effectiveness of the matching formulations.
|
1202.3707
|
A temporally abstracted Viterbi algorithm
|
cs.AI
|
Hierarchical problem abstraction, when applicable, may offer exponential
reductions in computational complexity. Previous work on coarse-to-fine dynamic
programming (CFDP) has demonstrated this possibility using state abstraction to
speed up the Viterbi algorithm. In this paper, we show how to apply temporal
abstraction to the Viterbi problem. Our algorithm uses bounds derived from
analysis of coarse timescales to prune large parts of the state trellis at
finer timescales. We demonstrate improvements of several orders of magnitude
over the standard Viterbi algorithm, as well as significant speedups over CFDP,
for problems whose state variables evolve at widely differing rates.
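For reference, the standard O(T|S|^2) Viterbi baseline that the temporal abstraction accelerates can be sketched as follows; the bounding and pruning at coarse timescales described above are not shown.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding over a full state trellis -- the
    unabstracted baseline. Probabilities are plain nested dicts."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][r] * trans_p[r][s] * emit_p[s][obs[t]], r)
                for r in states)
            V[t][s] = prob
            back[t][s] = prev
    # trace back the most likely state sequence
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))
```

Every cell of the trellis is evaluated here; the abstraction's gain comes from pruning most of these cells using coarse-timescale bounds.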
|
1202.3708
|
Smoothing Proximal Gradient Method for General Structured Sparse
Learning
|
cs.LG stat.ML
|
We study the problem of learning high dimensional regression models
regularized by a structured-sparsity-inducing penalty that encodes prior
structural information on either input or output sides. We consider two widely
adopted types of such penalties as our motivating examples: 1) overlapping
group lasso penalty, based on the l1/l2 mixed-norm penalty, and 2) graph-guided
fusion penalty. For both types of penalties, due to their non-separability,
developing an efficient optimization method has remained a challenging problem.
In this paper, we propose a general optimization approach, called smoothing
proximal gradient method, which can solve the structured sparse regression
problems with a smooth convex loss and a wide spectrum of
structured-sparsity-inducing penalties. Our approach is based on a general
smoothing technique of Nesterov. It achieves a convergence rate faster than the
standard first-order method, subgradient method, and is much more scalable than
the most widely used interior-point method. Numerical results are reported to
demonstrate the efficiency and scalability of the proposed method.
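The proximal-gradient step the method builds on can be illustrated on the plain (separable) l1 penalty, whose proximal operator is soft-thresholding; the paper's contribution is smoothing the non-separable structured part so this same simple step applies. A minimal ISTA sketch, not the paper's smoothed algorithm:

```python
def soft_threshold(v, t):
    """Prox of t * ||.||_1: shrink each coordinate toward zero by t."""
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

def prox_gradient_lasso(A, b, lam, L, iters=100):
    """ISTA for f(x) = 0.5 * ||Ax - b||^2 + lam * ||x||_1:
    x <- prox_{(lam/L) ||.||_1}(x - grad f(x) / L),
    where L upper-bounds the Lipschitz constant of grad f."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b and gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - g[j] / L for j in range(n)], lam / L)
    return x
```

With A the identity the solution is soft_threshold(b, lam), which the iteration reaches in a single step; the smoothing technique makes a comparable first-order rate available for the non-separable penalties.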
|
1202.3709
|
EDML: A Method for Learning Parameters in Bayesian Networks
|
cs.AI
|
We propose a method called EDML for learning MAP parameters in binary
Bayesian networks under incomplete data. The method assumes Beta priors and can
be used to learn maximum likelihood parameters when the priors are
uninformative. EDML exhibits interesting behaviors, especially when compared to
EM. We introduce EDML, explain its origin, and study some of its properties
both analytically and empirically.
|
1202.3710
|
Strictly Proper Mechanisms with Cooperating Players
|
cs.GT cs.AI
|
Prediction markets provide an efficient means to assess uncertain quantities
from forecasters. Traditional and competitive strictly proper scoring rules
have been shown to incentivize players to provide truthful probabilistic
forecasts. However, we show that when those players can cooperate, these
mechanisms can instead discourage them from reporting what they really believe.
When players with different beliefs are able to cooperate and form a coalition,
these mechanisms admit arbitrage: there is a report that always pays coalition
members more than their truthful forecasts would. If the coalition were
created by an intermediary, such as a web portal, the intermediary would be
guaranteed a profit.
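The arbitrage can be made concrete with the logarithmic scoring rule (one standard strictly proper rule; the abstract's general claim covers others too): by strict concavity of log, two members who both report the mean of their beliefs earn strictly more, in every outcome, than truthful reporting whenever their beliefs differ.

```python
import math

def log_score(report, outcome):
    """Logarithmic strictly proper scoring rule for a binary event."""
    return math.log(report if outcome else 1.0 - report)

def coalition_gain(p1, p2, outcome):
    """Payoff gain for a two-member coalition that both report the mean
    of their beliefs instead of reporting truthfully. By concavity of
    log the gain is positive for EVERY outcome when p1 != p2, i.e. the
    pooled report is an arbitrage."""
    m = (p1 + p2) / 2.0
    truthful = log_score(p1, outcome) + log_score(p2, outcome)
    pooled = 2.0 * log_score(m, outcome)
    return pooled - truthful
```

For beliefs 0.6 and 0.8 the gain is positive whether the event occurs or not, and it vanishes exactly when the beliefs coincide.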
|
1202.3711
|
A Logical Characterization of Constraint-Based Causal Discovery
|
cs.AI
|
We present a novel approach to constraint-based causal discovery, that takes
the form of straightforward logical inference, applied to a list of simple,
logical statements about causal relations that are derived directly from
observed (in)dependencies. It is both sound and complete, in the sense that all
invariant features of the corresponding partial ancestral graph (PAG) are
identified, even in the presence of latent variables and selection bias. The
approach shows that every identifiable causal relation corresponds to one of
just two fundamental forms. More importantly, as the basic building blocks of
the method do not rely on the detailed (graphical) structure of the
corresponding PAG, it opens up a range of new opportunities, including more
robust inference, detailed accountability, and application to large models.
|
1202.3712
|
Ensembles of Kernel Predictors
|
cs.LG stat.ML
|
This paper examines the problem of learning with a finite and possibly large
set of p base kernels. It presents a theoretical and empirical analysis of an
approach addressing this problem based on ensembles of kernel predictors. This
includes novel theoretical guarantees based on the Rademacher complexity of the
corresponding hypothesis sets, the introduction and analysis of a learning
algorithm based on these hypothesis sets, and a series of experiments using
ensembles of kernel predictors with several data sets. Both convex combinations
of kernel-based hypotheses and more general Lq-regularized nonnegative
combinations are analyzed. These theoretical, algorithmic, and empirical
results are compared with those achieved by using learning kernel techniques,
which can be viewed as another approach for solving the same problem.
|
1202.3713
|
Bayesian network learning with cutting planes
|
cs.AI
|
The problem of learning the structure of Bayesian networks from complete
discrete data with a limit on parent set size is considered. Learning is cast
explicitly as an optimisation problem where the goal is to find a BN structure
which maximises log marginal likelihood (BDe score). Integer programming,
specifically the SCIP framework, is used to solve this optimisation problem.
Acyclicity constraints are added to the integer program (IP) during solving in
the form of cutting planes. Finding good cutting planes is the key to the
success of the approach; the search for such cutting planes is effected using
a sub-IP. Results show that this is a particularly fast method for exact BN
learning.
|
1202.3714
|
Active Learning for Developing Personalized Treatment
|
cs.LG stat.ML
|
The personalization of treatment via bio-markers and other risk categories
has drawn increasing interest among clinical scientists. Personalized treatment
strategies can be learned using data from clinical trials, but such trials are
very costly to run. This paper explores the use of active learning techniques
to design more efficient trials, addressing issues such as whom to recruit, at
what point in the trial, and which treatment to assign, throughout the duration
of the trial. We propose a minimax bandit model with two different optimization
criteria, and discuss the computational challenges and issues pertaining to
this approach. We evaluate our active learning policies using both simulated
data, and data modeled after a clinical trial for treating depressed
individuals, and contrast our methods with other plausible active learning
policies.
|
1202.3715
|
A Unifying Framework for Linearly Solvable Control
|
cs.SY math.OC
|
Recent work has led to the development of an elegant theory of Linearly
Solvable Markov Decision Processes (LMDPs) and related Path-Integral Control
Problems. Traditionally, MDPs have been formulated using stochastic policies
and a control cost based on the KL divergence. In this paper, we extend this
framework to a more general class of divergences: the Renyi divergences. These
are a more general class of divergences parameterized by a continuous parameter
that include the KL divergence as a special case. The resulting control
problems can be interpreted as solving a risk-sensitive version of the LMDP
problem. For a > 0, we get risk-averse behavior (the degree of risk-aversion
increases with a) and for a < 0, we get risk-seeking behavior. We recover LMDPs
in the limit as a -> 0. This work generalizes the recently developed
risk-sensitive path-integral control formalism which can be seen as the
continuous-time limit of results obtained in this paper. To the best of our
knowledge, this is a general theory of linearly solvable control and includes
all previous work as a special case. We also present an alternative
interpretation of these results as solving a 2-player (cooperative or
competitive) Markov Game. From the linearity follow a number of nice properties
including compositionality of control laws and a path-integral representation
of the value function. We demonstrate the usefulness of the framework on
control problems with noise, where different values of a lead to qualitatively
different control behaviors.
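The divergence family can be sketched numerically; the shift D_{1+a} below is a guess at the abstract's convention, chosen so that a -> 0 recovers the KL divergence as stated.

```python
import math

def renyi(p, q, a):
    """Renyi divergence of order 1 + a between discrete distributions:
    D_{1+a}(p||q) = (1/a) * ln sum_i p_i^(1+a) * q_i^(-a).
    As a -> 0 this recovers the KL divergence (hypothetical
    parameterization matching the abstract's a -> 0 limit)."""
    return math.log(sum(pi ** (1 + a) * qi ** (-a)
                        for pi, qi in zip(p, q))) / a

def kl(p, q):
    """KL divergence between discrete distributions p and q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

For small a the Renyi value is numerically indistinguishable from KL, and it grows with a, reflecting the increasing risk sensitivity described above.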
|
1202.3716
|
Boosting as a Product of Experts
|
cs.LG stat.ML
|
In this paper, we derive a novel probabilistic model of boosting as a Product
of Experts. We re-derive the boosting algorithm as a greedy incremental model
selection procedure which ensures that addition of new experts to the ensemble
does not decrease the likelihood of the data. These learning rules lead to a
generic boosting algorithm, POEBoost, which turns out to be similar to the
AdaBoost algorithm under certain assumptions on the expert probabilities. The
paper then extends the POEBoost algorithm to POEBoost.CS, which handles
hypotheses that produce probabilistic predictions. This new algorithm is shown
to have better generalization performance compared to other state of the art
algorithms.
|
1202.3717
|
PAC-Bayesian Policy Evaluation for Reinforcement Learning
|
cs.LG stat.ML
|
Bayesian priors offer a compact yet general means of incorporating domain
knowledge into many learning tasks. The correctness of the Bayesian analysis
and inference, however, largely depends on accuracy and correctness of these
priors. PAC-Bayesian methods overcome this problem by providing bounds that
hold regardless of the correctness of the prior distribution. This paper
introduces the first PAC-Bayesian bound for the batch reinforcement learning
problem with function approximation. We show how this bound can be used to
perform model-selection in a transfer learning scenario. Our empirical results
confirm that PAC-Bayesian policy evaluation is able to leverage prior
distributions when they are informative and, unlike standard Bayesian RL
approaches, ignore them when they are misleading.
|
1202.3718
|
On the Complexity of Decision Making in Possibilistic Decision Trees
|
cs.AI
|
When the information about uncertainty cannot be quantified in a simple,
probabilistic way, the topic of possibilistic decision theory is often a
natural one to consider. The development of possibilistic decision theory has
led to a series of possibilistic criteria, e.g., pessimistic possibilistic
qualitative utility, possibilistic likely dominance, binary possibilistic
utility, and possibilistic Choquet integrals. This paper focuses on sequential
decision making in possibilistic decision trees. It proposes a complexity
study of the problem of finding an optimal strategy, depending on whether the
optimization criterion satisfies the monotonicity property that allows dynamic
programming to be applied, which offers a polytime reduction of the decision
problem. It also shows that possibilistic Choquet integrals do not satisfy
this property, and that in this case the optimization problem is NP-hard.
|
1202.3719
|
Inference in Probabilistic Logic Programs using Weighted CNF's
|
cs.AI
|
Probabilistic logic programs are logic programs in which some of the facts
are annotated with probabilities. Several classical probabilistic inference
tasks (such as MAP and computing marginals) have not yet received a lot of
attention for this formalism. The contribution of this paper is that we develop
efficient inference algorithms for these tasks. This is based on a conversion
of the probabilistic logic program and the query and evidence to a weighted CNF
formula. This allows us to reduce the inference tasks to well-studied tasks
such as weighted model counting. To solve such tasks, we employ
state-of-the-art methods. We consider multiple methods for the conversion of
the programs as well as for inference on the weighted CNF. The resulting
approach is evaluated experimentally and shown to improve upon the
state-of-the-art in probabilistic logic programming.
|
1202.3720
|
Efficient Inference in Markov Control Problems
|
cs.SY cs.AI
|
Markov control algorithms that perform smooth, non-greedy updates of the
policy have been shown to be very general and versatile, with policy gradient
and Expectation Maximisation algorithms being particularly popular. For these
algorithms, marginal inference of the reward weighted trajectory distribution
is required to perform policy updates. We discuss a new exact inference
algorithm for these marginals in the finite horizon case that is more efficient
than the standard approach based on classical forward-backward recursions. We
also provide a principled extension to infinite horizon Markov Decision
Problems that explicitly accounts for an infinite horizon. This extension
provides a novel algorithm for both policy gradients and Expectation
Maximisation in infinite horizon problems.
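A sketch of the kind of single forward/backward sweep involved, in our own simplified setting of a finite-horizon Markov chain with per-state rewards and the policy folded into the transition matrix (the paper's algorithm is more general): the reward-weighted marginal w_t(s) = E[R_total · 1{s_t = s}] splits into a past part, accumulated forward, and a future part obtained from a backward recursion.

```python
import numpy as np

def reward_weighted_marginals(T, r, p0, H):
    """Compute w_t(s) = E[R_total * 1{s_t = s}] for a finite-horizon Markov
    chain in one forward/backward sweep.  T[s', s] = p(s' | s) with the
    policy folded in, r = per-state reward, p0 = initial distribution."""
    S = len(p0)
    alpha = [p0]                               # state marginals p(s_t)
    for _ in range(H - 1):
        alpha.append(T @ alpha[-1])
    g = [np.zeros(S) for _ in range(H)]        # expected future reward given s_t
    for t in range(H - 2, -1, -1):
        g[t] = T.T @ (r + g[t + 1])
    w, a_hat = [], r * p0                      # a_hat_t(s) = E[sum_{tau<=t} r * 1{s_t=s}]
    for t in range(H):
        w.append(a_hat + alpha[t] * g[t])
        if t + 1 < H:
            a_hat = T @ a_hat + r * alpha[t + 1]
    return w

T = np.array([[0.9, 0.2], [0.1, 0.8]])         # column-stochastic transition
r = np.array([1.0, 0.0])
p0 = np.array([0.5, 0.5])
w = reward_weighted_marginals(T, r, p0, H=4)
# every w_t sums to the expected total reward
```

Each sweep is O(H S^2), whereas re-running a forward-backward pass per reward attachment time costs O(H^2 S^2).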
|
1202.3721
|
Dynamic consistency and decision making under vacuous belief
|
cs.AI
|
The ideas about decision making under ignorance in economics are combined
with the ideas about uncertainty representation in computer science. The
combination sheds new light on the question of how artificial agents can act in
a dynamically consistent manner. The notion of sequential consistency is
formalized by adapting the law of iterated expectation for plausibility
measures. The necessary and sufficient condition for a certainty equivalence
operator for Nehring-Puppe's preference to be sequentially consistent is given.
This result sheds light on the models of decision making under uncertainty.
|
1202.3722
|
Hierarchical Affinity Propagation
|
cs.LG cs.AI stat.ML
|
Affinity propagation is an exemplar-based clustering algorithm that finds a
set of data points that best exemplify the data, and associates each data point
with one exemplar. We extend affinity propagation in a principled way to solve
the hierarchical clustering problem, which arises in a variety of domains
including biology, sensor networks and decision making in operational research.
We derive an inference algorithm that operates by propagating information up
and down the hierarchy, and is efficient despite the high-order potentials
required for the graphical model formulation. We demonstrate that our method
outperforms greedy techniques that cluster one layer at a time. We show that on
an artificial dataset designed to mimic the HIV-strain mutation dynamics, our
method outperforms related methods. For real HIV sequences, where the ground
truth is not available, we show our method achieves better results, in terms of
the underlying objective function, and show the results correspond meaningfully
to geographical location and strain subtypes. Finally we report results on
using the method for the analysis of mass spectra, showing it performs
favorably compared to state-of-the-art methods.
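For reference, the flat affinity-propagation building block alternates responsibility and availability messages; the sketch below follows the standard update equations of Frey and Dueck, with damping (the one-dimensional toy data and median preference are our own choices, and the hierarchical extension is not shown):

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Flat affinity propagation: the exemplar-based building block that the
    paper extends to a hierarchy.  S holds pairwise similarities with
    preferences on its diagonal; returns an exemplar index for each point."""
    n = S.shape[0]
    R = np.zeros((n, n))                        # responsibilities
    A = np.zeros((n, n))                        # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        top = np.argmax(AS, axis=1)
        first = AS[np.arange(n), top]
        AS[np.arange(n), top] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = np.minimum(0, Rp.sum(axis=0)[None, :] - Rp)
        Anew[np.arange(n), np.arange(n)] = (np.maximum(R, 0).sum(axis=0)
                                            - np.maximum(R.diagonal(), 0))
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)

x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])    # two well-separated groups
S = -(x[:, None] - x[None, :]) ** 2             # negative squared distance
np.fill_diagonal(S, np.median(S))               # common preference choice
labels = affinity_propagation(S)                # one exemplar per group
```

The hierarchical version propagates such messages both within each layer and between layers, which is where the high-order potentials arise.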
|
1202.3723
|
Approximation by Quantization
|
cs.AI
|
Inference in graphical models consists of repeatedly multiplying and summing
out potentials. It is generally intractable because the derived potentials
obtained in this way can be exponentially large. Approximate inference
techniques such as belief propagation and variational methods combat this by
simplifying the derived potentials, typically by dropping variables from them.
We propose an alternate method for simplifying potentials: quantizing their
values. Quantization causes different states of a potential to have the same
value, and therefore introduces context-specific independencies that can be
exploited to represent the potential more compactly. We use algebraic decision
diagrams (ADDs) to do this efficiently. We apply quantization and ADD reduction
to variable elimination and junction tree propagation, yielding a family of
bounded approximate inference schemes. Our experimental tests show that our new
schemes significantly outperform state-of-the-art approaches on many benchmark
instances.
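A minimal sketch of the quantization step alone (our own illustration; the paper additionally reduces the quantized potential with ADDs): mapping a potential's values onto a few representative levels makes many table entries identical, and those repeats are exactly the context-specific independencies a decision diagram can merge into shared leaves.

```python
import numpy as np

def quantize_potential(phi, k):
    """Replace a potential's values by k representative levels (here the
    means of k equal-mass bins of the sorted values).  Repeated values give
    context-specific structure that an ADD can merge into shared leaves."""
    flat = np.sort(phi.ravel())
    levels = np.array([b.mean() for b in np.array_split(flat, k)])
    nearest = np.abs(phi.ravel()[:, None] - levels[None, :]).argmin(axis=1)
    return levels[nearest].reshape(phi.shape)

phi = np.random.RandomState(0).rand(2, 2, 2)   # potential over 3 binary vars
phi_q = quantize_potential(phi, k=2)           # at most 2 distinct leaf values
```

Here a table of 8 generally distinct values collapses to at most 2 leaves, trading a bounded approximation error for a much smaller representation.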
|
1202.3724
|
Probabilistic Theorem Proving
|
cs.AI
|
Many representation schemes combining first-order logic and probability have
been proposed in recent years. Progress in unifying logical and probabilistic
inference has been slower. Existing methods are mainly variants of lifted
variable elimination and belief propagation, neither of which take logical
structure into account. We propose the first method that has the full power of
both graphical model inference and first-order theorem proving (in finite
domains with Herbrand interpretations). We first define probabilistic theorem
proving, which generalizes both, as the problem of computing the probability of a
logical formula given the probabilities or weights of a set of formulas. We
then show how this can be reduced to the problem of lifted weighted model
counting, and develop an efficient algorithm for the latter. We prove the
correctness of this algorithm, investigate its properties, and show how it
generalizes previous approaches. Experiments show that it greatly outperforms
lifted variable elimination when logical structure is present. Finally, we
propose an algorithm for approximate probabilistic theorem proving, and show
that it can greatly outperform lifted belief propagation.
|
1202.3725
|
Generalized Fisher Score for Feature Selection
|
cs.LG stat.ML
|
Fisher score is one of the most widely used supervised feature selection
methods. However, it selects each feature independently according to its
score under the Fisher criterion, which leads to a suboptimal subset of
features. In this paper, we present a generalized Fisher score to jointly
select features. It aims at finding a subset of features which jointly
maximizes the lower bound of the traditional Fisher score. The resulting
feature selection problem is a mixed integer program, which can be
reformulated as a quadratically constrained linear program (QCLP). It is
solved by a cutting plane algorithm, in each iteration of which a multiple
kernel learning problem is solved alternately by multivariate ridge regression
and projected gradient descent. Experiments on benchmark data sets indicate
outperforms Fisher score as well as many other state-of-the-art feature
selection methods.
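For contrast with the joint selection proposed here, the classical per-feature Fisher score can be sketched as the between-class scatter of a feature's class means over its pooled within-class variance (the toy data below is our own; a feature that separates the classes scores high, a noise feature scores near zero):

```python
import numpy as np

def fisher_score(X, y):
    """Classical per-feature Fisher score: between-class scatter of each
    feature's class means divided by its pooled within-class variance."""
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / within

X = np.array([[0.0, 5.0], [0.2, 1.0], [1.0, 4.8], [1.2, 1.2]])
y = np.array([0, 0, 1, 1])
scores = fisher_score(X, y)   # feature 0 separates the classes, feature 1 is noise
```

Because each feature is scored in isolation, two individually strong but redundant features both rank high, which is the suboptimality the generalized formulation addresses.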
|
1202.3726
|
Active Semi-Supervised Learning using Submodular Functions
|
cs.LG stat.ML
|
We consider active, semi-supervised learning in an offline transductive
setting. We show that a previously proposed error bound for active learning on
undirected weighted graphs can be generalized by replacing graph cut with an
arbitrary symmetric submodular function. Arbitrary non-symmetric submodular
functions can be used via symmetrization. Different choices of submodular
functions give different versions of the error bound that are appropriate for
different kinds of problems. Moreover, the bound is deterministic and holds for
adversarially chosen labels. We show exactly minimizing this error bound is
NP-complete. However, we also introduce for any submodular function an
associated active semi-supervised learning method that approximately minimizes
the corresponding error bound. We show that the error bound is tight in the
sense that there is no other bound of the same form which is better. Our
theoretical results are supported by experiments on real data.
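The symmetrization step can be sketched directly from its definition, f_sym(S) = f(S) + f(V - S) - f(V), which turns any submodular function into a symmetric submodular one. The coverage function below is our own toy example of a non-symmetric submodular function:

```python
from itertools import combinations

def symmetrize(f, V):
    """f_sym(S) = f(S) + f(V - S) - f(V): symmetric and submodular
    whenever f is submodular, with f_sym(empty set) = 0."""
    V = frozenset(V)
    return lambda S: f(S) + f(V - S) - f(V)

# toy coverage function: submodular but not symmetric
cover = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'d'}}
def f(S):
    return len(set().union(*(cover[i] for i in S)))

V = frozenset({1, 2, 3})
g = symmetrize(f, V)

# brute-force check of symmetry and submodularity on all subsets
subsets = [frozenset(c) for r in range(4) for c in combinations(V, r)]
assert all(g(S) == g(V - S) for S in subsets)
assert all(g(A) + g(B) >= g(A | B) + g(A & B) for A in subsets for B in subsets)
```

Graph cut is the special case f_sym obtained from the (non-symmetric) cut-capacity function, which is why the original error bound is recovered as one instance of the generalized bound.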
|
1202.3727
|
Bregman divergence as general framework to estimate unnormalized
statistical models
|
cs.LG stat.ML
|
We show that the Bregman divergence provides a rich framework to estimate
unnormalized statistical models for continuous or discrete random variables,
that is, models which do not integrate or sum to one, respectively. We prove
that recent estimation methods such as noise-contrastive estimation, ratio
matching, and score matching belong to the proposed framework, and explain
their interconnection based on supervised learning. Further, we discuss the
role of boosting in unsupervised learning.
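A sketch of the unifying object itself: the Bregman divergence D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>, which recovers squared Euclidean distance and generalized KL divergence for particular choices of F (the toy vectors are our own):

```python
import numpy as np

def bregman(F, gradF, x, y):
    """Bregman divergence D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>
    for a convex generator F with gradient gradF."""
    return F(x) - F(y) - np.dot(gradF(y), x - y)

x = np.array([0.2, 0.8])
y = np.array([0.5, 0.5])

# F(u) = ||u||^2 recovers the squared Euclidean distance
sq = bregman(lambda u: u @ u, lambda u: 2 * u, x, y)

# F(u) = sum u log u (negative entropy) recovers generalized KL divergence
kl = bregman(lambda u: np.sum(u * np.log(u)), lambda u: np.log(u) + 1, x, y)
```

Different choices of F applied to an unnormalized model density yield the various estimation objectives the paper unifies, without ever computing the partition function.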
|