| id | title | categories | abstract |
|---|---|---|---|
1202.1484
|
Coding With Action-dependent Side Information and Additional
Reconstruction Requirements
|
cs.IT math.IT
|
Constrained lossy source coding and channel coding with side information
problems, which extend the classic Wyner-Ziv and Gel'fand-Pinsker problems,
are considered. Inspired by applications in sensor networking and control, we first
consider lossy source coding with two-sided partial side information where the
quality/availability of the side information can be influenced by a
cost-constrained action sequence. A decoder reconstructs a source sequence
subject to the distortion constraint, and at the same time, an encoder is
additionally required to be able to estimate the decoder's reconstruction.
Next, we consider the channel coding "dual" where the channel state is assumed
to depend on the action sequence, and the decoder is required to decode both
the transmitted message and the channel input reliably. The implications of
the additional reconstruction constraints for the fundamental limits of
communication in discrete memoryless systems are investigated. Single-letter
expressions for the rate-distortion-cost function and channel capacity for the
respective source and channel coding problems are derived. The dual relation
between the two problems is discussed. Additionally, based on the two-stage
coding structure and the additional reconstruction constraint of the channel
coding problem, we discuss and give an interpretation of the two-stage coding
condition which appears in the channel capacity expression. Besides the rate
constraint on the message, this condition is a necessary and sufficient
condition for reliable transmission of the channel input sequence over the
channel in our "two-stage" communication problem. It is also shown in one
example that there exists a case where the two-stage coding condition can be
active in computing the capacity, and it thus can actively restrict the set of
capacity achieving input distributions.
|
1202.1498
|
Preferential attachment alone is not sufficient to generate scale free
random networks
|
physics.soc-ph cs.SI
|
Many networks exhibit scale free behavior where their degree distribution
obeys a power law for large vertex degrees. Models constructed to explain this
phenomenon have relied on preferential attachment, where the networks grow by the
addition of both vertices and edges, and the edges attach themselves to a
vertex with a probability proportional to its degree. Simulations hint, though
not conclusively, that both growth and preferential attachment are necessary
for scale free behavior. We derive analytic expressions for degree
distributions for networks that grow by the addition of edges to a fixed number
of vertices, based on both linear and non-linear preferential attachment, and
show that they fall off exponentially as would be expected for purely random
networks. From this we conclude that preferential attachment alone might be
necessary but is certainly not a sufficient condition for generating scale free
networks.
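The edge-only growth process this abstract describes is easy to simulate. The sketch below is an illustrative Python simulation, not the paper's analytic derivation; the degree-plus-one offset is an assumed convention so that zero-degree vertices can receive their first edge.

```python
import random
from collections import Counter

def grow_edges(n_vertices=2000, n_edges=20000, seed=0):
    """Grow a network by adding edges only (no new vertices), choosing each
    endpoint with probability proportional to degree + 1. The +1 offset is an
    assumption that lets zero-degree vertices attract their first edge."""
    rng = random.Random(seed)
    degree = [0] * n_vertices
    pool = list(range(n_vertices))  # multiset: each vertex appears degree + 1 times
    for _ in range(n_edges):
        u, v = rng.choice(pool), rng.choice(pool)
        degree[u] += 1
        degree[v] += 1
        pool.append(u)
        pool.append(v)
    return degree

degrees = grow_edges()
hist = Counter(degrees)
# A power law would show up as a straight line of log P(k) against log k;
# here log P(k) decays roughly linearly in k, i.e. an exponential tail,
# consistent with the abstract's claim.
```

Plotting `hist` on semi-log versus log-log axes is a quick way to distinguish the exponential falloff from scale-free behavior.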
|
1202.1523
|
Information Forests
|
cs.LG stat.ML
|
We describe Information Forests, an approach to classification that
generalizes Random Forests by changing the splitting criterion of non-leaf
nodes from a discriminative one -- based on the entropy of the label
distribution -- to a generative one -- based on maximizing the information
divergence between the class-conditional distributions in the resulting
partitions. The basic idea consists of deferring classification until a measure
of "classification confidence" is sufficiently high, and instead breaking down
the data so as to maximize this measure. In an alternative interpretation,
Information Forests attempt to partition the data into subsets that are "as
informative as possible" for the purpose of the task, which is to classify the
data. Classification confidence, or informative content of the subsets, is
quantified by the Information Divergence. Our approach relates to active
learning, semi-supervised learning, and mixed generative/discriminative learning.
|
1202.1547
|
Nash Codes for Noisy Channels
|
cs.GT cs.IT math.IT
|
This paper studies the stability of communication protocols that deal with
transmission errors. We consider a coordination game between an informed sender
and an uninformed decision maker, the receiver, who communicate over a noisy
channel. The sender's strategy, called a code, maps states of nature to
signals. The receiver's best response is to decode the received channel output
as the state with highest expected receiver payoff. Given this decoding, an
equilibrium or "Nash code" results if the sender encodes every state as
prescribed. We show two theorems that give sufficient conditions for Nash
codes. First, a receiver-optimal code defines a Nash code. A second, more
surprising observation holds for communication over a binary channel which is
used independently a number of times, a basic model of information
transmission: Under a minimal "monotonicity" requirement for breaking ties when
decoding, which holds generically, EVERY code is a Nash code.
|
1202.1558
|
On the Performance of Maximum Likelihood Inverse Reinforcement Learning
|
cs.LG
|
Inverse reinforcement learning (IRL) addresses the problem of recovering a
task description given a demonstration of the optimal policy used to solve such
a task. The optimal policy is usually provided by an expert or teacher, making
IRL especially suitable for the problem of apprenticeship learning. The task
description is encoded in the form of a reward function of a Markov decision
process (MDP). Several algorithms have been proposed to find the reward
function corresponding to a set of demonstrations. One of the algorithms that
has provided the best results in different applications is a gradient method to
optimize a policy squared error criterion. On a parallel line of research,
other authors have recently presented a gradient approximation of the maximum
likelihood estimate of the reward signal. In general, both approaches
approximate the gradient estimate and the criteria at different stages to make
the algorithm tractable and efficient. In this work, we provide a detailed
description of the different methods to highlight differences in terms of
reward estimation, policy similarity and computational costs. We also provide
experimental results to evaluate the differences in performance of the methods.
|
1202.1568
|
Beyond Sentiment: The Manifold of Human Emotions
|
cs.CL
|
Sentiment analysis predicts the presence of positive or negative emotions in
a text document. In this paper we consider higher dimensional extensions of the
sentiment concept, which represent a richer set of human emotions. Our approach
goes beyond previous work in that our model contains a continuous manifold
rather than a finite set of human emotions. We investigate the resulting model,
compare it to psychological observations, and explore its predictive
capabilities. Besides obtaining significant improvements over a baseline
without manifold, we are also able to visualize different notions of positive
sentiment in different domains.
|
1202.1572
|
Expansion coding: Achieving the capacity of an AEN channel
|
cs.IT math.IT
|
A general method of coding over expansions is proposed, which allows one to
reduce the highly non-trivial problem of coding over continuous channels to
much simpler discrete ones. More specifically, the focus is on the additive
exponential noise (AEN) channel, for which the (binary) expansion of the
(exponential) noise random variable is considered. It is shown that the
random variables in the expansion are independent Bernoulli random
variables. Thus, each of the expansion levels (of the underlying channel)
corresponds to a binary symmetric channel (BSC), and the coding problem is
reduced to coding over these parallel channels while satisfying the channel
input constraint. This optimization formulation is stated as the achievable
rate result, for which a specific choice of input distribution is shown to
achieve a rate which is arbitrarily close to the channel capacity in the high
SNR regime. Remarkably, the scheme allows for low-complexity capacity-achieving
codes for AEN channels, using the codes that are originally designed for BSCs.
Extensions to different channel models and applications to other coding
problems are discussed.
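The claim that the binary expansion of exponential noise has independent Bernoulli levels can be checked numerically. The Python sketch below is a Monte Carlo illustration, not the paper's proof; the Bernoulli parameter 1/(1 + exp(lam * 2**l)) for the bit of weight 2**l follows from the memorylessness of the exponential distribution.

```python
import math
import random

def bit_at_level(x, l):
    """Bit of weight 2**l in the binary expansion of x >= 0 (l may be negative)."""
    return int(math.floor(x / 2 ** l)) & 1

lam = 1.0
rng = random.Random(1)
samples = [rng.expovariate(lam) for _ in range(200_000)]

# Memorylessness gives P(bit_l = 1) = 1 / (1 + exp(lam * 2**l)):
# the level tends to a fair coin as l -> -infinity, and to 0 as l grows.
errors = {}
for l in (-2, -1, 0, 1):
    freq = sum(bit_at_level(x, l) for x in samples) / len(samples)
    pred = 1.0 / (1.0 + math.exp(lam * 2 ** l))
    errors[l] = abs(freq - pred)
```

Each level's empirical frequency should match its predicted Bernoulli parameter to within Monte Carlo error, which is the property that maps each level to a BSC.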
|
1202.1574
|
Classification with High-Dimensional Sparse Samples
|
cs.IT math.IT math.ST stat.TH
|
The task of the binary classification problem is to determine which of two
distributions has generated a length-$n$ test sequence. The two distributions
are unknown; two training sequences of length $N$, one from each distribution,
are observed. The distributions share an alphabet of size $m$, which is
significantly larger than $n$ and $N$. How do $N$, $n$, and $m$ affect the probability
of classification error? We characterize the achievable error rate in a
high-dimensional setting in which $N,n,m$ all tend to infinity, under the
assumption that the probability of any symbol is $O(m^{-1})$. The results are:
1. There exists an asymptotically consistent classifier if and only if
$m=o(\min\{N^2,Nn\})$. This extends the previous consistency result in [1] to
the case $N\neq n$.
2. For the sparse sample case where $\max\{n,N\}=o(m)$, finer results are
obtained: The best achievable probability of error decays as $-\log(P_e)=J
\min\{N^2, Nn\}(1+o(1))/m$ with $J>0$.
3. A weighted coincidence-based classifier has non-zero generalized error
exponent $J$.
  4. The $\ell_2$-norm based classifier has $J=0$.
|
1202.1585
|
Robust seed selection algorithm for k-means type algorithms
|
cs.CV cs.CE
|
The selection of initial seeds greatly affects the quality of the clusters in
k-means type algorithms. Most seed selection methods produce different results
in different independent runs. We propose a single, optimal, outlier-insensitive
seed selection algorithm for k-means type algorithms as an extension to
k-means++. Experimental results on synthetic, real and microarray data sets
demonstrate the effectiveness of the new algorithm in producing clustering
results.
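Since the method is presented as an extension of k-means++, the standard k-means++ seeding it builds on can be sketched as follows (1-D points for brevity; this is the well-known baseline, not the authors' outlier-insensitive variant):

```python
import random

def kmeanspp_seeds(points, k, rng=random.Random(0)):
    """Standard k-means++ seeding: the first seed is uniform, and each
    subsequent seed is drawn with probability proportional to the squared
    distance to its nearest already-chosen seed (the D^2 weighting)."""
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        d2 = [min((p - s) ** 2 for s in seeds) for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                seeds.append(p)
                break
    return seeds

points = [0.0, 0.1, 10.0, 10.1]
seeds = kmeanspp_seeds(points, k=2)
```

The D^2 weighting spreads seeds across well-separated groups, but the first uniform draw still makes runs differ, which is exactly the run-to-run variability the abstract targets.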
|
1202.1587
|
Automatic Clustering with Single Optimal Solution
|
cs.CV
|
Determining optimal number of clusters in a dataset is a challenging task.
Though some methods are available, there is no algorithm that produces unique
clustering solution. The paper proposes an Automatic Merging for Single Optimal
Solution (AMSOS) which aims to generate unique and nearly optimal clusters for
the given datasets automatically. The AMSOS is iteratively merges the closest
clusters automatically by validating with cluster validity measure to find
single and nearly optimal clusters for the given data set. Experiments on both
synthetic and real data have proved that the proposed algorithm finds single
and nearly optimal clustering structure in terms of number of clusters,
compactness and separation.
|
1202.1595
|
Signal Recovery on Incoherent Manifolds
|
cs.IT math.IT stat.ML
|
Suppose that we observe noisy linear measurements of an unknown signal that
can be modeled as the sum of two component signals, each of which arises from a
nonlinear sub-manifold of a high dimensional ambient space. We introduce SPIN,
a first order projected gradient method to recover the signal components.
Despite the nonconvex nature of the recovery problem and the possibility of
underdetermined measurements, SPIN provably recovers the signal components,
provided that the signal manifolds are incoherent and that the measurement
operator satisfies a certain restricted isometry property. SPIN significantly
extends the scope of current recovery models and algorithms for low dimensional
linear inverse problems and matches (or exceeds) the current state of the art
in terms of performance.
|
1202.1596
|
Allocations for Heterogenous Distributed Storage
|
cs.IT math.IT
|
We study the problem of storing a data object in a set of data nodes that
fail independently with given probabilities. Our problem is a natural
generalization of a homogenous storage allocation problem where all the nodes
have the same reliability, and is naturally motivated by peer-to-peer and cloud
storage systems with different types of nodes. Assuming optimal erasure coding
(MDS), the goal is to find a storage allocation (i.e., how much to store in each
node) to maximize the probability of successful recovery. This problem turns
out to be a challenging combinatorial optimization problem. In this work we
introduce an approximation framework based on large deviation inequalities and
convex optimization. We propose two approximation algorithms and study the
asymptotic performance of the resulting allocations.
|
1202.1612
|
Data Exchange Problem with Helpers
|
cs.IT cs.CR math.IT
|
In this paper we construct a deterministic polynomial time algorithm for the
problem where a set of users is interested in gaining access to a common file,
but where each has only partial knowledge of the file. We further assume the
existence of another set of terminals in the system, called helpers, who are
not interested in the common file, but who are willing to help the users. Given
that the collective information of all the terminals is sufficient to allow
recovery of the entire file, the goal is to minimize the (weighted) sum of bits
that these terminals need to exchange over a noiseless public channel in order
to achieve this goal. Based on established connections to the multi-terminal
secrecy problem, our algorithm also implies a polynomial-time method for
constructing the largest shared secret key in the presence of an eavesdropper.
We consider the following side-information settings: (i) side-information in
the form of uncoded packets of the file, where the terminals' side-information
consists of subsets of the file; (ii) side-information in the form of linearly
correlated packets, where the terminals have access to linear combinations of
the file packets; and (iii) the general setting where the terminals'
side-information has an arbitrary (i.i.d.) correlation structure. We provide a
polynomial-time algorithm (in the number of terminals) that finds the optimal
rate allocations for these terminals, and then determines an explicit optimal
transmission scheme for cases (i) and (ii).
|
1202.1618
|
Isospectral flows on a class of finite-dimensional Jacobi matrices
|
math.DS cs.SY math.OC
|
We present a new matrix-valued isospectral ordinary differential equation
that asymptotically block-diagonalizes $n\times n$ zero-diagonal Jacobi
matrices employed as its initial condition. This o.d.e.\ features a right-hand
side with a nested commutator of matrices, and structurally resembles the
double-bracket o.d.e.\ studied by R.W.\ Brockett in 1991. We prove that its
solutions converge asymptotically, that the limit is block-diagonal, and above
all, that the limit matrix is defined uniquely as follows: For $n$ even, a
block-diagonal matrix containing $2\times 2$ blocks, such that the
super-diagonal entries are sorted by strictly increasing absolute value.
Furthermore, the off-diagonal entries in these $2\times 2$ blocks have the same
sign as the respective entries in the matrix employed as initial condition. For
$n$ odd, there is one additional $1\times 1$ block containing a zero that is
the top left entry of the limit matrix. The results presented here extend some
early work of Kac and van Moerbeke.
|
1202.1639
|
FastSIR Algorithm: A Fast Algorithm for simulation of epidemic spread in
large networks by using SIR compartment model
|
cs.DS cs.SI physics.soc-ph
|
The epidemic spreading on arbitrary complex networks is studied in the SIR
(Susceptible Infected Recovered) compartment model. We propose an
implementation of a Naive SIR algorithm for simulating epidemic spreading on
networks that uses data structures efficiently to reduce running time. The
Naive SIR algorithm models the full epidemic dynamics and can easily be
upgraded to a parallel version. We also propose a novel algorithm for
simulating epidemic spreading on networks, called the FastSIR algorithm, which
has a better average case running time than the Naive SIR algorithm. The
FastSIR algorithm reduces the average case running time by a constant factor
by using the probability distributions of the number of infected nodes.
Moreover, the FastSIR algorithm does not follow the epidemic dynamics in time,
but still captures all infection transfers. Furthermore, we propose an
efficient recursive method for calculating the probability distributions of
the number of infected nodes. The average case running time of both algorithms
is derived, and experimental analysis is performed on five different empirical
complex networks.
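A minimal discrete-time SIR sweep in the spirit of the "Naive SIR" simulation might look as follows; this is a generic sketch with assumed per-step infection and recovery probabilities, not the paper's optimized implementation.

```python
import random

def naive_sir(adj, patient_zero, p_infect, p_recover, rng=random.Random(0)):
    """Discrete-time SIR on a graph given as an adjacency dict. Each step,
    every infected node tries to infect each susceptible neighbour with
    probability p_infect, then recovers with probability p_recover."""
    susceptible = set(adj) - {patient_zero}
    infected = {patient_zero}
    recovered = set()
    while infected:
        newly = set()
        for u in infected:
            for v in adj[u]:
                if v in susceptible and rng.random() < p_infect:
                    newly.add(v)
        susceptible -= newly
        still = {u for u in infected if rng.random() >= p_recover}
        recovered |= infected - still
        infected = still | newly
    return recovered

# toy path graph 0-1-2-3; with certain infection/recovery the wave
# sweeps the whole path and everyone ends up recovered
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
final = naive_sir(adj, 0, p_infect=1.0, p_recover=1.0)  # -> {0, 1, 2, 3}
```

Each infected node here gets exactly one infectious step before recovering, which is the simplest SIR timing; the FastSIR idea of skipping the time dynamics while preserving infection transfers is a further optimization on top of such a loop.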
|
1202.1643
|
Genetic algorithms in astronomy and astrophysics
|
astro-ph.IM cs.NE
|
Genetic algorithms (GAs) emulate the process of biological evolution, in a
computational setting, in order to generate good solutions to difficult search
and optimisation problems. GA-based optimisers tend to be extremely robust and
versatile compared to most traditional techniques used to solve optimisation
problems. This review paper provides a very brief introduction to GAs and
outlines their utility in astronomy and astrophysics.
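The GA loop the review introduces (selection, crossover, mutation) can be illustrated in a few lines. The parameter choices and the one-max toy objective below are arbitrary assumptions for illustration, not from the paper.

```python
import random

def tiny_ga(fitness, n_bits=16, pop_size=30, generations=60,
            p_mut=0.02, rng=random.Random(0)):
    """Minimal generational GA: tournament selection (size 2), one-point
    crossover, and per-bit flip mutation. A toy sketch, not a tuned optimiser."""
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut)    # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# one-max problem: fitness is simply the number of 1-bits
best = tiny_ga(fitness=sum)
```

Swapping `fitness` for a model-evaluation routine (e.g. a chi-squared fit of a light curve) is the usual way such a loop is applied to astronomical optimisation problems.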
|
1202.1644
|
A characterization of the number of subsequences obtained via the
deletion channel
|
cs.IT math.IT
|
Motivated by the study of deletion channels, this work presents improved
bounds on the number of subsequences obtained from a binary string X of length n
under t deletions. It is known that the number of subsequences in this setting
strongly depends on the number of runs in the string X, where a run is a
maximal sequence of the same character. Our improved bounds are obtained by a
structural analysis of the family of r-run strings X, an analysis in which we
identify the extremal strings with respect to the number of subsequences.
Specifically, for every r, we present r-run strings with the minimum
(respectively maximum) number of subsequences under any t deletions; and
perform an exact analysis of the number of subsequences of these extremal
strings.
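For short strings, the quantities in question can be computed by brute force. The sketch below (an illustration only, not the paper's structural analysis) shows how the number of distinct subsequences under t deletions grows with the number of runs:

```python
from itertools import combinations

def num_subsequences(x, t):
    """Number of distinct subsequences of x obtainable by exactly t deletions
    (brute-force enumeration; fine for short strings)."""
    n = len(x)
    return len({''.join(x[i] for i in idx)
                for idx in combinations(range(n), n - t)})

def runs(x):
    """Number of maximal runs of identical characters in x."""
    return 1 + sum(1 for a, b in zip(x, x[1:]) if a != b)

two_runs = num_subsequences("0011", 1)   # 2 runs  -> 2 subsequences
four_runs = num_subsequences("0101", 1)  # 4 runs  -> 4 subsequences
```

At fixed length and t, the all-alternating string (maximal runs) yields more distinct subsequences than a string with few long runs, matching the run-dependence the abstract describes.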
|
1202.1656
|
Open Data: Reverse Engineering and Maintenance Perspective
|
cs.SE cs.DL cs.IR
|
Open data is an emerging paradigm to share large and diverse datasets --
primarily from governmental agencies, but also from other organizations -- with
the goal to enable the exploitation of the data for societal, academic, and
commercial gains. There are now already many datasets available with diverse
characteristics in terms of size, encoding and structure. These datasets are
often created and maintained in an ad-hoc manner. Thus, open data poses many
challenges and there is a need for effective tools and techniques to manage and
maintain it. In this paper we argue that software maintenance and reverse
engineering have an opportunity to contribute to open data and to shape its
future development. From the perspective of reverse engineering research, open
data is a new artifact that serves as input for reverse engineering techniques
and processes. Specific challenges of open data are document scraping, image
processing, and structure/schema recognition. From the perspective of
maintenance research, maintenance has to accommodate changes of open data
sources by third-party providers, traceability of data transformation
pipelines, and quality assurance of data and transformations. We believe that
the increasing importance of open data and the research challenges that it
brings with it may possibly lead to the emergence of new research streams for
reverse engineering as well as for maintenance.
|
1202.1683
|
Deployment of mobile routers ensuring coverage and connectivity
|
cs.RO cs.NI
|
Maintaining connectivity among a group of autonomous agents exploring an area
is very important, as it promotes cooperation between the agents and also helps
message exchanges which are very critical for their mission. Creating an
underlying Ad-hoc Mobile Router Network (AMRoNet) using simple robotic routers
is an approach that facilitates communication between the agents without
restricting their movements. We address the following question in our paper:
How can an AMRoNet be created using only local information and a minimum
number of routers? We propose two new localized and distributed algorithms for
creating an AMRoNet: 1) agent-assisted router deployment and 2) self-spreading.
The algorithms use a greedy deployment strategy for deploying routers
effectively into the area to maximize coverage, and a triangular deployment
strategy to connect the different connected components of routers from different
base stations. Empirical analysis shows that the proposed algorithms are the
two best localized approaches to create AMRoNets.
|
1202.1685
|
Combined Haar-Hilbert and Log-Gabor Based Iris Encoders
|
cs.CV
|
This chapter shows that combining Haar-Hilbert and Log-Gabor improves iris
recognition performance leading to a less ambiguous biometric decision
landscape in which the overlap between the experimental intra- and interclass
score distributions diminishes or even vanishes. Haar-Hilbert, Log-Gabor and
combined Haar-Hilbert and Log-Gabor encoders are tested here for both the
single and dual iris approaches. The experimental results confirm that the best
performance is obtained for the dual iris approach when the iris code is
generated using the combined Haar-Hilbert and Log-Gabor encoder, and when the
matching score fuses the information from both Haar-Hilbert and Log-Gabor
channels of the combined encoder.
|
1202.1692
|
Efficient Decoding of Partial Unit Memory Codes of Arbitrary Rate
|
cs.IT math.IT
|
Partial Unit Memory (PUM) codes are a special class of convolutional codes,
which are often constructed by means of block codes. Decoding of PUM codes may
take advantage of existing decoders for the block code. The Dettmar--Sorger
algorithm is an efficient decoding algorithm for PUM codes, but allows only low
code rates. The same restriction holds for several known PUM code
constructions. In this paper, an arbitrary-rate construction, the analysis of
its distance parameters and a generalized decoding algorithm for PUM codes of
arbitrary rate are provided. The correctness of the algorithm is proven and it
is shown that its complexity is cubic in the length.
|
1202.1694
|
Learning to Place New Objects in a Scene
|
cs.RO
|
Placing is a necessary skill for a personal robot to have in order to perform
tasks such as arranging objects in a disorganized room. The object placements
should not only be stable but also be in their semantically preferred placing
areas and orientations. This is challenging because an environment can have a
large variety of objects and placing areas that may not have been seen by the
robot before.
In this paper, we propose a learning approach for placing multiple objects in
different placing areas in a scene. Given point-clouds of the objects and the
scene, we design appropriate features and use a graphical model to encode
various properties, such as the stacking of objects, stability, object-area
relationship and common placing constraints. The inference in our model is an
integer linear program, which we solve efficiently via an LP relaxation. We
extensively evaluate our approach on 98 objects from 16 categories being placed
into 40 areas. Our robotic experiments show a success rate of 98% in placing
known objects and 82% in placing new objects stably. We use our method on our
robots for performing tasks such as loading several dish-racks, a bookshelf and
a fridge with multiple items.
|
1202.1708
|
A Polynomial Time Approximation Scheme for a Single Machine Scheduling
Problem Using a Hybrid Evolutionary Algorithm
|
cs.NE
|
Nowadays hybrid evolutionary algorithms, i.e., heuristic search algorithms
combining several mutation operators, some of which stochastically implement a
well-known technique designed for the specific problem in question while
others play the role of random search, have become rather popular for tackling
various NP-hard optimization problems. While
empirical studies demonstrate that hybrid evolutionary algorithms are
frequently successful at finding solutions having fitness sufficiently close to
the optimal, many fewer articles address the computational complexity in a
mathematically rigorous fashion. This paper is devoted to a mathematically
motivated design and analysis of a parameterized family of evolutionary
algorithms which provides a polynomial time approximation scheme for one of the
well-known NP-hard combinatorial optimization problems, namely the "single
machine scheduling problem without precedence constraints". The authors hope
that the techniques and ideas developed in this article may be applied in many
other situations.
|
1202.1734
|
Superiority of TDMA in a Class of Gaussian Multiple-Access Channels with
a MIMO-AF-Relay
|
cs.IT math.IT
|
We consider a Gaussian multiple-access channel (MAC) with an
amplify-and-forward (AF) relay, where all nodes except the receiver have
multiple antennas and the direct links between transmitters and receivers are
neglected. Thus, spatial processing can be applied both at the transmitters and
at the relay, which is subject to optimization for increasing the data rates.
In general, this optimization problem is non-convex and hard to solve. While in
prior work on this problem, it is assumed that all transmitters access the
channel jointly, we propose a solution where each transmitter accesses the
channel exclusively, using a time-division multiple-access (TDMA) scheme. It is
shown that this scheme provides higher achievable sum rates, which raises the
question of the need for TDMA to achieve the general capacity region of MACs
with AF relay.
|
1202.1740
|
A Diversity-Multiplexing-Delay Tradeoff of ARQ Protocols in The
Z-interference Channel
|
cs.IT math.IT
|
In this work, we analyze the fundamental performance tradeoff of the
single-antenna Automatic Retransmission reQuest (ARQ) Z-interference channel
(ZIC). Specifically, we characterize the achievable three-dimensional tradeoff
between diversity (reliability), multiplexing (throughput), and delay (maximum
number of retransmissions) of two ARQ protocols: A non-cooperative protocol and
a cooperative one. In the non-cooperative case, we study the achievable
tradeoff of the fixed-power split Han-Kobayashi (HK) approach. Interestingly,
we demonstrate that if the second user transmits only the common part of its
message in the event of its successful decoding and a decoding failure at the
first user, communication is improved over that achieved by keeping or stopping
the transmission of both the common and private messages. We obtain closed-form
expressions for the achievable tradeoff under the HK splitting. Under
cooperation, two special cases of the HK are considered for static and dynamic
decoders. The difference between the two decoders lies in the ability of the
latter to dynamically choose which HK special-case decoding to apply.
Cooperation is shown to dramatically increase the achievable first user
diversity.
|
1202.1742
|
Stabilizing sliding mode control design and application for a dc motor:
Speed control
|
cs.SY
|
Regulation by sliding mode control (SMC) is recognized for its robustness and
dynamic response. This article briefly reviews the principles of sliding mode
regulation, as well as the application of this approach to the speed control
of a DC motor bench using the TY36A/EV unit. This unit, from Electronica
Veneta products, uses a PID controller to control the speed and position of
the DC motor. Our purpose is to improve the settling time and the robustness
of the system when disturbances take place. The experimental results show the
very good performance of the proposed approach relative to the PID.
|
1202.1747
|
Growth Patterns of Subway/Metro Systems Tracked by Degree Correlation
|
physics.soc-ph cs.SI
|
Urban transportation systems grow over time as city populations grow and move
and their transportation needs evolve. Typical network growth models, such as
preferential attachment, grow the network node by node whereas rail and metro
systems grow by adding entire lines with all their nodes. The objective of this
paper is to see if any canonical regular network forms such as stars or grids
capture the growth patterns of urban metro systems for which we have historical
data in terms of old maps. Data from these maps reveal that the systems'
Pearson degree correlation increases over time from initially negative values
toward positive values, and in some cases becomes decidedly positive.
We have derived closed form expressions for degree correlation and clustering
coefficient for a variety of canonical forms that might be similar to metro
systems. Of all those examined, only a few types patterned after a wide area
network (WAN) with a "core-periphery" structure show similar positive-trending
degree correlation as network size increases. This suggests that large metro
systems either are designed or evolve into the equivalent of message carriers
that seek to balance travel between arbitrary node-destination pairs with
avoidance of congestion in the central regions of the network.
Keywords: metro, subway, urban transport networks, degree correlation
|
1202.1779
|
Finding the Graph of Epidemic Cascades
|
cs.SI physics.soc-ph stat.ML
|
We consider the problem of finding the graph on which an epidemic cascade
spreads, given only the times when each node gets infected. While this is a
problem of importance in several contexts -- offline and online social
networks, e-commerce, epidemiology, vulnerabilities in infrastructure networks
-- there has been very little work, analytical or empirical, on finding the
graph. Clearly, it is impossible to do so from just one cascade; our interest
is in learning the graph from a small number of cascades.
For the classic and popular "independent cascade" SIR epidemics, we
analytically establish the number of cascades required by both the global
maximum-likelihood (ML) estimator, and a natural greedy algorithm. Both results
are based on a key observation: the global graph learning problem decouples
into $n$ local problems -- one for each node. For a node of degree $d$, we show
that its neighborhood can be reliably found once it has been infected $O(d^2
\log n)$ times (for ML on general graphs) or $O(d\log n)$ times (for greedy on
trees). We also provide a corresponding information-theoretic lower bound of
$\Omega(d\log n)$; thus our bounds are essentially tight. Furthermore, if we
are given side-information in the form of a super-graph of the actual graph (as
is often the case), then the number of cascade samples required -- in all cases
-- becomes independent of the network size $n$.
Finally, we show that for a very general SIR epidemic cascade model, the
Markov graph of infection times is obtained via the moralization of the network
graph.
|
1202.1801
|
Network Coded Gossip with Correlated Data
|
cs.IT cs.DC cs.DS math.IT
|
We design and analyze gossip algorithms for networks with correlated data. In
these networks, either the data to be distributed, the data already available
at the nodes, or both, are correlated. This model is applicable for a variety
of modern networks, such as sensor, peer-to-peer and content distribution
networks.
Although coding schemes for correlated data have been studied extensively,
the focus has been on characterizing the rate region in static memory-free
networks. In a gossip-based scheme, however, nodes communicate among each other
by continuously exchanging packets according to some underlying communication
model. The main figure of merit in this setting is the stopping time -- the
time required until nodes can successfully decode. While gossip schemes are
practical, distributed and scalable, they have only been studied for
uncorrelated data.
We wish to close this gap by providing techniques to analyze network coded
gossip in (dynamic) networks with correlated data. We give a clean framework
for oblivious network models that applies to a multitude of network and
communication scenarios, specify a general setting for distributed correlated
data, and give tight bounds on the stopping times of network coded protocols in
this wide range of scenarios.
|
1202.1808
|
Personalised product design using virtual interactive techniques
|
cs.MM cs.CV cs.GR
|
The use of virtual interactive techniques for personalized product design is
described in this paper. Usually, products are designed and built by
considering general usage patterns, and prototyping is used to mimic the static
or working behaviour of an actual product before it is manufactured. The user
does not have any control over the design of the product. Personalized design
postpones design decisions to a later stage: it allows the user to select
individual components personally. This is implemented by displaying the
individual components over a physical model, constructed from cardboard or
Thermocol in the actual size and shape of the original product. The components
of the equipment or product, such as the screen, buttons, etc., are then
projected onto the physical model using a projector connected to the computer.
Users can interact with the prototype as if it were the original working
equipment; they can select, shape, and position the individual components
displayed on the interaction panel using simple hand gestures. Computer vision
and sound processing techniques are used to detect and recognize the user
gestures captured by a web camera and microphone.
|
1202.1837
|
A Proposed Architecture for Continuous Web Monitoring Through Online
Crawling of Blogs
|
cs.IR cs.SI
|
Being informed in a timely manner of what is published in the Web space can
greatly help psychologists, marketers and political analysts to familiarize
themselves with, analyse, make decisions about, and act correctly on society's
different needs. The great volume of information in the Web space prevents us
from continuously investigating the whole of the Web online. Focusing on
selected blogs limits our working domain and makes online crawling of the Web
space feasible. In this article, an architecture is proposed which continuously
crawls the related blogs online, using a focused crawler, and investigates and
analyses the obtained data. The online fetching is done based on the latest
announcements of the ping server machines. A weighted graph is formed by
targeting the important key phrases, so that a focused crawler can fetch the
complete texts of the related Web pages based on the weighted graph.
|
1202.1841
|
Semantic Visualization and Navigation in Textual Corpus
|
cs.IR cs.DL cs.GR cs.SI
|
This paper surveys related work in the information visualization domain and
studies the actual integration of cartography paradigms into current
information search systems. Based on this study, we propose a semantic
visualization and navigation approach which offers users three search modes:
precise search, connotative search and thematic search. The objective is to
offer the users of an information search system new interaction paradigms
which support the semantic aspect of the considered information space and
guide users in their searches, assisting them in locating their centers of
interest and improving serendipity.
|
1202.1842
|
Network Backbone Discovery Using Edge Clustering
|
cs.SI cs.DS
|
In this paper, we investigate the problem of network backbone discovery. In
complex systems, a "backbone" takes a central role in carrying out the system
functionality and carries the bulk of system traffic. It also both simplifies
and highlights the underlying network structure. Here, we propose an integrated
graph-theoretical and information-theoretical network backbone model. We
develop an efficient mining algorithm based on a Kullback-Leibler divergence
optimization procedure and a maximal-weight connected subgraph discovery
procedure. A detailed experimental evaluation demonstrates both the
effectiveness and the efficiency of our approach. Case studies on real-world
domains further illustrate the usefulness of the discovered network backbones.
|
1202.1881
|
A personalized web page content filtering model based on segmentation
|
cs.IR
|
In view of the massive content explosion in the World Wide Web through diverse
sources, content filtering tools have become mandatory. Filtering the contents
of web pages is especially significant when the pages are accessed by minors.
Traditional web page blocking systems follow a Boolean methodology of either
displaying the full page or blocking it completely. With the increased
dynamism of web pages, it has become common for different portions of a web
page to hold different types of content at different time instances. This
paper proposes a model to block content at a fine-grained level; i.e., instead
of completely blocking the page, only those segments which hold the
objectionable content are blocked. The advantages of this method over
traditional methods are the fine-grained level of blocking and the automatic
identification of the portions of the page to be blocked. Experiments
conducted on the proposed model indicate 88% accuracy in filtering out the
segments.
|
1202.1886
|
Classification of artificial intelligence ids for smurf attack
|
cs.AI
|
Many methods have been developed to secure the network infrastructure and
communication over the Internet. Intrusion detection is a relatively new
addition to such techniques. Intrusion detection systems (IDS) are used to find
out whether someone has intruded into, or is trying to intrude into, the
network. One major problem is the number of intrusions, which is increasing day
by day. We need to obtain network attack information using an IDS and then
analyse its effect. Because IDSs that are solely signature based cannot detect
every new intrusion, it is important to introduce artificial intelligence (AI)
methods and techniques into IDS. The introduction of AI necessitates
normalization of intrusions. This work focuses on a classification of AI-based
IDS techniques which will help design better intrusion detection systems in the
future. We also propose a support vector machine for IDS to detect the Smurf
attack with reliable accuracy.
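As a minimal sketch of the kind of SVM-based Smurf detector proposed above (with purely hypothetical traffic features and synthetic data; the paper's feature set and training procedure may differ), a linear SVM can be trained on per-window statistics with Pegasos-style sub-gradient descent:

```python
import numpy as np

# Hypothetical per-time-window traffic features for Smurf (ICMP broadcast
# amplification) detection: [icmp packets/s, fraction of echo replies,
# distinct source addresses]. The feature choice is illustrative only.
rng = np.random.default_rng(0)
normal = rng.normal([10, 0.1, 5], [3.0, 0.05, 2.0], size=(200, 3))
attack = rng.normal([500, 0.9, 150], [50.0, 0.05, 30.0], size=(200, 3))
X = np.vstack([normal, attack])
y = np.hstack([-np.ones(200), np.ones(200)])
X = (X - X.mean(axis=0)) / X.std(axis=0)           # standardize features

# Linear SVM trained with Pegasos-style sub-gradient descent on the hinge loss.
w, lam = np.zeros(3), 0.01
for t in range(1, 2001):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)
    w *= 1 - eta * lam                             # shrink (regularization step)
    if y[i] * (X[i] @ w) < 1:                      # margin violated: hinge step
        w += eta * y[i] * X[i]

accuracy = np.mean(np.sign(X @ w) == y)
print(accuracy)
```

On such well-separated synthetic flood statistics a linear separator suffices; real traffic would need careful feature engineering and a held-out test split.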
|
1202.1888
|
Equivalence of SLNR Precoder and RZF Precoder in Downlink MU-MIMO
Systems
|
cs.IT math.IT
|
The signal-to-leakage-and-noise ratio (SLNR) precoder is widely used for
MU-MIMO systems and has been observed to outperform the zero-forcing (ZF)
precoder. Our work proves that the SLNR precoder is completely equivalent to
the conventional regularized ZF (RZF) precoder, which has a significant gain
over the ZF precoder at low SNRs. Therefore, with our conclusion, the existing
performance analyses of the RZF precoder are readily applicable to the SLNR
precoder.
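The claimed equivalence can be checked numerically. The sketch below (assuming single-antenna users and matched regularization constants) builds the SLNR precoder per user from its closed form and compares it, column by column, with the RZF precoder; the matrix inversion lemma makes the normalized columns coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, sigma2 = 4, 3, 0.5        # tx antennas, single-antenna users, noise power
H = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))  # row k: user k

# SLNR precoder: w_k maximizes |h_k w|^2 / (sum_{j != k} |h_j w|^2 + sigma2 |w|^2).
# With a rank-one numerator the maximizer is (H_{-k}^H H_{-k} + sigma2 I)^{-1} h_k^H.
W_slnr = np.zeros((M, K), dtype=complex)
for k in range(K):
    Hk = np.delete(H, k, axis=0)
    B = Hk.conj().T @ Hk + sigma2 * np.eye(M)
    w = np.linalg.solve(B, H[k].conj())
    W_slnr[:, k] = w / np.linalg.norm(w)

# RZF precoder with the same regularization constant, columns normalized.
W_rzf = H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(K))
W_rzf /= np.linalg.norm(W_rzf, axis=0)

# The matrix inversion lemma implies each pair of columns is proportional
# (with a positive real factor), so the normalized precoders coincide.
overlap = np.abs(np.sum(W_slnr.conj() * W_rzf, axis=0))
print(np.allclose(overlap, 1.0))
```

The per-column proportionality follows from the Sherman-Morrison identity applied to $H^H H + \sigma^2 I = H_{-k}^H H_{-k} + h_k^H h_k + \sigma^2 I$.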
|
1202.1891
|
Hyper heuristic based on great deluge and its variants for exam
timetabling problem
|
cs.AI
|
University timetabling problems occur annually, and they are often hard and
time-consuming to solve. This paper describes a hyper-heuristic (HH) method
based on the Great Deluge (GD) algorithm and its variants for solving large,
highly constrained timetabling problems from different domains. Generally, a
hyper-heuristic framework has two main stages: heuristic selection and move
acceptance. This paper emphasizes the latter stage in developing the HH
framework. The main contribution of this paper is that Great Deluge and its
variants, Flex Deluge (FD), Non-linear Great Deluge (NLGD) and Extended Great
Deluge (EGD), are used as move acceptance methods in the HH, combined with
reinforcement learning (RL). These HH methods are tested on exam timetabling
benchmark problems, and the best results and a comparison analysis are
reported.
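A minimal sketch of the basic Great Deluge move-acceptance rule on a toy 1-D minimization (illustrative only; the FD/NLGD/EGD variants change how the water level is lowered, and an exam-timetabling solver would plug in real neighborhood moves and constraint costs):

```python
import random

def great_deluge(x0, neighbor, cost, iters=5000):
    """Minimal Great Deluge move acceptance: accept any candidate whose cost
    lies below a 'water level' that is lowered linearly every iteration."""
    best = current = x0
    level = cost(x0)                      # initial water level
    decay = level / iters                 # linear lowering rate
    for _ in range(iters):
        cand = neighbor(current)
        if cost(cand) <= level:           # under the water level: accept
            current = cand
            if cost(current) < cost(best):
                best = current
        level -= decay
    return best

# Toy demo: minimize a 1-D quadratic with a random-walk neighborhood.
random.seed(1)
f = lambda x: (x - 3.0) ** 2
step = lambda x: x + random.uniform(-0.5, 0.5)
result = great_deluge(0.0, step, f)
print(result)
```

Unlike simulated annealing, acceptance here is deterministic given the level, which makes the single decay parameter easy to tune per problem.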
|
1202.1909
|
On the Degrees of Freedom of time correlated MISO broadcast channel with
delayed CSIT
|
cs.IT math.IT
|
We consider the time correlated MISO broadcast channel where the transmitter
has partial knowledge on the current channel state, in addition to delayed
channel state information (CSI). Rather than exploiting only the current CSI,
as the zero-forcing precoding, or only the delayed CSI, as the Maddah-Ali-Tse
(MAT) scheme, we propose a seamless strategy that takes advantage of both. The
achievable degrees of freedom of the proposed scheme is characterized in terms
of the quality of the current channel knowledge.
|
1202.1914
|
Global Maps of Science based on the new Web-of-Science Categories
|
cs.DL cs.SI
|
In August 2011, Thomson Reuters launched version 5 of the Science and Social
Science Citation Index in the Web of Science (WoS). Among other things, the 222
ISI Subject Categories (SCs) for these two databases in version 4 of WoS were
renamed and extended to 225 WoS Categories (WCs). A new set of 151 Subject
Categories (SCs) was added, but at a higher level of aggregation. Since we
previously used the ISI SCs as the baseline for a global map in Pajek (Rafols
et al., 2010) and brought this facility online (at
http://www.leydesdorff.net/overlaytoolkit), we recalibrated this map for the
new WC categories using the Journal Citation Reports 2010. In the new
installation, the base maps can also be made using VOSviewer (Van Eck &
Waltman, 2010).
|
1202.1941
|
An Intelligent Mobile-Agent Based Scalable Network Management
Architecture for Large-Scale Enterprise System
|
cs.NI cs.DC cs.MA
|
Several Mobile Agent based distributed network management models have been
proposed in recent times to address the scalability and flexibility problems of
centralized (SNMP or CMIP) management models. Though the use of Mobile
Agents to distribute and delegate management tasks comes in handy in dealing with
the previously stated issues, many of the agent-based management frameworks
like initial flat bed models and static mid-level managers employing mobile
agents models cannot efficiently meet the demands of current networks which are
growing in size and complexity. Moreover, varied technologies, such as SONET,
ATM, Ethernet, DWDM etc., present at different layers of the Access, Metro and
Core (long haul) sections of the network, have contributed to the complexity in
terms of their own framing and protocol structures. Thus, controlling and
managing the traffic in these networks is a challenging task. This paper
presents an intelligent scalable hierarchical agent based model for the
management of large-scale complex networks to address the aforesaid issues. A
cost estimation, carried out with a view to computing the overall management
cost in terms of management data overhead, is presented. The results obtained
establish the usefulness of the presented architecture as compared to
centralized and flat-bed agent-based models.
|
1202.1943
|
3D Model Assisted Image Segmentation
|
cs.CV
|
The problem of segmenting a given image into coherent regions is important in
Computer Vision and many industrial applications require segmenting a known
object into its components. Examples include identifying individual parts of a
component for process control work in a manufacturing plant and identifying
parts of a car from a photo for automatic damage detection. Unfortunately most
of an object's parts of interest in such applications share the same pixel
characteristics, having similar colour and texture. This makes segmenting the
object into its components a non-trivial task for conventional image
segmentation algorithms. In this paper, we propose a "Model Assisted
Segmentation" method to tackle this problem. A 3D model of the object is
registered over the given image by optimising a novel gradient based loss
function. This registration obtains the full 3D pose from an image of the
object. The image can have an arbitrary view of the object and is not limited
to a particular set of views. The segmentation is subsequently performed using
a level-set based method, using the projected contours of the registered 3D
model as initialisation curves. The method is fully automatic and requires no
user interaction. Also, the system does not require any prior training. We
present our results on photographs of a real car.
|
1202.1945
|
A framework: Cluster detection and multidimensional visualization of
automated data mining using intelligent agents
|
cs.AI
|
Data mining techniques play a vital role in extracting required knowledge and
finding unsuspected information to support strategic decisions in a novel way
that is understandable by domain experts. A generalized framework is proposed
that considers non-domain experts during the mining process, for better
understanding, better decision making and better discovery of new patterns, by
selecting suitable data mining techniques based on the user profile by means of
intelligent agents. KEYWORDS: Data Mining Techniques, Intelligent Agents, User
Profile, Multidimensional Visualization, Knowledge Discovery.
|
1202.1990
|
Non-parametric convolution based image-segmentation of ill-posed objects
applying context window approach
|
cs.CV
|
Context-dependence in the human cognition process is a well-established fact.
Following this, we introduce an image segmentation method that can use context
to classify a pixel on the basis of its membership in a particular object class
of the concerned image. In broad methodological steps, each pixel was defined
by the context window (CW) surrounding it, the size of which was fixed
heuristically. The CW texture, defined by the intensities of its pixels, was
convolved with weights optimized through a non-parametric function supported by
a backpropagation network. The result of the convolution was used to classify
the pixels. The training data points (i.e., pixels) were carefully chosen to
include all varieties of context: i) points within the object, ii) points near
the edge but inside the objects, iii) points at the border of the objects, iv)
points near the edge but outside the objects, and v) points near or at the edge
of the image frame. Moreover, the training data points were selected from all
the images within the image dataset. CW texture information was captured for
1000 pixels from the face and background areas of the images, of which 700 CWs
were used as training input data and the remaining 300 for testing. Our work
lays the first foundation for a quantitative enumeration of the efficiency of
image segmentation, which is extendable to segmenting more than two objects
within an image.
|
1202.1992
|
A Comparison of Soft and Hard Coded Relaying
|
cs.IT math.IT
|
"Amplify and Forward" and "Decode and Forward" are the two main relaying
functions that have been proposed since the advent of cooperative
communication. "\textit{Soft} Decode and Forward" is a recently introduced
relaying principle that aims to combine the benefits of the two classical
relaying algorithms. In this work, we thoroughly investigate \textit{soft}
relaying algorithms when convolutional or turbo codes are applied. We study the
error performance of two cooperative scenarios employing soft-relaying. A novel
approach, the mutual information loss due to data processing, is proposed to
analyze the relay-based soft encoder. We also introduce a novel approach to
derive the estimated bit error rate and the equivalent channel SNR for the
relaying techniques considered in the paper.
|
1202.2026
|
A quantum genetic algorithm with quantum crossover and mutation
operations
|
cs.NE quant-ph
|
In the context of evolutionary quantum computing in the literal sense, a
quantum crossover operation has not been introduced so far. Here, we introduce
a novel quantum genetic algorithm which has a quantum crossover procedure
performing crossovers among all chromosomes in parallel for each generation. A
complexity analysis shows that a quadratic speedup is achieved over its
classical counterpart in the dominant factor of the run time to handle each
generation.
|
1202.2037
|
Note on RIP-based Co-sparse Analysis
|
cs.IT math.IT
|
Over the past years, there has been increasing interest in recovering signals
from undersampled data when such signals are sparse under some orthogonal
dictionary or tight frame; this is referred to as the sparse synthesis model.
More recently, its counterpart, the sparse analysis model, has also attracted
researchers' attention, as many practical signals are sparse in a truly
redundant dictionary. This short paper presents an important complement to the
results in the existing literature on the sparse analysis model. Firstly, we
give a natural generalization of the well-known restricted isometry property
(RIP) to deal with the sparse analysis model, where a truly arbitrary
incoherent dictionary is considered. Secondly, we study the theoretical
guarantee for the accurate recovery of signals that are sparse in general
redundant dictionaries through solving an l1-norm sparsity-promoting
optimization problem. This work shows not only that compressed sensing is
viable in the context of sparse analysis, but also that accurate recovery is
possible via solving the l1-minimization problem.
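The l1-minimization problem discussed above can be sketched as a linear program (Basis Pursuit in the synthesis form, here with an identity dictionary for simplicity; recovery at these toy dimensions is typical for Gaussian matrices but not guaranteed):

```python
import numpy as np
from scipy.optimize import linprog

# Basis Pursuit: min ||x||_1 s.t. Phi x = y, cast as a linear program
# over the split x = u - v with u, v >= 0.
rng = np.random.default_rng(3)
m, n, k = 16, 24, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0

c = np.ones(2 * n)                     # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([Phi, -Phi])          # Phi (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_hat - x0)))      # near zero when recovery succeeds
```

For the analysis model the penalty would instead be $\|\Omega x\|_1$ for an analysis operator $\Omega$, which no longer reduces to this simple split-variable LP.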
|
1202.2082
|
Multiuser Detection and Channel Estimation for Multibeam Satellite
Communications
|
cs.NI cs.IT math.IT
|
In this paper, iterative multi-user detection techniques for multi-beam
satellite communications are presented. The solutions are based on a successive
interference cancellation architecture and channel decoding to treat the
co-channel interference. Beamforming and channel coefficients are estimated and
updated iteratively. A signal-combining technique improves the power of the
useful received signal and thus reduces the bit error rate at low
signal-to-noise ratios. The approach is applied to a synchronous multi-beam
satellite link under an additive white Gaussian noise channel. The techniques
are evaluated with computer simulations in a noisy, multiple-access
environment. The simulation results show the good performance of the proposed
solutions.
|
1202.2088
|
Coded Cooperative Data Exchange Problem for General Topologies
|
cs.IT math.IT
|
We consider the "coded cooperative data exchange problem" for general graphs.
In this problem, given a graph G=(V,E) representing clients in a broadcast
network, each of which initially holds a (not necessarily disjoint) set of
information packets, one wishes to design a communication scheme in which
eventually all clients will hold all the packets of the network. Communication
is performed in rounds, where in each round a single client broadcasts a single
(possibly encoded) information packet to its neighbors in G. The objective is
to design a broadcast scheme that satisfies all clients with the minimum number
of broadcast rounds.
The coded cooperative data exchange problem has seen significant research
over the last few years; mostly when the graph G is the complete broadcast
graph in which each client is adjacent to all other clients in the network, but
also on general topologies, both in the fractional and integral setting. In
this work we focus on the integral setting in general undirected topologies G.
We tie the data exchange problem on G to certain well-studied combinatorial
properties of G and in doing so show that solving the problem exactly, or even
approximately within a multiplicative factor of \log{|V|}, is intractable
(i.e., NP-hard). We then turn to study efficient data exchange schemes yielding
a number of communication rounds comparable to our intractability result. Our
communication schemes do not involve encoding, and thus yield bounds on the
"coding advantage" in the setting at hand.
|
1202.2089
|
The Supermarket Game
|
cs.IT cs.GT math.IT
|
A supermarket game is considered with $N$ FCFS queues with unit exponential
service rate and global Poisson arrival rate $N \lambda$. Upon arrival each
customer chooses a number of queues to be sampled uniformly at random and joins
the least loaded sampled queue. Customers are assumed to have cost for both
waiting and sampling, and they want to minimize their own expected total cost.
We study the supermarket game in a mean field model that corresponds to the
limit as $N$ converges to infinity in the sense that (i) for a fixed symmetric
customer strategy, the joint equilibrium distribution of any fixed number of
queues converges as $N \to \infty$ to a product distribution determined by the
mean field model and (ii) a Nash equilibrium for the mean field model is an
$\epsilon$-Nash equilibrium for the finite $N$ model with $N$ sufficiently
large. It is shown that there always exists a Nash equilibrium for $\lambda <1$
and the Nash equilibrium is unique with homogeneous waiting cost for $\lambda^2
\le 1/2$. Furthermore, we find that the action of sampling more queues by some
customers has a positive externality on the other customers in the mean field
model, but can have a negative externality for finite $N$.
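The sampling dynamics underlying the game can be sketched with a toy simulation of the underlying supermarket model (illustrative only; it fixes the number of sampled queues d rather than letting customers choose it strategically against a sampling cost):

```python
import random

def supermarket_sim(n=50, lam=0.8, d=2, events=60000, seed=0):
    """Toy simulation of the supermarket model: Poisson arrivals of total rate
    n*lam, unit-rate exponential servers; each arrival samples d queues
    uniformly at random and joins the least loaded one."""
    rng = random.Random(seed)
    q = [0] * n
    busy = 0
    for _ in range(events):
        total = n * lam + busy            # arrival rate + total service rate
        if rng.random() < n * lam / total:            # next event: an arrival
            j = min(rng.sample(range(n), d), key=lambda i: q[i])
            q[j] += 1
            if q[j] == 1:
                busy += 1
        else:                                          # a departure
            i = rng.choice([i for i in range(n) if q[i] > 0])
            q[i] -= 1
            if q[i] == 0:
                busy -= 1
    return sum(q) / n                     # mean queue length at end of run

# Sampling d = 2 queues dramatically shortens queues relative to d = 1.
print(supermarket_sim(d=1), supermarket_sim(d=2))
```

This is the classic power-of-d-choices effect: in the mean field limit the tail of the queue-length distribution drops doubly exponentially for d >= 2, which is what makes extra sampling valuable and gives rise to the externalities analyzed in the paper.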
|
1202.2111
|
Curves on torus layers and coding for continuous alphabet sources
|
cs.IT math.DG math.IT
|
In this paper we consider the problem of transmitting a continuous alphabet
discrete-time source over an AWGN channel. The design of good curves for this
purpose relies on geometrical properties of spherical codes and projections of
$N$-dimensional lattices. We propose a constructive scheme based on a set of
curves on the surface of a 2N-dimensional sphere and present comparisons with
some previous works.
|
1202.2112
|
Predicting Contextual Sequences via Submodular Function Maximization
|
cs.AI cs.LG cs.RO
|
Sequence optimization, where the items in a list are ordered to maximize some
reward, has many applications such as web advertisement placement, search, and
control libraries in robotics. Previous work in sequence optimization produces
a static ordering that does not take any features of the item or context of the
problem into account. In this work, we propose a general approach to order the
items within the sequence based on the context (e.g., perceptual information,
environment description, and goals). We take a simple, efficient,
reduction-based approach where the choice and order of the items is established
by repeatedly learning simple classifiers or regressors for each "slot" in the
sequence. Our approach leverages recent work on submodular function
maximization to provide a formal regret reduction from submodular sequence
optimization to simple cost-sensitive prediction. We apply our contextual
sequence prediction algorithm to optimize control libraries and demonstrate
results on two robotics problems: manipulator trajectory prediction and mobile
robot path planning.
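The slot-by-slot greedy structure that the regret reduction mimics can be sketched as the classic greedy for monotone submodular maximization (a toy coverage objective stands in here for the learned per-slot predictors, which in the paper are context-dependent classifiers or regressors):

```python
def greedy_sequence(items, marginal_gain, k):
    """Slot-by-slot greedy for a monotone submodular sequence objective:
    each slot takes the item with the largest marginal gain given the
    items already placed (the classic (1 - 1/e)-approximation greedy)."""
    chosen = []
    for _ in range(k):
        best = max((i for i in items if i not in chosen),
                   key=lambda i: marginal_gain(chosen, i))
        chosen.append(best)
    return chosen

# Toy coverage objective: each "control primitive" covers a set of scenarios.
cover = {"a": {1, 2}, "b": {2, 3, 4}, "c": {4, 5}}

def gain(prefix, item):
    covered = set().union(*(cover[j] for j in prefix)) if prefix else set()
    return len(cover[item] - covered)

print(greedy_sequence(list(cover), gain, 2))
```

The contextual version replaces the exact `marginal_gain` oracle with a trained predictor per slot, which is exactly where the cost-sensitive reduction enters.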
|
1202.2113
|
Decentralized Delay Optimal Control for Interference Networks with
Limited Renewable Energy Storage
|
cs.IT math.IT
|
In this paper, we consider delay minimization for interference networks with
renewable energy source, where the transmission power of a node comes from both
the conventional utility power (AC power) and the renewable energy source. We
assume the transmission power of each node is a function of the local channel
state, local data queue state and local energy queue state only. In turn, we
consider two delay optimization formulations, namely the decentralized
partially observable Markov decision process (DEC-POMDP) and Non-cooperative
partially observable stochastic game (POSG). In DEC-POMDP formulation, we
derive a decentralized online learning algorithm to determine the control
actions and Lagrangian multipliers (LMs) simultaneously, based on the policy
gradient approach. Under some mild technical conditions, the proposed
decentralized policy gradient algorithm converges almost surely to a local
optimal solution. On the other hand, in the non-cooperative POSG formulation,
the transmitter nodes are non-cooperative. We extend the decentralized policy
gradient solution and establish the technical proof for almost-sure convergence
of the learning algorithms. In both cases, the solutions are very robust to
model variations. Finally, the delay performance of the proposed solutions is
compared with conventional baseline schemes for interference networks, and it
is illustrated that substantial delay performance gains and energy savings can
be achieved.
|
1202.2143
|
Active Bayesian Optimization: Minimizing Minimizer Entropy
|
stat.ME cs.LG stat.ML
|
The ultimate goal of optimization is to find the minimizer of a target
function. However, typical criteria for active optimization often ignore the
uncertainty about the minimizer. We propose a novel criterion for global
optimization and an associated sequential active learning strategy using
Gaussian processes. Our criterion is the reduction of uncertainty in the
posterior distribution of the function minimizer. It can also flexibly
incorporate multiple global minimizers. We implement a tractable approximation
of the criterion and demonstrate that it locates the global minimizer more
accurately than conventional Bayesian optimization criteria.
|
1202.2160
|
Scene Parsing with Multiscale Feature Learning, Purity Trees, and
Optimal Covers
|
cs.CV cs.LG
|
Scene parsing, or semantic segmentation, consists in labeling each pixel in
an image with the category of the object it belongs to. It is a challenging
task that involves the simultaneous detection, segmentation and recognition of
all the objects in the image.
The scene parsing method proposed here starts by computing a tree of segments
from a graph of pixel dissimilarities. Simultaneously, a set of dense feature
vectors is computed which encodes regions of multiple sizes centered on each
pixel. The feature extractor is a multiscale convolutional network trained from
raw pixels. The feature vectors associated with the segments covered by each
node in the tree are aggregated and fed to a classifier which produces an
estimate of the distribution of object categories contained in the segment. A
subset of tree nodes that cover the image are then selected so as to maximize
the average "purity" of the class distributions, hence maximizing the overall
likelihood that each segment will contain a single object. The convolutional
network feature extractor is trained end-to-end from raw pixels, alleviating
the need for engineered features. After training, the system is parameter free.
The system yields record accuracies on the Stanford Background Dataset (8
classes), the Sift Flow Dataset (33 classes) and the Barcelona Dataset (170
classes) while being an order of magnitude faster than competing approaches,
producing a 320 \times 240 image labeling in less than 1 second.
|
1202.2167
|
Abstract Representations and Frequent Pattern Discovery
|
cs.AI cs.IT math.IT
|
We discuss the frequent pattern mining problem in a general setting. From an
analysis of abstract representations, summarization and frequent pattern
mining, we arrive at a generalization of the problem. Then, we show how the
problem can be cast into the powerful language of algorithmic information
theory. This allows us to formulate a simple algorithm to mine for all frequent
patterns.
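For context, the baseline frequency-counting notion of a frequent pattern that this formulation generalizes can be sketched with classic level-wise (Apriori-style) counting; this is the standard baseline algorithm, not the algorithmic-information-theoretic one proposed in the paper:

```python
def frequent_itemsets(transactions, min_support):
    """Classic level-wise (Apriori-style) frequency counting: an itemset is
    'frequent' if it occurs in at least min_support transactions, and every
    frequent set of size s is built from frequent sets of size s-1."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    level = [frozenset([i]) for i in items]
    size = 1
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        keep = [c for c, cnt in counts.items() if cnt >= min_support]
        frequent.extend(keep)
        size += 1
        level = list({a | b for a in keep for b in keep if len(a | b) == size})
    return frequent

baskets = [frozenset(t) for t in
           [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b", "c"},
            {"a", "b", "c"}]]
result = frequent_itemsets(baskets, min_support=3)
print(len(result))   # 3 frequent singletons + 3 frequent pairs
```

The generalization in the paper replaces raw occurrence counts with compressibility-based notions from algorithmic information theory, but the mining loop retains this candidate-generate-and-test shape.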
|
1202.2175
|
On the Capacity Region of Cognitive Multiple Access over White Space
Channels
|
cs.IT math.IT
|
Opportunistically sharing the white spaces, or the temporarily unoccupied
spectrum licensed to the primary user (PU), is a practical way to improve the
spectrum utilization. In this paper, we consider the fundamental problem of
rate regions achievable for multiple secondary users (SUs) which send their
information to a common receiver over such a white space channel. In
particular, the PU activities are treated as on/off side information, which can
be obtained causally or non-causally by the SUs. The system is then modeled as
a multi-switch channel and its achievable rate regions are characterized in
some scenarios. Explicit forms of outer and inner bounds of the rate regions
are derived by assuming additional side information, and they are shown to be
tight in some special cases. An optimal rate and power allocation scheme that
maximizes the sum rate is also proposed. The numerical results reveal the
impacts of side information, channel correlation and PU activity on the
achievable rates, and also verify the effectiveness of our rate and power
allocation scheme. Our work may shed some light on the fundamental limit and
design tradeoffs in practical cognitive radio systems.
|
1202.2185
|
Temporal Logic Motion Control using Actor-Critic Methods
|
cs.RO cs.SY math.OC
|
In this paper, we consider the problem of deploying a robot from a
specification given as a temporal logic statement about some properties
satisfied by the regions of a large, partitioned environment. We assume that
the robot has noisy sensors and actuators and model its motion through the
regions of the environment as a Markov Decision Process (MDP). The robot
control problem becomes finding the control policy maximizing the probability
of satisfying the temporal logic task on the MDP. For a large environment,
obtaining transition probabilities for each state-action pair, as well as
solving the necessary optimization problem for the optimal policy are usually
not computationally feasible. To address these issues, we propose an
approximate dynamic programming framework based on a least-square temporal
difference learning method of the actor-critic type. This framework operates on
sample paths of the robot and optimizes a randomized control policy with
respect to a small set of parameters. The transition probabilities are obtained
only when needed. Hardware-in-the-loop simulations confirm that convergence of
the parameters translates to an approximately optimal policy.
|
1202.2187
|
Museum: Multidimensional web page segment evaluation model
|
cs.IR
|
The evaluation of a web page with respect to a query is a vital task in the
web information retrieval domain. This paper proposes the evaluation of a web
page as a bottom-up process from the segment level to the page level. A model
for evaluating the relevancy is proposed, incorporating six different
dimensions. An algorithm for evaluating the segments of a web page using the
above-mentioned six dimensions is proposed. The benefits of fine-graining the
evaluation process to the segment level instead of the page level are explored.
The proposed model can be employed for various tasks such as web page
personalization, result re-ranking, mobile device page rendering, etc.
|
1202.2209
|
Choosing Products in Social Networks
|
cs.SI cs.GT physics.soc-ph
|
We study the consequences of adopting products by agents who form a social
network. To this end we use the threshold model introduced in Apt and Markakis,
arXiv:1105.2434, in which the nodes influenced by their neighbours can adopt
one out of several alternatives, and associate with each such social network a
strategic game between the agents. The possibility of not choosing any product
results in two special types of (pure) Nash equilibria.
We show that such games may have no Nash equilibrium and that determining the
existence of a Nash equilibrium, also of a special type, is NP-complete. The
situation changes when the underlying graph of the social network is a DAG, a
simple cycle, or has no source nodes. For these three classes we determine the
complexity of establishing whether a (special type of) Nash equilibrium exists.
We also clarify for these categories of games the status and the complexity
of the finite improvement property (FIP). Further, we introduce a new property
of the uniform FIP which is satisfied when the underlying graph is a simple
cycle, but determining it is co-NP-hard in the general case and also when the
underlying graph has no source nodes. The latter complexity results also hold
for verifying the property of being a weakly acyclic game.
|
1202.2215
|
Topic Diffusion and Emergence of Virality in Social Networks
|
cs.SI physics.soc-ph
|
We propose a stochastic model for the diffusion of topics entering a social
network modeled by a Watts-Strogatz graph. Our model sets into play an implicit
competition between these topics as they vie for the attention of users in the
network. The dynamics of our model are based on notions taken from real-world
OSNs like Twitter where users either adopt an exogenous topic or copy topics
from their neighbors leading to endogenous propagation. When instantiated
correctly, the model achieves a viral regime where a few topics garner
unusually good response from the network, closely mimicking the behavior of
real-world OSNs. Our main contribution is our description of how clusters of
proximate users that have spoken on the topic merge to form a large giant
component making a topic go viral. This demonstrates that it is not weak ties
but actually strong ties that play a major part in virality. We further
validate our model and our hypotheses about its behavior by comparing our
simulation results with the results of a measurement study conducted on real
data taken from Twitter.
|
1202.2223
|
Performance Analysis of $\ell_1$-synthesis with Coherent Frames
|
cs.IT math.IT
|
Signals with sparse frame representations comprise a much more realistic
model of nature than those with orthonormal bases. Studies of signal
recovery associated with such sparsity models have been one of the major focuses in
compressed sensing. In such settings, one important and widely used signal
recovery approach is known as $\ell_1$-synthesis (or Basis Pursuit). We present
in this article a more effective performance analysis (than those previously available)
of this approach in which the dictionary $\Dbf$ may be highly, and even
perfectly correlated. Under suitable conditions on the sensing matrix $\Phibf$,
an error bound of the recovered signal $\hat{\fbf}$ (by the $\ell_1$-synthesis
method) is established. Such an error bound is governed by the decaying
property of $\tilde{\Dbf}_{\text{o}}^*\fbf$, where $\fbf$ is the true signal
and $\tilde{\Dbf}_{\text{o}}$ denotes the optimal dual frame of $\Dbf$ in the
sense that $\|\tilde{\Dbf}_{\text{o}}^*\hat{\fbf}\|_1$ produces the smallest
$\|\tilde{\Dbf}^*\tilde{\fbf}\|_1$ in value among all dual frames
$\tilde{\Dbf}$ of $\Dbf$ and all feasible signals $\tilde{\fbf}$. This new
performance analysis departs from the usual description of the combined matrix
$\Phibf\Dbf$, and instead places the conditions on $\Phibf$. Examples are demonstrated
to show that when the usual analysis fails to explain the working performance
of the synthesis approach, the newly established results do.
|
1202.2231
|
Achieving Global Optimality for Weighted Sum-Rate Maximization in the
K-User Gaussian Interference Channel with Multiple Antennas
|
cs.IT math.IT
|
Characterizing the global maximum of weighted sum-rate (WSR) for the K-user
Gaussian interference channel (GIC), with the interference treated as Gaussian
noise, is a key problem in wireless communication. However, due to the users'
mutual interference, this problem is in general non-convex and thus cannot be
solved directly by conventional convex optimization techniques. In this paper,
by jointly utilizing the monotonic optimization and rate profile techniques, we
develop a new framework to obtain the globally optimal power control and/or
beamforming solutions to the WSR maximization problems for the GICs with
single-antenna transmitters and single-antenna receivers (SISO), single-antenna
transmitters and multi-antenna receivers (SIMO), or multi-antenna transmitters
and single-antenna receivers (MISO). Different from prior work, this paper
proposes to maximize the WSR in the achievable rate region of the GIC directly
by exploiting the facts that the achievable rate region is a "normal" set and
the users' WSR is a "strictly increasing" function over the rate region.
Consequently, the WSR maximization is shown to be in the form of monotonic
optimization over a normal set and thus can be solved globally optimally by the
existing outer polyblock approximation algorithm. However, an essential step in
the algorithm hinges on how to efficiently characterize the intersection point
on the Pareto boundary of the achievable rate region with any prescribed "rate
profile" vector. This paper shows that such a problem can be transformed into a
sequence of signal-to-interference-plus-noise ratio (SINR) feasibility
problems, which can be solved efficiently by existing techniques. Numerical
results validate that the proposed algorithms can achieve the global WSR
maximum for the SISO, SIMO or MISO GIC.
|
1202.2249
|
Supervised Learning in Multilayer Spiking Neural Networks
|
cs.NE q-bio.NC
|
The current article introduces a supervised learning algorithm for multilayer
spiking neural networks. The algorithm presented here overcomes some
limitations of existing learning algorithms as it can be applied to neurons
firing multiple spikes and it can in principle be applied to any linearisable
neuron model. The algorithm is applied successfully to various benchmarks, such
as the XOR problem and the Iris data set, as well as complex classification
problems. The simulations also show the flexibility of this supervised learning
algorithm which permits different encodings of the spike timing patterns,
including encodings using precise spike trains.
|
1202.2251
|
Hierarchies of Local-Optimality Characterizations in Decoding of Tanner
Codes
|
cs.IT math.IT
|
Recent developments in decoding of Tanner codes with maximum-likelihood
certificates are based on a sufficient condition called local-optimality. We
define hierarchies of locally-optimal codewords with respect to two parameters.
One parameter is related to the minimum distance of the local codes in Tanner
codes. The second parameter is related to the finite number of iterations used
in iterative decoding. We show that these hierarchies satisfy inclusion
properties as these parameters are increased. In particular, this implies that
a codeword that is decoded with a certificate using an iterative decoder after
$h$ iterations is decoded with a certificate after $k\cdot h$ iterations, for
every integer $k$.
|
1202.2261
|
Multi-robot coverage to locate fixed targets using formation structures
|
cs.RO
|
This paper develops an algorithm that guides a multi-robot system in an
unknown environment in search of fixed targets. The area to be scanned contains
an unknown number of convex obstacles of unknown size and shape. The algorithm
covers the entire free space in a sweeping fashion and as such relies on the
use of robot formations. The geometry of the robot group is a lateral line
formation, which is allowed to split and rejoin when passing obstacles. It is
our main goal to exploit this formation structure in order to reduce robot
resources to a minimum. Each robot has a limited and finite amount of memory
available. No information of the topography is recorded. Communication between
two robots is only possible up to a maximum inter-robot distance, and if the
line-of-sight between both robots is not obstructed. Broadcasting capabilities
and indirect communication are not allowed. Supervisory control is prohibited.
The number of robots equipped with GPS is kept as small as possible.
Applications of the algorithm are mine field clearance, search-and-rescue
missions, and intercept missions. Simulations are included and made available
on the internet, demonstrating the flexibility of the algorithm.
|
1202.2283
|
Quantum Cournot equilibrium for the Hotelling-Smithies model of product
choice
|
quant-ph cs.IT math-ph math.IT math.MP
|
This paper demonstrates the quantization of a spatial Cournot duopoly model
with product choice, a two-stage game focusing on non-cooperation in locations
and quantities. With quantization, the players can access a continuous set of
strategies using a continuous-variable quantum mechanical approach. The presence
of quantum entanglement in the initial state identifies a quantity equilibrium
for every location pair choice with any transport cost. The firms also obtain
higher profit at the Nash equilibrium. Adopting quantum strategies rewards us
with a larger strategic space at equilibrium.
|
1202.2293
|
Remarks on Category-Based Routing in Social Networks
|
cs.SI cs.DC cs.DM physics.soc-ph
|
It is well known that individuals can route messages on short paths through
social networks, given only simple information about the target and using only
local knowledge about the topology. Sociologists conjecture that people find
routes greedily by passing the message to an acquaintance that has more in
common with the target than themselves, e.g. if a dentist in Saarbr\"ucken
wants to send a message to a specific lawyer in Munich, he may forward it to
someone who is a lawyer and/or lives in Munich. Modelling this setting,
Eppstein et al. introduced the notion of category-based routing. The goal is to
assign a set of categories to each node of a graph such that greedy routing is
possible. By proving bounds on the number of categories a node has to be in, we
can argue about the plausibility of the underlying sociological model. In this
paper we substantially improve the upper bounds introduced by Eppstein et al.
and prove new lower bounds.
|
1202.2319
|
Detection Performance of M-ary Relay Trees with Non-binary Message
Alphabets
|
cs.IT math.IT
|
We study the detection performance of $M$-ary relay trees, where only the
leaves of the tree represent sensors making measurements. The root of the tree
represents the fusion center which makes an overall detection decision. Each of
the other nodes is a relay node which aggregates $M$ messages sent by its child
nodes into a new compressed message and sends the message to its parent node.
Building on previous work on the detection performance of $M$-ary relay trees
with binary messages, in this paper we study the case of non-binary relay
message alphabets. We characterize the exponent of the error probability with
respect to the message alphabet size $\mathcal D$, showing how the detection
performance increases with $\mathcal D$. Our method involves reducing a tree
with non-binary relay messages into an equivalent higher-degree tree with only
binary messages.
|
1202.2335
|
Getting It All from the Crowd
|
cs.DB
|
Hybrid human/computer systems promise to greatly expand the usefulness of
query processing by incorporating the crowd for data gathering and other tasks.
Such systems raise many database system implementation questions. Perhaps most
fundamental is that the closed world assumption underlying relational query
semantics does not hold in such systems. As a consequence the meaning of even
simple queries can be called into question. Furthermore query progress
monitoring becomes difficult due to non-uniformities in the arrival of
crowdsourced data and peculiarities of how people work in crowdsourcing
systems. To address these issues, we develop statistical tools that enable
users and systems developers to reason about tradeoffs between time/cost and
completeness. These tools can also help drive query execution and crowdsourcing
strategies. We evaluate our techniques using experiments on a popular
crowdsourcing platform.
|
1202.2350
|
Streaming an image through the eye: The retina seen as a dithered
scalable image coder
|
cs.CV cs.NE
|
We propose the design of an original scalable image coder/decoder that is
inspired by the mammalian retina. Our coder accounts for the time-dependent
and also nondeterministic behavior of the actual retina. The present work
brings two main contributions: As a first step, (i) we design a deterministic
image coder mimicking most of the retinal processing stages and then (ii) we
introduce a retinal noise in the coding process, that we model here as a dither
signal, to gain interesting perceptual features. Regarding our first
contribution, our main source of inspiration will be the biologically plausible
model of the retina called Virtual Retina. The main novelty of this coder is to
show that the time-dependent behavior of the retina cells could ensure, in an
implicit way, scalability and bit allocation. Regarding our second
contribution, we reconsider the inner layers of the retina. We emit a possible
interpretation for the non-determinism observed by neurophysiologists in their
output. For this sake, we model the retinal noise that occurs in these layers
by a dither signal. The dithering process that we propose adds several
interesting features to our image coder. The dither noise whitens the
reconstruction error and decorrelates it from the input stimuli. Furthermore,
integrating the dither noise in our coder allows a faster recognition of the
fine details of the image during the decoding process. The goal of the present
paper is twofold. First, we aim at mimicking the retina as closely as possible in
the design of a novel image coder while maintaining encouraging performance.
Second, we bring a new insight concerning the non-deterministic behavior of the
retina.
|
1202.2368
|
An evaluation of local shape descriptors for 3D shape retrieval
|
cs.CV cs.CG cs.DL cs.IR cs.MM
|
As the usage of 3D models increases, so does the importance of developing
accurate 3D shape retrieval algorithms. A common approach is to calculate a
shape descriptor for each object, which can then be compared to determine two
objects' similarity. However, these descriptors are often evaluated
independently and on different datasets, making them difficult to compare.
Using the SHREC 2011 Shape Retrieval Contest of Non-rigid 3D Watertight Meshes
dataset, we systematically evaluate a collection of local shape descriptors. We
apply each descriptor to the bag-of-words paradigm and assess the effects of
varying the dictionary's size and the number of sample points. In addition,
several salient point detection methods are used to choose sample points; these
methods are compared to each other and to random selection. Finally,
information from two local descriptors is combined in two ways and changes in
performance are investigated. This paper presents the results of these experiments.
|
1202.2369
|
The Groupon Effect on Yelp Ratings: A Root Cause Analysis
|
cs.SI
|
Daily deals sites such as Groupon offer deeply discounted goods and services
to tens of millions of customers through geographically targeted daily e-mail
marketing campaigns. In our prior work we observed that a negative side effect
for merchants using Groupons is that, on average, their Yelp ratings decline
significantly. However, this previous work was essentially observational,
rather than explanatory. In this work, we rigorously consider and evaluate
various hypotheses about underlying consumer and merchant behavior in order to
understand this phenomenon, which we dub the Groupon effect. We use statistical
analysis and mathematical modeling, leveraging a dataset we collected spanning
tens of thousands of daily deals and over 7 million Yelp reviews. In
particular, we investigate hypotheses such as whether Groupon subscribers are
more critical than their peers, or whether some fraction of Groupon merchants
provide significantly worse service to customers using Groupons. We suggest an
additional novel hypothesis: reviews from Groupon subscribers are lower on
average because such reviews correspond to real, unbiased customers, while the
body of reviews on Yelp contains some fraction of reviews from biased or even
potentially fake sources. Although we focus on a specific question, our work
provides broad insights into both consumer and merchant behavior within the
daily deals marketplace.
|
1202.2393
|
Statistical reliability and path diversity based PageRank algorithm
improvements
|
cs.IR cs.DM
|
In this paper we present new improvement ideas of the original PageRank
algorithm. The first idea is to introduce an evaluation of the statistical
reliability of the ranking score of each node based on the local graph property
and the second one is to introduce the notion of the path diversity. The path
diversity can be exploited to dynamically modify the increment value of each
node in the random surfer model or to dynamically adapt the damping factor. We
illustrate the impact of such modifications through examples and simple
simulations.
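The abstract leaves the concrete update rule open; as a reference point, the sketch below implements plain power-iteration PageRank with the damping factor stored per node, which is one place a dynamic adaptation of the kind proposed could hook in. All names are illustrative, and a uniform `damping` recovers the classic algorithm.

```python
def pagerank(adj, damping, tol=1e-10, max_iter=1000):
    """adj: dict node -> list of out-neighbours; damping: dict node -> float."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(max_iter):
        # teleport mass, weighted by each node's own damping factor
        new = {v: (1.0 - damping[v]) / n for v in nodes}
        for u in nodes:
            targets = adj[u] if adj[u] else nodes  # dangling nodes spread uniformly
            share = damping[u] * rank[u] / len(targets)
            for v in targets:
                new[v] += share
        total = sum(new.values())                  # renormalize: per-node damping
        new = {v: x / total for v, x in new.items()}  # need not conserve mass
        if sum(abs(new[v] - rank[v]) for v in nodes) < tol:
            return new
        rank = new
    return rank
```

On a symmetric two-node cycle with uniform damping the ranks converge to 0.5 each, as expected of the classic algorithm.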
|
1202.2408
|
Spectral Estimation from Undersampled Data: Correlogram and Model-Based
Least Squares
|
math.ST cs.IT math.IT stat.TH
|
This paper studies two spectrum estimation methods for the case that the
samples are obtained at a rate lower than the Nyquist rate. The first method is
the correlogram method for undersampled data. The algorithm partitions the
spectrum into a number of segments and estimates the average power within each
spectral segment. We derive the bias and the variance of the spectrum
estimator, and show that there is a tradeoff between the accuracy of the
estimation and the frequency resolution. The asymptotic behavior of the
estimator is also investigated, and it is proved that this spectrum estimator
is consistent.
A new algorithm for reconstructing signals with sparse spectrum from noisy
compressive measurements is also introduced. This model-based algorithm takes
the signal structure into account for estimating the unknown parameters which
are the frequencies and the amplitudes of linearly combined sinusoidal signals.
A high-resolution spectral estimation method is used to recover the frequencies
of the signal elements, while the amplitudes of the signal components are
estimated by minimizing the squared norm of the compressed estimation error
using the least squares technique. The Cramer-Rao bound for the given system
model is also derived. It is shown that the proposed algorithm approaches the
bound at high signal-to-noise ratios.
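The amplitude step described above (frequencies recovered first, amplitudes by least squares on the compressed estimation error) can be sketched as follows; this is an illustrative formulation under the assumption that the frequencies are already available, not the paper's exact estimator.

```python
import numpy as np

def estimate_amplitudes(y, Phi, freqs, n):
    """Least-squares amplitude estimates for a sum of complex sinusoids of
    length n, observed through compressive measurements y = Phi @ x."""
    t = np.arange(n)
    # dictionary of complex exponentials at the already-recovered frequencies
    A = np.exp(2j * np.pi * np.outer(t, freqs))   # n x K
    # minimize ||y - Phi A a||_2 over the amplitude vector a
    amps, *_ = np.linalg.lstsq(Phi @ A, y, rcond=None)
    return amps
```

With noiseless measurements and more measurements than sinusoids, the compressed dictionary has full column rank and the least-squares solution recovers the amplitudes exactly.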
|
1202.2412
|
Sum-Rate Maximization in Two-Way AF MIMO Relaying: Polynomial Time
Solutions to a Class of DC Programming Problems
|
cs.IT math.IT math.OC
|
Sum-rate maximization in two-way amplify-and-forward (AF) multiple-input
multiple-output (MIMO) relaying belongs to the class of difference-of-convex
functions (DC) programming problems. DC programming problems occur as well in
other signal processing applications and are typically solved using different
modifications of the branch-and-bound method. This method, however, does not
have any polynomial time complexity guarantees. In this paper, we show that a
class of DC programming problems, to which the sum-rate maximization in two-way
MIMO relaying belongs, can be solved very efficiently in polynomial time, and
develop two algorithms. The objective function of the problem is represented as
a product of quadratic ratios and parameterized so that its convex part (versus
the concave part) contains only one (or two) optimization variables. One of the
algorithms is called POlynomial-Time DC (POTDC) and is based on semi-definite
programming (SDP) relaxation, linearization, and an iterative search over a
single parameter. The other algorithm is called RAte-maximization via
Generalized EigenvectorS (RAGES) and is based on the generalized eigenvectors
method and an iterative search over two (or one, in its approximate version)
optimization variables. We also derive an upper-bound for the optimal values of
the corresponding optimization problem and show by simulations that this
upper-bound can be achieved by both algorithms. The proposed methods for
maximizing the sum-rate in the two-way AF MIMO relaying system are shown to be
superior to other state-of-the-art algorithms.
|
1202.2414
|
Optimal Linear Codes with a Local-Error-Correction Property
|
cs.IT math.IT
|
Motivated by applications to distributed storage, Gopalan \textit{et al.}
recently introduced the interesting notion of information-symbol locality in a
linear code. By this it is meant that each message symbol appears in a
parity-check equation associated with small Hamming weight, thereby enabling
recovery of the message symbol by examining a small number of other code
symbols. This notion is expanded to the case when all code symbols, not just
the message symbols, are covered by such "local" parity. In this paper, we
extend the results of Gopalan et al. so as to permit recovery of an erased
code symbol even in the presence of errors in local parity symbols. We present
tight bounds on the minimum distance of such codes and exhibit codes that are
optimal with respect to the local error-correction property. As a corollary, we
obtain an upper bound on the minimum distance of a concatenated code.
|
1202.2419
|
A High Order Sliding Mode Control with PID Sliding Surface: Simulation
on a Torpedo
|
cs.SY
|
Position and speed control of a torpedo present a real problem for the
actuators because of the high level of nonlinearity in the system and because of
the external disturbances. The control of nonlinear systems is based on several
different approaches, among them sliding mode control. Sliding mode
control has proved its effectiveness in numerous studies. The
advantage that makes it such an important approach is its robustness against
disturbances and model uncertainties. However, this approach has a
drawback: the chattering phenomenon caused by the discontinuous
part of the control, which can have a harmful effect on the actuators. This
paper deals with the basic concepts, mathematics, and design aspects of a
control for nonlinear systems that reduces the chattering effect. As a
solution to this problem we adopt high-order sliding mode approaches as a
starting point, followed by a PID sliding surface. Simulation results show
that this control strategy can attain excellent control performance with no
chattering problem.
|
1202.2449
|
Efficient Web-based Facial Recognition System Employing 2DHOG
|
cs.CV cs.NI
|
In this paper, a system for facial recognition to identify missing and found
people in Hajj and Umrah is described as a web portal. Explicitly, we present a
novel algorithm for recognition and classification of facial images based on
applying 2DPCA to a 2D representation of the Histogram of oriented gradients
(2D-HOG) which maintains the spatial relation between pixels of the input
images. This algorithm allows a compact representation of the images which
reduces the computational complexity and the storage requirements, while
maintaining the highest reported recognition accuracy. This makes the
method suitable for use with very large datasets. A large dataset was collected
for people in Hajj. Experimental results employing the ORL, UMIST, JAFFE, and HAJJ
datasets confirm these excellent properties.
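Independently of the web-portal setting, the 2DPCA stage can be sketched as below; here it is applied to raw image matrices for illustration, whereas the paper applies it to the 2D-HOG representation, and all shapes and names are assumptions.

```python
import numpy as np

def twod_pca(images, k):
    """2DPCA sketch: project each h x w image onto the top-k eigenvectors of
    the image (column) covariance matrix, giving an h x k feature matrix."""
    images = np.asarray(images, dtype=float)
    mean = images.mean(axis=0)
    # image covariance matrix, accumulated over the training set
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    _, vecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    X = vecs[:, -k:]                     # top-k eigenvectors as columns
    return np.array([A @ X for A in images]), X
```

Each image is reduced from h x w to h x k values, which is the source of the compactness claimed above.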
|
1202.2461
|
How the Scientific Community Reacts to Newly Submitted Preprints:
Article Downloads, Twitter Mentions, and Citations
|
cs.SI cs.DL physics.soc-ph
|
We analyze the online response to the preprint publication of a cohort of
4,606 scientific articles submitted to the preprint database arXiv.org between
October 2010 and May 2011. We study three forms of responses to these
preprints: downloads on the arXiv.org site, mentions on the social media site
Twitter, and early citations in the scholarly record. We perform two analyses.
First, we analyze the delay and time span of article downloads and Twitter
mentions following submission, to understand the temporal configuration of
these reactions and whether one precedes or follows the other. Second, we run
regression and correlation tests to investigate the relationship between
Twitter mentions, arXiv downloads and article citations. We find that Twitter
mentions and arXiv downloads of scholarly articles follow two distinct temporal
patterns of activity, with Twitter mentions having shorter delays and narrower
time spans than arXiv downloads. We also find that the volume of Twitter
mentions is statistically correlated with arXiv downloads and early citations
just months after the publication of a preprint, with a possible bias that
favors highly mentioned articles.
|
1202.2465
|
Towards Linear Time Overlapping Community Detection in Social Networks
|
cs.SI cs.CY cs.DS physics.soc-ph
|
Membership diversity is a characteristic aspect of social networks in which a
person may belong to more than one social group. For this reason, discovering
overlapping structures is necessary for realistic social analysis. In this
paper, we present a fast algorithm, called SLPA, for overlapping community
detection in large-scale networks. SLPA spreads labels according to dynamic
interaction rules. It can be applied to both unipartite and bipartite networks.
It is also able to uncover overlapping nested hierarchy. The time complexity of
SLPA scales linearly with the number of edges in the network. Experiments in
both synthetic and real-world networks show that SLPA has an excellent
performance in identifying both node and community level overlapping
structures.
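A minimal sketch of the speaker-listener mechanic behind SLPA, following the published description; the parameter values and tie-breaking here are illustrative, not the authors' implementation.

```python
import random
from collections import Counter

def slpa(adj, iterations=20, threshold=0.1, seed=0):
    """Speaker-Listener Label Propagation sketch for overlapping communities.
    adj maps each node to a list of neighbours."""
    rng = random.Random(seed)
    memory = {v: [v] for v in adj}          # each node starts with its own label
    order = list(adj)
    for _ in range(iterations):
        rng.shuffle(order)
        for listener in order:
            if not adj[listener]:
                continue
            # each neighbour "speaks" one label drawn at random from its memory,
            # i.e. with probability proportional to the label's frequency there
            spoken = [rng.choice(memory[speaker]) for speaker in adj[listener]]
            # the listener adopts the most popular spoken label
            memory[listener].append(Counter(spoken).most_common(1)[0][0])
    # post-processing: a node belongs to every community whose label exceeds
    # the frequency threshold in its memory -- hence the overlap
    return {v: {lab for lab, c in Counter(mem).items() if c / len(mem) >= threshold}
            for v, mem in memory.items()}
```

The memory-threshold post-processing is what lets a node retain several labels and thus belong to overlapping communities, and each sweep touches every edge once, matching the linear scaling noted above.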
|
1202.2503
|
Experimental study of the impact of historical information in human
coordination
|
cs.SI physics.soc-ph
|
We perform laboratory experiments to elucidate the role of historical
information in games involving human coordination. Our approach follows prior
work studying human network coordination using the task of graph coloring. We
first motivate this research by showing empirical evidence that the resolution
of coloring conflicts is dependent upon the recent local history of that
conflict. We also conduct two tailored experiments to manipulate the game
history that can be used by humans in order to determine (i) whether humans use
historical information, and (ii) whether they use it effectively. In the first
variant, during the course of each coloring task, the network positions of the
subjects were periodically swapped while maintaining the global coloring state
of the network. In the second variant, participants completed a series of
2-coloring tasks, some of which were restarts from checkpoints of previous
tasks. Thus, the participants restarted the coloring task from a point in the
middle of a previous task without knowledge of the history that led to that
point. We report on the game dynamics and average completion times for the
diverse graph topologies used in the swap and restart experiments.
|
1202.2518
|
Segmenting DNA sequence into `words'
|
q-bio.GN cs.CL
|
This paper presents a novel method to segment/decode DNA sequences based on
an n-gram statistical language model. First, by analyzing the genomes of 12
model species, we find that the length of most DNA 'words' is 12 to 15 bps.
Then we design an unsupervised probability-based approach to segment the DNA
sequences. A benchmark for the segmentation method is also proposed.
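The abstract does not spell out the segmentation procedure; a standard unsupervised formulation scores each candidate split by unigram word log-probabilities and finds the best split by dynamic programming, e.g.:

```python
import math

def segment(seq, logprob, max_len=15):
    """Viterbi-style segmentation of seq into 'words' maximizing the total
    unigram log-probability; unknown substrings get a heavy penalty."""
    n = len(seq)
    best = [0.0] + [-math.inf] * n   # best[i]: score of the best split of seq[:i]
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            score = best[j] + logprob.get(seq[j:i], -100.0)
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n                  # backtrack to recover the word boundaries
    while i > 0:
        words.append(seq[back[i]:i])
        i = back[i]
    return words[::-1]
```

For instance, with a vocabulary assigning probability 0.5 to 'ACGT' and 0.2 each to 'AC' and 'GT', the sequence 'ACGTACGT' segments into ['ACGT', 'ACGT'].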
|
1202.2523
|
Evolutionary Computation in Astronomy and Astrophysics: A Review
|
cs.AI astro-ph.IM cs.NE
|
In general, Evolutionary Computation (EC) includes a number of optimization
methods inspired by biological mechanisms of evolution. The methods catalogued
in this area use the Darwinian principles of evolution to produce
algorithms that return high-quality solutions to hard-to-solve optimization
problems. The main strength of EC is precisely that it provides good solutions
even when computational resources (e.g., running time) are limited. Astronomy
and Astrophysics are two fields that often require solving optimization
problems of high complexity or analyzing huge amounts of data, and the
so-called complete optimization methods are inherently limited by the size of
the problem/data. For instance, reliable analysis of large amounts of data is
central to modern astrophysics and the astronomical sciences in general. EC
techniques perform well where other optimization methods are inherently
limited (such as complete methods applied to NP-hard problems), and in the
last ten years numerous proposals have come up that apply, with greater or
lesser success, evolutionary computation methodologies to common engineering
problems. Some of these problems, such as the estimation of nonlinear
parameters, the development of automatic learning techniques, the
implementation of control systems, or the resolution of multi-objective
optimization problems, have had (and continue to have) particular
repercussions in these fields. For these reasons EC emerges as a feasible
alternative to traditional methods. In this paper, we discuss some promising
applications in this direction and a number of recent works in this area; the
paper also includes a general description of EC to provide a global
perspective to the reader, and gives some guidelines for the application of EC
techniques in future research.
|
1202.2525
|
Subsampling at Information Theoretically Optimal Rates
|
cs.IT math.IT math.ST stat.TH
|
We study the problem of sampling a random signal with sparse support in
frequency domain. Shannon famously considered a scheme that instantaneously
samples the signal at equispaced times. He proved that the signal can be
reconstructed as long as the sampling rate exceeds twice the bandwidth (Nyquist
rate). Cand\`es, Romberg, Tao introduced a scheme that acquires instantaneous
samples of the signal at random times. They proved that the signal can be
uniquely and efficiently reconstructed, provided the sampling rate exceeds the
frequency support of the signal, times logarithmic factors.
In this paper we consider a probabilistic model for the signal, and a
sampling scheme inspired by the idea of spatial coupling in coding theory.
Namely, we propose to acquire non-instantaneous samples at random times.
Mathematically, this is implemented by acquiring a small random subset of Gabor
coefficients. We show empirically that this scheme achieves correct
reconstruction as soon as the sampling rate exceeds the frequency support of
the signal, thus reaching the information theoretic limit.
|
1202.2528
|
Using Covariance Matrices as Feature Descriptors for Vehicle Detection
from a Fixed Camera
|
cs.CV
|
A method is developed to distinguish between cars and trucks present in a
video feed of a highway. The method builds upon previously done work using
covariance matrices as an accurate descriptor for regions. Background
subtraction and other similar proven image processing techniques are used to
identify the regions where the vehicles are most likely to be, and a distance
metric comparing the vehicle inside the region to a fixed library of vehicles
is used to determine the class of vehicle.
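The region-covariance idea can be sketched as follows. The per-pixel feature set (position, intensity, gradient magnitudes) and the generalized-eigenvalue distance are common choices from the region-covariance literature, not necessarily the exact ones used here.

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor of a grayscale patch. Per-pixel features:
    x, y, intensity, |dI/dy|, |dI/dx| (an illustrative feature choice)."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                      np.abs(gy).ravel(), np.abs(gx).ravel()])
    return np.cov(feats)     # 5x5 covariance of the per-pixel feature vectors

def covariance_distance(c1, c2, eps=1e-9):
    """Distance between covariance descriptors via the generalized eigenvalues
    of the pair, as commonly used in region-covariance matching."""
    c1 = c1 + eps * np.eye(len(c1))   # regularize near-singular descriptors
    c2 = c2 + eps * np.eye(len(c2))
    lam = np.linalg.eigvals(np.linalg.solve(c1, c2)).real
    return float(np.sqrt(np.sum(np.log(np.clip(lam, eps, None)) ** 2)))
```

The distance is zero between identical descriptors and grows as the feature statistics of the two regions diverge, which is what a nearest-library-vehicle classification exploits.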
|
1202.2536
|
Message passing for quantified Boolean formulas
|
cs.AI cond-mat.dis-nn
|
We introduce two types of message passing algorithms for quantified Boolean
formulas (QBF). The first type is a message passing based heuristics that can
prove unsatisfiability of the QBF by assigning the universal variables in such
a way that the remaining formula is unsatisfiable. In the second type, we use
message passing to guide branching heuristics of a Davis-Putnam
Logemann-Loveland (DPLL) complete solver. Numerical experiments show that on
random QBFs our branching heuristics gives robust exponential efficiency gain
with respect to the state-of-art solvers. We also manage to solve some
previously unsolved benchmarks from the QBFLIB library. Apart from this our
study sheds light on using message passing in small systems and as subroutines
in complete solvers.
|
1202.2561
|
On the Diversity Gain Region of the Z-interference Channels
|
cs.IT math.IT
|
In this work, we analyze the diversity gain region (DGR) of the
single-antenna Rayleigh fading Z-Interference channel (ZIC). More specifically,
we characterize the achievable DGR of the fixed-power split Han-Kobayashi (HK)
approach under these assumptions. Our characterization comes in a closed form
and demonstrates that the HK scheme with only a common message is a singular
case, which achieves the best DGR among all HK schemes for certain multiplexing
gains. Finally, we show that generalized time sharing, with variable rate and
power assignments for the common and private messages, does not improve the
achievable DGR.
|
1202.2564
|
A better Beta for the H measure of classification performance
|
stat.ME cs.CV stat.ML
|
The area under the ROC curve is widely used as a measure of performance of
classification rules. However, it has recently been shown that the measure is
fundamentally incoherent, in the sense that it treats the relative severities
of misclassifications differently when different classifiers are used. To
overcome this, Hand (2009) proposed the $H$ measure, which allows a given
researcher to fix the distribution of relative severities to a
classifier-independent setting on a given problem. This note extends the
discussion, and proposes a modified standard distribution for the $H$ measure,
which better matches the requirements of researchers, in particular those faced
with heavily unbalanced datasets, the $Beta(\pi_1+1,\pi_0+1)$ distribution.
[Preprint submitted to Pattern Recognition Letters]
|
1202.2576
|
New Results on the Sum of Gamma Random Variates With Application to the
Performance of Wireless Communication Systems over Nakagami-m Fading Channels
|
cs.IT math.IT math.PR math.ST stat.TH
|
The probability density function (PDF) and cumulative distribution function
of the sum of L independent but not necessarily identically distributed Gamma
variates, applicable to the output statistics of a maximal ratio combining (MRC)
receiver operating over Nakagami-m fading channels, or equivalently to the
statistical analysis of scenarios where the sum of squared Nakagami-m random
variables is of interest, are presented in closed form in terms of the
well-known Meijer's G function and the easily computable Fox's H-bar function
for integer-valued and non-integer-valued m fading parameters. Further analysis,
particularly on bit error rate via a PDF-based approach is also offered in
closed form in terms of Meijer's G function and Fox's H-bar function for
integer valued fading parameters, and extended Fox's H-bar function (H-hat) for
non-integer-valued fading parameters. Our proposed results complement
previously known results that are expressed either in terms of infinite sums,
nested sums, or higher-order derivatives of the fading parameter m.
|
1202.2577
|
Citizen Science: Contributions to Astronomy Research
|
astro-ph.IM cs.AI
|
The contributions of everyday individuals to significant research have grown
dramatically beyond the early days of classical birdwatching and endeavors of
amateurs of the 19th century. Now people who are casually interested in science
can participate directly in research covering diverse scientific fields.
Regarding astronomy, volunteers, either as individuals or as networks of
people, are involved in a variety of types of studies. Citizen Science is
intuitive, engaging, yet necessarily robust in its adoption of scientific
principles and methods. Herein, we discuss Citizen Science, focusing on fully
participatory projects such as Zooniverse (by several of the authors: CL, AS,
LF, SB), with mention of other programs. In particular, we make the case that
citizen science (CS) can be an important aspect of the scientific data analysis
pipelines provided to scientists by observatories.
|
1202.2586
|
Gossip-based Information Spreading in Mobile Networks
|
cs.SI cs.NI
|
Mobile networks have recently received increasing research interest due to
their wide applications in various areas; mobile ad hoc networks (MANETs) and
vehicular ad hoc networks (VANETs) are two prominent examples. Mobility
introduces challenges as well as opportunities: it is known to improve the
network throughput as shown in [1]. In this paper, we analyze the effect of
mobility on the information spreading based on gossip algorithms. Our
contributions are twofold. Firstly, we propose a new performance metric, mobile
conductance, which allows us to separate the details of mobility models from
the study of mobile spreading time. Secondly, we explore the mobile
conductances of several popular mobility models, and offer insights on the
corresponding results. Large scale network simulation is conducted to verify
our analysis.
|
1202.2591
|
Database queries and constraints via lifting problems
|
math.CT cs.DB math.AT
|
Previous work has demonstrated that categories are useful and expressive
models for databases. In the present paper we build on that model, showing that
certain queries and constraints correspond to lifting problems, as found in
modern approaches to algebraic topology. In our formulation, each so-called
SPARQL graph pattern query corresponds to a category-theoretic lifting problem,
whereby the set of solutions to the query is precisely the set of lifts. We
interpret constraints within the same formalism and then investigate some basic
properties of queries and constraints. In particular, to any database $\pi$ we
can associate a certain derived database $\Qry(\pi)$ of queries on $\pi$. As an
application, we explain how giving users access to certain parts of
$\Qry(\pi)$, rather than direct access to $\pi$, improves one's ability to
manage the impact of schema evolution.
|
1202.2614
|
Semantic snippet construction for search engine results based on segment
evaluation
|
cs.IR
|
The result listing from search engines includes a link and a snippet from the
web page for each result item. The snippet plays a vital role in helping the
user decide whether to click on a result. This paper proposes a novel approach
to construct the snippets based on a semantic evaluation of the segments in the
page. The target segment(s) is/are identified by applying a model to evaluate
segments present in the page and selecting the segments with top scores. The
proposed model makes it easier for the user to judge whether to click on a
result item, since the snippet is constructed semantically after a critical
evaluation based on multiple factors. A prototype implementation provides
empirical validation of the proposed model.
|
1202.2615
|
Live-marker: A personalized web page content marking tool
|
cs.IR
|
The tremendous increase in the quantity of information resources available on
the web has made the time that a user spends on a single page minimal. Users
revisiting the same page would be able to fetch the
required information much faster if the information that they consumed during
the previous visit(s) gets presented to them with a special style. This paper
proposes a model which empowers the users to mark the content interesting to
them, so that it can be identified easily during successive visits. In addition
to the explicit marking by the users, the model facilitates implicit marking
based on the user preferences. A prototype implementation based on the
proposed model validates its efficiency.
|
1202.2617
|
Segmentation Based Approach to Dynamic Page Construction from Search
Engine Results
|
cs.IR
|
The results rendered by the search engines are mostly a linear snippet list.
With the prolific increase in the dynamism of web pages there is a need for
enhanced result lists from search engines in order to cope with the
expectations of the users. This paper proposes a model for dynamic construction
of a resultant page from various results fetched by the search engine, based on
the web page segmentation approach. With the incorporation of personalization
through user profile during the candidate segment selection, the enriched
resultant page is constructed. The benefits of this approach include instant,
one-shot navigation to relevant portions from various result items, in contrast
to a linear page-by-page visit approach. Experiments conducted on the
prototype model with various levels of users quantify the improvements in
terms of the amount of relevant information fetched.
|
1202.2619
|
We.I.Pe: Web Identification of People using e-mail ID
|
cs.IR
|
With the phenomenal growth of content in the World Wide Web, the diversity of
user-supplied queries has widened. Searching for people on the web has
become an important type of search activity in the web search engines. This
paper proposes a model named "We.I.Pe" to identify people on the World Wide Web
using an e-mail ID as the primary input. The approach followed in this research
work presents the collected information, based on the user-supplied e-mail ID,
in an easy-to-navigate manner. The grouping of collected information based on
various sources makes the result visualization process more effective. The
proposed model is validated by a prototype implementation. Experiments
conducted on the prototype implementation provide encouraging results.
|
1202.2622
|
A Model for Web Page Usage Mining Based on Segmentation
|
cs.IR
|
The web page usage mining plays a vital role in enriching the page's content
and structure based on the feedbacks received from the user's interactions with
the page. This paper proposes a model for micro-managing the tracking
activities by fine-tuning the mining from the page level to the segment level.
The proposed model enables the web-master to identify the segments which
receive more focus from users compared with others. The segment level
analytics of user actions provides an important metric to analyse the factors
which facilitate the increase in traffic for the page. The empirical validation
of the model is performed through prototype implementation.
|
1202.2684
|
Core-Periphery Structure in Networks
|
cs.SI cond-mat.stat-mech physics.soc-ph
|
Intermediate-scale (or `meso-scale') structures in networks have received
considerable attention, as the algorithmic detection of such structures makes
it possible to discover network features that are not apparent either at the
local scale of nodes and edges or at the global scale of summary statistics.
Numerous types of meso-scale structures can occur in networks, but
investigations of such features have focused predominantly on the
identification and study of community structure. In this paper, we develop a
new method to investigate the meso-scale feature known as core-periphery
structure, which entails identifying densely-connected core nodes and
sparsely-connected periphery nodes. In contrast to communities, the nodes in a
core are also reasonably well-connected to those in the periphery. Our new
method of computing core-periphery structure can identify multiple cores in a
network and takes different possible cores into account. We illustrate the
differences between our method and several existing methods for identifying
which nodes belong to a core, and we use our technique to examine
core-periphery structure in examples of friendship, collaboration,
transportation, and voting networks.
|
1202.2687
|
Worst-Case Additive Noise in Wireless Networks
|
cs.IT math.IT
|
A classical result in Information Theory states that the Gaussian noise is
the worst-case additive noise in point-to-point channels, meaning that, for a
fixed noise variance, the Gaussian noise minimizes the capacity of an additive
noise channel. In this paper, we significantly generalize this result and show
that the Gaussian noise is also the worst-case additive noise in wireless
networks with additive noises that are independent from the transmit signals.
More specifically, we show that, if we fix the noise variance at each node,
then the capacity region with Gaussian noises is a subset of the capacity
region with any other set of noise distributions. We prove this result by
showing that a coding scheme that achieves a given set of rates on a network
with Gaussian additive noises can be used to construct a coding scheme that
achieves the same set of rates on a network that has the same topology and
traffic demands, but with non-Gaussian additive noises.
|
1202.2703
|
Craniofacial reconstruction as a prediction problem using a Latent Root
Regression model
|
cs.LG q-bio.TO
|
In this paper, we present a computer-assisted method for facial
reconstruction. This method provides an estimation of the facial shape
associated with unidentified skeletal remains. Current computer-assisted
methods using a statistical framework rely on a common set of extracted points
located on the bone and soft-tissue surfaces. Most of the facial reconstruction
methods then consist of predicting the position of the soft-tissue surface
points, when the positions of the bone surface points are known. We propose to
use Latent Root Regression for prediction. The results obtained are then
compared to those given by Principal Components Analysis linear models. In
conjunction, we have evaluated the influence of the number of skull landmarks
used. Anatomical skull landmarks are completed iteratively by points located
upon geodesics which link these anatomical landmarks, thus enabling us to
artificially increase the number of skull points. Facial points are obtained
using a mesh-matching algorithm between a common reference mesh and individual
soft-tissue surface meshes. The proposed method is validated in terms of
accuracy, based on a leave-one-out cross-validation test applied to a
homogeneous database. Accuracy measures are obtained by computing the distance
between the original face surface and its reconstruction. Finally, these
results are discussed with reference to current computer-assisted facial
reconstruction techniques.
|