| id | title | categories | abstract |
|---|---|---|---|
1210.2195
|
Annotating Answer-Set Programs in LANA
|
cs.SE cs.AI
|
While past research in answer-set programming (ASP) mainly focused on theory,
ASP solver technology, and applications, the present work situates itself in
the context of a quite recent research trend: development support for ASP. In
particular, we propose to augment answer-set programs with additional
meta-information formulated in a dedicated annotation language, called LANA.
This language allows grouping rules into coherent blocks and specifying
language signatures, types, pre- and postconditions, as well as unit tests for
such blocks. While these annotations are invisible to an ASP solver, as they
take the form of program comments, they can be interpreted by tools for
documentation, testing, and verification purposes, as well as to eliminate
sources of common programming errors by realising syntax checking or code
completion features. To demonstrate its versatility, we introduce two such
tools, viz. (i) ASPDOC, for generating HTML documentation for a program
based on the annotated information, and (ii) ASPUNIT, for running and
monitoring unit tests on program blocks. LANA is also exploited in the SeaLion
system, an integrated development environment for ASP based on Eclipse. To
appear in Theory and Practice of Logic Programming.
|
1210.2211
|
Network Null Model based on Maximal Entropy and the Rich-Club
|
physics.soc-ph cs.SI stat.ME
|
We present a method to construct a network null model based on the maximum
entropy principle, in which the constraints imposed by the rich-club and the
degree sequence are preserved. We show that the probability that two nodes
share a link can be described with a simple probability function. The
null-model closely approximates the assortative properties of the network.
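The construction itself is not spelled out in the abstract; as a rough illustration of a maximum-entropy, degree-constrained null model (without the paper's rich-club constraint), a Chung-Lu-style link probability can be sketched as follows. Function names and the sampling scheme are illustrative:

```python
import numpy as np

def chung_lu_link_probability(degrees):
    """Degree-preserving null-model link probabilities p_ij ~ k_i k_j / (2m)."""
    k = np.asarray(degrees, dtype=float)
    two_m = k.sum()                              # sum of degrees = 2 * edges
    p = np.minimum(np.outer(k, k) / two_m, 1.0)  # cap at 1 for large degrees
    np.fill_diagonal(p, 0.0)                     # no self-loops
    return p

def sample_null_model(degrees, rng=None):
    """Draw one random graph (adjacency matrix) from the null model."""
    rng = np.random.default_rng(rng)
    p = chung_lu_link_probability(degrees)
    u = rng.random(p.shape)
    upper = np.triu(u < p, k=1)                  # sample each pair once
    return (upper | upper.T).astype(int)
```

Each node's expected degree approximately matches its prescribed degree; the paper's model additionally conditions on the rich-club coefficient.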
|
1210.2259
|
Degrees of freedom in vector interference channels
|
cs.IT math.IT
|
This paper continues the Wu-Shamai-Verdu program [3] on characterizing the
degrees of freedom (DoF) of interference channels (ICs) through Renyi
information dimension. Specifically, we find a single-letter formula for the
DoF of vector ICs, encompassing multiple-input multiple-output (MIMO) ICs,
time- and/or frequency-selective ICs, and combinations thereof, as well as
scalar ICs as considered in [3]. The DoF-formula we obtain lower-bounds the DoF
of all channels--with respect to the choice of the channel matrix--and
upper-bounds the DoF of almost all channels. It applies to a large class of
noise distributions, and its proof is based on an extension of a result by
Guionnet and Shlyakhtenko [3] to the vector case in combination with the Ruzsa
triangle inequality for differential entropy introduced by Kontoyiannis and
Madiman [4]. As in scalar ICs, achieving full DoF requires the use of singular
input distributions. Strikingly, in the vector case it suffices to enforce
singularity on the joint distribution of each individual transmit vector. This
can be realized through signaling in subspaces of the ambient signal space,
which is in accordance with the idea of interference alignment, and, most
importantly, allows the scalar entries of the transmit vectors to have
non-singular distributions. The DoF-formula for vector ICs we obtain enables a
unified treatment of "classical" interference alignment a la Cadambe and Jafar
[5], and Maddah-Ali et al. [6], and the number-theoretic schemes proposed in
[7], [8]. Moreover, it allows us to calculate the DoF achieved by new signaling
schemes for vector ICs. We furthermore recover the result by Cadambe and Jafar
on the non-separability of parallel ICs [9] and we show that almost all
parallel ICs are separable in terms of DoF. Finally, our results apply to
complex vector ICs, thereby extending the main findings of [2] to the complex
case.
|
1210.2272
|
Joint Sparsity with Different Measurement Matrices
|
cs.IT math.IT
|
We consider a generalization of the multiple measurement vector (MMV)
problem, where the measurement matrices are allowed to differ across
measurements. This problem arises naturally, e.g., when multiple measurements
are taken over time and the measurement modality (matrix) is time-varying.
We derive probabilistic recovery guarantees showing that---under certain (mild)
conditions on the measurement matrices---l2/l1-norm minimization and a variant
of orthogonal matching pursuit fail with a probability that decays
exponentially in the number of measurements. This allows us to conclude that,
perhaps surprisingly, recovery performance does not suffer from the individual
measurements being taken through different measurement matrices. What is more,
recovery performance typically benefits (significantly) from diversity in the
measurement matrices; we specify conditions under which such improvements are
obtained. These results continue to hold when the measurements are subject to
(bounded) noise.
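As a minimal sketch of the setting, a simultaneous-OMP-style selection rule with per-measurement matrices can be written as follows (the paper's exact OMP variant may differ; this version scores each index by summed correlations across measurements):

```python
import numpy as np

def somp_diff_matrices(As, ys, k):
    """Recover jointly k-sparse vectors x_l sharing one support, where each
    measurement l uses its own matrix A_l (y_l = A_l x_l)."""
    L, n = len(As), As[0].shape[1]
    residuals = [y.astype(float).copy() for y in ys]
    support = []
    for _ in range(k):
        # score each column index by summed correlation across measurements
        scores = np.zeros(n)
        for A, r in zip(As, residuals):
            scores += np.abs(A.T @ r)
        scores[support] = -np.inf                 # never re-pick an index
        support.append(int(np.argmax(scores)))
        # per-measurement least squares on the current joint support
        for l in range(L):
            sub = As[l][:, support]
            coef, *_ = np.linalg.lstsq(sub, ys[l], rcond=None)
            residuals[l] = ys[l] - sub @ coef
    X = np.zeros((n, L))
    for l in range(L):
        sub = As[l][:, support]
        coef, *_ = np.linalg.lstsq(sub, ys[l], rcond=None)
        X[support, l] = coef
    return X, sorted(support)
```

With distinct random matrices, the summed scores concentrate on the shared support, which mirrors the abstract's point that diversity across measurement matrices helps rather than hurts.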
|
1210.2276
|
A Map-Reduce Parallel Approach to Automatic Synthesis of Control
Software
|
cs.DC cs.SY
|
Many Control Systems are indeed Software Based Control Systems, i.e. control
systems whose controller consists of control software running on a
microcontroller device. This motivates investigation of Formal Model Based
Design approaches for automatic synthesis of control software.
Available algorithms and tools (e.g., QKS) may require weeks or even months
of computation to synthesize control software for large-size systems. This
motivates the search for parallel algorithms for control software synthesis.
In this paper, we present a Map-Reduce style parallel algorithm for control
software synthesis when the controlled system (plant) is modeled as a
discrete-time linear hybrid system. Furthermore, we present PQKS, an MPI-based
implementation of our algorithm. To the best of our knowledge, this is the first parallel
approach for control software synthesis.
We experimentally show the effectiveness of PQKS on two classical control
synthesis problems: the inverted pendulum and the multi-input buck DC/DC
converter. Experiments show that PQKS efficiency is above 65%. As an example,
PQKS requires about 16 hours to complete the synthesis of control software for
the pendulum on a cluster with 60 processors, instead of the 25 days needed by
the sequential algorithm in QKS.
|
1210.2283
|
Unfolding accessibility provides a macroscopic approach to temporal
networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
An accessibility graph of a network contains a link wherever there is a path
of arbitrary length between two nodes. We generalize the concept of
accessibility to temporal networks. Building an accessibility graph by
consecutively adding paths of growing length (unfolding), we obtain information
about the distribution of shortest path durations and characteristic
time-scales in temporal networks. Moreover, we define causal fidelity to
measure the goodness of their static representation. The practicability of our
proposed methods is demonstrated for three examples: networks of social
contacts, livestock trade and sexual contacts.
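The unfolding idea can be sketched directly in boolean matrix form, assuming one-hop-per-snapshot propagation; function names are illustrative:

```python
import numpy as np

def unfold_accessibility(snapshots):
    """Unfold accessibility of a temporal network given as a list of
    adjacency matrices A_1..A_T (one per time step). Returns the causal
    accessibility matrix P and the number of node pairs that first become
    accessible at each step (related to shortest path durations)."""
    n = snapshots[0].shape[0]
    P = np.eye(n, dtype=bool)                       # every node reaches itself
    newly = []
    for A in snapshots:
        step = (P.astype(int) @ A.astype(int)) > 0  # extend paths by one hop
        P_next = P | step
        newly.append(int(P_next.sum() - P.sum()))
        P = P_next
    return P, newly

def causal_fidelity(snapshots):
    """Fraction of statically accessible pairs that are also causally
    (time-respecting) accessible."""
    P, _ = unfold_accessibility(snapshots)
    agg = np.zeros_like(snapshots[0], dtype=bool)
    for A in snapshots:
        agg |= A.astype(bool)
    S = np.eye(agg.shape[0], dtype=bool)            # static transitive closure
    for _ in range(agg.shape[0]):
        S = S | ((S.astype(int) @ agg.astype(int)) > 0)
    return P.sum() / S.sum()
```

A causal fidelity below 1 flags node pairs that look connected in the aggregated static graph but are never reachable in time-respecting order.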
|
1210.2289
|
A Fast Distributed Proximal-Gradient Method
|
cs.DC cs.LG stat.ML
|
We present a distributed proximal-gradient method for optimizing the average
of convex functions, each of which is the private local objective of an agent
in a network with time-varying topology. The local objectives have distinct
differentiable components, but they share a common nondifferentiable component,
which has a favorable structure suitable for effective computation of the
proximal operator. In our method, each agent iteratively updates its estimate
of the global minimum by optimizing its local objective function, and
exchanging estimates with others via communication in the network. Using
Nesterov-type acceleration techniques and multiple communication steps per
iteration, we show that this method converges at the rate 1/k (where k is the
number of communication rounds between the agents), which is faster than the
convergence rate of the existing distributed methods for solving this problem.
The superior convergence rate of our method is also verified by numerical
experiments.
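A non-accelerated sketch of this setting follows (the paper's method adds Nesterov-type acceleration and multiple communication rounds per iteration, which this toy version omits; the l1 prox stands in for the shared nondifferentiable component):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def distributed_prox_grad(grads, W, lam, step, x0, iters=200):
    """Each agent i takes a gradient step on its private smooth objective,
    averages its estimate with neighbors via the doubly stochastic matrix W,
    and applies the shared l1 prox."""
    X = np.tile(x0, (len(grads), 1))        # one estimate per agent
    for _ in range(iters):
        G = np.stack([g(X[i]) for i, g in enumerate(grads)])
        X = W @ (X - step * G)              # gradient step + consensus
        X = soft_threshold(X, step * lam)   # shared nonsmooth component
    return X
```

With a well-connected mixing matrix, all agents' estimates agree and converge to the minimizer of the average objective.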
|
1210.2316
|
Disjunctive Datalog with Existential Quantifiers: Semantics,
Decidability, and Complexity Issues
|
cs.AI cs.LO
|
Datalog is one of the best-known rule-based languages, and extensions of it
are used in a wide context of applications. An important Datalog extension is
Disjunctive Datalog, which significantly increases the expressivity of the
basic language. Disjunctive Datalog is useful in a wide range of applications,
ranging from Databases (e.g., Data Integration) to Artificial Intelligence
(e.g., diagnosis and planning under incomplete knowledge). However, in recent
years an important shortcoming of Datalog-based languages became evident, e.g.
in the context of data-integration (consistent query-answering, ontology-based
data access) and Semantic Web applications: The language does not permit any
generation of and reasoning with unnamed individuals in an obvious way. In
general, it is weak in supporting many cases of existential quantification. To
overcome this problem, Datalogex has recently been proposed, which extends
traditional Datalog by existential quantification in rule heads. In this work,
we propose a natural extension of Disjunctive Datalog and Datalogex, called
Datalogexor, which allows both disjunctions and existential quantification in
rule heads and is therefore an attractive language for knowledge representation
and reasoning, especially in domains where ontology-based reasoning is needed.
We formally define syntax and semantics of the language Datalogexor, and
provide a notion of instantiation, which we prove to be adequate for
Datalogexor. A main issue of Datalogex and hence also of Datalogexor is that
decidability is no longer guaranteed for typical reasoning tasks. In order to
address this issue, we identify many decidable fragments of the language, which
extend, in a natural way, analogous classes defined in the non-disjunctive case.
Moreover, we carry out an in-depth complexity analysis, deriving interesting
results which range from Logarithmic Space to Exponential Time.
|
1210.2346
|
Blending Learning and Inference in Structured Prediction
|
cs.LG
|
In this paper we derive an efficient algorithm to learn the parameters of
structured predictors in general graphical models. This algorithm blends the
learning and inference tasks, which results in a significant speedup over
traditional approaches, such as conditional random fields and structured
support vector machines. For this purpose we utilize the structures of the
predictors to describe a low dimensional structured prediction task which
encourages local consistencies within the different structures while learning
the parameters of the model. Convexity of the learning task provides the means
to enforce the consistencies between the different parts. The
inference-learning blending algorithm that we propose is guaranteed to converge
to the optimum of the low dimensional primal and dual programs. Unlike many of
the existing approaches, the inference-learning blending allows us to
efficiently learn high-order graphical models over regions of any size and
with very large numbers of parameters. We demonstrate the effectiveness of our approach,
while presenting state-of-the-art results in stereo estimation, semantic
segmentation, shape reconstruction, and indoor scene understanding.
|
1210.2352
|
A notion of continuity in discrete spaces and applications
|
math.MG cs.CV math.CO math.GN
|
We propose a notion of continuous path for locally finite metric spaces,
taking inspiration from the recent development of A-theory for locally finite
connected graphs. We use this notion of continuity to derive an analogue in Z^2
of the Jordan curve theorem and to extend to a quite large class of locally
finite metric spaces (containing all finite metric spaces) an inequality for
the \ell^p-distortion of a metric space that has been recently proved by
Pierre-Nicolas Jolissaint and Alain Valette for finite connected graphs.
|
1210.2354
|
Fisher information distance: a geometrical reading
|
stat.ME cs.IT math-ph math.IT math.MP
|
This paper takes a strongly geometrical approach to the Fisher distance, which
is a measure of dissimilarity between two probability distribution functions.
The Fisher distance, as well as other divergence measures, is also used in
many applications to establish a proper data average. The main purpose is to
widen the range of possible interpretations and relations of the Fisher
distance and its associated geometry for the prospective applications. It
focuses on statistical models of the normal probability distribution functions
and takes advantage of the connection with the classical hyperbolic geometry to
derive closed forms for the Fisher distance in several cases. Connections with
the well-known Kullback-Leibler divergence measure are also devised.
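For two univariate normal distributions the closed form via the hyperbolic half-plane reads as follows: the map (mu, sigma) -> (mu/sqrt(2), sigma) is, up to a factor sqrt(2), an isometry into the Poincare half-plane. This is the standard identity consistent with the approach described:

```python
import math

def fisher_rao_normal(mu1, sigma1, mu2, sigma2):
    """Closed-form Fisher-Rao distance between N(mu1, sigma1^2) and
    N(mu2, sigma2^2), via hyperbolic distance in the upper half-plane."""
    d = (mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2
    return math.sqrt(2.0) * math.acosh(1.0 + d / (2.0 * sigma1 * sigma2))
```

For equal means the expression collapses to sqrt(2)*|log(sigma2/sigma1)|, i.e., pure motion along a vertical geodesic of the half-plane.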
|
1210.2376
|
Interdependence and Predictability of Human Mobility and Social
Interactions
|
physics.soc-ph cs.SI nlin.CD
|
Previous studies have shown that human movement is predictable to a certain
extent at different geographic scales. Existing prediction techniques exploit
only the past history of the person taken into consideration as input of the
predictors. In this paper, we show that by means of multivariate nonlinear time
series prediction techniques it is possible to increase the forecasting
accuracy by considering movements of friends or, more generally, of entities
with correlated mobility patterns (i.e., characterised by high mutual
information) as inputs. Finally, we evaluate the proposed techniques on the
Nokia Mobile Data Challenge and Cabspotting datasets.
|
1210.2380
|
Stable and robust sampling strategies for compressive imaging
|
cs.CV cs.IT math.IT math.NA
|
In many signal processing applications, one wishes to acquire images that are
sparse in transform domains such as spatial finite differences or wavelets
using frequency domain samples. For such applications, overwhelming empirical
evidence suggests that superior image reconstruction can be obtained through
variable density sampling strategies that concentrate on lower frequencies. The
wavelet and Fourier transform domains are not incoherent because low-order
wavelets and low-order frequencies are correlated, so compressive sensing
theory does not immediately imply sampling strategies and reconstruction
guarantees. In this paper we turn to a more refined notion of coherence -- the
so-called local coherence -- measuring for each sensing vector separately how
correlated it is to the sparsity basis. For Fourier measurements and Haar
wavelet sparsity, the local coherence can be controlled and bounded explicitly,
so for matrices comprised of frequencies sampled from a suitable inverse square
power-law density, we can prove the restricted isometry property with
near-optimal embedding dimensions. Consequently, the variable-density sampling
strategy we provide allows for image reconstructions that are stable to
sparsity defects and robust to measurement noise. Our results cover both
reconstruction by $\ell_1$-minimization and by total variation minimization.
The local coherence framework developed in this paper should be of independent
interest in sparse recovery problems more generally, as it implies that for
optimal sparse recovery results, it suffices to have bounded \emph{average}
coherence from sensing basis to sparsity basis -- as opposed to bounded maximal
coherence -- as long as the sampling strategy is adapted accordingly.
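A sketch of drawing such a variable-density Fourier sampling mask from an inverse-square power-law density follows; the exact density exponent and normalization here are illustrative, not the paper's prescription:

```python
import numpy as np

def inverse_square_mask(n, m_target, rng=None):
    """Sample m_target 2-D DFT frequencies without replacement, with
    probability proportional to 1/(1 + |k|^2), so low frequencies are
    sampled far more densely than high ones."""
    rng = np.random.default_rng(rng)
    kx = np.fft.fftfreq(n, d=1.0 / n)            # integer frequencies
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    weights = 1.0 / (1.0 + KX**2 + KY**2)        # inverse-square decay
    probs = weights / weights.sum()
    idx = rng.choice(n * n, size=m_target, replace=False, p=probs.ravel())
    mask = np.zeros(n * n, dtype=bool)
    mask[idx] = True
    return mask.reshape(n, n)
```

The resulting boolean mask selects which rows of the DFT matrix are measured; reconstruction would then proceed by l1 or TV minimization as in the text.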
|
1210.2381
|
The Power of Linear Reconstruction Attacks
|
cs.DS cs.CR cs.LG math.PR
|
We consider the power of linear reconstruction attacks in statistical data
privacy, showing that they can be applied to a much wider range of settings
than previously understood. Linear attacks have been studied before (Dinur and
Nissim PODS'03, Dwork, McSherry and Talwar STOC'07, Kasiviswanathan, Rudelson,
Smith and Ullman STOC'10, De TCC'12, Muthukrishnan and Nikolov STOC'12) but
have so far been applied only in settings with releases that are obviously
linear.
Consider a database curator who manages a database of sensitive information
but wants to release statistics about how a sensitive attribute (say, disease)
in the database relates to some nonsensitive attributes (e.g., postal code,
age, gender, etc). We show one can mount linear reconstruction attacks based on
any release that gives: a) the fraction of records that satisfy a given
non-degenerate boolean function. Such releases include contingency tables
(previously studied by Kasiviswanathan et al., STOC'10) as well as more complex
outputs like the error rate of classifiers such as decision trees; b) any one
of a large class of M-estimators (that is, the output of empirical risk
minimization algorithms), including the standard estimators for linear and
logistic regression.
We make two contributions: first, we show how these types of releases can be
transformed into a linear format, making them amenable to existing
polynomial-time reconstruction algorithms. This is already perhaps surprising,
since many of the above releases (like M-estimators) are obtained by solving
highly nonlinear formulations. Second, we show how to analyze the resulting
attacks under various distributional assumptions on the data. Specifically, we
consider a setting in which the same statistic (either a) or b) above) is
released about how the sensitive attribute relates to all subsets of size k
(out of a total of d) nonsensitive boolean attributes.
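Once a release has been transformed into a linear format, the final step is classical: solve a noisy linear system and round. A toy Dinur-Nissim-style attack on subset-sum releases (not the paper's transformation itself) can be sketched as:

```python
import numpy as np

def linear_reconstruction_attack(Q, answers):
    """Given subset-sum queries (rows of the 0/1 matrix Q) over a secret bit
    vector s, and noisy answers a = Q s + e with small per-query error,
    recover s by least squares followed by rounding."""
    s_hat, *_ = np.linalg.lstsq(Q.astype(float), answers, rcond=None)
    return (s_hat > 0.5).astype(int)
```

With enough queries and per-query noise bounded well below 1/2, the least-squares estimate lands within the rounding threshold of every bit, so the sensitive attribute is recovered exactly.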
|
1210.2388
|
Video De-fencing
|
cs.CV cs.MM
|
This paper describes and provides an initial solution to a novel video
editing task, i.e., video de-fencing. It targets automatic restoration of the
video clips that are corrupted by fence-like occlusions during capture. Our key
observation lies in the visual parallax between fences and background scenes,
which is caused by the fact that the former are typically closer to the camera.
Unlike in traditional image inpainting, fence-occluded pixels in the videos
tend to appear later in the temporal dimension and are therefore recoverable
via optimized pixel selection from relevant frames. To eventually produce
fence-free videos, major challenges include cross-frame sub-pixel image
alignment under diverse scene depth, and "correct" pixel selection that is
robust to dominating fence pixels. Several novel tools are developed in this
paper, including soft fence detection, weighted truncated optical flow method
and robust temporal median filter. The proposed algorithm is validated on
several real-world video clips with fences.
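A hard-mask simplification of the robust temporal median step can be sketched as follows (the paper's filter is weighted and coupled with soft fence detection and optical-flow alignment, all omitted here):

```python
import numpy as np

def masked_temporal_median(frames, fence_masks):
    """For each pixel, take the median over time of only those frames where
    the pixel is NOT covered by the detected fence, so fence pixels cannot
    dominate the estimate."""
    stack = np.stack([f.astype(float) for f in frames])   # (T, H, W)
    masks = np.stack(fence_masks)                         # True = fence pixel
    data = np.where(masks, np.nan, stack)                 # drop fence pixels
    out = np.nanmedian(data, axis=0)
    # pixels occluded in every frame fall back to the plain temporal median
    always = np.isnan(out)
    out[always] = np.median(stack, axis=0)[always]
    return out
```

Because the fence moves relative to the background across frames (parallax), each background pixel is visible in most frames, and the masked median recovers it.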
|
1210.2406
|
Quick Search for Rare Events
|
cs.IT math.IT
|
Rare events can potentially occur in many applications. When manifested as
opportunities to be exploited, risks to be ameliorated, or certain features to
be extracted, such events become of paramount significance. Due to their
sporadic nature, the information-bearing signals associated with rare events
often lie in a large set of irrelevant signals and are not easily accessible.
This paper provides a statistical framework for detecting such events so that
an optimal balance between detection reliability and agility, as two opposing
performance measures, is established. The core component of this framework is a
sampling procedure that adaptively and quickly focuses the
information-gathering resources on the segments of the dataset that bear the
information pertinent to the rare events. Particular focus is placed on
Gaussian signals with the aim of detecting signals with rare mean and variance
values.
|
1210.2421
|
Simulated Tom Thumb, the Rule Of Thumb for Autonomous Robots
|
cs.RO cs.AI
|
For a mobile robot to be truly autonomous, it must solve the simultaneous
localization and mapping (SLAM) problem. We develop a new metaheuristic
algorithm called Simulated Tom Thumb (STT), based on the detailed adventure of
the clever Tom Thumb and advances in research on path planning based on
potential functions. Investigations show that it is very promising and could
be seen as an optimization of the powerful solution of SLAM with data
association and learning capabilities. STT outperforms JCBB, achieving a
100% match.
|
1210.2429
|
Mining Permission Request Patterns from Android and Facebook
Applications (extended author version)
|
cs.CR cs.AI stat.ML
|
Android and Facebook provide third-party applications with access to users'
private data and the ability to perform potentially sensitive operations (e.g.,
post to a user's wall or place phone calls). As a security measure, these
platforms restrict applications' privileges with permission systems: users must
approve the permissions requested by applications before the applications can
make privacy- or security-relevant API calls. However, recent studies have
shown that users often do not understand permission requests and lack a notion
of typicality of requests. As a first step towards simplifying permission
systems, we cluster a corpus of 188,389 Android applications and 27,029
Facebook applications to find patterns in permission requests. Using a method
for Boolean matrix factorization for finding overlapping clusters, we find that
Facebook permission requests follow a clear structure that exhibits high
stability when fitted with only five clusters, whereas Android applications
demonstrate more complex permission requests. We also find that low-reputation
applications often deviate from the permission request patterns that we
identified for high-reputation applications, suggesting that permission request
patterns are indicative of user satisfaction or application quality.
|
1210.2440
|
Group Model Selection Using Marginal Correlations: The Good, the Bad and
the Ugly
|
math.ST cs.IT math.IT stat.ML stat.TH
|
Group model selection is the problem of determining a small subset of groups
of predictors (e.g., the expression data of genes) that are responsible for
the majority of the variation in a response variable (e.g., the malignancy of a
tumor). This paper focuses on group model selection in high-dimensional linear
models, in which the number of predictors far exceeds the number of samples of
the response variable. Existing works on high-dimensional group model selection
either require the number of samples of the response variable to be
significantly larger than the total number of predictors contributing to the
response or impose restrictive statistical priors on the predictors and/or
nonzero regression coefficients. This paper provides a comprehensive
understanding of a low-complexity approach to group model selection that avoids
some of these limitations. The proposed approach, termed Group Thresholding
(GroTh), is based on thresholding of marginal correlations of groups of
predictors with the response variable and is reminiscent of existing
thresholding-based approaches in the literature. The most important
contribution of the paper in this regard is relating the performance of GroTh
to a polynomial-time verifiable property of the predictors for the general case
of arbitrary (random or deterministic) predictors and arbitrary nonzero
regression coefficients.
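The core thresholding-of-marginal-correlations idea can be sketched as follows (the function name and the top-k selection rule are illustrative simplifications of GroTh):

```python
import numpy as np

def group_thresholding(X, y, groups, k):
    """Score each group of predictor columns by the l2 norm of its marginal
    correlation with the response, X_g^T y, and keep the k highest-scoring
    groups."""
    scores = {g: np.linalg.norm(X[:, cols].T @ y) for g, cols in groups.items()}
    top = sorted(scores, key=scores.get, reverse=True)[:k]
    return sorted(top)
```

This costs only one matrix-vector product per group, which is the low-complexity appeal; the paper's contribution is characterizing when such scores reliably separate active from inactive groups.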
|
1210.2448
|
Modelling Implicit Communication in Multi-Agent Systems with Hybrid
Input/Output Automata
|
cs.FL cs.MA
|
We propose an extension of Hybrid I/O Automata (HIOAs) to model agent systems
and their implicit communication through perturbation of the environment, such
as object localization or radio-signal diffusion and detection. To this end,
we decided to specialize some variables of the HIOAs whose values are functions
both of time and space. We call them world variables. Basically they are
treated similarly to the other variables of HIOAs, but they have the function
of representing the interaction of each automaton with the surrounding
environment, hence they can be output, input or internal variables. Since these
special variables have the role of simulating implicit communication, their
dynamics are specified both in time and space, because they model the
perturbations induced by the agent to the environment, and the perturbations of
the environment as perceived by the agent. Parallel composition of world
variables is slightly different from parallel composition of the other
variables, since their signals are summed. The theory is illustrated through a
simple example of agent systems.
|
1210.2449
|
Rapid Recovery for Systems with Scarce Faults
|
cs.SY
|
Our goal is to achieve a high degree of fault tolerance through the control
of safety-critical systems. This reduces to solving a game between a
malicious environment that injects failures and a controller who tries to
establish a correct behavior. We suggest a new control objective for such
systems that offers a better balance between complexity and precision: we seek
systems that are k-resilient. In order to be k-resilient, a system needs to be
able to rapidly recover from a small number, up to k, of local faults
infinitely many times, provided that blocks of up to k faults are separated by
short recovery periods in which no fault occurs. k-resilience is a simple but
powerful abstraction from the precise distribution of local faults, but much
more refined than the traditional objective of maximizing the number of local
faults. We argue why we believe this to be the right level of abstraction for
safety critical systems when local faults are few and far between. We show that
the computational complexity of constructing optimal control with respect to
resilience is low and demonstrate the feasibility through an implementation and
experimental results.
|
1210.2453
|
Automata-based Static Analysis of XML Document Adaptation
|
cs.DB cs.DS cs.FL
|
The structure of an XML document can be optionally specified by means of XML
Schema, thus enabling the exploitation of structural information for efficient
document handling. Upon schema evolution, or when exchanging documents among
different collections exploiting related but not identical schemas, the need
may arise to adapt a document, known to be valid for a given schema S, to a
target schema S'. The adaptation may require knowledge of the element semantics
and cannot always be automatically derived. In this paper, we present an
automata-based method for the static analysis of user-defined XML document
adaptations, expressed as sequences of XQuery Update primitives. The key
feature of the method is the use of an automatic inference method for
extracting the type, expressed as a Hedge Automaton, of a sequence of document
updates. The type is computed starting from the original schema S and from
rewriting rules that formally define the operational semantics of a sequence of
document updates. Type inclusion can then be used as a conformance test w.r.t.
the type extracted from the target schema S'.
|
1210.2473
|
Enhanced Community Structure Detection in Complex Networks with Partial
Background Information
|
cs.SI physics.comp-ph physics.data-an
|
Community structure detection in complex networks is important since it can
help better understand the network topology and how the network works. However,
there is still not a clear and widely-accepted definition of community
structure, and in practice, different models may yield very different
communities, making the results hard to explain. In this paper, different
from the traditional methodologies, we design an enhanced semi-supervised
learning framework for community detection, which can effectively incorporate
the available prior information to guide the detection process and can make the
results more explainable. By logical inference, the prior information is more
fully utilized. The experiments on both the synthetic and the real-world
networks confirm the effectiveness of the framework.
|
1210.2474
|
Level Set Estimation from Compressive Measurements using Box Constrained
Total Variation Regularization
|
cs.CV stat.AP stat.ML
|
Estimating the level set of a signal from measurements is a task that arises
in a variety of fields, including medical imaging, astronomy, and digital
elevation mapping. Motivated by scenarios where accurate and complete
measurements of the signal may not be available, we examine here a simple
procedure for estimating the level set of a signal from highly incomplete
measurements, which may additionally be corrupted by additive noise. The
proposed procedure is based on box-constrained Total Variation (TV)
regularization. We demonstrate the performance of our approach, relative to
existing state-of-the-art techniques for level set estimation from compressive
measurements, via several simulation examples.
|
1210.2484
|
Semi-Quantitative Group Testing: A Unifying Framework for Group Testing
with Applications in Genotyping
|
cs.IT math.IT
|
We propose a novel group testing method, termed semi-quantitative group
testing, motivated by a class of problems arising in genome screening
experiments. Semi-quantitative group testing (SQGT) is a (possibly) non-binary
pooling scheme that may be viewed as a concatenation of an adder channel and an
integer-valued quantizer. In its full generality, SQGT may be viewed as a
unifying framework for group testing, in the sense that most group testing
models are special instances of SQGT. For the new testing scheme, we define the
notion of SQ-disjunct and SQ-separable codes, representing generalizations of
classical disjunct and separable codes. We describe several combinatorial and
probabilistic constructions for such codes. While for most of these
constructions we assume that the number of defectives is much smaller than the
total number of test subjects, we also consider the case in which there is no
restriction on the number of defectives and they may be as large as the total
number of subjects. For the codes constructed in this paper, we describe a
number of efficient decoding algorithms. In addition, we describe a belief
propagation decoder for sparse SQGT codes for which no other efficient decoder
is currently known. Finally, we define the notion of capacity of SQGT and
evaluate it for some special choices of parameters using information theoretic
methods.
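The adder-channel-plus-quantizer view of SQGT can be simulated directly; the code matrix and threshold choices below are illustrative:

```python
import numpy as np

def sqgt_outcomes(code, defectives, thresholds):
    """Semi-quantitative group testing: each test's output is the adder-channel
    sum of the defective subjects' code entries, quantized by the given
    threshold vector (output j iff thresholds[j-1] <= count < thresholds[j])."""
    counts = code[:, defectives].sum(axis=1)          # adder channel
    return np.searchsorted(thresholds, counts, side="right")
```

Classical binary group testing is the special case `thresholds = [1]` (output 1 iff the pool contains at least one defective), illustrating the unifying-framework claim.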
|
1210.2502
|
Structured Dispersion Matrices from Space-Time Block Codes for
Space-Time Shift Keying
|
cs.IT math.IT
|
Coherent Space-Time Shift Keying (CSTSK) is a recently developed generalized
shift-keying framework for Multiple-Input Multiple-Output systems, which uses a
set of Space-Time matrices termed Dispersion Matrices (DMs). CSTSK may be
combined with a classic signaling set (e.g., QAM, PSK) in order to strike a
flexible tradeoff between the achievable diversity and multiplexing gain. One
of the key benefits of the CSTSK scheme is its Inter-Channel Interference (ICI)
free system that makes single-stream Maximum Likelihood detection possible at
low-complexity. In the existing CSTSK scheme, DMs are chosen by maximizing the
mutual information over a large set of complex valued, Gaussian random matrices
through numerical simulations. We refer to them as Capacity-Optimized (CO) DMs.
In this contribution we establish a connection between the STSK scheme and
Space-Time Block Codes (STBCs), and show that a class of STBCs termed
Decomposable Dispersion Codes (DDCs) enjoys all the benefits that are specific to
the STSK scheme. Two STBCs belonging to this class are proposed, a rate-one
code from Field Extensions and a full-rate code from Cyclic Division Algebras,
that offer structured DMs with desirable properties such as full-diversity, and
a high coding gain. We show that the DMs derived from these codes are capable
of achieving a better performance than CO-DMs, and emphasize the importance of DMs
having a higher coding gain than CO-DMs in scenarios having realistic,
imperfect channel state information at the receiver.
|
1210.2515
|
Protein Inference and Protein Quantification: Two Sides of the Same Coin
|
cs.CE cs.DS q-bio.QM
|
Motivation: In mass spectrometry-based shotgun proteomics, protein
quantification and protein identification are two major computational problems.
To quantify the protein abundance, a list of proteins must first be inferred
from the sample. Then the relative or absolute protein abundance is estimated
with quantification methods, such as spectral counting. Until now, researchers
have been dealing with these two processes separately. In fact, they are two
sides of the same coin, in the sense that truly present proteins are those
proteins with non-zero abundances. An interesting question then arises: if we
regard the protein inference problem as a special protein quantification
problem, is it possible to achieve better protein inference performance?
Contribution: In this paper, we investigate the feasibility of using protein
quantification methods to solve the protein inference problem. Protein
inference is to determine whether each candidate protein is present in the
sample or not. Protein quantification is to calculate the abundance of each
protein. Naturally, the absent proteins should have zero abundances. Thus, we
argue that the protein inference problem can be viewed as a special case of
protein quantification problem: present proteins are those proteins with
non-zero abundances. Based on this idea, our paper investigates the use of
three very simple protein quantification methods to solve the protein
inference problem effectively.
Results: The experimental results on six datasets show that these three
methods are competitive with previous protein inference algorithms. This
demonstrates that it is plausible to take the protein inference problem as a
special case of protein quantification, which opens the door of devising more
effective protein inference algorithms from a quantification perspective.
|
1210.2529
|
Performance Analysis of Two-Step Bi-Directional Relaying with Multiple
Antennas
|
cs.IT math.IT
|
In this paper we study decode-and-forward multi-antenna relay systems that
achieve bi-directional communication in two time slots. We investigate
different downlink broadcast schemes which employ binary or analog network
coding at the relay. We also analyze and compare their performances in terms of
diversity order and symbol error probability. It is shown that if exact
downlink channel state information is available at the relay, using analog
network coding in the form of multi-antenna maximal-ratio transmit beamforming
to precode the information vectors at the relay gives the best performance.
Then, we propose a Max-Min antenna selection with binary network coding scheme
that can approach this performance with only partial channel state information.
|
1210.2582
|
Degrees of Freedom Region of the MIMO X channel with an Arbitrary Number
of Antennas
|
cs.IT math.IT
|
We characterize the total degrees of freedom (DoF) of the full-rank MIMO X
channel with an arbitrary number of antennas at each node. We show that the
existing outer bound is tight for any antenna configuration and provide
transmit and receive filter designs that attain this outer bound. The proposed
achievable scheme exploits channel extensions, in terms of both symbol
extension and asymmetric complex signaling, when the communication is carried
out over a constant channel, and is also applicable to time-varying channels. The
proposed scheme represents a general framework for the derivation of the total
DoF of any two-by-two multiuser channel. Furthermore, the rank-deficient MIMO
channel case is naturally addressed, and it is shown that the total DoF of the
interference channel (IC) and MIMO X channel are in general superior to those
of the full-rank MIMO case.
|
1210.2592
|
New Generalizations of the Bethe Approximation via Asymptotic Expansion
|
cs.IT cond-mat.stat-mech math.IT
|
The Bethe approximation, discovered in statistical physics, gives an
efficient algorithm called belief propagation (BP) for approximating a
partition function. BP empirically gives an accurate approximation for many
problems, e.g., low-density parity-check codes, compressed sensing, etc.
Recently, Vontobel gave a novel characterization of the Bethe approximation
using graph covers. In this paper, a new approximation based on the Bethe
approximation is proposed. The new approximation is derived from Vontobel's
characterization using graph covers and is expressed using the edge zeta
function, which is related to the Hessian of the Bethe free energy, as shown
by Watanabe and Fukumizu. Under certain conditions, it is proved that the new
approximation is asymptotically better than the Bethe approximation.
|
1210.2613
|
Measuring the Influence of Observations in HMMs through the
Kullback-Leibler Distance
|
cs.IT cs.LG math.IT math.PR
|
We measure the influence of individual observations on the sequence of the
hidden states of the Hidden Markov Model (HMM) by means of the Kullback-Leibler
distance (KLD). Namely, we consider the KLD between the conditional
distribution of the hidden states' chain given the complete sequence of
observations and the conditional distribution of the hidden chain given all the
observations but the one under consideration. We introduce a linear complexity
algorithm for computing the influence of all the observations. As an
illustration, we investigate the application of our algorithm to the problem of
detecting outliers in HMM data series.
|
1210.2629
|
Optimization in Differentiable Manifolds in Order to Determine the
Method of Construction of Prehistoric Wall-Paintings
|
cs.CV cs.AI cs.CG
|
In this paper a general methodology is introduced for the determination of
potential prototype curves used for the drawing of prehistoric wall-paintings.
The approach includes a) preprocessing of the wall-paintings contours to
properly partition them according to their curvature, b) choice of prototype
curve families, c) analysis and optimization in a 4-manifold for a first
estimation of the form of these prototypes, d) clustering of the contour parts
and the prototypes, to determine a minimal number of potential guides, e)
further optimization in a 4-manifold, applied to each cluster separately, in
order to determine the exact functional form of the potential guides, together
with the corresponding drawn contour parts. The introduced methodology
simultaneously deals with two problems: a) the arbitrariness in data-points
orientation and b) the determination of one proper form for a prototype curve
that optimally fits the corresponding contour data. Arbitrariness in
orientation has been dealt with via a novel curvature-based error, while the
proper forms of curve prototypes have been exhaustively determined by embedding
curvature deformations of the prototypes into 4-manifolds. Application of this
methodology to celebrated wall-paintings excavated at Tiryns, Greece, and the
Greek island of Thera shows that it is highly probable that these
wall-paintings were drawn by means of geometric guides that correspond to
linear spirals and hyperbolae. These geometric forms fit the drawings' lines
with an exceptionally low average error, less than 0.39 mm. Hence, the approach
suggests the existence of accurate realizations of complicated geometric
entities more than 1000 years before their axiomatic formulation in the
Classical Age.
|
1210.2640
|
Multi-view constrained clustering with an incomplete mapping between
views
|
cs.LG cs.AI
|
Multi-view learning algorithms typically assume a complete bipartite mapping
between the different views in order to exchange information during the
learning process. However, many applications provide only a partial mapping
between the views, creating a challenge for current methods. To address this
problem, we propose a multi-view algorithm based on constrained clustering that
can operate with an incomplete mapping. Given a set of pairwise constraints in
each view, our approach propagates these constraints using a local similarity
measure to those instances that can be mapped to the other views, allowing the
propagated constraints to be transferred across views via the partial mapping.
It uses co-EM to iteratively estimate the propagation within each view based on
the current clustering model, transfer the constraints across views, and then
update the clustering model. By alternating the learning process between views,
this approach produces a unified clustering model that is consistent with all
views. We show that this approach significantly improves clustering performance
over several other methods for transferring constraints and allows multi-view
clustering to be reliably applied when given a limited mapping between the
views. Our evaluation reveals that the propagated constraints have high
precision with respect to the true clusters in the data, explaining their
benefit to clustering performance in both single- and multi-view learning
scenarios.
|
1210.2646
|
A General Methodology for the Determination of 2D Bodies Elastic
Deformation Invariants. Application to the Automatic Identification of
Parasites
|
cs.CV cs.AI
|
A novel methodology is introduced here that exploits 2D images of arbitrary
elastic body deformation instances, so as to quantify mechano-elastic
characteristics that are deformation invariant. Determination of such
characteristics allows for developing methods offering an image of the
undeformed body. General assumptions about the mechano-elastic properties of
the bodies are stated, which lead to two different approaches for obtaining the
bodies' deformation invariants. One was developed to spot the deformed body's
neutral line and its cross sections, while the other solves the deformation
PDEs by performing a set of equivalent image operations on the deformed body
images. Both processes may furnish an undeformed version of a body from its
deformed image. This was confirmed by obtaining the undeformed shape of deformed
parasites, cells (protozoa), fibers and human lips. In addition, the method has
been applied to the important problem of parasite automatic classification from
their microscopic images. To achieve this, we first apply the previous method
to straighten the highly deformed parasites and then we apply a dedicated curve
classification method to the straightened parasite contours. It is demonstrated
that essentially different deformations of the same parasite give rise to
practically the same undeformed shape, thus confirming the consistency of the
introduced methodology. Finally, the developed pattern recognition method
classifies the unwrapped parasites into 6 families, with an accuracy rate of
97.6 %.
|
1210.2687
|
Deconvolving Images with Unknown Boundaries Using the Alternating
Direction Method of Multipliers
|
math.OC cs.CV
|
The alternating direction method of multipliers (ADMM) has recently sparked
interest as a flexible and efficient optimization tool for imaging inverse
problems, namely deconvolution and reconstruction under non-smooth convex
regularization. ADMM achieves state-of-the-art speed by adopting a divide and
conquer strategy, wherein a hard problem is split into simpler, efficiently
solvable sub-problems (e.g., using fast Fourier or wavelet transforms, or
simple proximity operators). In deconvolution, one of these sub-problems
involves a matrix inversion (i.e., solving a linear system), which can be done
efficiently (in the discrete Fourier domain) if the observation operator is
circulant, i.e., under periodic boundary conditions. This paper extends
ADMM-based image deconvolution to the more realistic scenario of unknown
boundary, where the observation operator is modeled as the composition of a
convolution (with arbitrary boundary conditions) with a spatial mask that keeps
only pixels that do not depend on the unknown boundary. The proposed approach
also handles, at no extra cost, problems that combine the recovery of missing
pixels (i.e., inpainting) with deconvolution. We show that the resulting
algorithms inherit the convergence guarantees of ADMM and illustrate its
performance on non-periodic deblurring (with and without inpainting of interior
pixels) under total-variation and frame-based regularization.
|
1210.2688
|
Similarity and bisimilarity notions appropriate for characterizing
indistinguishability in fragments of the calculus of relations
|
cs.LO cs.DB
|
Motivated by applications in databases, this paper considers various
fragments of the calculus of binary relations. The fragments are obtained by
leaving out, or keeping in, some of the standard operators, along with some
derived operators such as set difference, projection, coprojection, and
residuation. For each considered fragment, a characterization is obtained for
when two given binary relational structures are indistinguishable by
expressions in that fragment. The characterizations are based on appropriately
adapted notions of simulation and bisimulation.
|
1210.2704
|
On the Capacity of the One-Bit Deletion and Duplication Channel
|
cs.IT math.IT
|
The one-bit deletion and duplication channel is investigated. An input to
this channel consists of a block of bits which experiences either a deletion,
or a duplication, or remains unchanged. For this channel a capacity expression
is obtained in a certain asymptotic regime where the deletion and duplication
probabilities tend to zero. As a corollary, we obtain an asymptotic expression
for the capacity of the segmented deletion and duplication channel where the
input now consists of several blocks and each block independently experiences
either a deletion, or a duplication, or remains unchanged.
|
1210.2715
|
AI in arbitrary world
|
cs.AI
|
In order to build AI we have to create a program which copes well in an
arbitrary world. In this paper we restrict our attention to one concrete
world, which represents the game Tic-Tac-Toe. This world is a very simple one,
but it is sufficiently complicated for our task because most people cannot
cope with it. The main difficulty in this world is that the player cannot see
the entire internal state of the world, so he has to build a model in order to
understand the world. The model we offer consists of finite automata and
first-order formulas.
|
1210.2748
|
Quantifying Causal Coupling Strength: A Lag-specific Measure For
Multivariate Time Series Related To Transfer Entropy
|
physics.data-an cs.IT math.IT stat.ML
|
While it is an important problem to identify the existence of causal
associations between two components of a multivariate time series, a topic
addressed in Runge et al. (2012), it is even more important to assess the
strength of their association in a meaningful way. In the present article we
focus on the problem of defining a meaningful coupling strength using
information-theoretic measures and demonstrate the shortcomings of the
well-known mutual information and transfer entropy. Instead, we propose a
certain time-delayed conditional mutual information, the momentary information
transfer (MIT), as a measure of association that is general, causal and
lag-specific, reflects a well interpretable notion of coupling strength and is
practically computable. MIT is based on the fundamental concept of source
entropy, which we utilize to yield a notion of coupling strength that is,
compared to mutual information and transfer entropy, well interpretable, in
that for many cases it solely depends on the interaction of the two components
at a certain lag. In particular, MIT is thus in many cases able to exclude the
misleading influence of autodependency within a process in an
information-theoretic way. We formalize and prove this idea analytically and
numerically for a general class of nonlinear stochastic processes and
illustrate the potential of MIT on climatological data.
|
1210.2752
|
Statistical Properties of Inter-arrival Times Distribution in Social
Tagging Systems
|
physics.soc-ph cs.IR cs.SI
|
Folksonomies provide a rich source of data to study social patterns taking
place on the World Wide Web. Here we study the temporal patterns of users'
tagging activity. We show that the statistical properties of inter-arrival
times between subsequent tagging events cannot be explained without taking into
account correlation in users' behaviors. This shows that social interaction in
collaborative tagging communities shapes the evolution of folksonomies. A
consensus formation process involving the usage of a small number of tags for a
given resource is observed through a numerical and analytical analysis of some
well-known folksonomy datasets.
|
1210.2771
|
Cost-Sensitive Tree of Classifiers
|
stat.ML cs.LG
|
Recently, machine learning algorithms have successfully entered large-scale
real-world industrial applications (e.g. search engines and email spam
filters). Here, the CPU cost during test time must be budgeted and accounted
for. In this paper, we address the challenge of balancing the test-time cost
and the classifier accuracy in a principled fashion. The test-time cost of a
classifier is often dominated by the computation required for feature
extraction, which can vary drastically across features. We decrease this
extraction time by constructing a tree of classifiers, through which test
inputs traverse along individual paths. Each path extracts different features
and is optimized for a specific sub-partition of the input space. By only
computing features for inputs that benefit from them the most, our cost
sensitive tree of classifiers can match the high accuracies of the current
state-of-the-art at a small fraction of the computational cost.
|
1210.2776
|
Contagion dynamics in time-varying metapopulation networks
|
physics.soc-ph cs.SI
|
The metapopulation framework is adopted in a wide array of disciplines to
describe systems of well separated yet connected subpopulations. The subgroups
or patches are often represented as nodes in a network whose links represent
the migration routes among them. The connections have been so far mostly
considered as static, but in general evolve in time. Here we address this case
by investigating simple contagion processes on time-varying metapopulation
networks. We focus on the SIR process and determine analytically the mobility
threshold for the onset of an epidemic spreading in the framework of
activity-driven network models. We find profound differences from the case of
static networks. The threshold is entirely described by the dynamical
parameters defining the average number of instantaneously migrating individuals
and does not depend on the properties of the static network representation.
Remarkably, the diffusion and contagion processes are slower in time-varying
graphs than in their aggregated static counterparts, the mobility threshold
being even two orders of magnitude larger in the first case. The presented
results confirm the importance of considering the time-varying nature of
complex networks.
|
1210.2806
|
Risk-Sensitive Mean Field Games
|
math.OC cs.GT cs.SY
|
In this paper, we study a class of risk-sensitive mean-field stochastic
differential games. We show that under appropriate regularity conditions, the
mean-field value of the stochastic differential game with exponentiated
integral cost functional coincides with the value function described by a
Hamilton-Jacobi-Bellman (HJB) equation with an additional quadratic term. We
provide an explicit solution of the mean-field best response when the
instantaneous cost functions are log-quadratic and the state dynamics are
affine in the control. An equivalent mean-field risk-neutral problem is
formulated and the corresponding mean-field equilibria are characterized in
terms of backward-forward macroscopic McKean-Vlasov equations,
Fokker-Planck-Kolmogorov equations, and HJB equations. We provide numerical
examples on the mean field behavior to illustrate both linear and McKean-Vlasov
dynamics.
|
1210.2826
|
An anisotropy preserving metric for DTI processing
|
cs.CV math.DG
|
Statistical analysis of Diffusion Tensor Imaging (DTI) data requires a
computational framework that is both numerically tractable (to account for the
high dimensional nature of the data) and geometric (to account for the
nonlinear nature of diffusion tensors). Building upon earlier studies that have
shown that a Riemannian framework is appropriate to address these challenges,
the present paper proposes a novel metric and an accompanying computational
framework for DTI data processing. The proposed metric retains the geometry and
the computational tractability of earlier methods grounded in the affine
invariant metric. In addition, and in contrast to earlier methods, it provides
an interpolation method which preserves anisotropy, a central information
carried by diffusion tensor data.
|
1210.2838
|
Kinects and Human Kinetics: A New Approach for Studying Crowd Behavior
|
cs.CV physics.soc-ph
|
Modeling crowd behavior relies on accurate data of pedestrian movements at a
high level of detail. Imaging sensors such as cameras provide a good basis for
capturing such detailed pedestrian motion data. However, currently available
computer vision technologies, when applied to conventional video footage, still
cannot automatically unveil accurate motions of groups of people or crowds from
the image sequences. We present a novel data collection approach for studying
crowd behavior which uses the increasingly popular low-cost sensor Microsoft
Kinect. The Kinect captures both standard camera data and a three-dimensional
depth map. Our human detection and tracking algorithm is based on agglomerative
clustering of depth data captured from an elevated view - in contrast to the
lateral view used for gesture recognition in Kinect gaming applications. Our
approach transforms local Kinect 3D data to a common world coordinate system in
order to stitch together human trajectories from multiple Kinects, which allows
for a scalable and flexible capturing area. At a testbed with real-world
pedestrian traffic we demonstrate that our approach can provide accurate
trajectories from three Kinects with a Pedestrian Detection Rate of up to 94%
and a Multiple Object Tracking Precision of 4 cm. Using a comprehensive dataset
of 2240 captured human trajectories we calibrate three variations of the Social
Force model. The results of our model validations indicate their particular
ability to reproduce the observed crowd behavior in microscopic simulations.
|
1210.2856
|
Quantum Hyperdense Coding for Distributed Communications
|
quant-ph cs.IT math.IT
|
Superdense coding proved that entanglement-assisted quantum communications
can improve the data transmission rates compared to classical systems. It
allows sending 2 classical bits between the parties in exchange for 1 quantum
bit and a pre-shared entangled Bell pair. This paper introduces a new protocol
intended for distributed communication. Using a pre-shared entangled Bell pair
and 1 classical bit, 2.5 classical bits can be transmitted on average. This
means not only a valuable increase in capacity, but the two-way distributed
operation also opens new fields of investigation.
|
1210.2872
|
Data Mining and Its Applications for Knowledge Management: A Literature
Review from 2007 to 2012
|
cs.DB
|
Data mining is one of the most important steps of the knowledge discovery in
databases process and is considered a significant subfield of knowledge
management. Research in data mining continues to grow in business and in
learning organizations, and is expected to do so over the coming decades. This
review paper explores the
applications of data mining techniques which have been developed to support
knowledge management process. The journal articles indexed in ScienceDirect
Database from 2007 to 2012 are analyzed and classified. The discussion on the
findings is divided into 4 topics: (i) knowledge resource; (ii) knowledge types
and/or knowledge datasets; (iii) data mining tasks; and (iv) data mining
techniques and applications used in knowledge management. The article first
briefly describes the definition of data mining and data mining functionality.
Then the knowledge management rationale and major knowledge management tools
integrated in knowledge management cycle are described. Finally, the
applications of data mining techniques in the process of knowledge management
are summarized and discussed.
|
1210.2877
|
Efficient Solution to the 3D Problem of Automatic Wall Paintings
Reassembly
|
cs.CV math.DG
|
This paper introduces a new approach for the automated reconstruction -
reassembly of fragmented objects having one surface near to plane, on the basis
of the 3D representation of their constituent fragments. The whole process
starts by 3D scanning of the available fragments. The obtained representations
are properly processed so that they can be tested for possible matches. Next,
four novel criteria are introduced, that lead to the determination of pairs of
matching fragments. These criteria have been chosen so that the whole process
imitates the instinctive reassembly method that dedicated scholars apply. The
first criterion exploits the volume of the gap between two properly placed
fragments. The second one considers the fragments' overlapping in each possible
matching position. Criteria 3 and 4 employ principles from the calculus of
variations
to obtain bounds for the area and the mean curvature of the contact surfaces
and the length of contact curves, which must hold if the two fragments match.
The method has been applied, with great success, both in the reconstruction of
objects artificially broken by the authors and, most importantly, in the
virtual reassembling of parts of wall paintings belonging to the Mycenaean
civilization (c. 1300 B.C.), excavated in a highly fragmented condition in
Tiryns, Greece.
|
1210.2882
|
Online Adaptive Fault Tolerant based Feedback Control Scheduling
Algorithm for Multiprocessor Embedded Systems
|
cs.SY cs.OS
|
In recent years, the use of the Feedback Control Scheduling Algorithm (FCSA)
in the control-scheduling co-design of multiprocessor embedded systems has
increased. FCSA provides Quality of Service (QoS) in terms of overall system
performance and resource allocation in open and unpredictable environments.
FCSA uses a quality-control feedback loop to keep CPU utilization under a
desired utilization bound by avoiding overloading and a high deadline miss
ratio. An integrated fault-tolerance (FT) based FCSA design methodology
guarantees that Safety Critical (SC) tasks will meet their deadlines in the
presence of faults. However, the current FCSA design model does not provide an
optimal solution under dynamic load fluctuations. This paper presents a novel
methodology for designing an online adaptive fault-tolerance based feedback
control algorithm for multiprocessor embedded systems. This procedure is
important for control-scheduling co-design for multiprocessor embedded systems.
|
1210.2897
|
A Proposed General Method for Parameter Estimation of Noise Corrupted
Oscillator Systems
|
cs.SY physics.data-an
|
This paper proposes a means to estimate the parameters of noise-corrupted
oscillator systems. An application to a submarine combat control system (CCS)
rack is described as exemplary of the method.
|
1210.2935
|
Local Bifurcations in DC-DC Converters
|
cs.SY math.DS nlin.CD
|
Three local bifurcations in DC-DC converters are reviewed. They are
period-doubling bifurcation, saddle-node bifurcation, and Neimark bifurcation.
A general sampled-data model is employed to study the types of loss of
stability of the nominal (periodic) solution and their connection with local
bifurcations. This yields more accurate predictions of instability and
bifurcation than the averaging approach. Examples of bifurcations associated
with instabilities in DC-DC converters are given.
|
1210.2967
|
Robust Analog Function Computation via Wireless Multiple-Access Channels
|
cs.IT cs.DC cs.MA math.IT
|
Various wireless sensor network applications involve the computation of a
pre-defined function of the measurements without the need for reconstructing
each individual sensor reading. Widely-considered examples of such functions
include the arithmetic mean and the maximum value. Standard approaches to the
computation problem separate computation from communication: quantized sensor
readings are transmitted interference-free to a fusion center that reconstructs
each sensor reading and subsequently computes the sought function value. Such
separation-based computation schemes are generally highly inefficient as a
complete reconstruction of individual sensor readings is not necessary for the
fusion center to compute a function of them. In particular, if the mathematical
structure of the wireless channel is suitably matched (in some sense) to the
function, then channel collisions induced by concurrent transmissions of
different nodes can be beneficially exploited for computation purposes.
Therefore, in this paper a practically relevant analog computation scheme is
proposed that allows for an efficient estimate of linear and nonlinear
functions over the wireless multiple-access channel. After analyzing the
asymptotic properties of the estimation error, numerical simulations are
presented to show the potential for huge performance gains when compared with
time-division multiple-access based computation schemes.
|
1210.2984
|
Learning Onto-Relational Rules with Inductive Logic Programming
|
cs.AI cs.DB cs.LG cs.LO
|
Rules complement and extend ontologies on the Semantic Web. We refer to these
rules as onto-relational since they combine DL-based ontology languages and
Knowledge Representation formalisms supporting the relational data model within
the tradition of Logic Programming and Deductive Databases. Rule authoring is a
very demanding Knowledge Engineering task which can be automated, though only
partially, by applying Machine Learning algorithms. In this chapter we show how
Inductive Logic Programming (ILP), born at the intersection of Machine Learning
and Logic Programming and considered as a major approach to Relational
Learning, can be adapted to Onto-Relational Learning. For the sake of
illustration, we provide details of a specific Onto-Relational Learning
solution to the problem of learning rule-based definitions of DL concepts and
roles with ILP.
|
1210.3012
|
Coding for Fast Content Download
|
cs.IT cs.DC math.IT
|
We study the fundamental trade-off between storage and content download time.
We show that the download time can be significantly reduced by dividing the
content into chunks, encoding it to add redundancy and then distributing it
across multiple disks. We determine the download time for two content access
models - the fountain and fork-join models that involve simultaneous content
access, and individual access from enqueued user requests respectively. For the
fountain model we explicitly characterize the download time, while in the
fork-join model we derive the upper and lower bounds. Our results show that
coding reduces download time, through the diversity of distributing the data
across more disks, even for the same total storage used.
|
1210.3075
|
On Walsh code assignment
|
cs.IT math.IT
|
The paper considers the problem of orthogonal variable spreading Walsh-code
assignments. The aim of the paper is to provide assignments that can avoid both
complicated signaling from the BS to the users and blind rate and code
detection amongst a great number of possible codes. The assignments considered
here use a partition of all users into several pools. Each pool can use its own
codes that are different for different pools. Each user has only a few codes
assigned to it within the pool. We state the problem as a combinatorial one
expressed in terms of a binary n x k matrix M, where n is the number of users
and k is the number of Walsh codes in the pool. A solution to the problem is
given as a construction of M, which has the assignment property defined in the
paper. Two constructions of such M are presented under different conditions on
n and k. The first construction is optimal in the sense that it gives the
minimal number of Walsh codes assigned to each user for given n and k. The
optimality follows from a proved necessary condition for the existence of M
with the assignment property. In addition, we propose a simple algorithm of
optimal assignment for the first construction.
|
1210.3098
|
Near-optimal compressed sensing guarantees for total variation
minimization
|
math.NA cs.CV cs.IT math.IT
|
Consider the problem of reconstructing a multidimensional signal from an
underdetermined set of measurements, as in the setting of compressed sensing.
Without any additional assumptions, this problem is ill-posed. However, for
signals such as natural images or movies, the minimal total variation estimate
consistent with the measurements often produces a good approximation to the
underlying signal, even if the number of measurements is far smaller than the
ambient dimensionality. This paper extends recent reconstruction guarantees for
two-dimensional images to signals of arbitrary dimension d>1 and to isotropic
total variation problems. To be precise, we show that a multidimensional signal
x can be reconstructed from O(s d log(N^d)) linear measurements using total
variation minimization to within a factor of the best s-term approximation of
its gradient. The reconstruction guarantees we provide are necessarily optimal
up to polynomial factors in the spatial dimension d.
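For readers unfamiliar with the objective being minimized, the following sketch computes one common discrete isotropic total-variation seminorm (forward differences, zero-padded at the boundary); the exact discretization used in the paper may differ:

```python
import numpy as np

def isotropic_tv(x):
    """Discrete isotropic TV of a 2-D array: the sum over pixels of
    sqrt(dx^2 + dy^2), with forward differences and zero padding at
    the boundary."""
    dx = np.zeros_like(x, dtype=float)
    dy = np.zeros_like(x, dtype=float)
    dx[:, :-1] = np.diff(x, axis=1)   # horizontal forward difference
    dy[:-1, :] = np.diff(x, axis=0)   # vertical forward difference
    return float(np.sqrt(dx**2 + dy**2).sum())

flat = np.ones((4, 4))
step = np.array([[0, 1], [0, 1]], dtype=float)
print(isotropic_tv(flat))  # 0.0 -- constant images have zero TV
print(isotropic_tv(step))  # 2.0 -- one unit jump in each of two rows
```

Images with sparse gradients (piecewise-constant regions) have small TV, which is why the TV-minimal estimate favours natural-image structure.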
|
1210.3101
|
Unique Decoding of General AG Codes
|
cs.IT math.IT
|
A unique decoding algorithm for general AG codes, namely multipoint
evaluation codes on algebraic curves, is presented. It is a natural
generalization of the previous decoding algorithm, which was only for one-point
AG codes. As such, it retains the same advantages of fast speed and regular
structure as the previous algorithm. Compared with other known decoding
algorithms for general AG codes, it is much simpler in its description and
implementation.
|
1210.3121
|
A simple model clarifies the complicated relationships of complex
networks
|
cs.SI nlin.AO physics.soc-ph
|
Real-world networks such as the Internet and WWW have many common traits.
Until now, hundreds of models have been proposed to characterize these traits
for understanding the networks. Because different models used very different
mechanisms, it is widely believed that these traits originate from different
causes. However, we find that a simple model based on optimisation can produce
many traits, including scale-free, small-world, ultra small-world,
Delta-distribution, compact, fractal, regular and random networks. Moreover, by
revising the proposed model, networks with community structure can also be
generated. Together, the model and its revised versions illustrate the
complicated relationships of complex networks. The model brings a new universal
perspective to the understanding of complex networks and provides a universal
method to model complex networks from the viewpoint of optimisation.
|
1210.3131
|
A Survey on Web Spam Detection Methods: Taxonomy
|
cs.IR cs.CR
|
Web spam refers to techniques that try to manipulate search engine
ranking algorithms in order to raise the position of web pages in search engine
results. In the best case, spammers encourage viewers to visit their sites, and
provide undeserved advertisement gains to the page owner. In the worst case,
they use malicious content in their pages and try to install malware on the
victim's machine. Spammers use three kinds of spamming techniques to get a
higher score in ranking: link-based techniques, hiding techniques, and
content-based techniques. Existing spam pages cause distrust in search
engine results. This not only wastes the time of visitors, but also wastes lots
of search engine resources. Hence, spam detection methods have been proposed as
a solution for web spam in order to reduce the negative effects of spam pages.
Experimental results show that some of these techniques work well and
can find spam pages more accurately than others. This paper classifies web
spam techniques and the related detection methods.
|
1210.3139
|
A Benchmark to Select Data Mining Based Classification Algorithms For
Business Intelligence And Decision Support Systems
|
cs.DB cs.LG
|
DSS serve the management, operations, and planning levels of an organization
and help to make decisions, which may be rapidly changing and not easily
specified in advance. Data mining plays a vital role in extracting important
information to support decision making in a decision support system.
Integration of data mining and decision support systems (DSS) can lead to
improved performance and can enable the tackling of new types of problems.
Artificial Intelligence methods are improving the quality of decision support,
and have become embedded in many applications, ranging from anti-lock
automobile brakes to today's interactive search engines. AI provides various
machine learning techniques to support data mining. Classification is one
of the main and valuable tasks of data mining. Several types of classification
algorithms have been suggested, tested and compared to determine the future
trends based on unseen data. There has been no single algorithm found to be
superior over all others for all data sets. The objective of this paper is to
compare various classification algorithms that have been frequently used in
data mining for decision support systems. Three decision trees based
algorithms, one artificial neural network, one statistical, one support vector
machine with and without AdaBoost, and one clustering algorithm are tested and
compared on four data sets from different domains in terms of predictive
accuracy, error rate, classification index, comprehensibility and training
time. Experimental results demonstrate that Genetic Algorithm (GA) and support
vector machine based algorithms are better in terms of predictive accuracy.
SVM without AdaBoost is the first choice when both speed and predictive
accuracy matter. AdaBoost improves the accuracy of SVM, but at the cost of a
long training time.
|
1210.3165
|
Computationally Efficient Implementation of Convolution-based Locally
Adaptive Binarization Techniques
|
cs.CV
|
One of the most important steps of document image processing is binarization.
The computational requirements of locally adaptive binarization techniques make
them unsuitable for devices with limited computing facilities. In this paper,
we have presented a computationally efficient implementation of convolution
based locally adaptive binarization techniques keeping the performance
comparable to the original implementation. The computational complexity has
been reduced from O(W^2 N^2) to O(W N^2), where W x W is the window size and N x N is the
image size. Experiments over benchmark datasets show that the computation time
has been reduced by 5 to 15 times depending on the window size while memory
consumption remains the same with respect to the state-of-the-art algorithmic
implementation.
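The abstract does not spell out the implementation, but the standard way to make a W x W box convolution cost independent of W is a summed-area (integral) image. The sketch below is an assumption in that spirit, not the authors' code; it computes the clipped local mean that underlies Niblack/Sauvola-style adaptive thresholds:

```python
import numpy as np

def box_mean(img, w):
    """Local mean over a w x w window at every pixel, via an integral
    image: each window sum costs O(1), so the whole pass is O(N^2)
    regardless of w (direct convolution would be O(w^2 N^2))."""
    h, width = img.shape
    S = np.zeros((h + 1, width + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    r = w // 2
    out = np.empty((h, width))
    for i in range(h):
        t, b = max(0, i - r), min(h, i + r + 1)       # clipped rows
        for j in range(width):
            l, rgt = max(0, j - r), min(width, j + r + 1)  # clipped cols
            win = S[b, rgt] - S[t, rgt] - S[b, l] + S[t, l]
            out[i, j] = win / ((b - t) * (rgt - l))
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
m = box_mean(img, 3)
print(m[2, 2])  # 12.0: mean of the central 3x3 block of 0..24
```

A local threshold such as Niblack's T = mean + k * std can then be assembled from this mean and an analogous integral image of squared intensities.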
|
1210.3187
|
An asymptotically optimal push-pull method for multicasting over a
random network
|
cs.IT cs.NI math.IT
|
We consider allcast and multicast flow problems where either all of the nodes
or only a subset of the nodes may be in session. Traffic from each node in the
session has to be sent to every other node in the session. If the session does
not consist of all the nodes, the remaining nodes act as relays. The nodes are
connected by undirected links whose capacities are independent and identically
distributed random variables. We study the asymptotics of the capacity region
(with network coding) in the limit of a large number of nodes, and show that
the normalized sum rate converges to a constant almost surely. We then provide
a decentralized push-pull algorithm that asymptotically achieves this
normalized sum rate without network coding.
|
1210.3210
|
Fitness Landscape-Based Characterisation of Nature-Inspired Algorithms
|
cs.NE
|
A significant challenge in nature-inspired algorithmics is the identification
of specific characteristics of problems that make them harder (or easier) to
solve using specific methods. The hope is that, by identifying these
characteristics, we may more easily predict which algorithms are best-suited to
problems sharing certain features. Here, we approach this problem using fitness
landscape analysis. Techniques already exist for measuring the "difficulty" of
specific landscapes, but these are often designed solely with evolutionary
algorithms in mind, and are generally specific to discrete optimisation. In
this paper we develop an approach for comparing a wide range of continuous
optimisation algorithms. Using a fitness landscape generation technique, we
compare six different nature-inspired algorithms and identify which methods
perform best on landscapes exhibiting specific features.
|
1210.3234
|
Risks of Friendships on Social Networks
|
cs.SI physics.soc-ph
|
In this paper, we explore the risks of friends in social networks caused by
their friendship patterns, by using real-life social network data and starting
from a previously defined risk model. In particular, we observe that risks of
friendships can be mined by analyzing users' attitude towards friends of
friends. This allows us to give new insights into friendship and risk dynamics
on social networks.
|
1210.3241
|
Distributional Framework for Emergent Knowledge Acquisition and its
Application to Automated Document Annotation
|
cs.AI cs.IR
|
The paper introduces a framework for representation and acquisition of
knowledge emerging from large samples of textual data. We utilise a
tensor-based, distributional representation of simple statements extracted from
text, and show how one can use the representation to infer emergent knowledge
patterns from the textual data in an unsupervised manner. Examples of the
patterns we investigate in the paper are implicit term relationships or
conjunctive IF-THEN rules. To evaluate the practical relevance of our approach,
we apply it to annotation of life science articles with terms from MeSH (a
controlled biomedical vocabulary and thesaurus).
|
1210.3265
|
Multi-threaded ASP Solving with clasp
|
cs.LO cs.AI cs.DC
|
We present the new multi-threaded version of the state-of-the-art answer set
solver clasp. We detail its component and communication architecture and
illustrate how they support the principal functionalities of clasp. Also, we
provide some insights into the data representation used for different
constraint types handled by clasp. All this is accompanied by an extensive
experimental analysis of the major features related to multi-threading in
clasp.
|
1210.3266
|
Detecting dense communities in large social and information networks
with the Core & Peel algorithm
|
cs.SI cs.DS physics.soc-ph
|
Detecting and characterizing dense subgraphs (tight communities) in social
and information networks is an important exploratory tool in social network
analysis. Several approaches have been proposed that either (i) partition the
whole network into clusters, even in low density region, or (ii) are aimed at
finding a single densest community (and need to be iterated to find the next
one). As social networks grow larger both approaches (i) and (ii) result in
algorithms too slow to be practical, in particular when speed in analyzing the
data is required. In this paper we propose an approach that aims at balancing
computational efficiency with the expressiveness and manageability of the
output community representation. We define the notion of a partial dense cover
(PDC) of a graph. Intuitively, a PDC of a graph is a collection of sets of
nodes such that (a) each set induces a dense subgraph and the sets are
pairwise disjoint, and (b) their removal leaves the residual graph without
dense regions. Exact computation of a PDC is an NP-complete problem; thus, we
propose an efficient heuristic algorithm for computing a PDC, which we christen
Core and Peel. Moreover, we propose a novel benchmarking technique that allows
us to evaluate algorithms for computing PDCs using the classical IR concepts of
precision and recall, even without a gold standard. Tests on 25 social and
technological networks from the Stanford Large Network Dataset Collection
confirm that Core and Peel is efficient and attains very high precision and
recall.
|
1210.3269
|
The role of distances in the World Trade Web
|
physics.soc-ph cs.SI q-fin.GN
|
In the economic literature, geographic distances are considered fundamental
factors to be included in any theoretical model whose aim is the quantification
of the trade between countries. Quantitatively, distances enter into the
so-called gravity models that successfully predict the weight of non-zero trade
flows. However, it has been recently shown that gravity models fail to
reproduce the binary topology of the World Trade Web. In this paper a different
approach is presented: the formalism of exponential random graphs is used and
the distances are treated as constraints, to be imposed on a previously chosen
ensemble of graphs. Then, the information encoded in the geographical distances
is used to explain the binary structure of the World Trade Web, by testing it
on the degree-degree correlations and the reciprocity structure. This leads to
the definition of a novel null model that combines spatial and non-spatial
effects. The effectiveness of spatial constraints is compared to that of
non-spatial ones by means of the Akaike Information Criterion and the Bayesian
Information Criterion. Even if it is commonly believed that the World Trade Web
is strongly dependent on the distances, what emerges from our analysis is that
distances do not play a crucial role in shaping the World Trade Web binary
structure and that the information encoded into the reciprocity is far more
useful in explaining the observed patterns.
|
1210.3288
|
Unsupervised Detection and Tracking of Arbitrary Objects with Dependent
Dirichlet Process Mixtures
|
stat.ML cs.CV cs.LG
|
This paper proposes a technique for the unsupervised detection and tracking
of arbitrary objects in videos. It is intended to reduce the need for detection
and localization methods tailored to specific object types and serve as a
general framework applicable to videos with varied objects, backgrounds, and
image qualities. The technique uses a dependent Dirichlet process mixture
(DDPM) known as the Generalized Polya Urn (GPUDDPM) to model image pixel data
that can be easily and efficiently extracted from the regions in a video that
represent objects. This paper describes a specific implementation of the model
using spatial and color pixel data extracted via frame differencing and gives
two algorithms for performing inference in the model to accomplish detection
and tracking. This technique is demonstrated on multiple synthetic and
benchmark video datasets that illustrate its ability to, without modification,
detect and track objects with diverse physical characteristics moving over
non-uniform backgrounds and through occlusion.
|
1210.3307
|
Modelling an Automatic Proof Generator for Functional Dependency Rules
Using Colored Petri Net
|
cs.DB cs.FL cs.SE
|
Database administrators need to compute closure of functional dependencies
(FDs) for normalization of database systems and enforcing integrity rules.
Colored Petri net (CPN) is a powerful formal method for modelling and
verification of various systems. In this paper, we model Armstrong's axioms
for the automatic proof generation of a new FD rule from initial FD rules using
CPN. For this purpose, a CPN model of Armstrong's axioms is presented, with the
initial FDs considered in the model encoded as the initial color set. We then
search for the required FD in the state space of the model via model checking.
If it exists in the state space, a recursive ML code extracts the proof of this
FD rule using further searches in the state space of the model.
|
1210.3312
|
Artex is AnotheR TEXt summarizer
|
cs.IR cs.AI cs.CL
|
This paper describes Artex, another algorithm for Automatic Text
Summarization. In order to rank sentences, a simple inner product is calculated
between each sentence, a document vector (text topic) and a lexical vector
(vocabulary used by a sentence). Summaries are then generated by assembling the
highest ranked sentences. No rule-based linguistic post-processing is
necessary in order to obtain summaries. Tests over several datasets (from the
Document Understanding Conferences (DUC), Text Analysis Conferences (TAC), and
other evaluation campaigns) in French, English and Spanish have shown that the
summarizer achieves interesting results.
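A minimal sketch in the spirit of this description, with a simplified term-frequency weighting and the document average standing in for the topic vector (these simplifications are our assumptions, not Artex's actual vectors):

```python
from collections import Counter

def rank_sentences(sentences):
    """Score each sentence by the inner product of its term-frequency
    vector with the average document vector (a crude 'text topic'),
    and return sentence indices, best first."""
    bags = [Counter(s.lower().split()) for s in sentences]
    vocab = set().union(*bags)
    # Document vector: mean term frequency across sentences
    doc = {t: sum(b[t] for b in bags) / len(bags) for t in vocab}
    scores = [sum(b[t] * doc[t] for t in b) for b in bags]
    return sorted(range(len(sentences)), key=lambda i: -scores[i])

sents = ["the cat sat on the mat",
         "the cat ate fish",
         "dogs bark loudly"]
order = rank_sentences(sents)
print(order)  # [0, 1, 2]: the off-topic sentence ranks last
```

A summary is then assembled by taking the top-ranked indices in document order, mirroring the "assemble the highest ranked sentences" step of the abstract.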
|
1210.3326
|
Three dimensional tracking of gold nanoparticles using digital
holographic microscopy
|
physics.optics cs.CV
|
In this paper we present a digital holographic microscope to track gold
colloids in three dimensions. We report observations of 100nm gold particles in
motion in water. The expected signal and the chosen method of reconstruction
are described. We also discuss how to implement the numerical calculation
to reach real-time 3D tracking.
|
1210.3350
|
Enhanced Compressed Sensing Recovery with Level Set Normals
|
cs.CV
|
We propose a compressive sensing algorithm that exploits geometric properties
of images to recover images of high quality from few measurements. The image
reconstruction is done by iterating the two following steps: 1) estimation of
normal vectors of the image level curves and 2) reconstruction of an image
fitting the normal vectors, the compressed sensing measurements and the
sparsity constraint. The proposed technique naturally extends to nonlocal
operators and graphs to exploit the repetitive nature of textured images in
order to recover fine detail structures. In both cases, the problem is reduced
to a series of convex minimization problems that can be efficiently solved with
a combination of variable splitting and augmented Lagrangian methods, leading
to fast and easy-to-code algorithms. Extensive experiments show a clear
improvement over related state-of-the-art algorithms in the quality of the
reconstructed images and in the robustness of the proposed method to noise,
different kinds of images and reduced measurements.
|
1210.3354
|
Averting group failures in collective-risk social dilemmas
|
physics.soc-ph cs.SI q-bio.PE
|
Free-riding on a joint venture bears the risk of losing personal endowment as
the group may fail to reach the collective target due to insufficient
contributions. A collective-risk social dilemma emerges, which we here study in
the realm of the spatial public goods game with group-performance-dependent
risk levels. Instead of using an overall fixed value, we update the risk level
in each group based on the difference between the actual contributions and the
declared target. A single parameter interpolates between a step-like risk
function and virtual irrelevance of the group's performance in averting the
failure, thus bridging the two extremes constituting maximal and minimal
feedback. We show that stronger feedback between group performance and risk
level is in general more favorable for the successful evolution of public
cooperation, yet only if the collective target to be reached is moderate.
Paradoxically, if the goals are overambitious, intermediate feedback strengths
yield optimal conditions for cooperation. This can be explained by the
propagation of players that employ identical strategies but experience
different individual success while trying to cope with the collective-risk
social dilemma.
|
1210.3375
|
An Agent-based framework for cooperation in Supply Chain
|
cs.AI
|
Supply Chain coordination has become a critical success factor for Supply
Chain management (SCM) and for effectively improving the performance of
organizations in various industries. Companies are increasingly located at the
intersection of one or more corporate networks which are designated by "Supply
Chain". Managing this chain is mainly based on information sharing and
redeployment activities between the various links that comprise it. Several
attempts have been made by industrialists and researchers to educate
policymakers about the gains to be made by the implementation of cooperative
relationships. The approach presented in this paper is among the works
that aim to propose solutions related to distributed information systems for
Supply Chains, enabling the different actors of the chain to improve their
performance. We propose in particular solutions that focus on cooperation
between actors in the Supply Chain.
|
1210.3384
|
Inferring clonal evolution of tumors from single nucleotide somatic
mutations
|
cs.LG q-bio.PE q-bio.QM stat.ML
|
High-throughput sequencing allows the detection and quantification of
frequencies of somatic single nucleotide variants (SNV) in heterogeneous tumor
cell populations. In some cases, the evolutionary history and population
frequency of the subclonal lineages of tumor cells present in the sample can be
reconstructed from these SNV frequency measurements. However, automated methods
to do this reconstruction are not available and the conditions under which
reconstruction is possible have not been described.
We describe the conditions under which the evolutionary history can be
uniquely reconstructed from SNV frequencies from single or multiple samples
from the tumor population and we introduce a new statistical model, PhyloSub,
that infers the phylogeny and genotype of the major subclonal lineages
represented in the population of cancer cells. It uses a Bayesian nonparametric
prior over trees that groups SNVs into major subclonal lineages and
automatically estimates the number of lineages and their ancestry. We sample
from the joint posterior distribution over trees to identify evolutionary
histories and cell population frequencies that have the highest probability of
generating the observed SNV frequency data. When multiple phylogenies are
consistent with a given set of SNV frequencies, PhyloSub represents the
uncertainty in the tumor phylogeny using a partial order plot. Experiments on a
simulated dataset and two real datasets comprising tumor samples from acute
myeloid leukemia and chronic lymphocytic leukemia patients demonstrate that
PhyloSub can infer both linear (or chain) and branching lineages and its
inferences are in good agreement with ground truth, where it is available.
|
1210.3395
|
The Restricted Isometry Property for Random Block Diagonal Matrices
|
cs.IT math.IT math.PR
|
In Compressive Sensing, the Restricted Isometry Property (RIP) ensures that
robust recovery of sparse vectors is possible from noisy, undersampled
measurements via computationally tractable algorithms. It is by now well-known
that Gaussian (or, more generally, sub-Gaussian) random matrices satisfy the
RIP under certain conditions on the number of measurements. Their use can be
limited in practice, however, due to storage limitations, computational
considerations, or the mismatch of such matrices with certain measurement
architectures. These issues have recently motivated considerable effort towards
studying the RIP for structured random matrices. In this paper, we study the
RIP for block diagonal measurement matrices where each block on the main
diagonal is itself a sub-Gaussian random matrix. Our main result states that
such matrices can indeed satisfy the RIP but that the requisite number of
measurements depends on certain properties of the basis in which the signals
are sparse. In the best case, these matrices perform nearly as well as dense
Gaussian random matrices, despite having many fewer nonzero entries.
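As a quick empirical illustration (not the paper's analysis), one can build such a block-diagonal Gaussian matrix and check that it approximately preserves the norm of a sparse vector whose support is spread across blocks; the dimensions and sparsity below are arbitrary assumptions, and, as the paper's result suggests, a vector concentrated in a single block would behave worse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Block-diagonal measurement matrix: J blocks, each an m x n Gaussian
# block with entries of variance 1/m, so every column of A has unit
# expected squared norm.
J, m, n = 8, 64, 128
A = np.zeros((J * m, J * n))
for j in range(J):
    A[j*m:(j+1)*m, j*n:(j+1)*n] = rng.standard_normal((m, n)) / np.sqrt(m)

# A unit-norm sparse vector whose support is spread over the blocks.
x = np.zeros(J * n)
idx = rng.choice(J * n, size=20, replace=False)
x[idx] = rng.standard_normal(20)
x /= np.linalg.norm(x)

ratio = np.linalg.norm(A @ x)  # should be close to ||x||_2 = 1
print(ratio)
```

Here E||Ax||^2 = ||x||^2 by construction; an RIP statement is the much stronger claim that such concentration holds uniformly over all sparse vectors.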
|
1210.3404
|
A polygon-based interpolation operator for super-resolution imaging
|
cs.CV
|
We outline the super-resolution reconstruction problem posed as a
maximization of probability. We then introduce an interpolation method based on
polygonal pixel overlap, express it as a linear operator, and use it to improve
reconstruction. Polygon interpolation outperforms the simpler bilinear
interpolation operator and, unlike Gaussian modeling of pixels, requires no
parameter estimation. A free software implementation that reproduces the
results shown is provided.
|
1210.3420
|
Contrasting Multiple Social Network Autocorrelations for Binary
Outcomes, With Applications To Technology Adoption
|
cs.SI physics.soc-ph stat.ME
|
The rise of socially targeted marketing suggests that decisions made by
consumers can be predicted not only from their personal tastes and
characteristics, but also from the decisions of people who are close to them in
their networks. One obstacle to consider is that there may be several different
measures for "closeness" that are appropriate, either through different types
of friendships, or different functions of distance on one kind of friendship,
where only a subset of these networks may actually be relevant. Another is that
these decisions are often binary and more difficult to model with conventional
approaches, both conceptually and computationally. To address these issues, we
present a hierarchical model for individual binary outcomes that uses and
extends the machinery of the auto-probit method for binary data. We demonstrate
the behavior of the parameters estimated by the multiple network-regime
auto-probit model (m-NAP) under various sensitivity conditions, such as the
impact of the prior distribution and the nature of the structure of the
network, and demonstrate on several examples of correlated binary data in
networks of interest to Information Systems, including the adoption of Caller
Ring-Back Tones, whose use is governed by direct connection but explained by
additional network topologies.
|
1210.3427
|
On Multi-rate Sequential Data Transmission
|
cs.IT math.IT
|
In this report, we investigate a data transmission model in which a
sequence of data is broadcast to a number of receivers. The receivers, which
have different channel capacities, wish to decode the data sequentially at
different rates. Our results are applicable to a wide range of scenarios. For
instance, they can be employed in the broadcast streaming of a video clip over
the internet, so that receivers with different bandwidths can play the video at
different speeds. Receivers with greater bandwidths can enjoy smooth
playback, while receivers with smaller bandwidths can play the video at a
slower speed, or with short pauses for rebuffering.
|
1210.3435
|
Multiple Service providers sharing Spectrum using Cognitive Radio in
Wireless Communication Networks
|
cs.NI cs.IT math.IT
|
The current utilization of the spectrum is quite inefficient; consequently,
if used properly, there is no shortage of the spectrum that is at present
available. Therefore, it is anticipated that more flexible use of spectrum and
spectrum sharing between radio systems will be key enablers to facilitate the
successful implementation of future systems. Cognitive radio is known
as the most intelligent and promising technique for solving the problem of
spectrum sharing. In this paper, we consider a technique that allows the users
of multiple service providers to share the licensed spectrum of licensed
service providers. It is shown that the proposed technique reduces the call
blocking rate and improves spectrum utilization.
|
1210.3437
|
Comparing Spectrum Utilization using Fuzzy Logic System for
Heterogeneous Wireless Networks via Cognitive Radio
|
cs.NI cs.IT math.IT
|
At present, many works focus on spectrum allocation in wireless networks.
In this paper, we propose a cognitive, opportunistic spectrum-access approach
for heterogeneous wireless networks based on a Fuzzy Logic System (FLS).
Cognitive Radio is a technology in which a network or a wireless
system adapts its operating parameters to communicate efficiently while
avoiding interference with licensed users. By applying the FLS,
the available spectrum is utilized effectively with the help of
three antecedents, namely spectrum utilization efficiency, degree of mobility,
and distance from the primary user to the secondary users. The proposed scheme
is compared with a normal spectrum utilization method. Simulation results
show that the proposed Fuzzy Logic System is more efficient than the normal
spectrum utilization method.
|
1210.3438
|
Stochastic Surveillance Strategies for Spatial Quickest Detection
|
cs.RO cs.MA
|
We design persistent surveillance strategies for the quickest detection of
anomalies taking place in an environment of interest. From a set of predefined
regions in the environment, a team of autonomous vehicles collects noisy
observations, which a control center processes. The overall objective is to
minimize detection delay while maintaining the false alarm rate below a desired
threshold. We present joint (i) anomaly detection algorithms for the control
center and (ii) vehicle routing policies. For the control center, we propose
parallel cumulative sum (CUSUM) algorithms (one for each region) to detect
anomalies from noisy observations. For the vehicles, we propose a stochastic
routing policy, in which the regions to be visited are chosen according to a
probability vector. We study a stationary routing policy (the probability
vector is constant) as well as adaptive routing policies (the probability
vector varies in time as a function of the likelihood of regional anomalies). In the
context of stationary policies, we design a performance metric and minimize it
to design an efficient stationary routing policy. Our adaptive policy improves
upon the stationary counterpart by adaptively increasing the selection
probability of regions with high likelihood of anomaly. Finally, we show the
effectiveness of the proposed algorithms through numerical simulations and a
persistent surveillance experiment.
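A minimal sketch of the per-region CUSUM statistic for detecting a Gaussian mean shift (the parameters, threshold, and change magnitude below are illustrative assumptions, not those of the paper):

```python
import numpy as np

def cusum_gaussian(xs, mu0, mu1, sigma, h):
    """One-sided CUSUM for a mean shift mu0 -> mu1 in i.i.d. Gaussian
    observations: g_t = max(0, g_{t-1} + LLR(x_t)); alarm when g_t > h.
    Returns the alarm index, or None if no alarm is raised."""
    g = 0.0
    for t, x in enumerate(xs):
        # Log-likelihood ratio of N(mu1, sigma^2) vs N(mu0, sigma^2)
        g = max(0.0, g + (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2)
        if g > h:
            return t
    return None

rng = np.random.default_rng(1)
change = 100
xs = np.concatenate([rng.normal(0.0, 1.0, change),   # nominal region
                     rng.normal(1.5, 1.0, 200)])     # anomalous region
alarm = cusum_gaussian(xs, mu0=0.0, mu1=1.5, sigma=1.0, h=8.0)
print(alarm)
```

In the surveillance setting described above, one such statistic would run in parallel per region, fed by the observations the vehicles collect there; raising h lowers the false-alarm rate at the price of a longer detection delay.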
|
1210.3448
|
Notes on image annotation
|
cs.CV cs.HC
|
We are under the illusion that seeing is effortless, but frequently the
visual system is lazy and makes us believe that we understand something when in
fact we don't. Labeling a picture forces us to become aware of the difficulties
underlying scene understanding. Suddenly, the act of seeing is not effortless
anymore. We have to make an effort in order to understand parts of the picture
that we neglected at first glance.
In this report, an expert image annotator relates her experience on
segmenting and labeling tens of thousands of images. During this process, the
notes she took try to highlight the difficulties encountered, the solutions
adopted, and the decisions made in order to get a consistent set of
annotations. Those annotations constitute the SUN database.
|
1210.3449
|
Construction of Block Orthogonal STBCs and Reducing Their Sphere
Decoding Complexity
|
cs.IT math.IT
|
Construction of high rate Space Time Block Codes (STBCs) with low decoding
complexity has been studied widely using techniques such as sphere decoding and
non Maximum-Likelihood (ML) decoders such as the QR decomposition decoder with
M paths (QRDM decoder). Recently, Ren et al. presented a new class of STBCs
known as the block orthogonal STBCs (BOSTBCs), which could be exploited by the
QRDM decoders to achieve significant decoding complexity reduction without
performance loss. The block orthogonal property of the codes constructed was
however only shown via simulations. In this paper, we give analytical proofs
for the block orthogonal structure of various existing codes in the
literature, including the codes constructed by Ren et al. We show that codes
formed as the sum of Clifford Unitary Weight Designs (CUWDs) or Coordinate
Interleaved Orthogonal Designs (CIODs) exhibit block orthogonal structure. We
also provide new construction of block orthogonal codes from Cyclic Division
Algebras (CDAs) and Crossed-Product Algebras (CPAs). In addition, we show how
the block orthogonal property of the STBCs can be exploited to reduce the
decoding complexity of a sphere decoder using a depth first search approach.
Simulation results of the decoding complexity show a 30% reduction in the
number of floating point operations (FLOPS) of BOSTBCs as compared to STBCs
without the block orthogonal structure.
|
1210.3456
|
Bayesian Analysis for miRNA and mRNA Interactions Using Expression Data
|
stat.AP cs.LG q-bio.GN q-bio.MN stat.ML
|
MicroRNAs (miRNAs) are small RNA molecules composed of 19-22 nt, which play
important regulatory roles in post-transcriptional gene regulation by
inhibiting the translation of the mRNA into proteins or otherwise cleaving the
target mRNA. Inferring miRNA targets provides useful information for
understanding the roles of miRNA in biological processes that are potentially
involved in complex diseases. Statistical methodologies for point estimation,
such as the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm,
have been proposed to identify the interactions of miRNA and mRNA based on
sequence and expression data. In this paper, we propose using the Bayesian
LASSO (BLASSO) and the non-negative Bayesian LASSO (nBLASSO) to analyse the
interactions between miRNA and mRNA using expression data. The proposed
Bayesian methods explore the posterior distributions for those parameters
required to model the miRNA-mRNA interactions. These approaches can be used to
observe the inferred effects of the miRNAs on the targets by plotting the
posterior distributions of those parameters. For comparison purposes, the Least
Squares Regression (LSR), Ridge Regression (RR), LASSO, non-negative LASSO
(nLASSO), and the proposed Bayesian approaches were applied to four public
datasets. We concluded that nLASSO and nBLASSO perform best in terms of
sensitivity and specificity. Compared to the point estimate algorithms, which
only provide single estimates for those parameters, the Bayesian methods are
more meaningful and provide credible intervals, which take into account the
uncertainty of the inferred interactions of the miRNA and mRNA. Furthermore,
Bayesian methods naturally provide statistical significance to select
convincing inferred interactions, while point estimate algorithms require a
manually chosen threshold, which is less meaningful, to choose the possible
interactions.
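As a rough illustration of the sign-constrained point-estimate baseline this abstract compares against (not the paper's Bayesian method, and on hypothetical synthetic data), scikit-learn's `Lasso` with `positive=True` can emulate a non-negative LASSO; negating the design matrix constrains the inferred miRNA effects to be non-positive, matching the down-regulation assumption:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical synthetic example: expression of 5 candidate miRNAs (X)
# regressed against one target mRNA (y); true effects are negative,
# since miRNAs down-regulate their targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_coef = np.array([-1.5, -0.8, 0.0, 0.0, 0.0])
y = X @ true_coef + 0.1 * rng.normal(size=100)

# With positive=True, sklearn restricts coefficients to >= 0, so we
# negate X to constrain the fitted effects to <= 0 instead.
model = Lasso(alpha=0.05, positive=True)
model.fit(-X, y)
effects = -model.coef_  # recover the (non-positive) miRNA effects
print(effects)
```

The sparsity of the LASSO zeroes out the non-interacting candidates, but it yields only a single point estimate, which is precisely the limitation the Bayesian variants address with posterior credible intervals.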
|
1210.3512
|
Digital Network Coding Aided Two-way Relaying: Energy Minimization and
Queue Analysis
|
cs.IT math.IT
|
In this paper, we consider a three node, two-way relay system with digital
network coding over static channels where all link gains are assumed to be
constant during transmission. The aim is to minimize total energy consumption
while ensuring queue stability at all nodes, for a given pair of random packet
arrival rates. Specifically, we allow for a set of transmission modes and solve
for the optimal fraction of resources allocated to each mode, including
multiaccess uplink transmission mode and network coding broadcasting mode. In
addition, for the downlink, we find the condition that determines whether
superposition coding, with excess data over the better link and network-coded
data for both users, is energy efficient, and the corresponding optimization
problem is formulated and solved. To tackle the queue evolution in this network, we
present a detailed analysis of the queues at each node using a random
scheduling method that closely approximates the theoretical design, through a
two-dimensional Markov chain model.
|
1210.3563
|
Degrees of Freedom of Multi-hop MIMO Broadcast Networks with Delayed
CSIT
|
cs.IT math.IT
|
We study the sum degrees of freedom (DoF) of a class of multi-layer
relay-aided MIMO broadcast networks with delayed channel state information at
transmitters (CSIT). In the assumed network a K-antenna source intends to
communicate to K single-antenna destinations, with the help of N-2 layers of K
full-duplex single-antenna relays. We consider two practical delayed CSIT
feedback scenarios. If the source can obtain the CSI feedback signals from all
layers, we prove the optimal sum DoF of the network to be K/(1+1/2+...+1/K). If
the CSI feedback is only within each hop, we show that when K=2 the optimal sum
DoF is 4/3, and when K >= 3 the sum DoF 3/2 is achievable. Our results reveal
that the sum DoF performance in the considered class of N-layer MIMO broadcast
networks with delayed CSIT may depend not on N, the number of layers in the
network, but only on K, the number of antennas/terminals in each layer.
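The closed-form sum DoF stated for the full-feedback scenario can be evaluated exactly, e.g. with Python's `fractions` module:

```python
from fractions import Fraction

def sum_dof(K):
    """Optimal sum DoF K / (1 + 1/2 + ... + 1/K) for the
    full-feedback scenario stated in the abstract."""
    return Fraction(K) / sum(Fraction(1, i) for i in range(1, K + 1))

print(sum_dof(2))  # 4/3
print(sum_dof(3))  # 18/11
```

Note that `sum_dof(2) = 4/3` coincides with the optimal value reported for the per-hop feedback case with K=2.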
|
1210.3569
|
Autonomous Reinforcement of Behavioral Sequences in Neural Dynamics
|
cs.NE
|
We introduce a dynamic neural algorithm called Dynamic Neural (DN)
SARSA(\lambda) for learning a behavioral sequence from delayed reward.
DN-SARSA(\lambda) combines Dynamic Field Theory models of behavioral sequence
representation, classical reinforcement learning, and a computational
neuroscience model of working memory, called Item and Order working memory,
which serves as an eligibility trace. DN-SARSA(\lambda) is implemented on both
a simulated and real robot that must learn a specific rewarding sequence of
elementary behaviors from exploration. Results show DN-SARSA(\lambda) performs
on the level of the discrete SARSA(\lambda), validating the feasibility of
general reinforcement learning without compromising neural dynamics.
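For reference, the classical tabular SARSA(\lambda) that DN-SARSA(\lambda) is validated against can be sketched as follows; the toy chain task, state count, and all hyperparameters below are illustrative assumptions, not the paper's robot setup:

```python
import random

def sarsa_lambda(n_states=5, episodes=200, alpha=0.5, gamma=0.9,
                 lam=0.8, eps=0.1, seed=0):
    """Tabular SARSA(lambda) sketch on a toy chain: moving right from
    the last state yields reward 1 and ends the episode. The eligibility
    trace plays the role the paper assigns to Item-and-Order working memory."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0=left, 1=right

    def policy(s):  # epsilon-greedy, ties broken toward "right"
        if rng.random() < eps:
            return rng.randrange(2)
        return 0 if Q[s][0] > Q[s][1] else 1

    for _ in range(episodes):
        e = [[0.0, 0.0] for _ in range(n_states)]  # eligibility traces
        s, a = 0, policy(0)
        while True:
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            done = (a == 1 and s == n_states - 1)
            r = 1.0 if done else 0.0
            a2 = policy(s2)
            delta = r if done else r + gamma * Q[s2][a2]
            delta -= Q[s][a]
            e[s][a] += 1.0  # accumulating trace
            for i in range(n_states):
                for j in range(2):
                    Q[i][j] += alpha * delta * e[i][j]
                    e[i][j] *= gamma * lam
            if done:
                break
            s, a = s2, a2
    return Q
```

After training, the learned values should favor moving right everywhere, with the terminal action's value approaching 1.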
|
1210.3583
|
Adaptive Quantizers for Estimation
|
cs.IT math.IT
|
In this paper, adaptive estimation based on noisy quantized observations is
studied. A low complexity adaptive algorithm using a quantizer with adjustable
input gain and offset is presented. Three possible scalar models for the
parameter to be estimated are considered: constant, Wiener process and Wiener
process with deterministic drift. After showing that the algorithm is
asymptotically unbiased for estimating a constant, it is shown, in the three
cases, that the asymptotic mean squared error depends on the Fisher information
for the quantized measurements. It is also shown that the loss of performance
due to quantization depends approximately on the ratio of the Fisher
information for quantized and continuous measurements. At the end of the paper
the theoretical results are validated through simulation under two different
classes of noise, generalized Gaussian noise and Student's-t noise.
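A minimal sketch of the adjustable-offset idea for the constant-parameter case (all names and parameter values below are illustrative, not the paper's algorithm): with a one-bit quantizer whose threshold tracks the current estimate, a decreasing-step stochastic-approximation update converges to the noise median, here equal to the true constant:

```python
import random

def adaptive_quantized_estimate(samples, step0=1.0):
    """One-bit adaptive estimator sketch: the quantizer offset tracks the
    estimate, and the estimate moves by a decreasing step in the direction
    indicated by the quantized (sign) measurement."""
    est = 0.0
    for k, y in enumerate(samples, start=1):
        bit = 1.0 if y > est else -1.0   # quantized observation
        est += (step0 / k) * bit         # stochastic-approximation update
    return est

random.seed(1)
true_value = 2.5
samples = [true_value + random.gauss(0, 0.5) for _ in range(20000)]
est = adaptive_quantized_estimate(samples)
print(est)
```

Because the threshold adapts, the quantizer stays centered on the signal, which is the mechanism that keeps the Fisher-information loss relative to continuous measurements bounded.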
|
1210.3587
|
Inferring the Underlying Structure of Information Cascades
|
cs.SI cs.AI physics.soc-ph
|
In social networks, information and influence diffuse among users as
cascades. While the importance of studying cascades has been recognized in
various applications, it is difficult to observe the complete structure of
cascades in practice. Moreover, much less is known on how to infer cascades
based on partial observations. In this paper we study the cascade inference
problem following the independent cascade model, and provide a full treatment
from complexity to algorithms: (a) We propose the idea of consistent trees as
the inferred structures for cascades; these trees connect source nodes and
observed nodes with paths satisfying the constraints from the observed temporal
information. (b) We introduce metrics to measure the likelihood of consistent
trees as inferred cascades, as well as several optimization problems for
finding them. (c) We show that the decision problems for consistent trees are
in general NP-complete, and that the optimization problems are hard to
approximate. (d) We provide approximation algorithms with performance
guarantees on the quality of the inferred cascades, as well as heuristics. We
experimentally verify the efficiency and effectiveness of our inference
algorithms, using real and synthetic data.
|
1210.3609
|
Optimal Power Allocation Policy over Two Identical Gilbert-Elliott
Channels
|
cs.IT math.IT
|
We study the fundamental problem of optimal power allocation over two
identical Gilbert-Elliott (Binary Markov) communication channels. Our goal is
to maximize the expected discounted number of bits transmitted over an infinite
time span by judiciously choosing one of the four actions for each time slot:
1) allocating power equally to both channels, 2) allocating all the power to
channel 1, 3) allocating all the power to channel 2, and 4) allocating no power
to any of the channels. As the channel state is unknown when the power
allocation decision is made, we model this problem as a partially observable
Markov decision process (POMDP), and derive the optimal policy, which gives the optimal
action to take under different possible channel states. Two different
structures of the optimal policy are derived analytically and verified by
linear programming simulation. We also illustrate how to construct the optimal
policy by the combination of threshold calculation and linear programming
simulation once system parameters are known.
|
1210.3634
|
Quick Summary
|
cs.CL cs.AI
|
Quick Summary is an innovative implementation of an automatic document
summarizer that takes a document in the English language as input and evaluates
each sentence. The scanner, or evaluator, determines criteria based on each
sentence's grammatical structure and place in the paragraph. The program then
asks the user to specify the number of sentences to highlight. For example,
should the user ask for the three most important sentences, it would highlight
the first and most important sentence in green; commonly this is the sentence
containing the conclusion. Quick Summary then finds the second most important
sentence, usually called a satellite, and highlights it in yellow; this is
usually the topic sentence. Finally, the program finds the third most important
sentence and highlights it in red. Implementations of this technology are
useful in a society of information overload, where a person typically receives
42 emails a day (Microsoft). The paper is also a candid look at the
difficulties that machine learning has with textual translation, and it
discusses how to overcome the obstacles that historically prevented progress.
It proposes mathematical meta-data criteria that justify a sentence's place of
importance. Like tools for the study of relational symmetry in bioinformatics,
this tool seeks to classify words with greater clarity.
"Survey Finds Workers Average Only Three Productive Days per Week." Microsoft
News Center. Microsoft. Web. 31 Mar. 2012.
|
1210.3652
|
A Flexible Mixed Integer Programming framework for Nurse Scheduling
|
cs.DS cs.NE
|
In this paper, a nurse-scheduling model is developed using mixed integer
programming. It is deployed to a general care ward to replace and automate the
current manual approach to scheduling. The developed model differs from other
similar studies in that it optimizes both the hospital's requirements and nurse
preferences by allowing flexibility in the transfer of nurses between different
duties. The model also incorporates additional policies that are part of the
hospital's requirements but not part of the legislation. The hospital's key
mission is to ensure continuous ward care service with an appropriate number of
nursing staff and the right mix of nursing skills, and the planning and
scheduling is done to avoid additional non-essential cost for the hospital.
Nurses' preferences, such as the number of night shifts and consecutive rest
days, are taken into consideration. We also reformulate problems from another
paper that considers a penalty objective using the model but without the
flexible components. The models are built using AIMMS, which solves the problem
in a very short amount of time.
|
1210.3664
|
Secure Cooperative Regenerating Codes for Distributed Storage Systems
|
cs.IT math.IT
|
Regenerating codes enable trading off repair bandwidth for storage in
distributed storage systems (DSS). Due to their distributed nature, these
systems are intrinsically susceptible to attacks, and they may also be subject
to multiple simultaneous node failures. Cooperative regenerating codes allow
bandwidth efficient repair of multiple simultaneous node failures. This paper
analyzes storage systems that employ cooperative regenerating codes that are
robust to (passive) eavesdroppers. The analysis is divided into two parts,
studying both minimum bandwidth and minimum storage cooperative regenerating
scenarios. First, the secrecy capacity for minimum bandwidth cooperative
regenerating codes is characterized. Second, for minimum storage cooperative
regenerating codes, a secure file size upper bound and achievability results
are provided. These results establish the secrecy capacity for the minimum
storage scenario for certain special cases. In all scenarios, the achievability
results correspond to exact repair, and secure file size upper bounds are
obtained using min-cut analyses over a suitable secrecy graph representation of
DSS. The main achievability argument is based on an appropriate pre-coding of
the data to eliminate the information leakage to the eavesdropper.
|
1210.3667
|
A New Analysis of the DS-CDMA Cellular Downlink Under Spatial
Constraints
|
cs.IT math.IT
|
The direct-sequence code-division multiple access (DS-CDMA) cellular downlink
is modeled by a constrained random spatial model involving a fixed number of
base stations placed over a finite area with a minimum separation. The analysis
is driven by a new closed-form expression for the conditional outage
probability at each mobile, where the conditioning is with respect to the
network realization. The analysis features a flexible channel model, accounting
for path loss, Nakagami fading, and shadowing. By generating many random
networks and applying a given resource allocation policy, the distribution of
the rates provided to each user is obtained. In addition to determining the
average rate, the analysis can determine the transmission capacity of the
network and can characterize fairness in terms of the fraction of users that
achieve a specified rate. The analysis is used to compare a rate-control policy
against a power-control policy and investigate the influence of the minimum
base-station separation.
|
1210.3709
|
A Rank-Corrected Procedure for Matrix Completion with Fixed Basis
Coefficients
|
math.OC cs.IT cs.NA math.IT stat.ML
|
For the problems of low-rank matrix completion, the efficiency of the
widely-used nuclear norm technique may be challenged under many circumstances,
especially when certain basis coefficients are fixed, for example, the low-rank
correlation matrix completion in various fields such as the financial market
and the low-rank density matrix completion in quantum state tomography.
To seek a solution of high recovery quality beyond the reach of the nuclear
norm, in this paper, we propose a rank-corrected procedure using a nuclear
semi-norm to generate a new estimator. For this new estimator, we establish a
non-asymptotic recovery error bound. More importantly, we quantify the
reduction of the recovery error bound for this rank-corrected procedure.
Compared with the one obtained for the nuclear norm penalized least squares
estimator, this reduction can be substantial (around 50%). We also provide
necessary and sufficient conditions for rank consistency in the sense of Bach
(2008). Very interestingly, these conditions are highly related to the concept
of constraint nondegeneracy in matrix optimization. As a byproduct, our results
provide a theoretical foundation for the majorized penalty method of Gao and
Sun (2010) and Gao (2010) for structured low-rank matrix optimization problems.
Extensive numerical experiments demonstrate that our proposed rank-corrected
procedure can simultaneously achieve a high recovery accuracy and capture the
low-rank structure.
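The nuclear-norm baseline this procedure is compared against is built on singular-value soft-thresholding, the proximal operator of the nuclear norm. A minimal sketch (with assumed, synthetic data; this is not the paper's rank-corrected estimator):

```python
import numpy as np

def nuclear_prox(M, tau):
    """Soft-threshold the singular values of M: the proximal operator of
    the nuclear norm, the building block of the penalized least-squares
    estimators the rank-corrected procedure is compared against."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# rank-2 ground truth plus small noise
L = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 8))
M = L + 0.01 * rng.normal(size=(8, 8))
X = nuclear_prox(M, tau=0.5)
print(np.linalg.matrix_rank(X, tol=1e-6))
```

Thresholding zeroes the small noise singular values and recovers the low-rank structure, at the cost of uniformly shrinking the large singular values; correcting that shrinkage bias is the motivation for rank-corrected estimators.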
|
1210.3718
|
On the Role of Contrast and Regularity in Perceptual Boundary Saliency
|
cs.CV stat.AP
|
Mathematical Morphology proposes to extract shapes from images as connected
components of level sets. These methods prove very suitable for shape
recognition and analysis. We present a method to select the perceptually
significant (i.e., contrasted) level lines (boundaries of level sets), using
the Helmholtz principle as first proposed by Desolneux et al. Contrary to the
classical formulation by Desolneux et al., where level lines must be entirely
salient, the proposed method allows the detection of partially salient level
lines, thus resulting in more robust and more stable detections. We then tackle the
problem of combining two gestalts as a measure of saliency and propose a method
that reinforces detections. Results in natural images show the good performance
of the proposed methods.
|
1210.3729
|
Inference of Fine-grained Attributes of Bengali Corpus for Stylometry
Detection
|
cs.CL cs.CV
|
Stylometry, the science of inferring characteristics of the author from the
characteristics of documents written by that author, is a problem with a long
history and belongs to the core tasks of text categorization, which involve
authorship identification, plagiarism detection, forensic investigation,
computer security, and copyright and estate disputes. In this work, we present
a strategy for stylometry detection of documents written in Bengali. We adopt a
set of fine-grained attribute features with a set of lexical markers for the
analysis of the text and use three semi-supervised measures for making
decisions. Finally, a majority voting approach has been taken for final
classification. The system is fully automatic and language-independent.
Evaluation results of our attempt for Bengali author's stylometry detection
show reasonably promising accuracy in comparison to the baseline model.
|
1210.3735
|
On the Analysis of a Label Propagation Algorithm for Community Detection
|
cs.DC cs.SI physics.soc-ph
|
This paper initiates formal analysis of a simple, distributed algorithm for
community detection on networks. We analyze an algorithm that we call
\textsc{Max-LPA}, both in terms of its convergence time and in terms of the
"quality" of the communities detected. \textsc{Max-LPA} is an instance of a
class of community detection algorithms called \textit{label propagation}
algorithms. As far as we know, most analysis of label propagation algorithms
thus far has been empirical in nature and in this paper we seek a theoretical
understanding of label propagation algorithms. In our main result, we define a
clustered version of \er random graphs with clusters $V_1, V_2,..., V_k$ where
the probability $p$, of an edge connecting nodes within a cluster $V_i$ is
higher than $p'$, the probability of an edge connecting nodes in distinct
clusters. We show that even with fairly general restrictions on $p$ and $p'$
($p = \Omega(\frac{1}{n^{1/4-\epsilon}})$ for any $\epsilon > 0$, $p' =
O(p^2)$, where $n$ is the number of nodes), \textsc{Max-LPA} detects the
clusters $V_1, V_2,..., V_k$ in just two rounds. Based on this and on empirical
results, we conjecture that \textsc{Max-LPA} can correctly and quickly identify
communities on clustered \er graphs even when the clusters are much sparser,
i.e., with $p = \frac{c\log n}{n}$ for some $c > 1$.
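A simplified, deterministic sketch of such a label-propagation round (the analyzed Max-LPA uses random initial labels; here nodes start with their own ids, count labels over the closed neighbourhood, and break ties toward the larger label) can be written in a few lines; the two-clique graph below is an illustrative stand-in for the clustered random graphs of the paper:

```python
def max_lpa(adj, rounds=10):
    """Synchronous label propagation: every node starts with a unique
    label and repeatedly adopts the most frequent label in its closed
    neighbourhood (ties broken toward the larger label) until the
    labels stabilise or `rounds` is exhausted."""
    labels = {v: v for v in adj}
    for _ in range(rounds):
        new_labels = {}
        for v in adj:
            counts = {labels[v]: 1}  # count the node's own label too
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            new_labels[v] = max(counts, key=lambda l: (counts[l], l))
        if new_labels == labels:
            break
        labels = new_labels
    return labels

# Two 4-cliques joined by a single bridge edge (3-4):
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6]}
labels = max_lpa(adj)
print(labels)
```

On this example the labels stabilise after a few synchronous rounds, assigning one label per clique, which mirrors the two-round detection result the paper proves for dense clustered graphs.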
|