| id | title | categories | abstract |
|---|---|---|---|
1112.6098
|
Your browsing behavior for a Big Mac: Economics of Personal Information
Online
|
cs.HC cs.CY cs.SI
|
Most online services (Google, Facebook etc.) operate by providing a service
to users for free, and in return they collect and monetize personal information
(PI) of the users. This operational model is inherently economic, as the "good"
being traded and monetized is PI. This model is coming under increased scrutiny
as online services are moving to capture more PI of users, raising serious
privacy concerns. However, little is known about how users value different
types of PI while online, or about users' perceptions of how online service
providers exploit their PI.
In this paper, we study how users value different types of PI while online,
capturing the context through Experience Sampling. We were able to extract the
monetary value that 168 participants placed on different pieces of PI. We find
that users value PI related to their offline identities about three times more
than their browsing behavior. Users also value
information pertaining to financial transactions and social network
interactions more than activities like search and shopping. We also found that
while users are overwhelmingly in favor of exchanging their PI in return for
improved online services, they are uncomfortable if these same providers
monetize their PI.
|
1112.6108
|
Competition among reputations in the 2D Sznajd model: Spontaneous
emergence of democratic states
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We propose a modification in the Sznajd sociophysics model defined on the
square lattice. For this purpose, we consider reputation, a mechanism limiting
the agents' persuasive power. The reputation is introduced as a time-dependent
score, which can be positive or negative. This mechanism avoids dictatorship
(full consensus, all spins parallel) for a wide range of model parameters. We
consider two different situations: case 1, in which the agents' reputation
increases for each persuaded neighbor, and case 2, in which the agents'
reputation increases for each persuasion and decreases when a neighbor keeps
his opinion. Our results show that the introduction of reputation avoids full
consensus even for initial densities of up spins greater than 1/2. The
relaxation times follow a log-normal-like distribution in both cases, but they
are larger in case 2 due to the competition among reputations. In addition, we
show that the usual phase transition occurs and depends on the initial
concentration $d$ of individuals with the same opinion, but the critical points
$d_{c}$ in the two cases are different.
|
1112.6117
|
On the optimal frequency selectivity to maximize multiuser diversity in
an OFDMA scheduling system
|
cs.IT math.IT
|
We consider an orthogonal frequency division multiple access (OFDMA)
scheduling system. A scheduling unit block consists of contiguous multiple
subcarriers. Users are scheduled based on their block average throughput in a
proportional fair way. The multiuser diversity gain increases with the degree
and dynamic range of channel fluctuations. However, a decrease of the block
average throughput in an overly selective channel may lessen the sum rate as
well. In this paper, we first study optimal channel selectivity in view
of maximizing the maximum of the block average throughput of an arbitrary user.
Based on this study, we then propose a method to determine a per-user optimal
cyclic delay when cyclic delay diversity (CDD) is used to enhance the sum rate
by increasing channel selectivity for a limited fluctuating channel. We show
that the proposed technique achieves better performance than a conventional
fixed cyclic delay scheme and that the throughput is very close to the optimal
sum rate possible with CDD.
|
1112.6209
|
Building high-level features using large scale unsupervised learning
|
cs.LG
|
We consider the problem of building high-level, class-specific feature
detectors from only unlabeled data. For example, is it possible to learn a face
detector using only unlabeled images? To answer this, we train a 9-layered
locally connected sparse autoencoder with pooling and local contrast
normalization on a large dataset of images (the model has 1 billion
connections, the dataset has 10 million 200x200 pixel images downloaded from
the Internet). We train this network using model parallelism and asynchronous
SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to
what appears to be a widely-held intuition, our experimental results reveal
that it is possible to train a face detector without having to label images as
containing a face or not. Control experiments show that this feature detector
is robust not only to translation but also to scaling and out-of-plane
rotation. We also find that the same network is sensitive to other high-level
concepts such as cat faces and human bodies. Starting with these learned
features, we trained our network to obtain 15.8% accuracy in recognizing 20,000
object categories from ImageNet, a leap of 70% relative improvement over the
previous state-of-the-art.
|
1112.6210
|
Vectorial FCSR constructed on totally ramified extension of the p-adic
numbers
|
cs.IT cs.CR math.IT
|
In this paper, we introduce a vectorial conception of d-FCSRs to build these
registers over any finite field. We describe the structure of d-vectorial FCSRs
and we develop an analysis to obtain basic properties like periodicity and the
existence of maximal length sequences. To illustrate these vectorial d-FCSRs,
we present simple examples and compare them with those of Goresky, Klapper and
Xu. Keywords: LFSR, FCSR, vectorial FCSR, d-FCSR, sequences, periodicity,
p-adic, π-adic, maximal period.
|
1112.6212
|
Diffusion Adaptation over Networks under Imperfect Information Exchange
and Non-stationary Data
|
math.OC cs.SI physics.soc-ph stat.CO
|
Adaptive networks rely on in-network and collaborative processing among
distributed agents to deliver enhanced performance in estimation and inference
tasks. Information is exchanged among the nodes, usually over noisy links. The
combination weights that are used by the nodes to fuse information from their
neighbors play a critical role in influencing the adaptation and tracking
abilities of the network. This paper first investigates the mean-square
performance of general adaptive diffusion algorithms in the presence of various
sources of imperfect information exchanges, quantization errors, and model
non-stationarities. Among other results, the analysis reveals that link noise
over the regression data modifies the dynamics of the network evolution in a
distinct way, and leads to biased estimates in steady-state. The analysis also
reveals how the network mean-square performance is dependent on the combination
weights. We use these observations to show how the combination weights can be
optimized and adapted. Simulation results illustrate and match well with the
theoretical findings.
|
1112.6219
|
Document Clustering based on Topic Maps
|
cs.IR cs.AI
|
The importance of document clustering is now widely acknowledged by
researchers for better management, smart navigation, efficient filtering, and
concise summarization of large collections of documents like the World Wide
Web (WWW). The next challenge lies in performing clustering based on the
semantic contents of the documents. The problem of document clustering has two
main components: (1) representing the document in a form that inherently
captures the semantics of the text, which may also help to reduce the
dimensionality of the document; and (2) defining a similarity measure, based
on this semantic representation, that assigns higher numerical values to
document pairs with a stronger semantic relationship. The feature space of
documents can be very challenging for document clustering: a document may
contain multiple topics, a large set of class-independent general words, and
only a handful of class-specific core words. With these features in mind,
traditional agglomerative clustering algorithms, which are based on either the
Document Vector Model (DVM) or the Suffix Tree Model (STC), are less efficient
in producing results with high cluster quality. This paper introduces a new
approach for document clustering based on the Topic Map representation of the
documents. Each document is transformed into a compact form. A similarity
measure is proposed based on the information inferred from the topic map data
and structures. The suggested method is implemented using agglomerative
hierarchical clustering and tested on standard information retrieval (IR)
datasets. The comparative experiments reveal that the proposed approach is
effective in improving cluster quality.
|
1112.6220
|
Optimal decentralized control of coupled subsystems with control sharing
|
cs.SY math.OC
|
Subsystems that are coupled due to dynamics and costs arise naturally in
various communication applications. In many such applications the control
actions are shared between different control stations giving rise to a
\emph{control sharing} information structure. Previous studies of
control sharing have concentrated on the linear quadratic Gaussian setup and a
solution approach tailored to continuous-valued control actions. In this
paper, a three-step solution approach for finite-valued control actions is
presented.
In the first step, a person-by-person approach is used to identify redundant
data or a sufficient statistic for local information at each control station.
In the second step, the common-information based approach of Nayyar et al.\
(2011) is used to find a sufficient statistic for the common information shared
between all control stations and to obtain a dynamic programming decomposition.
In the third step, the specifics of the model are used to simplify the
sufficient statistic and the dynamic program. As an example, an exact solution
of a two-user multiple access broadcast system is presented.
|
1112.6222
|
A comparison of two suffix tree-based document clustering algorithms
|
cs.IR cs.AI
|
Document clustering is an unsupervised approach extensively used to navigate,
filter, summarize and manage large collections of document repositories like
the World Wide Web (WWW). Recently, the focus in this domain has shifted from
traditional vector-based document similarity to suffix tree-based document
similarity, as the latter offers a more semantic representation of the text
present in the document. In this paper, we compare and contrast two recently
introduced approaches to document clustering based on the suffix tree data
model. The first is an efficient phrase-based document clustering, which
extracts phrases from documents to form a compact document representation and
uses a similarity measure based on a common suffix tree to cluster the
documents. The second approach is a frequent word/word-meaning sequence based
document clustering, which similarly extracts common word sequences from the
documents and uses the common sequence/common word-meaning sequence to build
the compact representation, and finally applies a document clustering approach
to cluster the compact documents. Both algorithms use agglomerative
hierarchical document clustering to perform the actual clustering step; the
approaches differ mainly in the extraction of phrases, the representation of
the compact document, and the similarity measures used for clustering. This
paper investigates the computational aspects of the two algorithms and the
quality of the results they produce.
|
1112.6231
|
Low and Upper Bound of Approximate Sequence for the Entropy Rate of
Binary Hidden Markov Processes
|
cs.IT math.IT
|
In this paper, the approximate sequence for the entropy rate of some binary
hidden Markov models is shown to have two bounding sequences, a lower bound
sequence and an upper bound sequence. The error of the approximate sequence is
bounded by a geometric sequence with a scale factor less than 1, which
decreases quickly to zero. This helps to understand the convergence of the
entropy rate of generic hidden Markov models, and it provides a theoretical
basis for estimating the entropy rate of some hidden Markov models to any
accuracy.
|
1112.6234
|
Sparse Recovery from Nonlinear Measurements with Applications in Bad
Data Detection for Power Networks
|
cs.IT cs.LG cs.SY math.IT
|
In this paper, we consider the problem of sparse recovery from nonlinear
measurements, which has applications in state estimation and bad data detection
for power networks. An iterative mixed $\ell_1$ and $\ell_2$ convex program is
used to estimate the true state by locally linearizing the nonlinear
measurements. When the measurements are linear, through using the almost
Euclidean property for a linear subspace, we derive a new performance bound for
the state estimation error under sparse bad data and additive observation
noise. As a byproduct, in this paper we provide sharp bounds on the almost
Euclidean property of a linear subspace, using the "escape-through-the-mesh"
theorem from geometric functional analysis. When the measurements are
nonlinear, we give conditions under which the solution of the iterative
algorithm converges to the true state even though the locally linearized
measurements may not be the actual nonlinear measurements. We numerically
evaluate our iterative convex programming approach to perform bad data
detection in nonlinear electrical power network problems. We are able to use
semidefinite programming to verify the conditions for convergence of the
proposed iterative sparse recovery algorithms from nonlinear measurements.
|
1112.6235
|
Detecting a Vector Based on Linear Measurements
|
math.ST cs.IT math.IT stat.TH
|
We consider a situation where the state of a system is represented by a
real-valued vector. Under normal circumstances, the vector is zero, while an
event manifests as non-zero entries in this vector, possibly few. Our interest
is in the design of algorithms that can reliably detect events (i.e., test
whether the vector is zero or not) with the least amount of information. We
place ourselves in a situation, now common in the signal processing literature,
where information about the vector comes in the form of noisy linear
measurements. We derive information bounds in an active learning setup and
exhibit some simple near-optimal algorithms. In particular, our results show
that the task of detection within this setting is at once much easier and
simpler than, and different from, the tasks of estimation and support
recovery.
|
1112.6269
|
Automated PolyU Palmprint sample Registration and Coarse Classification
|
cs.CV
|
Biometric based authentication for secured access to resources has gained
importance, due to their reliable, invariant and discriminating features.
Palmprint is one such biometric entity. Prior to classification and
identification, registering a sample palmprint is an important activity. In
this paper we propose a computationally effective method for automated
registration of samples from the PolyU palmprint database. In our approach we
preprocess the sample and trace the border to find the point nearest to the
center of the sample. The angle between the vector representing this nearest
point and the vector passing through the center is used for automated palm
sample registration. The angle of inclination between the start and end points
of the heart line and the life line is used for basic classification of
palmprint samples into a left class and a right class.
|
1112.6275
|
Reasoning About Strategies: On the Model-Checking Problem
|
cs.LO cs.MA math.LO
|
In open systems verification, to formally check for reliability, one needs an
appropriate formalism to model the interaction between agents and express the
correctness of the system no matter how the environment behaves. An important
contribution in this context is given by modal logics for strategic ability, in
the setting of multi-agent games, such as ATL, ATL\star, and the like.
Recently, Chatterjee, Henzinger, and Piterman introduced Strategy Logic, which
we denote here by CHP-SL, with the aim of getting a powerful framework for
reasoning explicitly about strategies. CHP-SL is obtained by using first-order
quantifications over strategies and has been investigated in the very specific
setting of two-agent turn-based games, where a non-elementary model-checking
algorithm has been provided. While CHP-SL is a very expressive logic, we claim
that it does not fully capture the strategic aspects of multi-agent systems. In
this paper, we introduce and study a more general strategy logic, denoted SL,
for reasoning about strategies in multi-agent concurrent games. We prove that
SL includes CHP-SL, while maintaining a decidable model-checking problem. In
particular, the algorithm we propose is computationally not harder than the
best one known for CHP-SL. Moreover, we prove that such a problem for SL is
NonElementarySpace-hard. This negative result has spurred us to investigate
here syntactic fragments of SL, strictly subsuming ATL\star, with the hope of
obtaining an elementary model-checking problem. Among others, we study
sublogics SL[NG], SL[BG], and SL[1G]. They encompass formulas in a special
prenex normal form having, respectively, nested temporal goals, Boolean
combinations of goals, and a single goal at a time. For these logics, we
prove that the model-checking problem for SL[1G] is 2ExpTime-complete, thus not
harder than the one for ATL\star.
|
1112.6286
|
Visualization and Analysis of Frames in Collections of Messages: Content
Analysis and the Measurement of Meaning
|
cs.CL
|
A step-by-step introduction is provided on how to generate a semantic map
from a collection of messages (full texts, paragraphs or statements) using
freely available software and/or SPSS for the relevant statistics and the
visualization. The techniques are discussed in the various theoretical contexts
of (i) linguistics (e.g., Latent Semantic Analysis), (ii) sociocybernetics and
social systems theory (e.g., the communication of meaning), and (iii)
communication studies (e.g., framing and agenda-setting). We distinguish
between the communication of information in the network space (social network
analysis) and the communication of meaning in the vector space. The vector
space can be considered an architecture generated by the network of relations
in the network space; words are then not only related, but also
positioned. These positions are expected rather than observed and therefore one
can communicate meaning. Knowledge can be generated when these meanings can
recursively be communicated and therefore also further codified.
|
1112.6291
|
Descriptor learning for omnidirectional image matching
|
cs.CV cs.NE
|
Feature matching in omnidirectional vision systems is a challenging problem,
mainly because complicated optical systems make the theoretical modelling of
invariance and construction of invariant feature descriptors hard or even
impossible. In this paper, we propose learning invariant descriptors using a
training set of similar and dissimilar descriptor pairs. We use the
similarity-preserving hashing framework, in which we are trying to map the
descriptor data to the Hamming space preserving the descriptor similarity on
the training set. A neural network is used to solve the underlying optimization
problem. Our approach outperforms not only straightforward descriptor matching,
but also state-of-the-art similarity-preserving hashing methods.
|
1112.6320
|
Threshold Saturation in Spatially Coupled Constraint Satisfaction
Problems
|
cs.CC cond-mat.stat-mech cs.IT math.IT
|
We consider chains of random constraint satisfaction models that are
spatially coupled across a finite window along the chain direction. We
investigate their phase diagram at zero temperature using the survey
propagation formalism and the interpolation method. We prove that the SAT-UNSAT
phase transition threshold of an infinite chain is identical to the one of the
individual standard model, and is therefore not affected by spatial coupling.
We compute the survey propagation complexity using population dynamics as well
as large degree approximations, and determine the survey propagation threshold.
We find that a clustering phase survives coupling. However, as one increases
the range of the coupling window, the survey propagation threshold increases
and saturates towards the phase transition threshold. We also briefly discuss
other aspects of the problem. Namely, the condensation threshold is not
affected by coupling, but the dynamic threshold displays saturation towards the
condensation one. All these features may provide a new avenue for obtaining
better provable algorithmic lower bounds on phase transition thresholds of the
individual standard model.
|
1112.6344
|
On the Impact of Energy Dissipation Model on Characteristic Distance in
Wireless Networks
|
cs.IT math.IT
|
In this paper we investigate the dependency of the characteristic distance on
the energy dissipation model. Both the many-to-one and any-to-any
communication paradigms are considered for performance analysis. The
characteristic distance is derived for three different cases. This study will
be useful for designing energy-efficient wireless networks whose nodes are
energy-constrained.
|
1112.6367
|
Rate Region of the Vector Gaussian One-Helper Source-Coding Problem
|
cs.IT math.IT
|
We determine the rate region of the vector Gaussian one-helper source-coding
problem under a covariance matrix distortion constraint. The rate region is
achieved by a simple scheme that separates the lossy vector quantization from
the lossless spatial compression. The converse is established by extending and
combining three analysis techniques that have been employed in the past to
obtain partial results for the problem.
|
1112.6371
|
Multi-q Analysis of Image Patterns
|
physics.data-an cs.AI cs.CV physics.comp-ph
|
This paper studies the use of the Tsallis Entropy versus the classic
Boltzmann-Gibbs-Shannon entropy for classifying image patterns. Given a
database of 40 pattern classes, the goal is to determine the class of a given
image sample. Our experiments show that the Tsallis entropy encoded in a
feature vector for different $q$ indices has a great advantage over the
Boltzmann-Gibbs-Shannon entropy for pattern classification, boosting
recognition rates by a factor of 3. We discuss the reasons behind this success,
shedding light on the usefulness of the Tsallis entropy.
|
1112.6382
|
SDPTools: High Precision SDP Solver in Maple
|
math.OC cs.SY
|
Semidefinite programs are an important class of convex optimization problems.
They can be solved efficiently by SDP solvers in Matlab, such as SeDuMi,
SDPT3, and DSDP. However, because these solvers run in fixed precision,
numerical error can prevent good results in some applications. SDPTools is a
Maple package for solving SDPs in high precision. We apply SDPTools to the
certification of the global optimum of rational functions. For Rump's model
problem, we obtain the best numerical results so far.
|
1112.6384
|
Proof nets for the Lambek-Grishin calculus
|
cs.CL
|
Grishin's generalization of Lambek's Syntactic Calculus combines a
non-commutative multiplicative conjunction and its residuals (product, left and
right division) with a dual family: multiplicative disjunction, right and left
difference. Interaction between these two families takes the form of linear
distributivity principles. We study proof nets for the Lambek-Grishin calculus
and the correspondence between these nets and unfocused and focused versions of
its sequent calculus.
|
1112.6399
|
Two-Manifold Problems
|
cs.LG
|
Recently, there has been much interest in spectral approaches to learning
manifolds---so-called kernel eigenmap methods. These methods have had some
successes, but their applicability is limited because they are not robust to
noise. To address this limitation, we look at two-manifold problems, in which
we simultaneously reconstruct two related manifolds, each representing a
different view of the same data. By solving these interconnected learning
problems together and allowing information to flow between them, two-manifold
algorithms are able to succeed where a non-integrated approach would fail: each
view allows us to suppress noise in the other, reducing bias in the same way
that an instrumental variable allows us to remove bias in a linear
dimensionality reduction problem. We propose a class of algorithms for
two-manifold problems, based on spectral decomposition of cross-covariance
operators in Hilbert space. Finally, we discuss situations where two-manifold
problems are useful, and demonstrate that solving a two-manifold problem can
aid in learning a nonlinear dynamical system from limited data.
|
1112.6411
|
High-dimensional Sparse Inverse Covariance Estimation using Greedy
Methods
|
cs.LG math.ST stat.ML stat.TH
|
In this paper we consider the task of estimating the non-zero pattern of the
sparse inverse covariance matrix of a zero-mean Gaussian random vector from a
set of iid samples. Note that this is also equivalent to recovering the
underlying graph structure of a sparse Gaussian Markov Random Field (GMRF). We
present two novel greedy approaches to solving this problem. The first
estimates the non-zero covariates of the overall inverse covariance matrix
using a series of global forward and backward greedy steps. The second
estimates the neighborhood of each node in the graph separately, again using
greedy forward and backward steps, and combines the intermediate neighborhoods
to form an overall estimate. The principal contribution of this paper is a
rigorous analysis of the sparsistency, or consistency in recovering the
sparsity pattern of the inverse covariance matrix. Surprisingly, we show that
both the local and global greedy methods learn the full structure of the model
with high probability given just $O(d\log(p))$ samples, which is a
\emph{significant} improvement over state of the art $\ell_1$-regularized
Gaussian MLE (Graphical Lasso) that requires $O(d^2\log(p))$ samples. Moreover,
the restricted eigenvalue and smoothness conditions imposed by our greedy
methods are much weaker than the strong irrepresentable conditions required by
the $\ell_1$-regularization based methods. We corroborate our results with
extensive simulations and examples, comparing our local and global greedy
methods to the $\ell_1$-regularized Gaussian MLE as well as the Neighborhood
Greedy method to that of nodewise $\ell_1$-regularized linear regression
(Neighborhood Lasso).
|
1112.6414
|
Evolution of opinions on social networks in the presence of competing
committed groups
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Public opinion is often affected by the presence of committed groups of
individuals dedicated to competing points of view. Using a model of pairwise
social influence, we study how the presence of such groups within social
networks affects the outcome and the speed of evolution of the overall opinion
on the network. Earlier work indicated that a single committed group within a
dense social network can cause the entire network to quickly adopt the group's
opinion (in times scaling logarithmically with the network size), so long as
the committed group constitutes more than about 10% of the population (with the
findings being qualitatively similar for sparse networks as well). Here we
study the more general case of opinion evolution when two groups committed to
distinct, competing opinions $A$ and $B$, and constituting fractions $p_A$ and
$p_B$ of the total population respectively, are present in the network. We show
for stylized social networks (including Erd\H{o}s-R\'enyi random graphs and
Barab\'asi-Albert scale-free networks) that the phase diagram of this system in
parameter space $(p_A,p_B)$ consists of two regions, one where two stable
steady-states coexist, and another where only a single stable
steady-state exists. These two regions are separated by two fold-bifurcation
(spinodal) lines which meet tangentially and terminate at a cusp (critical
point). We provide further insight into the phase diagram and the nature of
the underlying phase transitions by investigating the model on infinite
(mean-field limit), finite complete graphs and finite sparse networks. For the
latter case, we also derive the scaling exponent associated with the
exponential growth of switching times as a function of the distance from the
critical point.
|
1201.0011
|
Partial decode-forward for quantum relay channels
|
quant-ph cs.IT math.IT
|
A relay channel is one in which a Source and Destination use an intermediate
Relay station in order to improve communication rates. We propose the study of
relay channels with classical inputs and quantum outputs and prove that a
"partial decode and forward" strategy is achievable. We divide the channel uses
into many blocks and build codes in a randomized, block-Markov manner within
each block. The Relay performs a standard Holevo-Schumacher-Westmoreland
quantum measurement on each block in order to decode part of the Source's
message and then forwards this partial message in the next block. The
Destination performs a novel "sliding-window" quantum measurement on two
adjacent blocks in order to decode the Source's message. This strategy achieves
non-trivial rates for classical communication over a quantum relay channel.
|
1201.0022
|
Spatio-temporal wavelet regularization for parallel MRI reconstruction:
application to functional MRI
|
stat.AP cs.CV physics.med-ph
|
Parallel MRI is a fast imaging technique that enables the acquisition of
highly resolved images in space or/and in time. The performance of parallel
imaging strongly depends on the reconstruction algorithm, which can proceed
either in the original k-space (GRAPPA, SMASH) or in the image domain
(SENSE-like methods). To improve the performance of the widely used SENSE
algorithm, 2D- or slice-specific regularization in the wavelet domain has been
deeply investigated. In this paper, we extend this approach using 3D-wavelet
representations in order to handle all slices together and address
reconstruction artifacts which propagate across adjacent slices. The gain
induced by such extension (3D-Unconstrained Wavelet Regularized -SENSE:
3D-UWR-SENSE) is validated on anatomical image reconstruction where no temporal
acquisition is considered. Another important extension accounts for temporal
correlations that exist between successive scans in functional MRI (fMRI). In
addition to the case of 2D+t acquisition schemes addressed by some other
methods like kt-FOCUSS, our approach allows us to deal with 3D+t acquisition
schemes which are widely used in neuroimaging. The resulting 3D-UWR-SENSE and
4D-UWR-SENSE reconstruction schemes are fully unsupervised in the sense that
all regularization parameters are estimated by maximum likelihood on a
reference scan. The gain induced by such extensions is illustrated on both
anatomical and functional image reconstruction, and also measured in terms of
statistical sensitivity for the 4D-UWR-SENSE approach during a fast
event-related fMRI protocol. Our 4D-UWR-SENSE algorithm outperforms the SENSE
reconstruction at the subject and group levels (15 subjects) for different
contrasts of interest (e.g., motor or computation tasks) and using different
parallel acceleration factors (R=2 and R=4) on 2x2x3 mm3 EPI images.
|
1201.0035
|
The information path functional approach for solution of a controllable
stochastic problem
|
cs.SY cs.IT math.DS math.IT nlin.AO
|
We study a stochastic control system, described by a controllable Ito
equation,
and evaluate the solutions by an entropy functional (EF), defined by the
equation functions of controllable drift and diffusion. Considering a control
problem for this functional, we solve the EF control variation problem (VP),
which leads to both a dynamic approximation of the process entropy functional
by an information path functional (IPF) and information dynamic model (IDM) of
the stochastic process. The IPF variation equations allow finding the optimal
control functions, applied to both the stochastic system and the IDM for a
joint solution of the identification and optimal control problems, combined
with state consolidation. In this optimal dual strategy, the IPF optimum
predicts each current control action not only in terms of the total functional
path goal, but also by supplying each subsequent control action with updated
values of the functional's controllable drift and diffusion, identified during
the optimal motion, which concurrently correct this goal. The VP information
invariants
allow optimal encoding of the identified dynamic model operator and control.
The introduced method of cutting off the process by applying an impulse control
estimates the cutoff information, accumulated by the process inner connections
between its states. It is shown that such a functional information measure
contains more information than the sum of the Shannon entropies of all the
separated process states, and provides an information measure for the Feller
kernel.
Examples illustrate the procedure of solving these problems, which has been
implemented in practice. Key words: Entropy and information path functionals,
variation equations, information invariants, controllable dynamics, impulse
controls, cutting off the diffusion process, identification, cooperation,
encoding.
|
1201.0040
|
Spam filtering by quantitative profiles
|
cs.IR stat.AP
|
Instead of the 'bag-of-words' representation, in the quantitative profile
approach to spam filtering and email categorization, an email is represented by
an m-dimensional vector of numbers, with m fixed in advance. Inspired by Sroufe
et al. [Sroufe, P., Phithakkitnukoon, S., Dantu, R., and Cangussu, J. (2010).
Email shape analysis. In \emph{LNCS}, 5935, pp. 18-29] two instances of
quantitative profiles are considered: line profile and character profile.
The performance of these profiles is studied on the TREC 2007, CEAS 2008, and
a private corpus. At low computational cost, the two quantitative profiles
achieve performance that is at least comparable to that of heuristic rules and
naive Bayes.
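The abstract leaves the exact construction of the two profiles open; a minimal
sketch (assuming, for illustration, that a line profile records the lengths of
the first m lines and a character profile the counts of a fixed character set)
could look like:

```python
def line_profile(email, m=10):
    """Represent an email by the lengths of its first m lines (zero-padded)."""
    lengths = [len(line) for line in email.splitlines()[:m]]
    return lengths + [0] * (m - len(lengths))

def char_profile(email, chars="etaoinshrd"):
    """Represent an email by the counts of a fixed set of characters."""
    return [email.count(c) for c in chars]

email = "Hello,\nplease see the attached report.\nThanks"
print(line_profile(email, m=5))  # [6, 31, 6, 0, 0]
```

Either fixed-length vector can then be fed to any off-the-shelf classifier,
which is what keeps the computational cost low compared to bag-of-words
features.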
|
1201.0041
|
An Amendment of Fast Subspace Tracking Methods
|
math.NA cs.NE
|
Tuning the stepsize to trade off convergence rate against steady-state error
level or stability is a problem in some subspace tracking schemes. Methods in
the DPM and OJA classes may sometimes show sparks in their steady-state error,
even with a rather small stepsize. A study of the schemes' updating formulas
shows that each update takes place only in a specific plane, not over the
whole subspace basis. Through an analysis of the relationship between the
vectors in that plane, we amend the algorithm routine to fix the problem by
constricting the stepsize at every update step. Simulations confirm the
elimination of the sparks.
|
1201.0067
|
Topologies and Price of Stability of Complex Strategic Networks with
Localized Payoffs : Analytical and Simulation Studies
|
cs.SI cs.DM physics.soc-ph
|
We analyze a network formation game in a strategic setting where payoffs of
individuals depend only on their immediate neighbourhood. We call these
localized payoffs. In this game, the payoff of each individual captures (1)
the gain from immediate neighbors, (2) the bridging benefits, and (3) the cost
to form links. This implies that the payoff of each individual can be computed
using only its single-hop neighbourhood information. Based on this simple model
of network formation, our study explores the structure of networks that form,
satisfying one or both of the properties, namely, pairwise stability and
efficiency. We analytically prove the pairwise stability of several interesting
network structures, notably, the complete bi-partite network, complete
equi-k-partite network, complete network and cycle network, under various
configurations of the model. We validate and extend these results through
extensive simulations. We characterize topologies of efficient networks by
drawing upon classical results from extremal graph theory and discover that the
Turan graph (or the complete equi-bi-partite network) is the unique efficient
network under many configurations of parameters. We examine the tradeoffs
between topologies of pairwise stable networks and efficient networks using the
notion of price of stability, which is the ratio of the sum of payoffs of the
players in an optimal pairwise stable network to that of an efficient network.
Interestingly, we find that price of stability is equal to 1 for almost all
configurations of parameters in the proposed model; and for the rest of the
configurations of the parameters, we obtain a lower bound of 0.5 on the price
of stability. This leads to another key insight of this paper: under mild
conditions, efficient networks will form when strategic individuals choose to
add or delete links based on only localized payoffs.
|
1201.0081
|
Resource Allocation with Subcarrier Pairing in OFDMA Two-Way Relay
Networks
|
cs.IT math.IT
|
This study considers an orthogonal frequency-division multiple-access
(OFDMA)-based multi-user two-way relay network where multiple mobile stations
(MSs) communicate with a common base station (BS) via multiple relay stations
(RSs). We study the joint optimization problem of subcarrier-pairing based
relay-power allocation, relay selection, and subcarrier assignment. The problem
is formulated as a mixed integer programming problem. By using the dual method,
we propose an efficient algorithm to solve the problem in an asymptotically
optimal manner. Simulation results show that the proposed method can improve
system performance significantly over the conventional methods.
|
1201.0110
|
Weighted-Sum-Rate-Maximizing Linear Transceiver Filters for the K-User
MIMO Interference Channel
|
cs.IT math.IT
|
This letter is concerned with transmit and receive filter optimization for
the K-user MIMO interference channel. Specifically, linear transmit and receive
filter sets are designed which maximize the weighted sum rate while allowing
each transmitter to utilize only the local channel state information. Our
approach is based on extending the existing method of minimizing the weighted
mean squared error (MSE) for the MIMO broadcast channel to the K-user
interference channel at hand. For the case of the individual transmitter power
constraint, however, a straightforward generalization of the existing method
does not reveal a viable solution. It is in fact shown that there exists no
closed-form solution for the transmit filter, but a simple one-dimensional
parameter search yields the desired solution. Compared to the direct filter
optimization using gradient-based search, our solution requires considerably
less computational complexity and a smaller amount of feedback resources while
achieving essentially the same level of weighted sum rate. A modified filter
design is also presented which provides desired robustness in the presence of
channel uncertainty.
|
1201.0148
|
An Upper Bound to the Marginal PDF of the Ordered Eigenvalues of Wishart
Matrices
|
cs.IT math.IT
|
Diversity analysis of a number of Multiple-Input Multiple-Output (MIMO)
applications requires the calculation of the expectation of a function whose
variables are the ordered multiple eigenvalues of a Wishart matrix. In order to
carry out this calculation, we need the marginal pdf of an arbitrary subset of
the ordered eigenvalues. In this letter, we derive an upper bound to the
marginal pdf of the eigenvalues. The derivation is based on the multiple
integration of the well-known joint pdf, which is very complicated due to the
exponential factors of the joint pdf. We suggest an alternative function that
provides simpler calculation of the multiple integration. As a result, the
marginal pdf is shown to be bounded by a multivariate polynomial with a given
degree. After a standard bounding procedure in a Pairwise Error Probability
(PEP) analysis, by applying the marginal pdf to the calculation of the
expectation, the diversity order for a number of MIMO systems can be obtained
in a simple manner. Simulation results that support the analysis are presented.
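As background, the well-known joint pdf referred to (for an m x m complex
central Wishart matrix with n >= m degrees of freedom and identity covariance,
normalization constant K_{m,n} left implicit) is:

```latex
f(\lambda_1,\dots,\lambda_m)
  = K_{m,n}\,\prod_{i=1}^{m} \lambda_i^{\,n-m} e^{-\lambda_i}
    \prod_{i<j} \left(\lambda_i-\lambda_j\right)^{2},
\qquad \lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_m \ge 0 .
```

The exponential factors together with the squared Vandermonde product are what
make the direct multiple integration over a subset of the ordered eigenvalues
awkward, motivating the bounding function suggested in the letter.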
|
1201.0178
|
Distributed Data Collection and Storage Algorithms for Collaborative
Learning Vision Sensor Devices with Applications to Pilgrimage
|
cs.NI cs.IT math.IT
|
This work presents novel distributed data collection systems and storage
algorithms for collaborative learning wireless sensor networks (WSNs). In a
large WSN, consider $n$ collaborative sensor devices distributed randomly to
acquire information and learn about a certain field. Such sensors have limited
power, small bandwidth, and short memory, and they might disappear from the
network after a certain time in operation. The goal of this work is to design
efficient strategies to learn about the field by collecting sensed data from
these $n$ sensors with low computational overhead and efficient storage
encoding operations.
In this data collection system, we propose two distributed data storage
algorithms (DSA's) to solve this problem with the means of network flooding and
connectivity among sensor devices. In the first algorithm, denoted DSA-I, it
is assumed that the total number of nodes is known to each node in the network.
We show that this algorithm is efficient in terms of the encoding/decoding
operations. Furthermore, every node uses network flooding to disseminate its
data throughout the network using mixing time approximately O(n). In the second
algorithm, denoted DSA-II, it is assumed that the total number of nodes is not
known to each learning sensor; hence dissemination of the data does not depend
on the value of $n$. In this case we show that the encoding operations take
$O(C\mu^2)$, where $\mu$ is the mean degree of the network graph and $C$ is a
system parameter. The performance of these two algorithms matches the derived
theoretical results. Finally, we show how to deploy these algorithms for
monitoring and measuring certain phenomena in American-made camp tents
located in the Minna field on the south-east side of Makkah.
|
1201.0216
|
Building Smart Communities with Cyber-Physical Systems
|
cs.SI cs.AI cs.CY
|
There is a growing trend towards the convergence of cyber-physical systems
(CPS) and social computing, which will lead to the emergence of smart
communities composed of various objects (including both human individuals and
physical things) that interact and cooperate with each other. These smart
communities promise to enable a number of innovative applications and services
that will improve the quality of life. This position paper addresses some
opportunities and challenges of building smart communities characterized by
cyber-physical and social intelligence.
|
1201.0226
|
Towards Cost-Effective Storage Provisioning for DBMSs
|
cs.DB
|
Data center operators face a bewildering set of choices when considering how
to provision resources on machines with complex I/O subsystems. Modern I/O
subsystems often have a rich mix of fast, high-performing, but expensive SSDs
sitting alongside cheaper but relatively slower (for random accesses)
traditional hard disk drives. The data center operators need to determine how
to provision the I/O resources for specific workloads so as to abide by
existing Service Level Agreements (SLAs), while minimizing the total operating
cost (TOC) of running the workload, where the TOC includes the amortized
hardware costs and the run time energy costs. The focus of this paper is on
introducing this new problem of TOC-based storage allocation, cast in a
framework that is compatible with traditional DBMS query optimization and query
processing architecture. We also present a heuristic-based solution to this
problem, called DOT. We have implemented DOT in PostgreSQL, and experiments
using TPC-H and TPC-C demonstrate significant TOC reduction by DOT in various
settings.
|
1201.0227
|
B+-tree Index Optimization by Exploiting Internal Parallelism of
Flash-based Solid State Drives
|
cs.DB
|
Previous research addressed the potential problems of the hard-disk-oriented
design of DBMSs for flash-based solid state drives (flashSSDs). In this paper,
we focus on exploiting the potential benefits of flashSSDs. First, we examine
the internal parallelism issues of flashSSDs by running benchmarks on various
flashSSDs. Then, we suggest
algorithm-design principles in order to best benefit from the internal
parallelism. We present a new I/O request concept, called psync I/O that can
exploit the internal parallelism of flashSSDs in a single process. Based on
these ideas, we introduce B+-tree optimization methods in order to utilize
internal parallelism. By integrating the results of these methods, we present a
B+-tree variant, PIO B-tree. We confirmed that each optimization method
substantially enhances the index performance. Consequently, PIO B-tree enhanced
B+-tree's insert performance by a factor of up to 16.3, while improving
point-search performance by a factor of 1.2. The range search of PIO B-tree was
up to 5 times faster than that of the B+-tree. Moreover, PIO B-tree
outperformed other flash-aware indexes in various synthetic workloads. We also
confirmed that PIO B-tree outperforms B+-tree in index traces collected inside
the Postgresql DBMS with TPC-C benchmark.
|
1201.0228
|
High-Performance Concurrency Control Mechanisms for Main-Memory
Databases
|
cs.DB
|
A database system optimized for in-memory storage can support much higher
transaction rates than current systems. However, standard concurrency control
methods used today do not scale to the high transaction rates achievable by
such systems. In this paper we introduce two efficient concurrency control
methods specifically designed for main-memory databases. Both use
multiversioning to isolate read-only transactions from updates but differ in
how atomicity is ensured: one is optimistic and one is pessimistic. To avoid
expensive context switching, transactions never block during normal processing
but they may have to wait before commit to ensure correct serialization
ordering. We also implemented a main-memory optimized version of single-version
locking. Experimental results show that while single-version locking works well
when transactions are short and contention is low, performance degrades under
more demanding conditions. The multiversion schemes have higher overhead but
are much less sensitive to hotspots and the presence of long-running
transactions.
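The multiversion read path that isolates read-only transactions from updates
can be sketched as follows (an illustrative fragment of the general technique;
the paper's optimistic and pessimistic commit protocols are not reproduced
here):

```python
class VersionedStore:
    """Minimal multiversion store: each write creates a new version stamped
    with a commit timestamp; a read-only transaction at start time ts sees the
    latest version with commit time <= ts and is never blocked by writers."""
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value), append-only
        self.clock = 0

    def write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))

    def read(self, key, ts):
        # Visibility rule: latest version committed at or before ts.
        visible = [v for (t, v) in self.versions.get(key, []) if t <= ts]
        return visible[-1] if visible else None

db = VersionedStore()
db.write("x", 1)                  # commits at ts = 1
snapshot_ts = db.clock            # a read-only transaction starts here
db.write("x", 2)                  # a later update does not block the reader
print(db.read("x", snapshot_ts))  # 1
print(db.read("x", db.clock))     # 2
```

The append-only version list is what lets readers proceed without locks; the
two schemes in the paper differ in how concurrent writers validate against it.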
|
1201.0229
|
Capturing Topology in Graph Pattern Matching
|
cs.DB
|
Graph pattern matching is often defined in terms of subgraph isomorphism, an
NP-complete problem. To lower its complexity, various extensions of graph
simulation have been considered instead. These extensions allow pattern
matching to be conducted in cubic-time. However, they fall short of capturing
the topology of data graphs, i.e., graphs may have a structure drastically
different from pattern graphs they match, and the matches found are often too
large to understand and analyze. To rectify these problems, this paper proposes
a notion of strong simulation, a revision of graph simulation, for graph
pattern matching. (1) We identify a set of criteria for preserving the topology
of graphs matched. We show that strong simulation preserves the topology of
data graphs and finds a bounded number of matches. (2) We show that strong
simulation retains the same complexity as earlier extensions of simulation, by
providing a cubic-time algorithm for computing strong simulation. (3) We
present the locality property of strong simulation, which allows us to
effectively conduct pattern matching on distributed graphs. (4) We
experimentally verify the effectiveness and efficiency of these algorithms,
using real-life data and synthetic data.
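For background, plain graph simulation, the notion that strong simulation
revises, can be computed by the classical fixpoint below (a sketch with
illustrative label/edge encodings; strong simulation adds topology-preserving
conditions on top):

```python
def graph_simulation(q_nodes, q_edges, g_nodes, g_edges, q_label, g_label):
    """Compute the maximal simulation relation: pairs (u, v) such that
    data node v simulates query node u."""
    q_succ = {u: [b for a, b in q_edges if a == u] for u in q_nodes}
    g_succ = {v: [b for a, b in g_edges if a == v] for v in g_nodes}
    # Start from all label-compatible pairs, then remove violating ones.
    sim = {(u, v) for u in q_nodes for v in g_nodes if q_label[u] == g_label[v]}
    changed = True
    while changed:
        changed = False
        for (u, v) in list(sim):
            # v must match every outgoing edge of u with a simulating successor.
            if any(all((u2, v2) not in sim for v2 in g_succ[v])
                   for u2 in q_succ[u]):
                sim.discard((u, v))
                changed = True
    return sim

# Pattern u(a)->w(b) against data graph 1(a)->2(b), plus isolated 3(a).
sim = graph_simulation(["u", "w"], [("u", "w")], [1, 2, 3], [(1, 2)],
                       {"u": "a", "w": "b"}, {1: "a", 2: "b", 3: "a"})
print(sorted(sim))  # [('u', 1), ('w', 2)]
```

Node 3 carries the right label but has no b-successor, so the fixpoint removes
it; this removal cascade is exactly what makes simulation stricter than plain
label matching while staying polynomial.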
|
1201.0230
|
RTED: A Robust Algorithm for the Tree Edit Distance
|
cs.DB
|
We consider the classical tree edit distance between ordered labeled trees,
which is defined as the minimum-cost sequence of node edit operations that
transform one tree into another. The state-of-the-art solutions for the tree
edit distance are not satisfactory. The main competitors in the field either
have optimal worst-case complexity, but the worst case happens frequently, or
they are very efficient for some tree shapes, but degenerate for others. This
leads to unpredictable and often infeasible runtimes. There is no obvious way
to choose between the algorithms. In this paper we present RTED, a robust tree
edit distance algorithm. The asymptotic complexity of RTED is smaller or equal
to the complexity of the best competitors for any input instance, i.e., RTED is
both efficient and worst-case optimal. We introduce the class of LRH
(Left-Right-Heavy) algorithms, which includes RTED and the fastest tree edit
distance algorithms presented in literature. We prove that RTED outperforms all
previously proposed LRH algorithms in terms of runtime complexity. In our
experiments on synthetic and real world data we empirically evaluate our
solution and compare it to the state-of-the-art.
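For reference, the classical tree edit distance being computed can be sketched
with the textbook recursion on rightmost roots (memoized but still exponential
in the worst case; the point of RTED and the other LRH algorithms is precisely
to choose a recursion strategy that avoids this blow-up):

```python
from functools import lru_cache

def tree_edit_distance(t1, t2):
    """Ordered tree edit distance with unit-cost delete/insert/rename.
    Trees are (label, (children...)) tuples; forests are tuples of trees."""
    def size(forest):
        return sum(1 + size(c) for (_, c) in forest)

    @lru_cache(maxsize=None)
    def d(f, g):
        if not f and not g:
            return 0
        if not f:
            return size(g)          # insert everything remaining in g
        if not g:
            return size(f)          # delete everything remaining in f
        (l1, c1), (l2, c2) = f[-1], g[-1]
        return min(
            d(f[:-1] + c1, g) + 1,                       # delete rightmost root of f
            d(f, g[:-1] + c2) + 1,                       # insert rightmost root of g
            d(f[:-1], g[:-1]) + d(c1, c2) + (l1 != l2),  # match the two roots
        )

    return d((t1,), (t2,))

a = ("f", (("a", ()), ("b", ())))
b = ("f", (("a", ()), ("c", (("b", ()),))))
print(tree_edit_distance(a, b))  # 1 (insert node "c" above "b")
```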
|
1201.0231
|
Putting Lipstick on Pig: Enabling Database-style Workflow Provenance
|
cs.DB
|
Workflow provenance typically assumes that each module is a "black-box", so
that each output depends on all inputs (coarse-grained dependencies).
Furthermore, it does not model the internal state of a module, which can change
between repeated executions. In practice, however, an output may depend on only
a small subset of the inputs (fine-grained dependencies) as well as on the
internal state of the module. We present a novel provenance framework that
marries database-style and workflow-style provenance, by using Pig Latin to
expose the functionality of modules, thus capturing internal state and
fine-grained dependencies. A critical ingredient in our solution is the use of
a novel form of provenance graph that models module invocations and yields a
compact representation of fine-grained workflow provenance. It also enables a
number of novel graph transformation operations, allowing users to choose the desired
level of granularity in provenance querying (ZoomIn and ZoomOut), and
supporting "what-if" workflow analytic queries. We implemented our approach in
the Lipstick system and developed a benchmark in support of a systematic
performance evaluation. Our results demonstrate the feasibility of tracking and
querying fine-grained workflow provenance.
|
1201.0232
|
Relational Approach for Shortest Path Discovery over Large Graphs
|
cs.DB
|
With the rapid growth of large graphs, we cannot assume that graphs can still
be fully loaded into memory, thus the disk-based graph operation is inevitable.
In this paper, we take the shortest path discovery as an example to investigate
the technique issues when leveraging existing infrastructure of relational
database (RDB) in the graph data management. Based on the observation that a
variety of graph search queries can be implemented by iterative operations
including selecting frontier nodes from visited nodes, making expansion from
the selected frontier nodes, and merging the expanded nodes into the visited
ones, we introduce a relational FEM framework with three corresponding
operators to implement graph search tasks in the RDB context. We show new
features such as window function and merge statement introduced by recent SQL
standards can not only simplify the expression but also improve the performance
of the FEM framework. In addition, we propose two optimization strategies
specific to shortest path discovery inside the FEM framework. First, we take a
bi-directional set Dijkstra's algorithm in the path finding. The bi-directional
strategy can reduce the search space, and set Dijkstra's algorithm finds the
shortest path in a set-at-a-time fashion. Second, we introduce an index named
SegTable to preserve the local shortest segments, and exploit SegTable to
further improve the performance. Finally, extensive experimental results
illustrate that our relational approach with the optimization strategies
achieves high scalability and performance.
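The select-frontier/expand/merge iteration can be sketched set-at-a-time in
plain Python, standing in for the SQL operators (table and function names here
are illustrative, not the paper's):

```python
from collections import defaultdict

def fem_shortest_hops(edges, source):
    """Unweighted shortest-path discovery by the Frontier-Expand-Merge loop:
    select a frontier from the visited nodes, expand it in one set-at-a-time
    step, then merge the newly reached nodes back into the visited table."""
    succ = defaultdict(list)
    for a, b in edges:
        succ[a].append(b)
    visited = {source: 0}       # the "visited" table: node -> distance
    frontier = {source}         # F: nodes selected for expansion
    while frontier:
        # Expand: one pass over the whole frontier, like a single SQL join.
        expanded = {v for u in frontier for v in succ[u] if v not in visited}
        # Merge: add the newly reached nodes with distance d + 1.
        d = visited[next(iter(frontier))]
        for v in expanded:
            visited[v] = d + 1
        frontier = expanded
    return visited

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(fem_shortest_hops(edges, 1))  # {1: 0, 2: 1, 3: 1, 4: 2}
```

In the RDB setting each of the three steps maps to a SQL statement over node
tables, which is where window functions and the merge statement pay off.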
|
1201.0233
|
Mining Flipping Correlations from Large Datasets with Taxonomies
|
cs.DB
|
In this paper we introduce a new type of pattern -- a flipping correlation
pattern. The flipping patterns are obtained from contrasting the correlations
between items at different levels of abstraction. They represent surprising
correlations, both positive and negative, which are specific for a given
abstraction level, and which "flip" from positive to negative and vice versa
when items are generalized to a higher level of abstraction. We design an
efficient algorithm for finding flipping correlations, the Flipper algorithm,
which outperforms naive pattern mining methods by several orders of magnitude.
We apply Flipper to real-life datasets and show that the discovered patterns
are non-redundant, surprising and actionable. Flipper finds strong contrasting
correlations in itemsets with low-to-medium support, while existing techniques
cannot handle the pattern discovery in this frequency range.
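A toy illustration of a flip, using lift as the correlation measure (an
illustrative choice; the abstract does not commit to a specific measure, and
the taxonomy and data below are made up):

```python
def lift(transactions, x, y):
    """Lift of items x and y: P(x,y) / (P(x) P(y)); > 1 indicates positive
    correlation, < 1 negative."""
    n = len(transactions)
    px = sum(x in t for t in transactions) / n
    py = sum(y in t for t in transactions) / n
    pxy = sum(x in t and y in t for t in transactions) / n
    return pxy / (px * py)

# Toy taxonomy: items generalize to 'drink' and 'snack' categories.
parent = {"cola": "drink", "water": "drink", "chips": "snack", "nuts": "snack"}
transactions = [{"cola", "chips"}, {"cola", "chips"}, {"water"},
                {"chips"}, {"water", "nuts"}, {"nuts"}]
generalized = [{parent[i] for i in t} for t in transactions]

low_level = lift(transactions, "cola", "chips")    # ~2.0 (positive)
high_level = lift(generalized, "drink", "snack")   # ~0.9 (negative: a flip)
```

Here cola and chips are positively correlated, yet the generalized pair
(drink, snack) is negatively correlated: the sign "flips" across the
abstraction levels, which is the pattern Flipper mines.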
|
1201.0234
|
A Statistical Approach Towards Robust Progress Estimation
|
cs.DB
|
The need for accurate SQL progress estimation in the context of decision
support administration has led to a number of techniques proposed for this
task. Unfortunately, no single one of these progress estimators behaves
robustly across the variety of SQL queries encountered in practice, meaning
that each technique performs poorly for a significant fraction of queries. This
paper proposes a novel estimator selection framework that uses a statistical
model to characterize the sets of conditions under which certain estimators
outperform others, leading to a significant increase in estimation robustness.
The generality of this framework also enables us to add a number of novel
"special purpose" estimators which increase accuracy further. Most importantly,
the resulting model generalizes well to queries very different from the ones
used to train it. We validate our findings using a large number of industrial
real-life and benchmark workloads.
|
1201.0274
|
Overview of EIREX 2010: Computing
|
cs.IR
|
The first Information Retrieval Education through Experimentation track
(EIREX 2010) was run at the University Carlos III of Madrid, during the 2010
spring semester. EIREX 2010 is the first in a series of experiments designed to
foster new Information Retrieval (IR) education methodologies and resources,
with the specific goal of teaching undergraduate IR courses from an
experimental perspective. For an introduction to the motivation behind the
EIREX experiments, see the first sections of [Urbano et al., 2011]. For
information on other editions of EIREX and related data, see the website at
http://ir.kr.inf.uc3m.es/eirex/. The EIREX series has the following goals: a)
to help students get a view of the Information Retrieval process as they would
find it in a real-world scenario, either industrial or academic; b) to make
students realize the importance of laboratory experiments in Computer Science
and introduce them to their execution and analysis; c) to create a public
repository of resources to teach Information Retrieval courses; d) to seek the
collaboration and active participation of other Universities in this endeavor.
This overview paper summarizes the results of the EIREX 2010 track, focusing on
the creation of the test collection and the analysis to assess its reliability.
|
1201.0292
|
T-Learning
|
cs.LG
|
Traditional Reinforcement Learning (RL) has focused on problems involving
many states and few actions, such as simple grid worlds. Most real world
problems, however, are of the opposite type, involving few relevant states and
many actions. For example, to return home from a conference, humans identify
only a few subgoal states, such as lobby, taxi, airport, etc. Each valid behavior
connecting two such states can be viewed as an action, and there are trillions
of them. Assuming the subgoal identification problem is already solved, the
quality of any RL method---in real-world settings---depends less on how well it
scales with the number of states than on how well it scales with the number of
actions. This is where our new method T-Learning excels, by evaluating the
relatively few possible transits from one state to another in a
policy-independent way, rather than a huge number of state-action pairs, or
states in traditional policy-dependent ways. Illustrative experiments
demonstrate that performance improvements of T-Learning over Q-learning can be
arbitrarily large.
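One speculative minimal reading of transit evaluation, not the authors'
algorithm: value each transit (s, s') directly and policy-independently with a
Bellman-style sweep over transits rather than state-action pairs:

```python
def t_learning(states, transits, reward, goal, sweeps=50):
    """Value iteration over transits (s, s') instead of state-action pairs:
    T(s, s') = r(s, s') + best transit value leaving s'.  A speculative
    minimal sketch of the idea in the abstract, not the authors' method."""
    succ = {s: [t for t in transits if t[0] == s] for s in states}
    T = {t: 0.0 for t in transits}
    for _ in range(sweeps):
        for (s, s2) in transits:
            nxt = 0.0 if s2 == goal else max(
                (T[t] for t in succ[s2]), default=0.0)
            T[(s, s2)] = reward((s, s2)) + nxt
    return T

# Few states, one transit each: conference -> lobby -> taxi -> airport -> home.
transits = [("conf", "lobby"), ("lobby", "taxi"),
            ("taxi", "airport"), ("airport", "home")]
states = ["conf", "lobby", "taxi", "airport", "home"]
T = t_learning(states, transits, lambda t: -1.0, goal="home")
print(T[("conf", "lobby")])  # -4.0
```

The table scales with the number of transits between subgoal states, which is
the quantity the abstract argues is small in real-world settings.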
|
1201.0304
|
Bounds on Shannon Capacity and Ramsey Numbers from Product of Graphs
|
math.CO cs.IT math.IT
|
In this note we study Shannon capacity of channels in the context of
classical Ramsey numbers. We review some of the results on the capacity of
noisy channels modelled by graphs, and how some constructions may contribute to our
knowledge of this capacity.
We present an improvement to the constructions by Abbott and Song and thus
establish new lower bounds for a special type of multicolor Ramsey numbers. We
prove that our construction implies that the supremum of the Shannon capacity
over all graphs with independence number 2 cannot be achieved by any finite
graph power. This can be generalized to graphs with any bounded independence
number.
|
1201.0320
|
Optimal Distributed Resource Allocation for Decode-and-Forward Relay
Networks
|
cs.IT math.IT
|
This paper presents a distributed resource allocation algorithm to jointly
optimize the power allocation, channel allocation and relay selection for
decode-and-forward (DF) relay networks with a large number of sources, relays,
and destinations. The well-known dual decomposition technique cannot directly
be applied to resolve this problem, because the achievable data rate of DF
relaying is not strictly concave, and thus the local resource allocation
subproblem may have non-unique solutions. We resolve this non-strict concavity
problem by using the idea of the proximal point method, which adds quadratic
terms to make the objective function strictly concave. However, the proximal
solution adds an extra layer of iterations over typical duality based
approaches, which can significantly slow down the speed of convergence. To
address this key weakness, we devise a fast algorithm without the need for this
additional layer of iterations, which converges to the optimal solution. Our
algorithm only needs local information exchange, and can easily adapt to
variations of network size and topology. We prove that our distributed resource
allocation algorithm converges to the optimal solution. A channel resource
adjustment method is further developed to provide more channel resources to the
bottleneck links and realize traffic load balance. Numerical results are
provided to illustrate the benefits of our algorithm.
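The proximal trick described above (adding a quadratic term to restore strict
concavity) can be illustrated on a toy one-dimensional problem; the
grid-search inner solver and all constants are illustrative, not the paper's
distributed algorithm:

```python
def proximal_point(f, x0, lo, hi, rho=1.0, iters=60, grid=2001):
    """Maximize a (possibly non-strictly) concave f by proximal iterations:
    x_{k+1} = argmax_x f(x) - rho * (x - x_k)^2.  The quadratic term makes
    each subproblem strictly concave, so the inner argmax is unique."""
    xs = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]
    x = x0
    for _ in range(iters):
        x = max(xs, key=lambda z: f(z) - rho * (z - x) ** 2)
    return x

# f is concave but flat on [1, 2]: every point there is a maximizer,
# mirroring the non-unique subproblem solutions of DF relaying.
f = lambda x: min(x, 1.0)
print(round(proximal_point(f, x0=0.0, lo=0.0, hi=2.0), 3))  # 1.0
```

Each proximal subproblem has a single solution, and the iterates converge to
one maximizer of the original flat objective; the paper's contribution is
removing the extra layer of iterations this naive scheme requires.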
|
1201.0328
|
Let us first agree on what the term "semantics" means: An unorthodox
approach to an age-old debate
|
cs.AI q-bio.NC
|
Traditionally, semantics has been seen as a feature of human language. The
advent of the information era has led to its widespread redefinition as an
information feature. Contrary to this praxis, I define semantics as a special
kind of information. Revitalizing the ideas of Bar-Hillel and Carnap I have
recreated and re-established the notion of semantics as the notion of Semantic
Information. I have proposed a new definition of information (as a description,
a linguistic text, a piece of a story or a tale) and a clear segregation
between two different types of information - physical and semantic information.
I hope I have clearly explained the (usually obscured and mysterious)
interrelations between data and physical information as well as the relation
between physical information and semantic information. Consequently, usually
indefinable notions of "information", "knowledge", "memory", "learning" and
"semantics" have also received their suitable illumination and explanation.
|
1201.0341
|
Collaborative Filtering via Group-Structured Dictionary Learning
|
math.OC cs.LG math.ST stat.ML stat.TH
|
Structured sparse coding and the related structured dictionary learning
problems are novel research areas in machine learning. In this paper we present
a new application of structured dictionary learning for collaborative filtering
based recommender systems. Our extensive numerical experiments demonstrate that
the presented technique outperforms its state-of-the-art competitors and has
several advantages over approaches that do not put structured constraints on
the dictionary elements.
|
1201.0351
|
Liquid-gas-solid flows with lattice Boltzmann: Simulation of floating
bodies
|
cs.CE physics.flu-dyn
|
This paper presents a model for the simulation of liquid-gas-solid flows by
means of the lattice Boltzmann method. The approach is built upon previous
works for the simulation of liquid-solid particle suspensions on the one hand,
and on a liquid-gas free surface model on the other. We show how the two
approaches can be unified by a novel set of dynamic cell conversion rules. For
evaluation, we concentrate on the rotational stability of non-spherical rigid
bodies floating on a plane water surface - a classical hydrostatic problem
known from naval architecture. We show the consistency of our method in this
kind of flow and obtain convergence towards the ideal solution for the
measured heeling stability of a floating box.
|
1201.0362
|
Compressive sampling with chaotic dynamical systems
|
cs.IT math.IT
|
We investigate the possibility of using different chaotic sequences to
construct measurement matrices in compressive sampling. In particular, we
consider sequences generated by Chua, Lorenz and Rossler dynamical systems and
investigate the accuracy of reconstruction when using each of them to construct
measurement matrices. Chua and Lorenz sequences appear to be suitable to
construct measurement matrices. We compare the recovery rate of the original
sequence with that obtained by using Gaussian, Bernoulli and uniformly
distributed random measurement matrices. We also investigate the impact of
correlation on the recovery rate. It appears that correlation does not
influence the probability of exact reconstruction significantly.
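A sketch of the basic construction (a Lorenz trajectory integrated by explicit
Euler steps and sign-quantized into a +/-1 matrix; the quantization rule and
all constants here are illustrative assumptions, not the paper's exact
recipe):

```python
def lorenz_sequence(n, x=1.0, y=1.0, z=1.0, dt=0.01,
                    sigma=10.0, rho=28.0, beta=8.0 / 3.0, skip=1000):
    """Generate n samples of the Lorenz x-coordinate by explicit Euler steps
    (skipping a transient so the trajectory settles onto the attractor)."""
    out = []
    for i in range(skip + n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= skip:
            out.append(x)
    return out

def chaotic_measurement_matrix(m, n):
    """m x n +/-1 measurement matrix read row-wise from one Lorenz run."""
    seq = lorenz_sequence(m * n)
    return [[1.0 if seq[i * n + j] >= 0 else -1.0 for j in range(n)]
            for i in range(m)]

phi = chaotic_measurement_matrix(4, 16)
sparse = [0.0] * 16
sparse[3], sparse[11] = 2.0, -1.0     # a 2-sparse signal
y = [sum(phi[i][j] * sparse[j] for j in range(16)) for i in range(4)]
```

Recovery from y would then proceed with any standard sparse solver; the
paper's question is how such deterministic matrices compare with Gaussian,
Bernoulli, and uniform random ones.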
|
1201.0375
|
Gossip on Weighted Networks
|
cs.SI nlin.AO physics.soc-ph
|
We investigate how suitable a weighted network is for gossip spreading. The
proposed model is based on the gossip spreading model introduced by Lind et
al. on unweighted networks. Weight represents "friendship." A potential
spreader prefers not to spread if the victim of the gossip is a "close
friend." Gossip spreading is related to triangles and cascades of triangles,
and it gives more insight into the structure of a network.
We analyze gossip spreading on real weighted networks of human interactions.
Six co-occurrence and seven social pattern networks are investigated. Gossip
propagation is found to be a good parameter to distinguish co-occurrence and
social pattern networks. As a comparison, some miscellaneous networks and
computer-generated networks based on the ER, BA, and WS models are also
investigated. They are found to be quite different from the human interaction
networks.
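A minimal simulation of the idea (gossip spreading among the victim's
neighbours, with an illustrative hard friendship threshold standing in for
whatever weight-dependent rule the paper actually uses):

```python
def gossip_spread(weights, victim, originator, threshold=0.5):
    """Spread gossip about `victim` among the victim's neighbours, starting
    at `originator`.  A node passes the gossip on only if its friendship
    weight to the victim is below `threshold` (illustrative rule)."""
    neighbours = set(weights.get(victim, {}))
    heard, frontier = {originator}, [originator]
    while frontier:
        nxt = []
        for u in frontier:
            if weights[victim].get(u, 0.0) >= threshold:
                continue                  # a close friend keeps quiet
            for v in weights.get(u, {}):
                if v in neighbours and v not in heard and v != victim:
                    heard.add(v)
                    nxt.append(v)
        frontier = nxt
    return heard

# Victim 0 has neighbours 1, 2, 3; node 2 is a close friend (weight 0.9).
weights = {
    0: {1: 0.1, 2: 0.9, 3: 0.1},
    1: {0: 0.1, 2: 0.2},
    2: {0: 0.9, 1: 0.2, 3: 0.2},
    3: {0: 0.1, 2: 0.2},
}
print(gossip_spread(weights, victim=0, originator=1))  # {1, 2}
```

Node 2 hears the gossip but, being a close friend of the victim, does not
relay it to node 3, so the weighted spread is smaller than the unweighted one.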
|
1201.0394
|
2D Barcode for DNA Encoding
|
cs.IT math.IT
|
The paper presents a solution for encoding/decoding DNA information in 2D
barcodes. The first part focuses on the existing techniques and symbologies in
the 2D barcode field. The 2D barcode PDF417 is presented as a starting point.
The adaptations and optimizations on PDF417 and on DataMatrix lead to the
solution: DNA2DBC, the DeoxyriboNucleic Acid Two-Dimensional Barcode. The
second part shows the DNA2DBC encoding/decoding process step by step. The
conclusions enumerate the most important features of a 2D barcode
implementation for DNA.
|
1201.0409
|
Code Design for the Noisy Slepian-Wolf Problem
|
cs.IT math.IT
|
We consider a noisy Slepian-Wolf problem where two correlated sources are
separately encoded (using codes of fixed rate) and transmitted over two
independent binary memoryless symmetric channels. The capacity of each channel
is characterized by a single parameter which is not known at the transmitter.
The goal is to design systems that retain near-optimal performance without
channel knowledge at the transmitter.
It was conjectured that it may be hard to design codes that perform well for
symmetric channel conditions. In this work, we present a provable
capacity-achieving sequence of LDGM ensembles for the erasure Slepian-Wolf
problem with symmetric channel conditions. We also introduce a staggered
structure which enables codes optimized for single user channels to perform
well for symmetric channel conditions.
We provide a generic framework for analyzing the performance of joint
iterative decoding, using density evolution. Using differential evolution, we
design punctured systematic LDPC codes to maximize the region of achievable
channel conditions. The resulting codes are then staggered to further increase
the region of achievable parameters. The main contribution of this paper is to
demonstrate that properly designed irregular LDPC codes can perform well
simultaneously over a wide range of channel parameters.
|
1201.0410
|
A note on anti-coordination and social interactions
|
cs.GT cs.CC cs.MA
|
This note confirms a conjecture of [Bramoull\'{e}, Anti-coordination and
social interactions, Games and Economic Behavior, 58, 2007: 30-49]. The
problem, which we name the maximum independent cut problem, is a restricted
version of the MAX-CUT problem, requiring one side of the cut to be an
independent set. We show that the maximum independent cut problem does not
admit any polynomial time algorithm with approximation ratio better than
$n^{1-\epsilon}$, where $n$ is the number of nodes, and $\epsilon$ arbitrarily
small, unless P=NP. For the rather special case where each node has a degree of
at most four, the problem is still MAXSNP-hard.
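For concreteness, the maximum independent cut problem can be stated as a brute-force sketch (a hypothetical helper, exponential in the number of nodes, consistent with the hardness result above):

```python
from itertools import combinations

def max_independent_cut(n, edges):
    """Brute force over all vertex subsets S of {0, ..., n-1}: S must be an
    independent set; the cut value is the number of edges with exactly one
    endpoint in S. Feasible only for tiny instances, as the hardness result
    above suggests; illustration only."""
    best = 0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            s = set(S)
            if any(u in s and v in s for u, v in edges):
                continue  # S is not independent
            best = max(best, sum((u in s) != (v in s) for u, v in edges))
    return best
```

On a star K(1,3), the three leaves form an independent set cutting all three edges; on a 4-cycle, two opposite nodes cut all four edges.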
|
1201.0414
|
Continuity in Information Algebras
|
cs.AI
|
In this paper, the continuity and strong continuity in domain-free
information algebras and labeled information algebras are introduced
respectively. A more general concept of continuous function which is defined
between two domain-free continuous information algebras is presented. It is
shown that, with the operations combination and focusing, the set of all
continuous functions between two domain-free s-continuous information algebras
forms a new s-continuous information algebra. By studying the relationship
between domain-free information algebras and labeled information algebras, it
is demonstrated that they correspond to each other with respect to
s-compactness.
|
1201.0418
|
A New Family of Bounded Divergence Measures and Application to Signal
Detection
|
math.ST cs.IT math.IT math.PR stat.TH
|
We introduce a new one-parameter family of divergence measures, called
bounded Bhattacharyya distance (BBD) measures, for quantifying the
dissimilarity between probability distributions. These measures are bounded,
symmetric and positive semi-definite and do not require absolute continuity. In
the asymptotic limit, BBD measure approaches the squared Hellinger distance. A
generalized BBD measure for multiple distributions is also introduced. We prove
an extension of a theorem of Bradt and Karlin for BBD relating Bayes error
probability and divergence ranking. We show that BBD belongs to the class of
generalized Csiszar f-divergence and derive some properties such as curvature
and relation to Fisher Information. For distributions with vector valued
parameters, the curvature matrix is related to the Fisher-Rao metric. We derive
certain inequalities between BBD and well known measures such as Hellinger and
Jensen-Shannon divergence. We also derive bounds on the Bayesian error
probability. We give an application of these measures to the problem of signal
detection where we compare two monochromatic signals buried in white noise and
differing in frequency and amplitude.
|
1201.0423
|
Interference-Aware Scheduling for Connectivity in MIMO Ad Hoc Multicast
Networks
|
cs.IT math.IT
|
We consider a multicast scenario involving an ad hoc network of co-channel
MIMO nodes in which a source node attempts to share a streaming message with
all nodes in the network via some pre-defined multi-hop routing tree. The
message is assumed to be broken down into packets, and the transmission is
conducted over multiple frames. Each frame is divided into time slots, and each
link in the routing tree is assigned one time slot in which to transmit its
current packet. We present an algorithm for determining the number of time
slots and the scheduling of the links in these time slots in order to optimize
the connectivity of the network, which we define to be the probability that all
links can achieve the required throughput. In addition to time multiplexing,
the MIMO nodes also employ beamforming to manage interference when links are
simultaneously active, and the beamformers are designed with the maximum
connectivity metric in mind. The effects of outdated channel state information
(CSI) are taken into account in both the scheduling and the beamforming
designs. We also derive bounds on the network connectivity and sum transmit
power in order to illustrate the impact of interference on network performance.
Our simulation results demonstrate that the choice of the number of time slots
is critical in optimizing network performance, and illustrate the significant
advantage provided by multiple antennas in improving network connectivity.
|
1201.0426
|
Phase-Only Analog Encoding for a Multi-Antenna Fusion Center
|
cs.IT math.IT
|
We consider a distributed sensor network in which the single antenna sensor
nodes observe a deterministic unknown parameter and after encoding the observed
signal with a phase parameter, the sensor nodes transmit it simultaneously to a
multi-antenna fusion center (FC). The FC optimizes the phase encoding parameter
and feeds it back to the sensor nodes such that the variance of estimation
error can be minimized. We relax the phase optimization problem to a
semidefinite programming problem and the numerical results show that the
performance of the proposed method is close to the theoretical bound. Also,
asymptotic results show that when the number of sensors is very large and the
variance of the distance between the sensor nodes and FC is small, multiple
antennas do not provide a benefit compared with a single antenna system; when
the number of antennas $M$ is large and the measurement noise at the sensor
nodes is small compared with the additive noise at the FC, the estimation error
variance can be reduced by a factor of $M$.
|
1201.0435
|
Capacity Factors of a Point-to-point Network
|
cs.IT cs.NI math.IT
|
In this paper, we investigate some properties on capacity factors, which were
proposed to investigate the link failure problem from network coding. A
capacity factor (CF) of a network is an edge set, deleting which will cause the
maximum flow to decrease while deleting any proper subset will not. Generally,
a $k$-CF is a minimal (not minimum) edge set whose deletion will cause the
network's maximum flow to decrease by $k$.
Under the point-to-point acyclic scenario, we characterize all the edges that
are contained in some CF and propose an efficient algorithm to classify them.
We also show that all edges on some $s$-$t$ path in an acyclic point-to-point
network are contained in some 2-CF. We further study some other properties of
CFs of point-to-point networks, and a simple relationship with CFs in
multicast networks.
On the other hand, some computational hardness results relating to capacity
factors are obtained. We prove that deciding whether a cyclic network has a
capacity factor of size not less than a given number is NP-complete, and that
the time complexity of calculating the capacity rank is lower bounded by that
of solving the maximum flow. Besides that, we propose the analogous definition
of CFs on vertices and show that it captures edge capacity factors as a
special case.
|
1201.0469
|
Computing Critical $k$-tuples in Power Networks
|
cs.CE
|
In this paper the problem of finding the sparsest (i.e., minimum cardinality)
critical $k$-tuple including one arbitrarily specified measurement is
considered. The solution to this problem can be used to identify weak points in
the measurement set, or aid the placement of new meters. The critical $k$-tuple
problem is a combinatorial generalization of the critical measurement
calculation problem. Using topological network observability results, this
paper proposes an efficient and accurate approximate solution procedure for the
considered problem based on solving a minimum-cut (Min-Cut) problem and
enumerating all its optimal solutions. It is also shown that the sparsest
critical $k$-tuple problem can be formulated as a mixed integer linear
programming (MILP) problem. This MILP problem can be solved exactly using
available solvers such as CPLEX and Gurobi. A detailed numerical study is
presented to evaluate the efficiency and the accuracy of the proposed Min-Cut
and MILP calculations.
|
1201.0478
|
Technical Note: Exploring \Sigma^P_2 / \Pi^P_2-hardness for
Argumentation Problems with fixed distance to tractable classes
|
cs.AI cs.CC
|
We study the complexity of reasoning in abstract argumentation frameworks
close to graph classes that allow for efficient reasoning methods, i.e.\ to one
of the classes of acyclic, noeven, bipartite and symmetric AFs. In this work we
show that certain reasoning problems on the second level of the polynomial
hierarchy still maintain their full complexity when restricted to instances of
fixed distance to one of the above graph classes.
|
1201.0490
|
Scikit-learn: Machine Learning in Python
|
cs.LG cs.MS
|
Scikit-learn is a Python module integrating a wide range of state-of-the-art
machine learning algorithms for medium-scale supervised and unsupervised
problems. This package focuses on bringing machine learning to non-specialists
using a general-purpose high-level language. Emphasis is put on ease of use,
performance, documentation, and API consistency. It has minimal dependencies
and is distributed under the simplified BSD license, encouraging its use in
both academic and commercial settings. Source code, binaries, and documentation
can be downloaded from http://scikit-learn.org.
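A minimal usage sketch of the shared estimator API the abstract emphasizes (dataset and model choice are illustrative; the interface shown is the current scikit-learn one, which post-dates this abstract):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# All scikit-learn estimators share the same fit / predict / score interface.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # held-out accuracy
```

Swapping `RandomForestClassifier` for any other estimator leaves the rest of the script unchanged, which is the API-consistency point made above.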
|
1201.0533
|
Tightened Exponential Bounds for Discrete Time, Conditionally Symmetric
Martingales with Bounded Jumps
|
math.PR cs.IT math.IT
|
This letter derives some new exponential bounds for discrete time, real
valued, conditionally symmetric martingales with bounded jumps. The new bounds
are extended to conditionally symmetric sub-/supermartingales, and they are
compared to some existing bounds.
|
1201.0552
|
Reliability Analysis of Electric Power Systems Using an Object-oriented
Hybrid Modeling Approach
|
cs.SY
|
The ongoing evolution of the electric power systems brings about the need to
cope with increasingly complex interactions of technical components and
relevant actors. In order to integrate a more comprehensive spectrum of
different aspects into a probabilistic reliability assessment and to include
time-dependent effects, this paper proposes an object-oriented hybrid approach
combining agent-based modeling techniques with classical methods such as Monte
Carlo simulation. Objects represent both technical components such as
generators and transmission lines and non-technical components such as grid
operators. The approach allows the calculation of conventional reliability
indices and the estimation of blackout frequencies. Furthermore, the influence
of the time needed to remove line overloads on the overall system reliability
can be assessed. The applicability of the approach is demonstrated by
performing simulations on the IEEE Reliability Test System 1996 and on a model
of the Swiss high-voltage grid.
|
1201.0564
|
The RegularGcc Matrix Constraint
|
cs.AI
|
We study propagation of the RegularGcc global constraint. This ensures that
each row of a matrix of decision variables satisfies a Regular constraint, and
each column satisfies a Gcc constraint. On the negative side, we prove that
propagation is NP-hard even under some strong restrictions (e.g. just 3 values,
just 4 states in the automaton, or just 5 columns to the matrix). On the
positive side, we identify two cases where propagation is fixed parameter
tractable. In addition, we show how to improve propagation over a simple
decomposition into separate Regular and Gcc constraints by identifying some
necessary but insufficient conditions for a solution. We enforce these
conditions with some additional weighted row automata. Experimental results
demonstrate the potential of these methods on some standard benchmark problems.
|
1201.0566
|
Learning joint intensity-depth sparse representations
|
cs.CV
|
This paper presents a method for learning overcomplete dictionaries composed
of two modalities that describe a 3D scene: image intensity and scene depth. We
propose a novel Joint Basis Pursuit (JBP) algorithm that finds related sparse
features in two modalities using conic programming and integrate it into a
two-step dictionary learning algorithm. JBP differs from related convex
algorithms because it finds joint sparsity models with different atoms and
different coefficient values for intensity and depth. This is crucial for
recovering generative models where the same sparse underlying causes (3D
features) give rise to different signals (intensity and depth). We give a
theoretical bound for the sparse coefficient recovery error obtained by JBP,
and show experimentally that JBP is far superior to the state of the art Group
Lasso algorithm. When applied to the Middlebury depth-intensity database, our
learning algorithm converges to a set of related features, such as pairs of
depth and intensity edges or image textures and depth slants. Finally, we show
that the learned dictionary and JBP achieve the state of the art depth
inpainting performance on time-of-flight 3D data.
|
1201.0610
|
Random Forests for Metric Learning with Implicit Pairwise Position
Dependence
|
stat.ML cs.LG
|
Metric learning makes it plausible to learn distances for complex
distributions of data from labeled data. However, to date, most metric learning
methods are based on a single Mahalanobis metric, which cannot handle
heterogeneous data well. Those that learn multiple metrics throughout the space
have demonstrated superior accuracy, but at the cost of computational
efficiency. Here, we take a new angle to the metric learning problem and learn
a single metric that is able to implicitly adapt its distance function
throughout the feature space. This metric adaptation is accomplished by using a
random forest-based classifier to underpin the distance function and
incorporate both absolute pairwise position and standard relative position into
the representation. We have implemented and tested our method against state of
the art global and multi-metric methods on a variety of data sets. Overall, the
proposed method outperforms both types of methods in terms of accuracy
(consistently ranked first) and is an order of magnitude faster than state of
the art multi-metric methods (16x faster in the worst case).
|
1201.0638
|
Constrained Randomisation of Weighted Networks
|
physics.data-an cs.SI physics.soc-ph
|
We propose a Markov chain method to efficiently generate 'surrogate networks'
that are random under the constraint of given vertex strengths. With these
strength-preserving surrogates and with edge-weight-preserving surrogates we
investigate the clustering coefficient and the average shortest path length of
functional networks of the human brain as well as of the International Trade
Networks. We demonstrate that surrogate networks can provide additional
information about network-specific characteristics and thus help interpreting
empirical weighted networks.
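One of the simplest surrogate constructions mentioned above, preserving the topology and the multiset of edge weights while destroying weight-topology correlations, can be sketched as follows (an illustrative helper; the strength-preserving Markov chain of the paper is more involved):

```python
import random

def edge_weight_surrogate(edges, rng=random.Random(0)):
    """Edge-weight-preserving surrogate: shuffle the weights among the
    existing edges. The topology and the multiset of edge weights are kept,
    while correlations between weights and network position are destroyed.
    `edges` is a list of (u, v, weight) triples."""
    endpoints = [(u, v) for u, v, _ in edges]
    weights = [w for _, _, w in edges]
    rng.shuffle(weights)
    return [(u, v, w) for (u, v), w in zip(endpoints, weights)]
```

Comparing a weighted clustering coefficient or average shortest path length between the empirical network and an ensemble of such surrogates isolates the effect of the weight arrangement, which is the kind of comparison the abstract describes.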
|
1201.0662
|
Transmission capacity of wireless networks
|
cs.IT cs.NI cs.PF math.IT
|
Transmission capacity (TC) is a performance metric for wireless networks that
measures the spatial intensity of successful transmissions per unit area,
subject to a constraint on the permissible outage probability (where outage
occurs when the SINR at a receiver is below a threshold). This volume gives a
unified treatment of the TC framework that has been developed by the authors
and their collaborators over the past decade. The mathematical framework
underlying the analysis (reviewed in Ch. 2) is stochastic geometry: Poisson
point processes model the locations of interferers, and (stable) shot noise
processes represent the aggregate interference seen at a receiver. Ch. 3
presents TC results (exact, asymptotic, and bounds) on a simple model in order
to illustrate a key strength of the framework: analytical tractability yields
explicit performance dependence upon key model parameters. Ch. 4 presents
enhancements to this basic model --- channel fading, variable link distances,
and multi-hop. Ch. 5 presents four network design case studies well-suited to
TC: i) spectrum management, ii) interference cancellation, iii) signal
threshold transmission scheduling, and iv) power control. Ch. 6 studies the TC
when nodes have multiple antennas, which provides a contrast vs. classical
results that ignore interference.
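The outage constraint at the heart of the TC framework can be illustrated with a small Monte Carlo sketch: interferers form a Poisson point process in a disk, and outage occurs when the SIR at a typical receiver falls below a threshold. All parameters and simplifications below (no fading, no noise, unit link distance) are illustrative assumptions, not values from the monograph.

```python
import math, random

def outage_probability(lam, theta, alpha, trials=2000, radius=20.0,
                       rng=random.Random(1)):
    """Estimate P(SIR < theta) for a receiver at the origin whose transmitter
    sits at unit distance, with interferers drawn from a PPP of intensity lam
    in a disk of the given radius (path-loss exponent alpha,
    interference-limited, no fading)."""
    mean = lam * math.pi * radius ** 2
    outages = 0
    for _ in range(trials):
        # Knuth's method for a Poisson-distributed number of interferers
        L, k, p = math.exp(-mean), 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        n = k - 1
        # uniform points in the disk: distance ~ radius * sqrt(U)
        interference = sum(max(radius * math.sqrt(rng.random()), 1e-9) ** -alpha
                           for _ in range(n))
        signal = 1.0  # unit link distance, unit transmit power
        if interference > 0 and signal / interference < theta:
            outages += 1
    return outages / trials
```

Sweeping `lam` upward raises the outage probability; the transmission capacity is then the largest spatial intensity of successful transmissions for which this probability stays below the permitted target.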
|
1201.0676
|
Knowledge epidemics and population dynamics models for describing idea
diffusion
|
physics.soc-ph cs.SI
|
The diffusion of ideas is often closely connected to the creation and
diffusion of knowledge and to the technological evolution of society. Because
of this, knowledge creation, exchange and its subsequent transformation into
innovations for improved welfare and economic growth is briefly described from
a historical point of view. Next, three approaches are discussed for modeling
the diffusion of ideas in the areas of science and technology, through (i)
deterministic, (ii) stochastic, and (iii) statistical approaches. These are
illustrated through their corresponding population dynamics and epidemic models
relative to the spreading of ideas, knowledge and innovations. The
deterministic dynamical models are considered to be appropriate for analyzing
the evolution of large and small societal, scientific and technological systems
when the influence of fluctuations is insignificant. Stochastic models are
appropriate when the system of interest is small but when the fluctuations
become significant for its evolution. Finally statistical approaches and models
based on the laws and distributions of Lotka, Bradford, Yule, Zipf-Mandelbrot,
and others, provide much useful information for the analysis of the evolution
of systems in which development is closely connected to the process of idea
diffusion.
|
1201.0715
|
Tree-Structure Expectation Propagation for LDPC Decoding over the BEC
|
cs.IT math.IT
|
We present the tree-structure expectation propagation (Tree-EP) algorithm to
decode low-density parity-check (LDPC) codes over discrete memoryless channels
(DMCs). EP generalizes belief propagation (BP) in two ways. First, it can be
used with any exponential family distribution over the cliques in the graph.
Second, it can impose additional constraints on the marginal distributions. We
use this second property to impose pair-wise marginal constraints over pairs of
variables connected to a check node of the LDPC code's Tanner graph. Thanks to
these additional constraints, the Tree-EP marginal estimates for each variable
in the graph are more accurate than those provided by BP. We also reformulate
the Tree-EP algorithm for the binary erasure channel (BEC) as a peeling-type
algorithm (TEP) and show that it has the same computational complexity as BP
but decodes a higher fraction of errors. We describe the
TEP decoding process by a set of differential equations that represents the
expected residual graph evolution as a function of the code parameters. The
solution of these equations is used to predict the TEP decoder performance in
both the asymptotic regime and the finite-length regime over the BEC. While the
asymptotic threshold of the TEP decoder is the same as the BP decoder for
regular and optimized codes, we propose a scaling law (SL) for finite-length
LDPC codes, which accurately approximates the TEP improved performance and
facilitates its optimization.
|
1201.0737
|
Spectrum Sensing in the Presence of Multiple Primary Users
|
cs.IT math.IT
|
We consider multi-antenna cooperative spectrum sensing in cognitive radio
networks, when there may be multiple primary users. A detector based on the
spherical test is analyzed in such a scenario. Based on the moments of the
distributions involved, simple and accurate analytical formulae for the key
performance metrics of the detector are derived. The false alarm and the
detection probabilities, as well as the detection threshold and Receiver
Operation Characteristics are available in closed form. Simulations are
provided to verify the accuracy of the derived results, and to compare with
other detectors in realistic sensing scenarios.
|
1201.0745
|
Communities and bottlenecks: Trees and treelike networks have high
modularity
|
physics.soc-ph cs.SI physics.data-an
|
Much effort has gone into understanding the modular nature of complex
networks. Communities, also known as clusters or modules, are typically
considered to be densely interconnected groups of nodes that are only sparsely
connected to other groups in the network. Discovering high quality communities
is a difficult and important problem in a number of areas. The most popular
approach is the objective function known as modularity, used both to discover
communities and to measure their strength. To understand the modular structure
of networks it is then crucial to know how such functions evaluate different
topologies, what features they account for, and what implicit assumptions they
may make. We show that trees and treelike networks can have unexpectedly and
often arbitrarily high values of modularity. This is surprising since trees are
maximally sparse connected graphs and are not typically considered to possess
modular structure, yet the nonlocal null model used by modularity assigns low
probabilities, and thus high significance, to the densities of these sparse
tree communities. We further study the practical performance of popular methods
on model trees and on a genealogical data set and find that the discovered
communities also have very high modularity, often approaching its maximum
value. Statistical tests reveal the communities in trees to be significant, in
contrast with known results for partitions of sparse, random graphs.
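The claim about trees can be checked directly with a short sketch (a hypothetical helper, not code from the paper): Newman modularity Q = sum_c (e_c/m - (d_c/2m)^2) evaluated on a path graph cut into contiguous blocks is already close to its maximum value.

```python
def modularity(edges, communities):
    """Newman modularity Q = sum_c (e_c/m - (d_c/2m)^2) of a hard partition:
    e_c = edges internal to community c, d_c = total degree of c, m = |edges|."""
    m = len(edges)
    Q = 0.0
    for c in communities:
        cset = set(c)
        e_c = sum(1 for u, v in edges if u in cset and v in cset)  # internal edges
        d_c = sum((u in cset) + (v in cset) for u, v in edges)     # total degree
        Q += e_c / m - (d_c / (2 * m)) ** 2
    return Q

# A path (a tree, maximally sparse) on 400 nodes, cut into 20 contiguous blocks:
n, k = 400, 20
edges = [(i, i + 1) for i in range(n - 1)]
blocks = [list(range(i, i + n // k)) for i in range(0, n, n // k)]
print(modularity(edges, blocks))  # about 0.90 -- high modularity despite sparsity
```

Only 19 of the 399 edges are cut, so the internal-edge term is near 1 while the null-model term stays small, reproducing the "high modularity for trees" effect described above.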
|
1201.0782
|
Umgebungserfassungssystem fuer mobile Roboter (environment logging
system for mobile autonomous robots)
|
cs.RO cs.AR
|
This diploma thesis describes the theoretical foundations, the design of the
module, and the final result of the development process in application. For
environment logging with a small mobile indoor robot, an economical
alternative to expensive laser scanners is to be outlined. The structure,
color, or material of the objects in the radius of action, as well as the
ambient brightness and illumination, are to have no influence on the
measurement results.
|
1201.0794
|
Sparse Nonparametric Graphical Models
|
stat.ML cs.LG stat.ME
|
We present some nonparametric methods for graphical modeling. In the discrete
case, where the data are binary or drawn from a finite alphabet, Markov random
fields are already essentially nonparametric, since the cliques can take only a
finite number of values. Continuous data are different. The Gaussian graphical
model is the standard parametric model for continuous data, but it makes
distributional assumptions that are often unrealistic. We discuss two
approaches to building more flexible graphical models. One allows arbitrary
graphs and a nonparametric extension of the Gaussian; the other uses kernel
density estimation and restricts the graphs to trees and forests. Examples of
both methods are presented. We also discuss possible future research directions
for nonparametric graphical modeling.
|
1201.0830
|
Wireless Network-Coded Accumulate-Compute and Forward Two-Way Relaying
|
cs.IT math.IT
|
The design of modulation schemes for the physical layer network-coded two way
wireless relaying scenario is considered. It was observed by Koike-Akino et al.
for the two way relaying scenario, that adaptively changing the network coding
map used at the relay according to the channel conditions greatly reduces the
impact of multiple access interference which occurs at the relay during the MA
Phase and all these network coding maps should satisfy a requirement called
exclusive law. We extend this approach to an Accumulate-Compute and Forward
protocol which employs two phases: Multiple Access (MA) phase consisting of two
channel uses with independent messages in each channel use, and Broadcast (BC)
phase having one channel use. Assuming that the two users transmit points from
the same 4-PSK constellation, every such network coding map that satisfies the
exclusive law can be represented by a Latin Square with side 16, and
conversely, this relationship can be used to get the network coding maps
satisfying the exclusive law. Two methods of obtaining this network coding map
to be used at the relay are discussed. Using the structural properties of the
Latin Squares for a given set of parameters, the problem of finding all the
required maps is reduced to finding a small set of maps. Having obtained all
the Latin Squares, the set of all possible channel realizations is quantized,
depending on which one of the Latin Squares obtained optimizes the performance.
The quantization thus obtained, is shown to be the same as the one obtained in
[7] for the 2-stage bidirectional relaying.
|
1201.0838
|
A Topic Modeling Toolbox Using Belief Propagation
|
cs.LG
|
Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model
for probabilistic topic modeling, which attracts worldwide interests and
touches on many important applications in text mining, computer vision and
computational biology. This paper introduces a topic modeling toolbox (TMBP)
based on the belief propagation (BP) algorithms. TMBP toolbox is implemented by
MEX C++/Matlab/Octave for either Windows 7 or Linux. Compared with existing
topic modeling packages, the novelty of this toolbox lies in the BP algorithms
for learning LDA-based topic models. The current version includes BP algorithms
for latent Dirichlet allocation (LDA), author-topic models (ATM), relational
topic models (RTM), and labeled LDA (LaLDA). This toolbox is an ongoing project
and more BP-based algorithms for various topic models will be added in the near
future. Interested users may also extend BP algorithms for learning more
complicated topic models. The source codes are freely available under the GNU
General Public Licence, Version 1.0 at https://mloss.org/software/view/399/.
|
1201.0856
|
Complexity Classification in Infinite-Domain Constraint Satisfaction
|
cs.CC cs.AI cs.DM cs.LO math.LO
|
A constraint satisfaction problem (CSP) is a computational problem where the
input consists of a finite set of variables and a finite set of constraints,
and where the task is to decide whether there exists a satisfying assignment of
values to the variables. Depending on the type of constraints that we allow in
the input, a CSP might be tractable, or computationally hard. In recent years,
general criteria have been discovered that imply that a CSP is polynomial-time
tractable, or that it is NP-hard. Finite-domain CSPs have become a major common
research focus of graph theory, artificial intelligence, and finite model
theory. It turned out that the key questions for complexity classification of
CSPs are closely linked to central questions in universal algebra.
This thesis studies CSPs where the variables can take values from an infinite
domain. This generalization enhances dramatically the range of computational
problems that can be modeled as a CSP. Many problems from areas that have so
far seen no interaction with constraint satisfaction theory can be formulated
using infinite domains, e.g. problems from temporal and spatial reasoning,
phylogenetic reconstruction, and operations research.
It turns out that the universal-algebraic approach can also be applied to
study large classes of infinite-domain CSPs, yielding elegant complexity
classification results. A new tool in this thesis that becomes relevant
particularly for infinite domains is Ramsey theory. We demonstrate the
feasibility of our approach with two complete complexity classification
results: one on CSPs in temporal reasoning, the other on a generalization of
Schaefer's theorem for propositional logic to logic over graphs. We also study
the limits of complexity classification, and present classes of computational
problems that provably do not exhibit a complexity dichotomy into hard and easy
problems.
|
1201.0876
|
Approximations of the Euclidean distance by chamfer distances
|
cs.IT math.IT
|
Chamfer distances play an important role in the theory of distance
transforms. Though the determination of the exact Euclidean distance transform
is also a well investigated area, the classical chamfering method based upon
"small" neighborhoods still outperforms it e.g. in terms of computation time.
In this paper we determine the best possible maximum relative error of chamfer
distances under various boundary conditions. In each case some best
approximating sequences are explicitly given. Further, because of possible
practical interest, we give all best approximating sequences in case of small
(i.e. 5 by 5 and 7 by 7) neighborhoods.
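The classical chamfering method the abstract refers to can be sketched as a two-pass distance transform over a 3x3 neighborhood with the familiar 3-4 mask (a generic textbook sketch, not the paper's optimized weight sequences):

```python
def chamfer_distance_transform(grid, a=3, b=4):
    """Two-pass chamfer distance transform on a binary grid (nonzero cells are
    features) with the classical 3-4 mask: cost a for axial steps, b for
    diagonal steps. Dividing the result by 3 approximates the Euclidean
    distance to the nearest feature."""
    INF = 10 ** 9
    h, w = len(grid), len(grid[0])
    d = [[0 if grid[y][x] else INF for x in range(w)] for y in range(h)]
    # forward pass: propagate from neighbors above and to the left
    for y in range(h):
        for x in range(w):
            for dy, dx, c in ((0, -1, a), (-1, 0, a), (-1, -1, b), (-1, 1, b)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + c)
    # backward pass: propagate from neighbors below and to the right
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx, c in ((0, 1, a), (1, 0, a), (1, 1, b), (1, -1, b)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + c)
    return d
```

For a feature at the corner of a grid, a cell three axial steps away gets value 9 and a cell three diagonal steps away gets 12, i.e. 12/3 = 4 against the true Euclidean distance sqrt(18) ~ 4.24, which is the kind of relative error the paper optimizes.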
|
1201.0901
|
Two Algorithms for Orthogonal Nonnegative Matrix Factorization with
Application to Clustering
|
math.OC cs.IR
|
Approximate matrix factorization techniques with both nonnegativity and
orthogonality constraints, referred to as orthogonal nonnegative matrix
factorization (ONMF), have been recently introduced and shown to work
remarkably well for clustering tasks such as document classification. In this
paper, we introduce two new methods to solve ONMF. First, we show a mathematical
equivalence between ONMF and a weighted variant of spherical k-means, from
which we derive our first method, a simple EM-like algorithm. This also allows
us to determine when ONMF should be preferred to k-means and spherical k-means.
Our second method is based on an augmented Lagrangian approach. Standard ONMF
algorithms typically enforce nonnegativity for their iterates while trying to
achieve orthogonality at the limit (e.g., using a proper penalization term or a
suitably chosen search direction). Our method works the opposite way:
orthogonality is strictly imposed at each step while nonnegativity is
asymptotically obtained, using a quadratic penalty. Finally, we show that the
two proposed approaches compare favorably with standard ONMF algorithms on
synthetic, text and image data sets.
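The spherical k-means side of the equivalence can be sketched as follows (a plain, unweighted version for illustration; the paper's result involves a weighted variant, and the initialization here is a naive assumption):

```python
import math

def spherical_kmeans(X, k, iters=50):
    """Spherical k-means: points and centroids live on the unit sphere;
    assignment maximizes cosine similarity, and the update re-normalizes the
    mean of each cluster. Initialization: the first k points (a sketch)."""
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0
        return [x / n for x in v]

    X = [normalize(x) for x in X]
    centroids = [list(x) for x in X[:k]]
    labels = [0] * len(X)
    for _ in range(iters):
        labels = [max(range(k),
                      key=lambda j: sum(a * b for a, b in zip(x, centroids[j])))
                  for x in X]
        for j in range(k):
            members = [x for x, l in zip(X, labels) if l == j]
            if members:
                centroids[j] = normalize([sum(col) for col in zip(*members)])
    return labels, centroids
```

In the ONMF reading, the cluster labels play the role of the orthogonal (single nonzero per row) factor and the centroids the role of the other nonnegative factor.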
|
1201.0913
|
Novel Modulation Techniques using Isomers as Messenger Molecules for
Molecular Communication via Diffusion
|
q-bio.QM cs.CE cs.IT math.IT
|
In this paper, we propose novel modulation techniques using isomers as
messenger molecules for nano communication via diffusion. To evaluate
achievable rate performance, we compare the proposed techniques with
concentration-based and molecular-type-based methods. Analytical and numerical
results confirm that the proposed modulation techniques achieve higher data
transmission rates than conventional insulin-based concepts.
|
1201.0925
|
On The Convergence of Gradient Descent for Finding the Riemannian Center
of Mass
|
math.DG cs.CV cs.NA math.NA math.OC
|
We study the problem of finding the global Riemannian center of mass of a set
of data points on a Riemannian manifold. Specifically, we investigate the
convergence of constant step-size gradient descent algorithms for solving this
problem. The challenge is that often the underlying cost function is neither
globally differentiable nor convex, and despite this one would like to have
guaranteed convergence to the global minimizer. After some necessary
preparations we state a conjecture which we argue is the best (in a sense
described) convergence condition one can hope for. The conjecture specifies
conditions on the spread of the data points, step-size range, and the location
of the initial condition (i.e., the region of convergence) of the algorithm.
These conditions depend on the topology and the curvature of the manifold and
can be conveniently described in terms of the injectivity radius and the
sectional curvatures of the manifold. For manifolds of constant nonnegative
curvature (e.g., the sphere and the rotation group in $\mathbb{R}^{3}$) we show
that the conjecture holds true (we do this by proving and using a comparison
theorem which seems to be of a different nature from the standard comparison
theorems in Riemannian geometry). For manifolds of arbitrary curvature we prove
convergence results which are weaker than the conjectured one (but still
superior over the available results). We also briefly study the effect of the
configuration of the data points on the speed of convergence.
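On the unit sphere, one of the constant-curvature examples named in the abstract, the constant step-size gradient descent can be written out with explicit exp/log maps. This is a sketch only: the step size, iteration count, and sample points below are arbitrary illustrative choices, not the spread/step/initialization conditions conjectured in the paper:

```python
import math

def log_map(p, q):
    """Riemannian log of q at p on the unit sphere: the tangent vector at p
    pointing toward q whose length is the geodesic distance."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(dot)
    if theta < 1e-12:
        return [0.0, 0.0, 0.0]
    u = [q[i] - dot * p[i] for i in range(3)]
    un = math.sqrt(sum(c * c for c in u))
    return [theta * c / un for c in u]

def exp_map(p, v):
    """Riemannian exponential at p: follow the geodesic in direction v."""
    t = math.sqrt(sum(c * c for c in v))
    if t < 1e-12:
        return p[:]
    return [math.cos(t) * p[i] + math.sin(t) * v[i] / t for i in range(3)]

def riemannian_mean(points, step=0.5, iters=300):
    """Constant step-size gradient descent for the Riemannian center of mass:
    the gradient of 0.5 * mean of d(x, p_i)^2 at x is -mean of log_x(p_i)."""
    x = points[0][:]  # start at a data point (illustrative choice)
    for _ in range(iters):
        g = [0.0, 0.0, 0.0]
        for p in points:
            l = log_map(x, p)
            for i in range(3):
                g[i] += l[i] / len(points)
        x = exp_map(x, [step * c for c in g])
    return x
```

For the three coordinate axes as data points, symmetry forces the center of mass to (1,1,1)/sqrt(3), which the iteration recovers.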
|
1201.0942
|
Competitive Comparison of Optimal Designs of Experiments for
Sampling-based Sensitivity Analysis
|
cs.CE cs.NA stat.ME
|
Numerical models of real-world structures are ever more precise, more complex and, consequently, more time-consuming. Despite the growth in computational effort, exploring model behaviour remains a complex task. Sensitivity analysis is a basic tool for investigating the sensitivity of the model to its inputs. One widely used strategy to assess the sensitivity is based on a finite set of simulations for given sets of input
parameters, i.e. points in the design space. An estimate of the sensitivity can then be obtained by computing correlations between the input parameters and the chosen response of the model. The accuracy of the sensitivity prediction depends on the choice of design points, called the design of experiments. The aim of this paper is to review and compare available criteria
determining the quality of the design of experiments suitable for
sampling-based sensitivity analysis.
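The correlation-based sensitivity estimate the abstract describes is easy to sketch. The toy model, plain Monte Carlo design, and sample size below are made-up stand-ins; the paper's point is precisely that more refined designs of experiments improve such estimates:

```python
import random

def pearson(a, b):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

random.seed(0)
# design of experiments: n points in a 2-D design space (plain Monte Carlo here)
n = 200
x1 = [random.uniform(0, 1) for _ in range(n)]
x2 = [random.uniform(0, 1) for _ in range(n)]
y = [3 * a + 0.1 * b for a, b in zip(x1, x2)]  # toy model: x1 dominates

r1, r2 = pearson(x1, y), pearson(x2, y)  # correlation-based sensitivities
```

The estimate correctly ranks `x1` as the influential input (`r1` close to 1, `r2` small).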
|
1201.0946
|
Cops and Invisible Robbers: the Cost of Drunkenness
|
cs.DM cs.GT cs.RO math.CO math.PR
|
We examine a version of the Cops and Robber (CR) game in which the robber is
invisible, i.e., the cops do not know his location until they capture him.
Apparently this game (CiR) has received little attention in the CR literature.
We examine two variants: in the first the robber is adversarial (he actively
tries to avoid capture); in the second he is drunk (he performs a random walk).
Our goal in this paper is to study the invisible Cost of Drunkenness (iCOD),
which is defined as the ratio ct_i(G)/dct_i(G), with ct_i(G) and dct_i(G) being
the expected capture times in the adversarial and drunk CiR variants,
respectively. We show that these capture times are well defined, using game
theory for the adversarial case and partially observable Markov decision
processes (POMDP) for the drunk case. We give exact asymptotic values of iCOD
for several special graph families such as $d$-regular trees, give some bounds
for grids, and provide general upper and lower bounds for general classes of
graphs. We also give an infinite family of graphs showing that iCOD can be
arbitrarily close to any value in $[2,\infty)$. Finally, we briefly examine one
more CiR variant, in which the robber is invisible and "infinitely fast"; we
argue that this variant is significantly different from the Graph Search game,
despite several similarities between the two games.
|
1201.0959
|
Constrained variable clustering and the best basis problem in functional
data analysis
|
stat.ML cs.LG
|
Functional data analysis involves data described by regular functions rather
than by a finite number of real valued variables. While some robust data
analysis methods can be applied directly to the very high dimensional vectors
obtained from a fine grid sampling of functional data, all methods benefit from
a prior simplification of the functions that reduces the redundancy induced by
the regularity. In this paper we propose to use a clustering approach that
targets variables rather than individuals to design a piecewise constant
representation of a set of functions. The contiguity constraint induced by the
functional nature of the variables allows a polynomial complexity algorithm to
give the optimal solution.
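The polynomial-time optimal solution under the contiguity constraint is a classic dynamic program. As a hedge: the sketch below minimizes within-segment squared error around segment means, which is one natural criterion for a piecewise-constant representation; the paper's exact objective may differ:

```python
def best_piecewise_constant(x, k):
    """Split the sequence x into k contiguous segments, each approximated by
    its mean, minimizing total squared error -- an O(k * n^2) dynamic program."""
    n = len(x)
    # prefix sums for O(1) segment-cost queries
    s = [0.0] * (n + 1)
    s2 = [0.0] * (n + 1)
    for i, v in enumerate(x):
        s[i + 1] = s[i] + v
        s2[i + 1] = s2[i] + v * v

    def cost(i, j):  # squared error of segment x[i:j] around its mean
        m = j - i
        mu = (s[j] - s[i]) / m
        return (s2[j] - s2[i]) - m * mu * mu

    INF = float("inf")
    # dp[c][j]: best cost of covering x[:j] with c contiguous segments
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                cand = dp[c - 1][i] + cost(i, j)
                if cand < dp[c][j]:
                    dp[c][j], cut[c][j] = cand, i
    # recover segment boundaries by walking the cut table backwards
    bounds, j = [], n
    for c in range(k, 0, -1):
        i = cut[c][j]
        bounds.append((i, j))
        j = i
    return dp[k][n], bounds[::-1]
```

For instance, `best_piecewise_constant([1.0, 1, 1, 5, 5, 5], 2)` finds the zero-error split `[(0, 3), (3, 6)]`.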
|
1201.0962
|
Power Grid Network Evolutions for Local Energy Trading
|
physics.soc-ph cs.CE cs.SI
|
The shift towards an energy Grid dominated by prosumers (consumers and
producers of energy) will inevitably have repercussions on the distribution
infrastructure. Today it is a hierarchical one designed to deliver energy from
large scale facilities to end-users. Tomorrow it will be a capillary
infrastructure at the Medium and Low Voltage levels that will support local
energy trading among prosumers. In our previous work, we analyzed the Dutch
Power Grid and made an initial analysis of the economic impact topological
properties have on decentralized energy trading. In this paper, we go one step
further and investigate how different networks topologies and growth models
facilitate the emergence of a decentralized market. In particular, we show how
the connectivity plays an important role in improving the properties of
reliability and path-cost reduction. From the economic point of view, we
estimate how the topological evolutions facilitate local electricity
distribution, taking into account the main cost ingredient required for
increasing network connectivity, i.e., the price of cabling.
|
1201.0963
|
Clustering Dynamic Web Usage Data
|
stat.ML cs.LG
|
Most classification methods are based on the assumption that data conforms to
a stationary distribution. The machine learning domain currently suffers from a
lack of classification techniques that are able to detect the occurrence of a
change in the underlying data distribution. Ignoring possible changes in the
underlying concept, also known as concept drift, may degrade the performance of
the classification model. Often these changes make the model inconsistent and
regular updates become necessary. Taking the temporal dimension into account
during the analysis of Web usage data is a necessity, since the way a site is
visited may indeed evolve due to modifications in the structure and content of
the site, or even due to changes in the behavior of certain user groups. One
solution to this problem, proposed in this article, is to update models using
summaries obtained by means of an evolutionary approach based on an intelligent
clustering approach. We carry out various clustering strategies that are
applied on time sub-periods. To validate our approach we apply two external
evaluation criteria which compare different partitions from the same data set.
Our experiments show that the proposed approach is effective at detecting the occurrence of changes.
|
1201.0979
|
Sciduction: Combining Induction, Deduction, and Structure for
Verification and Synthesis
|
cs.LO cs.AI cs.PL
|
Even with impressive advances in automated formal methods, certain problems
in system verification and synthesis remain challenging. Examples include the
verification of quantitative properties of software involving constraints on
timing and energy consumption, and the automatic synthesis of systems from
specifications. The major challenges include environment modeling,
incompleteness in specifications, and the complexity of underlying decision
problems.
This position paper proposes sciduction, an approach to tackle these
challenges by integrating inductive inference, deductive reasoning, and
structure hypotheses. Deductive reasoning, which leads from general rules or
concepts to conclusions about specific problem instances, includes techniques
such as logical inference and constraint solving. Inductive inference, which
generalizes from specific instances to yield a concept, includes algorithmic
learning from examples. Structure hypotheses are used to define the class of
artifacts, such as invariants or program fragments, generated during
verification or synthesis. Sciduction constrains inductive and deductive
reasoning using structure hypotheses, and actively combines inductive and
deductive reasoning: for instance, deductive techniques generate examples for
learning, and inductive reasoning is used to guide the deductive engines.
We illustrate this approach with three applications: (i) timing analysis of
software; (ii) synthesis of loop-free programs, and (iii) controller synthesis
for hybrid systems. Some future applications are also discussed.
|
1201.1039
|
Impartial games emulating one-dimensional cellular automata and
undecidability
|
math.CO cs.IT math.IT nlin.CG
|
We study two-player \emph{take-away} games whose outcomes emulate two-state
one-dimensional cellular automata, such as Wolfram's rules 60 and 110. Given an
initial string consisting of a central data pattern and periodic left and right
patterns, the rule 110 cellular automaton was recently proved Turing-complete
by Matthew Cook. Hence, many questions regarding its behavior are
algorithmically undecidable. We show that similar questions are undecidable for
our \emph{rule 110} game.
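For reference, a step of a two-state, radius-1 cellular automaton such as Wolfram's rule 110 (the rule whose Turing-completeness drives the undecidability results above) can be computed directly from the rule number's bits. The cyclic boundary below is a simplifying assumption; the paper works with periodic left/right patterns around a central data pattern:

```python
def ca_step(cells, rule):
    """One synchronous update of an elementary (two-state, radius-1)
    cellular automaton on a cyclic row of 0/1 cells. Bit b of `rule`
    gives the new state for the neighborhood whose binary value is b,
    so rule=110 encodes Wolfram's rule 110."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]
```

E.g. a single live cell under rule 110 grows one cell to the left: `ca_step([0, 0, 0, 1, 0, 0], 110)` gives `[0, 0, 1, 1, 0, 0]`.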
|
1201.1062
|
Network Coding Capacity Regions via Entropy Functions
|
cs.IT math.IT
|
In this paper, we use entropy functions to characterise the set of
rate-capacity tuples achievable with either zero decoding error, or vanishing
decoding error, for general network coding problems. We show that when sources
are colocated, the outer bound obtained by Yeung, A First Course in Information
Theory, Section 15.5 (2002) is tight and the sets of zero-error achievable and
vanishing-error achievable rate-capacity tuples are the same. We also
characterise the set of zero-error and vanishing-error achievable rate capacity
tuples for network coding problems subject to linear encoding constraints,
routing constraints (where some or all nodes can only perform routing) and
secrecy constraints. Finally, we show that even for apparently simple networks,
design of optimal codes may be difficult. In particular, we prove that for the
incremental multicast problem and for the single-source secure network coding
problem, characterisation of the achievable set is very hard and linear network
codes may not be optimal.
|
1201.1065
|
A Novel Error Correcting System Based on Product Codes for Future
Magnetic Recording Channels
|
cs.IT math.IT
|
We propose a novel construction of product codes for high-density magnetic
recording based on binary low-density parity check (LDPC) codes and binary
image of Reed-Solomon (RS) codes. Moreover, two novel algorithms are proposed
to decode the codes in the presence of both AWGN errors and scattered hard
errors (SHEs). Simulation results show that at a bit error rate (bER) of
approximately 10^-8, our method allows improving the error performance by
approximately 1.9 dB compared with that of a hard-decision decoder of RS codes of the same length and code rate. For the mixed error channel including random noise and SHEs, the signal-to-noise ratio (SNR) is set at 5 dB and 150 to 400
SHEs are randomly generated. The bit error performance of the proposed product
code shows a significant improvement over that of equivalent random LDPC codes
or serial concatenation of LDPC and RS codes.
|
1201.1085
|
Ontologies and tag-statistics
|
physics.soc-ph cs.IR stat.AP
|
Due to the increasing popularity of collaborative tagging systems, the
research on tagged networks, hypergraphs, ontologies, folksonomies and other
related concepts is becoming an important interdisciplinary topic of great current relevance for practical applications. In most collaborative
tagging systems the tagging by the users is completely "flat", while in some
cases they are allowed to define a shallow hierarchy for their own tags.
However, usually no overall hierarchical organisation of the tags is given, and
one of the interesting challenges of this area is to provide an algorithm
generating the ontology of the tags from the available data. In contrast, there
are also other types of tagged networks available for research, where the tags
are already organised into a directed acyclic graph (DAG), encapsulating the
"is a sub-category of" type of hierarchy between each other. In this paper we
study how this DAG affects the statistical distribution of tags on the nodes
marked by the tags in various real networks. We analyse the relation between
the tag-frequency and the position of the tag in the DAG in two large
sub-networks of the English Wikipedia and a protein-protein interaction
network. We also study the tag co-occurrence statistics by introducing a 2d
tag-distance distribution preserving both the difference in the levels and the
absolute distance in the DAG for the co-occurring pairs of tags. Our most
interesting finding is that the local relevance of tags in the DAG (i.e.,
their rank or significance as characterised by, e.g., the length of the
branches starting from them) is much more important than their global distance
from the root. Furthermore, we also introduce a simple tagging model based on
random walks on the DAG, capable of reproducing the main statistical features
of tag co-occurrence.
|
1201.1096
|
Gibbs-Shannon Entropy and Related Measures: Tsallis Entropy
|
cs.IT math.IT
|
In this research paper, it is proved that an approximation to Gibbs-Shannon
entropy measure naturally leads to Tsallis entropy for the real parameter q = 2. Several interesting measures based on the input as well as output of a
discrete memoryless channel are provided and some of the properties of those
measures are discussed. It is expected that these results will be of utility in
Information Theoretic research.
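The q = 2 case can be checked numerically: the Tsallis entropy S_q = (1 - sum p_i^q)/(q - 1) collapses to 1 - sum p_i^2, which is what the Gibbs-Shannon measure -sum p_i ln p_i yields under the first-order approximation ln p ~ p - 1 (presumably the approximation route the abstract refers to). The example distribution is arbitrary:

```python
import math

def shannon_entropy(p):
    """Gibbs-Shannon entropy H = -sum p_i ln p_i (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1)."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
# For q = 2 the Tsallis entropy equals 1 - sum p_i^2 (the Gini-Simpson index),
# i.e. -sum p_i * (p_i - 1), the ln p ~ p - 1 approximation of Shannon entropy.
s2 = tsallis_entropy(p, 2)
approx_shannon = -sum(pi * (pi - 1) for pi in p)
```

Here `s2` and `approx_shannon` coincide at 0.625, while the exact Shannon entropy of the same distribution is ln 2 + 0.5 ln 2 ~ 1.04.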
|
1201.1170
|
Data Rate Limitations for Stabilization of Uncertain Systems over Lossy
Channels
|
cs.SY cs.IT math.IT math.OC
|
This paper considers data rate limitations for mean square stabilization of
uncertain discrete-time linear systems via finite data rate and lossy channels.
For a plant having parametric uncertainties, a necessary condition and a
sufficient condition are derived, represented by the data rate, the packet loss
probability, uncertainty bounds on plant parameters, and the unstable
eigenvalues of the plant. The results extend those existing in the area of networked control and, in particular, the condition is tight for the scalar plant case.
|
1201.1175
|
Throughput Optimal Multi-user Scheduling via Hierarchical Modulation
|
cs.NI cs.IT math.IT
|
We investigate the network stability problem when two users are scheduled
simultaneously. The key idea is to simultaneously transmit to more than one user experiencing different channel conditions by employing hierarchical
modulation. For the two-user scheduling problem, we develop a throughput-optimal
algorithm which can stabilize the network whenever this is possible. In
addition, we analytically prove that the proposed algorithm achieves a larger achievable rate region than the conventional Max-Weight algorithm, which employs uniform modulation and transmits to a single user. We demonstrate the
efficacy of the algorithm on a realistic simulation environment using the
parameters of High Data Rate protocol in a Code Division Multiple Access
system. Simulation results show that with the proposed algorithm, the network
can carry higher user traffic with lower delays.
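For contrast with the proposed two-user scheme, the conventional single-user Max-Weight rule the abstract compares against is essentially one line: serve the user maximizing queue length times current rate. The queue lengths and rates below are made-up illustrative values:

```python
def max_weight_user(queues, rates):
    """Conventional Max-Weight scheduling: pick the single user i
    maximizing the back-pressure weight queues[i] * rates[i]."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])
```

E.g. with queues `[10, 2]` (packets) and rates `[1.0, 3.0]`, user 0 wins (weight 10 vs 6); with queues `[1, 5]` and rates `[2.0, 1.0]`, user 1 wins.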
|
1201.1192
|
Formalization of semantic network of image constructions in electronic
content
|
cs.CL
|
A formal theory based on a binary operator of directional associative
relation is constructed in the article and an understanding of an associative
normal form of image constructions is introduced. A model of a commutative
semigroup, which provides a presentation of a sentence as three components of
an interrogative linguistic image construction, is considered.
|
1201.1215
|
Triadic motifs and dyadic self-organization in the World Trade Network
|
physics.soc-ph cs.SI physics.data-an q-fin.GN
|
In self-organizing networks, topology and dynamics coevolve in a continuous
feedback, without exogenous driving. The World Trade Network (WTN) is one of
the few empirically well documented examples of self-organizing networks: its
topology strongly depends on the GDP of world countries, which in turn depends
on the structure of trade. Therefore, understanding which key topological properties of the WTN deviate from randomness provides direct empirical information about the structural effects of self-organization. Here,
using an analytical pattern-detection method that we have recently proposed, we
study the occurrence of triadic "motifs" (subgraphs of three vertices) in the
WTN between 1950 and 2000. We find that, unlike other properties, motifs are
not explained by only the in- and out-degree sequences. By contrast, they are
completely explained if also the numbers of reciprocal edges are taken into
account. This implies that the self-organization process underlying the
evolution of the WTN is almost completely encoded into the dyadic structure,
which strongly depends on reciprocity.
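The dyadic statistic the abstract singles out, the number of reciprocated edges, has a standard one-line estimator (shown here on a made-up edge list, not WTN data):

```python
def reciprocity(edges):
    """Fraction of directed edges (u, v) whose reverse (v, u) is also present."""
    es = set(edges)
    return sum((v, u) in es for (u, v) in es) / len(es)
```

E.g. `reciprocity([(1, 2), (2, 1), (1, 3)])` is 2/3: two of the three edges belong to a reciprocated dyad.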
|