| id (string, length 9-16) | title (string, length 4-278) | categories (string, length 5-104) | abstract (string, length 6-4.09k) |
|---|---|---|---|
1103.2593
|
Unfolding communities in large complex networks: Combining defensive and
offensive label propagation for core extraction
|
physics.soc-ph cs.SI physics.data-an
|
Label propagation has proven to be a fast method for detecting communities in
large complex networks. Recent developments have also improved the accuracy of
the approach; however, a general algorithm is still an open issue. We present
an advanced label propagation algorithm that combines two unique strategies of
community formation, namely, defensive preservation and offensive expansion of
communities. The two strategies are combined in a hierarchical manner, to
recursively extract the core of the network, and to identify whisker
communities. The algorithm was evaluated on two classes of benchmark networks
with planted partition and on almost 25 real-world networks ranging from
networks with tens of nodes to networks with several tens of millions of edges.
It is shown to be comparable to the current state-of-the-art community
detection algorithms and superior to all previous label propagation algorithms,
with comparable time complexity. In particular, analysis on real-world networks
has proven that the algorithm has almost linear complexity,
$\mathcal{O}(m^{1.19})$, and scales even better than the basic label propagation
algorithm ($m$ is the number of edges in the network).
|
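The basic method both of these label propagation papers build on fits in a few lines. The sketch below is the plain variant, not the defensive/offensive combination the abstracts describe, and the toy graph and parameters are made up for illustration:

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    """Plain label propagation: every node repeatedly adopts the most
    frequent label among its neighbors until no label changes."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}              # start from unique labels
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)                    # asynchronous, random order
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            choice = rng.choice(sorted(l for l, c in counts.items() if c == top))
            if choice != labels[v]:
                labels[v], changed = choice, True
        if not changed:
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7), (3, 4)]
adj = {v: set() for e in edges for v in e}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
labels = label_propagation(adj)
```

On this toy graph each clique ends up internally uniform; whether the two cliques also merge across the bridge depends on the random update order, which is exactly the instability the papers' refinements target.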
1103.2596
|
Unfolding network communities by combining defensive and offensive label
propagation
|
physics.soc-ph cs.SI physics.data-an
|
Label propagation has proven to be a fast method for detecting communities in
complex networks. Recent work has also improved the accuracy and stability of
the basic algorithm; however, a general approach is still an open issue. We
propose different label propagation algorithms that convey two unique
strategies of community formation, namely, defensive preservation and offensive
expansion of communities. Furthermore, the strategies are combined in an
advanced label propagation algorithm that retains the advantages of both
approaches; and are enhanced with hierarchical community extraction, prominent
for the use on larger networks. The proposed algorithms were empirically
evaluated on different benchmark networks with planted partition and on over
30 real-world networks of various types and sizes. The results confirm the
adequacy of the propositions and give promising grounds for future analysis of
(large) complex networks. Nevertheless, the main contribution of this work is
in showing that different types of networks (with different topological
properties) favor different strategies of community formation.
|
1103.2607
|
LLR Approximation for Wireless Channels Based on Taylor Series and Its
Application to BICM with LDPC Codes
|
cs.IT math.IT
|
A new approach for the approximation of the channel log-likelihood ratio
(LLR) for wireless channels based on Taylor series is proposed. The
approximation is applied to the uncorrelated flat Rayleigh fading channel with
unknown channel state information at the receiver. It is shown that the
proposed approximation greatly simplifies the calculation of channel LLRs, and
yet provides results almost identical to those based on the exact calculation
of channel LLRs. The results are obtained in the context of bit-interleaved
coded modulation (BICM) schemes with low-density parity-check (LDPC) codes, and
include threshold calculations and error rate performance of finite-length
codes. Compared to the existing approximations, the proposed method is either
significantly less complex, or considerably more accurate.
|
1103.2612
|
Synthesis for Constrained Nonlinear Systems using Hybridization and
Robust Controllers on Simplices
|
cs.SY math.OC
|
In this paper, we propose an approach to controller synthesis for a class of
constrained nonlinear systems. It is based on the use of a hybridization, that
is a hybrid abstraction of the nonlinear dynamics. This abstraction is defined
on a triangulation of the state-space where on each simplex of the
triangulation, the nonlinear dynamics is conservatively approximated by an
affine system subject to disturbances. Except for the disturbances, this
hybridization can be seen as a piecewise affine hybrid system on simplices for
which appealing control synthesis techniques have been developed in the past
decade. We extend these techniques to handle systems subject to disturbances by
synthesizing and coordinating local robust affine controllers defined on the
simplices of the triangulation. We show that the resulting hybrid controller
can be used to control successfully the original constrained nonlinear system.
Our approach, though conservative, can be fully automated and is
computationally tractable. To show its effectiveness in practical applications,
we apply our method to control a pendulum mounted on a cart.
|
1103.2635
|
Accelerating Nearest Neighbor Search on Manycore Systems
|
cs.DB cs.CG cs.DC cs.DS cs.IR
|
We develop methods for accelerating metric similarity search that are
effective on modern hardware. Our algorithms factor into easily parallelizable
components, making them simple to deploy and efficient on multicore CPUs and
GPUs. Despite the simple structure of our algorithms, their search performance
is provably sublinear in the size of the database, with a factor dependent only
on its intrinsic dimensionality. We demonstrate that our methods provide
substantial speedups on a range of datasets and hardware platforms. In
particular, we present results on a 48-core server machine, on graphics
hardware, and on a multicore desktop.
|
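As a baseline for what such methods accelerate, brute-force metric search can be written as a single batched distance computation, which already splits cleanly across cores or GPU threads; the pruning that makes the paper's algorithms provably sublinear is not shown here:

```python
import numpy as np

def knn(queries, db, k):
    """Brute-force k-nearest-neighbor search over a small database.
    Every entry of d2 is independent, so the work parallelizes
    trivially on multicore CPUs and GPUs."""
    d2 = ((queries[:, None, :] - db[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]      # indices of the k closest

db = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
q = np.array([[0.9, 0.1]])
nearest = knn(q, db, 2)
```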
1103.2651
|
Efficient Continual Top-$k$ Keyword Search in Relational Databases
|
cs.DB cs.IR
|
Keyword search in relational databases has been widely studied in recent
years because it requires users neither to master a certain structured
query language nor to know the complex underlying data schemas. Most of
existing methods focus on answering snapshot keyword queries in static
databases. In practice, however, databases are updated frequently, and users
may have long-term interests in specific topics. To deal with such a situation,
it is necessary to build an effective and efficient facility in database systems
to support continual keyword query evaluation.
In this paper, we propose an efficient method for continual keyword query
answering over relational databases. The proposed method consists of two core
algorithms. The first one computes a set of potential top-$k$ results by
evaluating the ranges of the future relevance score for every query result and
creates a light-weight state for each keyword query. The second one uses these
states to maintain the top-$k$ results of keyword queries when the database is
continually growing. Experimental results validate the effectiveness and
efficiency of the proposed method.
|
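The role of the second algorithm, maintaining top-$k$ results as new scored results stream in, can be illustrated with a generic min-heap state. The class and scoring below are hypothetical stand-ins, far simpler than the paper's light-weight state:

```python
import heapq

class TopK:
    """Keep the current top-k (score, result) pairs in a min-heap so a
    growing stream of results can update the answer incrementally,
    without re-running the whole query."""
    def __init__(self, k):
        self.k = k
        self.heap = []                        # smallest retained score on top

    def offer(self, result, score):
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, (score, result))
        elif score > self.heap[0][0]:
            heapq.heapreplace(self.heap, (score, result))

    def results(self):
        return [r for s, r in sorted(self.heap, reverse=True)]

topk = TopK(2)
for doc, score in [("a", 0.3), ("b", 0.9), ("c", 0.5), ("d", 0.7)]:
    topk.offer(doc, score)
```

Each update costs O(log k), which is what makes this kind of state cheap to maintain while the database keeps growing.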
1103.2681
|
A Paradoxical Property of the Monkey Book
|
physics.data-an cond-mat.stat-mech cs.CL cs.IR physics.soc-ph
|
A "monkey book" is a book consisting of a random distribution of letters and
blanks, where a group of letters surrounded by two blanks is defined as a word.
We compare the statistics of the word distribution for a monkey book with the
corresponding distribution for the general class of random books, where the
latter are books for which the words are randomly distributed. It is shown that
the word distribution statistics for the monkey book are different and quite
distinct from those of a typical sampled book or real book. In particular, the monkey
book obeys Heaps' power law to an extraordinarily good approximation, in contrast
to the word distributions for sampled and real books, which deviate from Heaps'
law in a characteristic way. The somewhat counter-intuitive conclusion is that
a "monkey book" obeys Heaps' power law precisely because its word-frequency
distribution is not a smooth power law, contrary to the expectation based on
simple mathematical arguments that if one is a power law, so is the other.
|
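A monkey book and its vocabulary-growth (Heaps) curve are easy to simulate; this sketch uses an arbitrary 8-letter alphabet and blank probability, chosen only for illustration:

```python
import random

def monkey_text(n_chars, alphabet="abcdefgh", p_blank=0.2, seed=1):
    """Generate a 'monkey book': i.i.d. letters and blanks, where each
    maximal run of letters between blanks counts as a word."""
    rng = random.Random(seed)
    chars = [" " if rng.random() < p_blank else rng.choice(alphabet)
             for _ in range(n_chars)]
    return "".join(chars).split()

def heaps_curve(words):
    """Number of distinct words seen after each successive word."""
    seen, curve = set(), []
    for w in words:
        seen.add(w)
        curve.append(len(seen))
    return curve

words = monkey_text(20000)
curve = heaps_curve(words)
```

Plotting `curve` against position on log-log axes would show the near-power-law growth the abstract attributes to monkey books.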
1103.2690
|
Scheduled-PEG construction of LDPC codes for Upper-Layer FEC
|
cs.IT math.IT
|
The Progressive Edge Growth (PEG) algorithm is one of the most widely used
methods for constructing finite-length LDPC codes. In this paper we consider the
PEG algorithm together with a scheduling distribution, which specifies the
order in which edges are established in the graph. The goal is to find a
scheduling distribution that yields "the best" performance in terms of decoding
overhead, a performance metric specific to erasure codes and widely used for
upper-layer forward error correction (UL-FEC). We rigorously formulate this
optimization problem, and we show that it can be addressed by using genetic
optimization algorithms. We also exhibit PEG codes with optimized scheduling
distribution, whose decoding overhead is less than half of the decoding
overhead of their classical-PEG counterparts.
|
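The genetic optimization the authors apply to scheduling distributions follows the standard template. Here is a minimal, generic genetic algorithm over bit strings; the one-max objective (`sum`) is a stand-in fitness, not the decoding-overhead objective from the paper:

```python
import random

def genetic_optimize(fitness, length, pop_size=30, gens=60, seed=0):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, and occasional bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():                            # tournament of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p, q = pick(), pick()
            cut = rng.randrange(1, length)     # one-point crossover
            child = p[:cut] + q[cut:]
            if rng.random() < 0.1:             # mutation
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_optimize(sum, 16)               # stand-in objective: one-max
```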
1103.2691
|
Extended Non-Binary Low-Density Parity-Check Codes over Erasure Channels
|
cs.IT math.IT
|
Based on the extended binary image of non-binary LDPC codes, we propose a
method for generating extra redundant bits, so as to decrease the coding
rate of a mother code. The proposed method allows for using the same decoder,
regardless of how many extra redundant bits have been produced, which
considerably increases the flexibility of the system without significantly
increasing its complexity. Extended codes are also optimized for the binary
erasure channel, by using density evolution methods. Nevertheless, the results
presented in this paper can easily be extrapolated to more general channel
models.
|
1103.2706
|
On stability of continuous-time quantum-filters
|
math.OC cs.SY quant-ph
|
We prove that the fidelity between the quantum state governed by a
continuous-time stochastic master equation driven by a Wiener process and its associated
quantum-filter state is a sub-martingale. This result is a generalization to
non-pure quantum states where fidelity does not coincide in general with a
simple Frobenius inner product. This result implies the stability of such a
filtering process but does not necessarily ensure the asymptotic convergence of
such quantum-filters.
|
1103.2741
|
Memory Retrieval in the B-Matrix Neural Network
|
cs.NE
|
This paper extends the memory retrieval procedure of the B-Matrix
approach [6],[17] to neural network learning. The B-Matrix is a part of the
interconnection matrix generated from the Hebbian neural network, and in memory
retrieval, the B-matrix is clamped with a small fragment of the memory. The
fragment gradually enlarges by means of feedback, until the entire vector is
obtained. In this paper, we propose the use of delta learning to enhance the
retrieval rate of the stored memories.
|
1103.2750
|
Smart Finite State Devices: A Modeling Framework for Demand Response
Technologies
|
cs.SY math.OC
|
We introduce and analyze Markov Decision Process (MDP) machines to model
individual devices which are expected to participate in future demand-response
markets on distribution grids. We differentiate devices into the following four
types: (a) optional loads that can be shed, e.g. light dimming; (b) deferrable
loads that can be delayed, e.g. dishwashers; (c) controllable loads with
inertia, e.g. thermostatically-controlled loads, whose task is to maintain an
auxiliary characteristic (temperature) within pre-defined margins; and (d)
storage devices that can alternate between charging and generating. Our
analysis of the devices seeks to find their optimal price-taking control
strategy under a given stochastic model of the distribution market.
|
1103.2756
|
Sparse Transfer Learning for Interactive Video Search Reranking
|
cs.IR cs.CV cs.MM stat.ML
|
Visual reranking is effective in improving the performance of text-based
video search. However, existing reranking algorithms can only achieve limited
improvement because of the well-known semantic gap between low level visual
features and high level semantic concepts. In this paper, we adopt interactive
video search reranking to bridge the semantic gap by introducing the user's
labeling effort. We propose a novel dimension reduction tool, termed sparse
transfer learning (STL), to effectively and efficiently encode user's labeling
information. STL is particularly designed for interactive video search
reranking. Technically, it a) considers the pair-wise discriminative
information to maximally separate labeled query relevant samples from labeled
query irrelevant ones, b) achieves a sparse representation of the subspace to
encode the user's intention by applying the elastic net penalty, and c) propagates
user's labeling information from labeled samples to unlabeled samples by using
the data distribution knowledge. We conducted extensive experiments on the
TRECVID 2005, 2006 and 2007 benchmark datasets and compared STL with popular
dimension reduction algorithms. We report superior performance by using the
proposed STL based interactive video search reranking.
|
1103.2795
|
Cyber-Physical Attacks in Power Networks: Models, Fundamental
Limitations and Monitor Design
|
math.OC cs.SY
|
Future power networks will be characterized by safe and reliable
functionality against physical malfunctions and cyber attacks. This paper
proposes a unified framework and advanced monitoring procedures to detect and
identify network component malfunctions or measurement corruption caused by an
omniscient adversary. We model a power system under cyber-physical attack as a
linear time-invariant descriptor system with unknown inputs. Our attack model
generalizes the prototypical stealth, (dynamic) false-data injection and replay
attacks. We characterize the fundamental limitations of both static and dynamic
procedures for attack detection and identification. Additionally, we design
provably-correct (dynamic) detection and identification procedures based on
tools from geometric control theory. Finally, we illustrate the effectiveness
of our method through a comparison with existing (static) detection algorithms,
and through a numerical study.
|
1103.2816
|
Universal low-rank matrix recovery from Pauli measurements
|
quant-ph cs.IT math.IT math.ST stat.ML stat.TH
|
We study the problem of reconstructing an unknown matrix M of rank r and
dimension d using O(rd poly log d) Pauli measurements. This has applications in
quantum state tomography, and is a non-commutative analogue of a well-known
problem in compressed sensing: recovering a sparse vector from a few of its
Fourier coefficients.
We show that almost all sets of O(rd log^6 d) Pauli measurements satisfy the
rank-r restricted isometry property (RIP). This implies that M can be recovered
from a fixed ("universal") set of Pauli measurements, using nuclear-norm
minimization (e.g., the matrix Lasso), with nearly-optimal bounds on the error.
A similar result holds for any class of measurements that use an orthonormal
operator basis whose elements have small operator norm. Our proof uses Dudley's
inequality for Gaussian processes, together with bounds on covering numbers
obtained via entropy duality.
|
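Nuclear-norm minimization such as the matrix Lasso is typically solved with singular value soft-thresholding as its proximal step. A minimal sketch of that step, not specific to Pauli measurements, under a made-up threshold:

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the proximal operator of the
    nuclear norm, the core step of matrix-Lasso style solvers."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Thresholding shrinks singular values 3, 1, 0.2 to 2.5, 0.5, 0,
# dropping the rank from 3 to 2.
X = svt(np.diag([3.0, 1.0, 0.2]), 0.5)
```

Iterating this step on residuals of the measurement operator is what drives the recovered matrix toward low rank.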
1103.2832
|
Autotagging music with conditional restricted Boltzmann machines
|
cs.LG cs.IR cs.SD
|
This paper describes two applications of conditional restricted Boltzmann
machines (CRBMs) to the task of autotagging music. The first consists of
training a CRBM to predict tags that a user would apply to a clip of a song
based on tags already applied by other users. By learning the relationships
between tags, this model is able to pre-process training data to significantly
improve the performance of support vector machine (SVM) autotagging. The
second is the use of a discriminative RBM, a type of CRBM, to autotag music. By
simultaneously exploiting the relationships among tags and between tags and
audio-based features, this model is able to significantly outperform SVMs,
logistic regression, and multi-layer perceptrons. In order to be applied to
this problem, the discriminative RBM was generalized to the multi-label setting
and four different learning algorithms for it were evaluated, the first such
in-depth analysis of which we are aware.
|
1103.2837
|
Reweighted LP Decoding for LDPC Codes
|
cs.IT math.IT
|
We introduce a novel algorithm for decoding binary linear codes by linear
programming. We build on the LP decoding algorithm of Feldman et al. and
introduce a post-processing step that solves a second linear program that
reweights the objective function based on the outcome of the original LP
decoder output. Our analysis shows that for some LDPC ensembles we can improve
the provable threshold guarantees compared to standard LP decoding. We also
show significant empirical performance gains for the reweighted LP decoding
algorithm with very small additional computational complexity.
|
1103.2882
|
On optimum strategies for minimizing the exponential moments of a given
cost function
|
cs.IT cond-mat.stat-mech math.IT
|
We consider a general problem of finding a strategy that minimizes the
exponential moment of a given cost function, with an emphasis on its relation
to the more common criterion of minimizing the expectation of the first
moment of the same cost function. In particular, our main result is a theorem
that gives simple sufficient conditions for a strategy to be optimum in the
exponential moment sense. This theorem may be useful in various situations, and
application examples are given. We also examine the asymptotic regime and
investigate universal asymptotically optimum strategies in light of the
aforementioned sufficient conditions, as well as phenomena of irregularities,
or phase transitions, in the behavior of the asymptotic performance, which can
be viewed and understood from a statistical-mechanical perspective. Finally, we
propose a new route for deriving lower bounds on exponential moments of certain
cost functions (like the square error in estimation problems) on the basis of
well known lower bounds on their expectations.
|
1103.2886
|
Predicting User Preferences
|
cs.IR
|
The many metrics employed for the evaluation of search engine results have
not themselves been conclusively evaluated. We propose a new measure for a
metric's ability to identify user preference of result lists. Using this
measure, we evaluate the metrics Discounted Cumulated Gain, Mean Average
Precision and classical precision, finding that the former performs best. We
also show that considering more results for a given query can impair rather
than improve a metric's ability to predict user preferences.
|
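Of the metrics compared, Discounted Cumulated Gain is simple to state. One common formulation (the Järvelin-Kekäläinen form with base-2 logarithms, so ranks 1 and 2 are undiscounted) is:

```python
import math

def dcg(relevances, cutoff=None):
    """Discounted Cumulated Gain: graded relevances summed with the
    gain at rank r divided by max(1, log2(r))."""
    rels = relevances if cutoff is None else relevances[:cutoff]
    return sum(rel / max(1.0, math.log2(rank))
               for rank, rel in enumerate(rels, start=1))
```

Comparing `dcg` values of two result lists for the same query is the kind of judgment whose agreement with user preference the abstract evaluates.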
1103.2897
|
Constructing test instances for Basis Pursuit Denoising
|
cs.IT math.IT
|
The number of available algorithms for the so-called Basis Pursuit Denoising
problem (or the related LASSO-problem) is large and keeps growing. Similarly,
the number of experiments to evaluate and compare these algorithms on different
instances is growing.
In this note, we present a method to produce instances with exact solutions,
based on a simple observation related to the so-called source
condition from sparse regularization.
|
1103.2903
|
A new ANEW: Evaluation of a word list for sentiment analysis in
microblogs
|
cs.IR cs.CL
|
Sentiment analysis of microblogs such as Twitter has recently gained a fair
amount of attention. One of the simplest sentiment analysis approaches compares
the words of a posting against a labeled word list, where each word has been
scored for valence -- a 'sentiment lexicon' or 'affective word list'. There
exist several affective word lists, e.g., ANEW (Affective Norms for English
Words) developed before the advent of microblogging and sentiment analysis. I
wanted to examine how well ANEW and other word lists perform for the detection
of sentiment strength in microblog posts in comparison with a new word list
specifically constructed for microblogs. I used manually labeled postings from
Twitter scored for sentiment. Using simple word matching, I show that the new
word list may perform better than ANEW, though not as well as the more
elaborate approach found in SentiStrength.
|
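The simple word-matching approach the abstract describes amounts to averaging lexicon valences over the words of a posting. The toy lexicon scores below are made up for illustration, not taken from ANEW:

```python
def sentiment_score(text, lexicon):
    """Simple word matching: average the valence of the posting's words
    that appear in the labeled word list; unmatched postings score 0."""
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

# Hypothetical ANEW-style valences (1 = negative, 9 = positive).
lexicon = {"love": 8.7, "happy": 8.2, "sad": 2.1, "terrible": 1.9}
score = sentiment_score("I love this happy day", lexicon)
```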
1103.2923
|
Estimation of Saturation of Permanent-Magnet Synchronous Motors Through
an Energy-Based Model
|
math.OC cs.SY physics.ins-det
|
We propose a parametric model of the saturated Permanent-Magnet Synchronous
Motor (PMSM) together with an estimation method of the magnetic parameters. The
model is based on an energy function which simply encompasses the saturation
effects. Injection of fast-varying pulsating voltages and measurements of the
resulting current ripples then make it possible to identify the magnetic
parameters by linear least squares. Experimental results on a surface-mounted
PMSM and an interior-magnet PMSM illustrate the relevance of the approach.
|
1103.2950
|
Fitting Ranked English and Spanish Letter Frequency Distribution in U.S.
and Mexican Presidential Speeches
|
cs.CL
|
The limited range of the abscissa of ranked letter frequency distributions
allows multiple functions to fit the observed distribution reasonably well. In
order to critically compare various functions, we apply statistical model
selection to ten functions, using the texts of U.S. and Mexican presidential
speeches in the last 1-2 centuries. Despite minor switching of the ranking order of
certain letters during the temporal evolution for both datasets, the letter
usage is generally stable. The best fitting function, judged by either
least-square-error or by AIC/BIC model selection, is the Cocho/Beta function.
We also use a novel method to discover clusters of letters by their
observed-over-expected frequency ratios.
|
1103.2960
|
Xampling: Compressed Sensing of Analog Signals
|
cs.IT cs.SY math.IT
|
Xampling generalizes compressed sensing (CS) to reduced-rate sampling of
analog signals. A unified framework is introduced for low rate sampling and
processing of signals lying in a union of subspaces. Xampling consists of two
main blocks: Analog compression that narrows down the input bandwidth prior to
sampling with commercial devices, followed by a nonlinear algorithm that detects
the input subspace prior to conventional signal processing. A variety of analog
CS applications are reviewed within the unified Xampling framework including a
general filter-bank scheme for sparse shift-invariant spaces, periodic
nonuniform sampling and modulated wideband conversion for multiband
communications with unknown carrier frequencies, acquisition techniques for
finite rate of innovation signals with applications to medical and radar
imaging, and random demodulation of sparse harmonic tones. A hardware-oriented
viewpoint is advocated throughout, addressing practical constraints and
exemplifying hardware realizations where relevant. It will appear as a chapter
in a book on "Compressed Sensing: Theory and Applications" edited by Yonina
Eldar and Gitta Kutyniok.
|
1103.3002
|
Floridian high-voltage power-grid network partitioning and cluster
optimization using simulated annealing
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Many partitioning methods may be used to partition a network into smaller
clusters while minimizing the number of cuts needed. However, other
considerations must also be taken into account when a network represents a real
system such as a power grid. In this paper we use a simulated annealing Monte
Carlo (MC) method to optimize initial clusters on the Florida high-voltage
power-grid network that were formed by associating each load with its "closest"
generator. The clusters are optimized to maximize internal connectivity within
the individual clusters and minimize the power deficiency or surplus that
clusters may otherwise have.
|
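A simulated-annealing cluster refinement in the spirit of the abstract can be sketched as below; this toy scores only intra-cluster edges, not the power deficiency/surplus term, and the graph and cooling schedule are made up:

```python
import math
import random

def anneal_partition(adj, assign, steps=4000, t0=2.0, seed=0):
    """Monte Carlo refinement of an initial clustering: propose relabeling
    one node with a neighbor's cluster, accept by the Metropolis rule,
    score by intra-cluster edges, and remember the best state seen."""
    rng = random.Random(seed)
    def intra(a):
        return sum(1 for u in adj for v in adj[u] if u < v and a[u] == a[v])
    score = intra(assign)
    best_score, best = score, dict(assign)
    nodes = list(adj)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9     # linear cooling
        u = rng.choice(nodes)
        old = assign[u]
        new = assign[rng.choice(sorted(adj[u]))]
        if new == old:
            continue
        assign[u] = new
        s = intra(assign)
        if s >= score or rng.random() < math.exp((s - score) / t):
            score = s
            if s > best_score:
                best_score, best = s, dict(assign)
        else:
            assign[u] = old                    # reject the move
    return best, best_score

# Two triangles joined by one edge; start from a deliberately bad labeling.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
clusters, score = anneal_partition(adj, {v: v % 2 for v in adj})
```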
1103.3005
|
The Separation Principle in Stochastic Control, Redux
|
math.OC cs.SY
|
Over the last 50 years a steady stream of accounts have been written on the
separation principle of stochastic control. Even in the context of the
linear-quadratic regulator in continuous time with Gaussian white noise, subtle
difficulties arise, unexpected by many, that are often overlooked. In this
paper we propose a new framework for establishing the separation principle.
This approach takes the viewpoint that stochastic systems are well-defined maps
between sample paths rather than stochastic processes per se and allows us to
extend the separation principle to systems driven by martingales with possible
jumps. While the approach is more in line with "real-life" engineering thinking
where signals travel around the feedback loop, it is unconventional from a
probabilistic point of view in that control laws for which the feedback
equations are satisfied almost surely, and not deterministically for every
sample path, are excluded.
|
1103.3054
|
On the Capacity of Memoryless Finite-State Multiple Access Channels with
Asymmetric Noisy State Information at the Encoders
|
cs.IT math.IT
|
We consider the capacity of memoryless finite-state multiple access channels
(FS-MACs) with causal asymmetric noisy state information available at both
transmitters and complete state information available at the receiver. Single
letter inner and outer bounds are provided for the capacity of such channels
when the state process is independent and identically distributed. The outer
bound is attained by observing that the proposed inner bound is tight for the
sum-rate capacity.
|
1103.3093
|
Exploiting Interference Alignment in Multi-Cell Cooperative OFDMA
Resource Allocation
|
cs.IT math.IT
|
This paper studies interference alignment (IA) based multi-cell cooperative
resource allocation for the downlink OFDMA with universal frequency reuse.
Unlike the traditional scheme that treats subcarriers as separate dimensions
for resource allocation, the IA technique is utilized to enable
frequency-domain precoding over parallel subcarriers. In this paper, the joint
optimization of frequency-domain precoding via IA, subcarrier user selection
and power allocation is investigated for a cooperative three-cell OFDMA system
to maximize the downlink throughput. Numerical results for a simplified
symmetric channel setup reveal that the IA-based scheme achieves notable
throughput gains over the traditional scheme only when the inter-cell
interference link has a comparable strength as the direct link, and the
receiver SNR is sufficiently large. Motivated by this observation, a practical
hybrid scheme is proposed for cellular systems with heterogeneous channel
conditions, where the total spectrum is divided into two subbands, over which
the IA-based scheme and the traditional scheme are applied for resource
allocation to users located in the cell-intersection region and the
cell-non-intersection region, respectively. It is shown that this hybrid resource
allocation scheme flexibly exploits the downlink IA gains for OFDMA-based
cellular systems.
|
1103.3095
|
A note on active learning for smooth problems
|
cs.LG stat.ML
|
We show that the disagreement coefficient of certain smooth hypothesis
classes is $O(m)$, where $m$ is the dimension of the hypothesis space, thereby
answering a question posed in \cite{friedman09}.
|
1103.3099
|
Optimal Power Cost Management Using Stored Energy in Data Centers
|
cs.PF cs.SY math.OC
|
Since the electricity bill of a data center constitutes a significant portion
of its overall operational costs, reducing this cost has become important. We
investigate cost reduction opportunities that arise by the use of uninterrupted
power supply (UPS) units as energy storage devices. This represents a deviation
from the usual use of these devices as mere transitional fail-over mechanisms
between utility and captive sources such as diesel generators. We consider the
problem of opportunistically using these devices to reduce the time average
electric utility bill in a data center. Using the technique of Lyapunov
optimization, we develop an online control algorithm that can optimally exploit
these devices to minimize the time average cost. This algorithm operates
without any knowledge of the statistics of the workload or electricity cost
processes, making it attractive in the presence of workload and pricing
uncertainties. An interesting feature of our algorithm is that its deviation
from optimality reduces as the storage capacity is increased. Our work opens up
a new area in data center power management.
|
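The cost-reduction opportunity itself is easy to see with a toy price-threshold policy; this is not the paper's Lyapunov-based algorithm, and all numbers are illustrative:

```python
def power_cost(prices, demand, capacity, rate, lo, hi):
    """Toy threshold policy: charge the UPS when the price is at or
    below `lo`, discharge to cover load when at or above `hi`, and buy
    the remainder from the utility at the spot price."""
    level, cost = 0.0, 0.0
    for p, d in zip(prices, demand):
        if p <= lo and level < capacity:       # cheap: buy extra to charge
            buy = min(rate, capacity - level)
            level += buy
            cost += p * (d + buy)
        elif p >= hi and level > 0:            # expensive: discharge
            use = min(rate, level, d)
            level -= use
            cost += p * (d - use)
        else:
            cost += p * d
    return cost

prices = [1, 1, 5, 5, 1, 5]
demand = [1, 1, 1, 1, 1, 1]
with_storage = power_cost(prices, demand, capacity=2, rate=1, lo=1, hi=5)
no_storage = sum(p * d for p, d in zip(prices, demand))
```

Shifting purchases from price-5 slots to price-1 slots cuts the toy bill from 18 to 6; the paper's contribution is achieving such savings online, without knowing the price and workload statistics.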
1103.3102
|
Human-Assisted Graph Search: It's Okay to Ask Questions
|
cs.DB cs.DS
|
We consider the problem of human-assisted graph search: given a directed
acyclic graph with some (unknown) target node(s), the goal is to find
the target node(s) by asking an omniscient human questions of the form
"Is there a target node that is reachable from the current node?". This general
problem has applications in many domains that can utilize human intelligence,
including curation of hierarchies, debugging workflows, image segmentation and
categorization, interactive search and filter synthesis. To our knowledge, this
work provides the first formal algorithmic study of the optimization of human
computation for this problem. We study various dimensions of the problem space,
providing algorithms and complexity results. Our framework and algorithms can
be used in the design of an optimizer for crowd-sourcing platforms such as
Mechanical Turk.
|
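In the simplest instance, a chain-shaped DAG with one target, the reachability questions support binary search, so O(log n) questions suffice; a sketch of that special case (not the paper's general algorithms):

```python
def find_target_on_chain(n, is_reachable):
    """Binary search on a chain 0 -> 1 -> ... -> n-1: the answer to
    'is a target reachable from node v?' is True exactly when
    v <= target, so each question halves the candidate range."""
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_reachable(mid):
            lo = mid                 # target lies at mid or beyond
        else:
            hi = mid - 1             # target lies before mid
    return lo

found = find_target_on_chain(32, lambda v: v <= 13)   # hidden target: 13
```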
1103.3103
|
Guided Data Repair
|
cs.DB
|
In this paper we present GDR, a Guided Data Repair framework that
incorporates user feedback in the cleaning process to enhance and accelerate
existing automatic repair techniques while minimizing user involvement. GDR
consults the user on the updates that are most likely to be beneficial in
improving data quality. GDR also uses machine learning methods to identify and
apply the correct updates directly to the database without the actual
involvement of the user on these specific updates. To rank potential updates
for consultation by the user, we first group these repairs and quantify the
utility of each group using the decision-theory concept of value of information
(VOI). We then apply active learning to order updates within a group based on
their ability to improve the learned model. User feedback is used to repair the
database and to adaptively refine the training set for the model. We
empirically evaluate GDR on a real-world dataset and show significant
improvement in data quality using our user-guided repair process. We also
assess the trade-off between user effort and the resulting data quality.
|
1103.3105
|
High-Throughput Transaction Executions on Graphics Processors
|
cs.DB cs.DC
|
OLTP (On-Line Transaction Processing) is an important business system sector
in various traditional and emerging online services. Due to the increasing
number of users, OLTP systems require high throughput for executing tens of
thousands of transactions in a short time period. Encouraged by the recent
success of GPGPU (General-Purpose computation on Graphics Processors), we
propose GPUTx, an OLTP engine performing high-throughput transaction executions
on the GPU for in-memory databases. Unlike existing GPGPU studies, which
usually optimize a single task, transaction execution requires handling many
small tasks concurrently. Specifically, we propose the bulk execution model to
group multiple transactions into a bulk and to execute the bulk on the GPU as a
single task. The transactions within the bulk are executed concurrently on the
GPU. We study three basic execution strategies (one with locks and the other
two lock-free), and optimize them with the GPU features including the hardware
support of atomic operations, the massive thread parallelism and the SPMD
(Single Program Multiple Data) execution. We evaluate GPUTx on a recent NVIDIA
GPU in comparison with its counterpart on a quad-core CPU. Our experimental
results show that optimizations on GPUTx significantly improve the throughput,
and the optimized GPUTx achieves 4-10 times higher throughput than its
CPU-based counterpart on public transaction processing benchmarks.
|
1103.3107
|
Incrementally Maintaining Classification using an RDBMS
|
cs.DB
|
The proliferation of imprecise data has motivated both researchers and the
database industry to push statistical techniques into relational database
management systems (RDBMSs). We study algorithms to maintain model-based views
for a popular statistical technique, classification, inside an RDBMS in the
presence of updates to the training examples. We make three technical
contributions: (1) An algorithm that incrementally maintains classification
inside an RDBMS. (2) An analysis of the above algorithm that shows that our
algorithm is optimal among all deterministic algorithms (and asymptotically
within a factor of 2 of a nondeterministic optimal). (3) An index structure
based on the technical ideas that underlie the above algorithm which allows us
to store only a fraction of the entities in memory. We apply our techniques to
text processing, and we demonstrate that our algorithms provide several orders
of magnitude improvement over non-incremental approaches to classification on a
variety of data sets, including Cora, Citeseer, DBLife, and data sets from the
UCI Machine Learning Repository.
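The count-maintenance idea behind this abstract can be illustrated with a toy incremental naive Bayes classifier (a hedged sketch; the class and method names are mine, not the paper's actual RDBMS algorithm or index structure). Inserting or deleting a training example is a constant-time count update, so the model-based view never needs full retraining:

```python
import math
from collections import defaultdict

class IncrementalNB:
    """Count-based naive Bayes whose model is just a set of counters,
    so it can be maintained incrementally under inserts and deletes."""

    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(lambda: defaultdict(int))
        self.total = 0

    def update(self, label, features, delta=1):
        """delta=+1 inserts a training example, delta=-1 deletes it."""
        self.class_counts[label] += delta
        self.total += delta
        for f in features:
            self.feat_counts[label][f] += delta

    def predict(self, features):
        best, best_lp = None, float("-inf")
        for c, cc in self.class_counts.items():
            if cc <= 0:
                continue
            lp = math.log(cc / self.total)  # class prior
            for f in features:              # Laplace-smoothed likelihoods
                lp += math.log((self.feat_counts[c][f] + 1) / (cc + 2))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

A deletion is just an update with `delta=-1`, mirroring how a model view would track inserts and deletes on its training table.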
|
1103.3113
|
A Broadcast Approach To Secret Key Generation Over Slow Fading Channels
|
cs.IT cs.CR math.IT
|
A secret-key generation scheme based on a layered broadcasting strategy is
introduced for slow-fading channels. In the model considered, Alice wants to
share a key with Bob while keeping the key secret from Eve, who is a passive
eavesdropper. Both Alice-Bob and Alice-Eve channels are assumed to undergo slow
fading, and perfect channel state information (CSI) is assumed to be known only
at the receivers during the transmission. In each fading slot, Alice broadcasts
a continuum of coded layers and, hence, allows Bob to decode at the rate
corresponding to the fading state (unknown to Alice). The index of a reliably
decoded layer is sent back from Bob to Alice via a public and error-free
channel and used to generate a common secret key. In this paper, the achievable
secrecy key rate is first derived for a given power distribution over coded
layers. The optimal power distribution is then characterized. It is shown that
layered broadcast coding can increase the secrecy key rate significantly
compared to single-level coding.
|
1103.3117
|
Linearity and Complements in Projective Space
|
cs.IT math.IT
|
The projective space of order $n$ over the finite field $\Fq$, denoted here
as $\Ps$, is the set of all subspaces of the vector space $\Fqn$. The
projective space can be endowed with the distance function $d_S(X,Y) = \dim(X) +
\dim(Y) - 2\dim(X\cap Y)$, which turns $\Ps$ into a metric space. With this,
\emph{an $(n,M,d)$ code $\C$ in projective space} is a subset of $\Ps$ of size
$M$ such that the distance between any two codewords (subspaces) is at least
$d$. Koetter and Kschischang recently showed that codes in projective space are
precisely what is needed for error-correction in networks: an $(n,M,d)$ code
can correct $t$ packet errors and $\rho$ packet erasures introduced
(adversarially) anywhere in the network as long as $2t + 2\rho < d$. This
motivates new interest in such codes.
In this paper, we examine the two fundamental concepts of
\myemph{complements} and \myemph{linear codes} in the context of $\Ps$. These
turn out to be considerably more involved than their classical counterparts.
These concepts are examined from two different points of view, coding theory
and lattice theory. Our discussion reveals some surprising properties of these
concepts in $\Ps$ and leaves some interesting problems for further research.
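For $q=2$, the metric $d_S$ is easy to compute from generator matrices; the sketch below (the bitmask encoding and helper names are my own) uses the identity $\dim(X\cap Y)=\dim X+\dim Y-\dim(X+Y)$, so $d_S(X,Y)=2\dim(X+Y)-\dim X-\dim Y$:

```python
def rank_gf2(vectors):
    """Rank over GF(2) of vectors encoded as Python ints (one bit per coordinate)."""
    basis = {}  # pivot bit position -> reduced vector
    for v in vectors:
        while v:
            top = v.bit_length() - 1
            if top not in basis:
                basis[top] = v
                break
            v ^= basis[top]  # eliminate the leading bit and keep reducing
    return len(basis)

def subspace_distance(X, Y):
    """d_S(X, Y) = dim X + dim Y - 2 dim(X ∩ Y) = 2 dim(X + Y) - dim X - dim Y,
    where X + Y is spanned by the concatenated generator rows."""
    return 2 * rank_gf2(X + Y) - rank_gf2(X) - rank_gf2(Y)
```

For example, the spans of {100, 010} and {010, 001} in $\mathbb{F}_2^3$ meet in a line, so their distance is 2.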
|
1103.3123
|
Reduced Ordered Binary Decision Diagram with Implied Literals: A New
Knowledge Compilation Approach
|
cs.AI
|
Knowledge compilation is an approach to tackle the computational
intractability of general reasoning problems. According to this approach,
knowledge bases are converted off-line into a target compilation language which
is tractable for on-line querying. Reduced ordered binary decision diagram
(ROBDD) is one of the most influential target languages. We generalize ROBDD by
associating implied literals with each node; the new language is called reduced
ordered binary decision diagram with implied literals (ROBDD-L). We then
discuss a family of subsets of ROBDD-L, denoted ROBDD-i, with precisely i
implied literals (0 \leq i \leq \infty). In particular, ROBDD-0 is isomorphic
to ROBDD, while ROBDD-\infty requires that each node be associated with as many
implied literals as possible. We show that ROBDD-i is unique with respect to a
given variable order, and that ROBDD-\infty is the most succinct subset of
ROBDD-L and can meet most of the querying requirements in the knowledge
compilation map. Finally, we propose an ROBDD-i compilation algorithm for any i
and an ROBDD-\infty compilation algorithm. Based on them, we implement an
ROBDD-L package called BDDjLu and draw some conclusions from preliminary
experimental results: ROBDD-\infty is clearly smaller than ROBDD on all
benchmarks; ROBDD-\infty is smaller than d-DNNF on benchmarks whose compilation
results are relatively small; and it appears better to transform ROBDD-\infty
into FBDD and ROBDD than to compile the benchmarks directly.
|
1103.3174
|
A Longitudinal Study of Social Media Privacy Behavior
|
cs.SI cs.CY
|
Existing constructs for privacy concerns and behaviors do not adequately
model deviations between user attitudes and behaviors. Although a number of
studies have examined supposed deviations from rationality by online users,
true explanations for these behaviors may lie in factors not previously
addressed in privacy concern constructs. In particular, privacy attitudes and
behavioral changes over time have not been examined within the context of an
empirical study. This paper presents the results of an Agile, sprint-based
longitudinal study of Social Media users conducted over a two year period
between April of 2009 and March of 2011. This study combined concepts drawn
from Privacy Regulation Theory with the constructs of the Internet Users'
Information and Privacy Concern model to create a series of online surveys that
examined changes of Social Media privacy attitudes and self-reported behaviors
over time. The main findings of this study are that, over a two year period
between 2009 and 2011, respondents' privacy concerns and distrust of Social
Media Sites increased significantly, while their disclosure of personal
information and willingness to connect with new online friends decreased
significantly. Further qualitative interviews of selected respondents
identified these changes as emblematic of users developing ad-hoc risk
mitigation strategies to address privacy threats.
|
1103.3190
|
Designing Power-Efficient Modulation Formats for Noncoherent Optical
Systems
|
cs.IT math.IT
|
We optimize modulation formats for the additive white Gaussian noise channel
with a nonnegative input constraint, also known as the intensity-modulated
direct detection channel, with and without confining them to a lattice
structure. Our optimization criteria are the average electrical and optical
power. The nonnegativity constraint on the input signal is translated into a
conical constraint in signal space, and modulation formats are designed by sphere
packing inside this cone. Some remarkably dense packings are found, which yield
more power-efficient modulation formats than previously known. For example, at
a spectral efficiency of 1 bit/s/Hz, the obtained modulation format offers a
0.86 dB average electrical power gain and 0.43 dB average optical power gain
over the previously best known modulation formats to achieve a symbol error
rate of 10^-6. This modulation turns out to have a lattice-based structure. At
a spectral efficiency of 3/2 bits/s/Hz and to achieve a symbol error rate of
10^-6, the modulation format obtained for optimizing the average electrical
power offers a 0.58 dB average electrical power gain over the best
lattice-based modulation and 2.55 dB gain over the best previously known
format. However, the modulation format optimized for average optical power
offers a 0.46 dB average optical power gain over the best lattice-based
modulation and 1.35 dB gain over the best previously known format.
|
1103.3196
|
Condensation phase transition in nonlinear fitness networks
|
cond-mat.stat-mech cs.SI nlin.AO physics.soc-ph
|
We analyze the condensation phase transitions in out-of-equilibrium complex
networks in a unifying framework which includes the nonlinear model and the
fitness model as its appropriate limits. We show a novel phase structure which
depends on both the fitness parameter and the nonlinear exponent. The
occurrence of the condensation phase transitions in the dynamical evolution of
the network is demonstrated by using Bianconi-Barabasi method. We find that the
nonlinear and the fitness preferential attachment mechanisms play important
roles in formation of an interesting phase structure.
|
1103.3223
|
Using Soft Computer Techniques on Smart Devices for Monitoring Chronic
Diseases: the CHRONIOUS case
|
cs.AI
|
CHRONIOUS is an Open, Ubiquitous and Adaptive Chronic Disease Management
Platform for Chronic Obstructive Pulmonary Disease(COPD) Chronic Kidney Disease
(CKD), and renal insufficiency. It consists of several modules: an
ontology-based literature search engine, a rule-based decision support system,
remote sensors interacting with lifestyle interfaces (PDA, touchscreen
monitor), and a machine learning module. These modules interact with each
other to monitor two types of chronic disease and to help clinicians make
treatment decisions. This paper illustrates how machine learning algorithms
and a rule-based decision support system can be used on smart devices to
monitor chronic patients. We analyse how a set of machine learning algorithms
can be used on smart devices to alert the clinician when a patient's health
shows a worsening trend.
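As a minimal, hypothetical illustration of the trend-alerting idea (not the CHRONIOUS rule engine itself; the function names, readings, and threshold are mine), a least-squares slope over recent readings can flag a worsening trend for a "higher is worse" measurement:

```python
def trend_slope(values):
    """Least-squares slope of equally spaced readings (positive = increasing)."""
    n = len(values)
    xbar, ybar = (n - 1) / 2, sum(values) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(values))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def should_alert(readings, threshold=0.5):
    """Alert when a 'higher is worse' measurement trends up faster than threshold."""
    return trend_slope(readings) > threshold
```

A real rule-based system would combine many such predicates, but each reduces to a cheap check that can run on a PDA-class device.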
|
1103.3228
|
Multi-parameter acoustic imaging of uniform objects in inhomogeneous
media
|
cs.CV physics.med-ph
|
The problem studied in this paper is ultrasound image reconstruction from
frequency-domain measurements of the scattered field from an object with
contrast in attenuation and sound speed. The case where the object has uniform
but unknown contrast in these properties relative to the background is
considered. Background clutter is taken into account in a physically realistic
manner by considering an exact scattering model for randomly located small
scatterers that vary in sound speed. The resulting statistical characteristics
of the interference are incorporated into the imaging solution, which includes
applying a total-variation minimization based approach where the relative
effect of perturbation in sound speed to attenuation is included as a
parameter. Convex optimization methods provide the basis for the reconstruction
algorithm. Numerical data for inversion examples are generated by solving the
discretized Lippmann-Schwinger equation for the object and speckle-forming
scatterers in the background. A statistical model based on the Born
approximation is used for reconstruction of the object profile. Results are
presented for a two dimensional problem in terms of classification performance
and compared to minimum-l2-norm reconstruction. Classification using the
proposed method is shown to be robust down to a signal-to-clutter ratio of less
than 1 dB.
|
1103.3240
|
Decentralized Constraint Satisfaction
|
cs.AI
|
We show that several important resource allocation problems in wireless
networks fit within the common framework of Constraint Satisfaction Problems
(CSPs). Inspired by the requirements of these applications, where variables are
located at distinct network devices that may not be able to communicate but may
interfere, we define natural criteria that a CSP solver must possess in order
to be practical. We term these algorithms decentralized CSP solvers. The best
known CSP solvers were designed for centralized problems and do not meet these
criteria. We introduce a stochastic decentralized CSP solver and prove that,
if a solution exists, it finds one in almost-surely finite time; we also show
it has many practically desirable properties. We benchmark the
algorithm's performance on a well-studied class of CSPs, random k-SAT,
illustrating that the time the algorithm takes to find a satisfying assignment
is competitive with stochastic centralized solvers on problems with order a
thousand variables despite its decentralized nature. We demonstrate the
solver's practical utility for the problems that motivated its introduction by
using it to find a non-interfering channel allocation for a network formed from
data from downtown Manhattan.
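The flavour of a decentralized CSP solver can be conveyed by a toy stochastic graph-colouring loop, graph colouring being a natural model of non-interfering channel allocation. This is a hedged sketch in the spirit of the abstract, not the paper's actual algorithm, and every name in it is mine. Each node senses only whether it clashes with a neighbour and, if so, resamples its colour at random; no messages are exchanged:

```python
import random

def decentralized_color(adj, num_colors, max_steps=100_000, seed=0):
    """adj[v] lists the neighbours of node v (interfering devices).
    Returns a proper colouring, or None if none was found in max_steps."""
    rng = random.Random(seed)
    n = len(adj)
    color = [rng.randrange(num_colors) for _ in range(n)]
    for _ in range(max_steps):
        unsat = [v for v in range(n)
                 if any(color[u] == color[v] for u in adj[v])]
        if not unsat:
            return color
        for v in unsat:                 # each clashing node acts on its own
            color[v] = rng.randrange(num_colors)
    return None
```

Because every node decides from purely local sensing, the loop runs unchanged if each node executes only its own line of the update.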
|
1103.3292
|
Feedback Reduction for MIMO Broadcast Channel with Heterogeneous Fading
|
cs.IT math.IT
|
This paper considers feedback load reduction for multiuser multiple input
multiple output (MIMO) broadcast channel where the users' channel distributions
are not homogeneous. A cluster-based feedback scheme is proposed in which the
range of possible signal-to-noise ratios (SNRs) of the users is divided into
several clusters according to the order statistics of the users' SNRs. Each
cluster has a corresponding threshold, and the users compare their measured
instantaneous SNRs with the thresholds to determine whether and how many bits
they should use to feed back their instantaneous SNRs. If a user's
instantaneous SNR is lower than a certain threshold, the user does not feed
back. Feedback load reduction is thus achieved. For a given number of clusters,
the sum rate loss using the cluster-based feedback scheme is investigated. Then
the minimum number of clusters given a maximum tolerable sum rate loss is
derived. Through simulations, it is shown that, when the number of users is
large, full multiuser diversity can be achieved by the proposed feedback
scheme, which is more efficient than the conventional schemes.
|
1103.3301
|
Scaling and entropy in p-median facility location along a line
|
physics.soc-ph cond-mat.stat-mech cs.SI physics.comp-ph
|
The p-median problem is a common model for optimal facility location. The
task is to place p facilities (e.g., warehouses or schools) in a
heterogeneously populated space such that the average distance from a person's
home to the nearest facility is minimized. Here we study the special case where
the population lives along a line (e.g., a road or a river). If facilities are
optimally placed, the length of the line segment served by a facility is
inversely proportional to the square root of the population density. This
scaling law is derived analytically and confirmed for concrete numerical
examples of three US Interstate highways and the Mississippi River. If facility
locations are permitted to deviate from the optimum, the number of possible
solutions increases dramatically. Using Monte Carlo simulations, we compute how
scaling is affected by an increase in the average distance to the nearest
facility. We find that the scaling exponents change and are most sensitive near
the optimum facility distribution.
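For a finite population of homes on a line, the exact p-median can be computed by a small dynamic program over the sorted positions (an illustrative sketch with my own naming; the paper itself works with continuous population densities and the resulting scaling law):

```python
def p_median_line(xs, p):
    """Minimum total home-to-facility distance when p facilities serve
    the sorted positions xs, each facility covering a contiguous segment."""
    xs = sorted(xs)
    n = len(xs)

    def seg_cost(i, j):
        # One facility at the median of xs[i..j] minimises the segment's cost.
        m = xs[(i + j) // 2]
        return sum(abs(x - m) for x in xs[i:j + 1])

    INF = float("inf")
    dp = [[INF] * (p + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for j in range(1, n + 1):           # first j points served ...
        for k in range(1, p + 1):       # ... by k facilities
            for i in range(j):          # last facility covers xs[i..j-1]
                c = dp[i][k - 1] + seg_cost(i, j - 1)
                if c < dp[j][k]:
                    dp[j][k] = c
    return dp[n][p]
```

Dividing the result by `len(xs)` gives the average distance the paper minimises.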
|
1103.3316
|
Deterministic Bounds for Restricted Isometry of Compressed Sensing
Matrices
|
cs.IT math.IT
|
Compressed Sensing (CS) is an emerging field that enables reconstruction of a
sparse signal $x \in {\mathbb R} ^n$ that has only $k \ll n$ non-zero
coefficients from a small number $m \ll n$ of linear projections. The
projections are obtained by multiplying $x$ by a matrix $\Phi \in {\mathbb
R}^{m \times n}$ --- called a CS matrix --- where $k < m \ll n$. In this work,
we ask the following question: given the triplet $\{k, m, n \}$ that defines
the CS problem size, what are the deterministic limits on the performance of
the best CS matrix in ${\mathbb R}^{m \times n}$? We select Restricted Isometry
as the performance metric. We derive two deterministic converse bounds and one
deterministic achievable bound on the Restricted Isometry for matrices in
${\mathbb R}^{m \times n}$ in terms of $n$, $m$ and $k$. The first converse
bound (structural bound) is derived by exploiting the intricate relationships
between the singular values of sub-matrices and the complete matrix. The second
converse bound (packing bound) and the achievable bound (covering bound) are
derived by recognizing the equivalence of CS matrices to codes on Grassmannian
spaces. Simulations reveal that random Gaussian matrices $\Phi$ provide
far-from-optimal performance. The derivation of the three bounds offers several new geometric
insights that relate optimal CS matrices to equi-angular tight frames, the
Welch bound, codes on Grassmannian spaces, and the Generalized Pythagorean
Theorem (GPT).
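For tiny instances, the restricted isometry constant itself can be brute-forced from the extreme eigenvalues of the $k \times k$ Gram matrices of column subsets. The sketch below (names mine; $k \in \{1,2\}$ only, so a closed-form eigenvalue formula suffices) is illustrative, not the paper's bounding technique:

```python
import itertools
import math

def rip_constant(columns, k):
    """Smallest delta with (1-delta)|x|^2 <= |Phi_S x|^2 <= (1+delta)|x|^2
    over all k-column submatrices Phi_S; columns is a list of tuples."""
    assert k in (1, 2), "closed-form eigenvalues implemented for k <= 2 only"
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    delta = 0.0
    for S in itertools.combinations(columns, k):
        if k == 1:
            lam_min = lam_max = dot(S[0], S[0])
        else:  # eigenvalues of the symmetric 2x2 Gram matrix [[a, b], [b, c]]
            a, c = dot(S[0], S[0]), dot(S[1], S[1])
            b = dot(S[0], S[1])
            mid, rad = (a + c) / 2, math.hypot((a - c) / 2, b)
            lam_min, lam_max = mid - rad, mid + rad
        delta = max(delta, lam_max - 1.0, 1.0 - lam_min)
    return delta
```

For unit-norm columns, delta_2 reduces to the largest pairwise coherence, which is why equi-angular tight frames appear in the bounds.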
|
1103.3339
|
Transient Stability Assessment of Smart Power System using Complex
Networks Framework
|
cs.OH cs.SI physics.soc-ph
|
In this paper, a new methodology for stability assessment of a smart power
system is proposed. The key to this assessment is an index, called the
betweenness index, which is based on ideas from complex network theory. The
proposed index improves on previous work by considering the actual real power
flow through the transmission lines of the network. Furthermore, this work
opens a new area of complex-systems research: assessing the stability of the
power system.
|
1103.3371
|
Numerical solution of a fuzzy time-optimal control problem
|
cs.NA cs.SY math.OC
|
In this paper, we consider a time-optimal control problem with uncertainties.
The dynamics of the controlled object are expressed by a crisp linear system
of differential equations with fuzzy initial and final states. We introduce a
notion of fuzzy optimal time and reduce its calculation to two crisp optimal
control problems. We examine the proposed approach on an example.
|
1103.3372
|
Automatically Discovering Relaxed Lyapunov Functions for Polynomial
Dynamical Systems
|
math.DS cs.SY math.OC
|
The notion of Lyapunov function plays a key role in design and verification
of dynamical systems, as well as hybrid and cyber-physical systems. In this
paper, to analyze the asymptotic stability of a dynamical system, we generalize
standard Lyapunov functions to relaxed Lyapunov functions (RLFs), by
considering higher order Lie derivatives of certain functions along the
system's vector field. Furthermore, we present a complete method for
automatically discovering polynomial RLFs for polynomial dynamical systems
(PDSs). Our method is complete in the sense that it is able to discover all
polynomial RLFs by enumerating all polynomial templates for any PDS.
|
1103.3391
|
An Integer Linear Programming Model for the Radiotherapy Treatment
Scheduling Problem
|
cs.CE
|
Radiotherapy represents an important phase of treatment for a large number of
cancer patients. It is essential that resources used to deliver this treatment
are employed effectively. This paper presents a new integer linear programming
model for real-world radiotherapy treatment scheduling and analyses the
effectiveness of using this model on a daily basis in a hospital. Experiments
are conducted varying the days on which schedules can be created. Results
obtained using real-world data from the Nottingham University Hospitals NHS
Trust, UK, are presented and show how the proposed model can be used with
different policies in order to achieve good quality schedules.
|
1103.3397
|
Criterions for locally dense subgraphs
|
physics.soc-ph cs.SI physics.comp-ph
|
Community detection is one of the most investigated problems in the field of
complex networks. Although several methods have been proposed, there is still
no precise definition of a community. As a step towards a definition, I highlight
two necessary properties of communities, separation and internal cohesion, the
latter being a new concept. I propose a local method of community detection
based on two-dimensional local optimization, which I tested on common
benchmarks and on the word association database.
|
1103.3417
|
Finding Shortest Path for Developed Cognitive Map Using Medial Axis
|
cs.AI
|
This paper presents an enhancement of the medial axis algorithm for finding
the optimal shortest path in a developed cognitive map. The cognitive map is
developed from architectural blueprint maps. The medial axis is used to find
the central pixels of the main path; each center pixel represents the center
distance between two side border pixels. These pixels are needed to build a
network of path nodes, where each node represents a turn in the real world
(left, right, critical left, critical right, ...). The algorithm also excludes
center-pixel paths that are too narrow for intelligent robot navigation. The
idea of the algorithm is to find the shortest possible path between start and
end points. The goal of this research is to extract a simple, robust
representation of the shape of the cognitive map together with the optimal
shortest path between start and end points. The intelligent robot will use
this algorithm to decrease the time needed for sweeping the targeted building.
|
1103.3420
|
Extraction of handwritten areas from colored images of bank checks by a
hybrid method
|
cs.AI
|
One of the first steps in building an automatic check-recognition system is
the extraction of the handwritten areas. We propose in this paper a hybrid
method to extract these areas, based on digit recognition using Fourier
descriptors together with several color image processing steps. It requires
recognizing the bank from its code, which is located in the check marking
band, as well as recognizing the handwriting color by the histogram-difference
method. The area extraction is then carried out using mathematical morphology
tools.
|
1103.3430
|
Identification of Arabic words from bilingual text using character features
|
cs.AI cs.CV
|
Identifying the language of a script is an important stage in the writing
recognition process. Several works in this research area treat various
languages, and most of the methods used are global or statistical. In this
paper, we study the possibility of using script features to identify the
language; feature-based identification makes the task less difficult for
multilingual documents. We present a study on the possibility of using
structural features to identify the Arabic language in an Arabic / Latin text.
|
1103.3440
|
Off-Line Handwritten Signature Identification Using Rotated Complex
Wavelet Filters
|
cs.CV
|
In this paper, a new method for handwritten signature identification based on
rotated complex wavelet filters is proposed. We use rotated complex wavelet
filters (RCWF) and the dual-tree complex wavelet transform (DTCWT) together
for signature feature extraction, which captures information in twelve
different directions. In the identification phase, the Canberra distance
measure is used. The proposed method is compared with the discrete wavelet
transform (DWT); experimental results show that the signature identification
rate of the proposed method is superior to that of DWT.
|
1103.3457
|
Ex ante prediction of cascade sizes on networks of agents facing binary
outcomes
|
cs.SI physics.soc-ph
|
We consider in this paper the potential for ex ante prediction of the cascade
size in a model of binary choice with externalities (Schelling 1973, Watts
2002). Agents are connected on a network and can be in one of two states of the
world, 0 or 1. Initially, all are in state 0 and a small number of seeds are
selected at random to switch to state 1. A simple threshold rule specifies
whether other agents switch subsequently. The cascade size (the percolation)
is the proportion of all agents that eventually switch to state 1. We select
information on the connectivity of the initial seeds, the connectivity of the
agents to which they are connected, the thresholds of these latter agents, and
the thresholds of the agents to which these are connected. We obtain results
for random, small-world and scale-free networks with different network
parameters and numbers of initial seeds. The results are robust with respect to
these factors. We perform least squares regression of the logit transformation
of the cascade size (Hosmer and Lemeshow 1989) on these potential explanatory
variables. We find considerable explanatory power for the ex ante prediction of
cascade sizes. For the random networks, on average 32 per cent of the variance
of the cascade sizes is explained, 40 per cent for the small world and 46 per
cent for the scale-free. The connectivity variables are hardly ever significant
in the regressions, whether relating to the seeds themselves or to the agents
connected to the seeds. In contrast, the information on the thresholds of
agents contains much more explanatory power. This supports the conjecture of
Watts and Dodds (2007) that large cascades are driven by a small mass of
easily influenced agents.
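The switching dynamics studied here follow the threshold rule of Watts (2002), which can be sketched directly (the network encoding and function names are mine):

```python
def cascade_size(adj, thresholds, seeds):
    """Threshold-model cascade: an agent switches to state 1 once the
    fraction of its neighbours in state 1 reaches its threshold.
    Returns the final fraction of switched agents."""
    n = len(adj)
    state = [0] * n
    for s in seeds:
        state[s] = 1
    changed = True
    while changed:
        changed = False
        for v in range(n):
            if state[v] == 0 and adj[v]:
                frac = sum(state[u] for u in adj[v]) / len(adj[v])
                if frac >= thresholds[v]:
                    state[v] = 1
                    changed = True
    return sum(state) / n
```

Running this over many seedings of a random network, and regressing the logit of the cascade size on seed and threshold information, reproduces the kind of ex ante prediction exercise the abstract describes.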
|
1103.3510
|
Degrees of Freedom of a Communication Channel and Kolmogorov numbers
|
cs.IT math.FA math.IT
|
In this note, we show that the operator theoretic concept of Kolmogorov
numbers and the number of degrees of freedom at level $\epsilon$ of a
communication channel are closely related. Linear communication channels may be
modeled using linear compact operators on Banach or Hilbert spaces and the
number of degrees of freedom of such channels is defined to be the number of
linearly independent signals that may be communicated over this channel, where
the channel is restricted by a threshold noise level. Kolmogorov numbers are a
particular example of $s$-numbers, which are defined over the class of bounded
operators between Banach spaces. We demonstrate that these two concepts are
closely related, namely that the Kolmogorov numbers correspond to the "jump
points" in the function relating numbers of degrees of freedom with the noise
level $\epsilon$. We also establish a useful numerical computation result for
evaluating Kolmogorov numbers of compact operators.
|
1103.3532
|
4D Wavelet-Based Regularization for Parallel MRI Reconstruction: Impact
on Subject and Group-Levels Statistical Sensitivity in fMRI
|
stat.ME cs.CV physics.med-ph
|
Parallel MRI is a fast imaging technique that enables the acquisition of
highly resolved images in space. It relies on $k$-space undersampling and
multiple receiver coils with complementary sensitivity profiles in order to
reconstruct a full Field-Of-View (FOV) image. The performance of parallel
imaging mainly depends on the reconstruction algorithm, which can proceed
either in the original $k$-space (GRAPPA, SMASH) or in the image domain
(SENSE-like methods). To improve the performance of the widely used SENSE
algorithm, 2D- or slice-specific regularization in the wavelet domain has been
efficiently investigated. In this paper, we extend this approach using
3D-wavelet representations in order to handle all slices together and address
reconstruction artifacts which propagate across adjacent slices. The extension
also accounts for temporal correlations that exist between successive scans in
functional MRI (fMRI). The proposed 4D reconstruction scheme is fully
\emph{unsupervised} in the sense that all regularization parameters are
estimated in the maximum likelihood sense on a reference scan. The gain induced
by such extensions is first illustrated on EPI image reconstruction but also
measured in terms of statistical sensitivity during a fast event-related fMRI
protocol. The proposed 4D-UWR-SENSE algorithm outperforms the SENSE
reconstruction at the subject and group-levels (15 subjects) for different
contrasts of interest and using different parallel acceleration factors on
$2\times2\times3$mm$^3$ EPI images.
|
1103.3541
|
Distributed Learning Policies for Power Allocation in Multiple Access
Channels
|
cs.GT cs.LG cs.NI
|
We analyze the problem of distributed power allocation for orthogonal
multiple access channels by considering a continuous non-cooperative game whose
strategy space represents the users' distribution of transmission power over
the network's channels. When the channels are static, we find that this game
admits an exact potential function and this allows us to show that it has a
unique equilibrium almost surely. Furthermore, using the game's potential
property, we derive a modified version of the replicator dynamics of
evolutionary game theory which applies to this continuous game, and we show
that if the network's users employ a distributed learning scheme based on these
dynamics, then they converge to equilibrium exponentially quickly. On the other
hand, a major challenge occurs if the channels do not remain static but
fluctuate stochastically over time, following a stationary ergodic process. In
that case, the associated ergodic game still admits a unique equilibrium, but
the learning analysis becomes much more complicated because the replicator
dynamics are no longer deterministic. Nonetheless, by employing results from
the theory of stochastic approximation, we show that users still converge to
the game's unique equilibrium.
Our analysis hinges on a game-theoretical result which is of independent
interest: in finite player games which admit a (possibly nonlinear) convex
potential function, the replicator dynamics (suitably modified to account for
nonlinear payoffs) converge to an eps-neighborhood of an equilibrium at time of
order O(log(1/eps)).
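For a single user and static channels, a replicator-style update of the power distribution can be sketched as follows (a hedged toy with my own names, not the paper's multi-user scheme). Its rest point, where marginal rates equalise across used channels, coincides with classical waterfilling:

```python
def replicator_power_allocation(gains, steps=5000, lr=0.05):
    """Shares x[c] of unit total power evolve in proportion to how the
    channel's marginal rate (the payoff) compares with the share-weighted
    average -- replicator dynamics for channel rates log(1 + g*x)."""
    n = len(gains)
    x = [1.0 / n] * n
    for _ in range(steps):
        payoff = [g / (1.0 + g * xi) for g, xi in zip(gains, x)]  # d/dx log(1+g*x)
        avg = sum(p * xi for p, xi in zip(payoff, x))
        x = [xi * (1.0 + lr * (p - avg)) for xi, p in zip(x, payoff)]
        s = sum(x)
        x = [xi / s for xi in x]        # keep total power fixed
    return x
```

At the rest point all used channels satisfy g/(1 + g*x) = const, i.e. x = mu - 1/g, the waterfilling solution.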
|
1103.3580
|
On a Connection between Ideal Two-level Autocorrelation and Almost
Balancedness of $p$-ary Sequences
|
cs.IT math.IT
|
In this correspondence, we prove by algebraic methods that every periodic
$p$-ary sequence with the ideal two-level autocorrelation property contains an
element of the field ${\bf GF}(p)$ that appears exactly one time less than all
the rest, which are equally distributed over a period of the sequence. In
addition, it is shown that this special element need not be the zero element
but may be an arbitrary element of the field.
|
1103.3585
|
Incremental dimension reduction of tensors with random index
|
cs.DS cs.CL cs.IR
|
We present an incremental, scalable and efficient dimension reduction
technique for tensors that is based on sparse random linear coding. Data is
stored in a compactified representation with fixed size, which makes memory
requirements low and predictable. Component encoding and decoding are performed
on-line without computationally expensive re-analysis of the data set. The
range of tensor indices can be extended dynamically without modifying the
component representation. This idea originates from a mathematical model of
semantic memory and a method known as random indexing in natural language
processing. We generalize the random-indexing algorithm to tensors and present
signal-to-noise-ratio simulations for representations of vectors and matrices.
We also present a mathematical analysis of the approximate orthogonality of
high-dimensional ternary vectors, which is a property that underpins this and
other similar random-coding approaches to dimension reduction. To further
demonstrate the properties of random indexing we present results of a synonym
identification task. The method presented here has some similarities with
random projection and Tucker decomposition, but it performs well at high
dimensionality only (n>10^3). Random indexing is useful for a range of complex
practical problems, e.g., in natural language processing, data mining, pattern
recognition, event detection, graph searching and search engines. Prototype
software is provided. It supports encoding and decoding of tensors of order >=
1 in a unified framework, i.e., vectors, matrices and higher order tensors.
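The core random-indexing construction generalised in this abstract can be sketched for the vector (order-1) case as follows (a hedged sketch; all names and parameter values are mine):

```python
import random

def index_vector(key, dim=1000, active=10):
    """Sparse ternary index vector for `key`: `active` random coordinates set
    to +1 or -1, the rest zero. Seeding the RNG with the key itself makes the
    vector reproducible without storing it."""
    rng = random.Random(key)
    v = [0] * dim
    for pos in rng.sample(range(dim), active):
        v[pos] = rng.choice((-1, 1))
    return v

def encode(counts, dim=1000):
    """Context vector: count-weighted sum of the index vectors of its items.
    New items extend the representation without changing its fixed size."""
    out = [0] * dim
    for key, count in counts.items():
        for i, x in enumerate(index_vector(key, dim)):
            out[i] += count * x
    return out

def cosine(a, b):
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return sum(x * y for x, y in zip(a, b)) / (na * nb) if na and nb else 0.0
```

Because high-dimensional ternary index vectors are approximately orthogonal, contexts sharing items end up much closer in cosine similarity than disjoint ones, which is the property the paper's analysis quantifies.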
|
1103.3596
|
Beyond the Cut-Set Bound: Uncertainty Computations in Network Coding
with Correlated Sources
|
cs.IT math.IT
|
Cut-set bounds on achievable rates for network communication protocols are
not in general tight. In this paper we introduce a new technique for proving
converses for the problem of transmission of correlated sources in networks,
that results in bounds that are tighter than the corresponding cut-set bounds.
We also define the concept of "uncertainty region" which might be of
independent interest. We provide a full characterization of this region for the
case of two correlated random variables. The bounding technique works as
follows: on one hand we show that if the communication problem is solvable, the
uncertainty of certain random variables in the network with respect to
imaginary parties that have partial knowledge of the sources must satisfy some
constraints that depend on the network architecture. On the other hand, the
same uncertainties have to satisfy constraints that only depend on the joint
distribution of the sources. Matching these two leads to restrictions on the
statistical joint distribution of the sources in communication problems that
are solvable over a given network architecture.
|
1103.3616
|
Energy-Optimal Scheduling in Low Duty Cycle Sensor Networks
|
cs.NI cs.SY math.OC
|
Energy consumption of a wireless sensor node mainly depends on the amount of
time the node spends in each of the high power active (e.g., transmit, receive)
and low power sleep modes. It has been well established that in order to
prolong a node's lifetime the duty cycle of the node should be low. However,
low-power sleep modes usually have a low current draw, while switching to the
higher-current active mode incurs a high energy cost. In this work, we
investigate a MaxWeight-like opportunistic sleep-active scheduling algorithm
that takes time-varying channel and traffic conditions into account. We show
that the proposed ESS algorithm is energy-optimal in the sense that it can
achieve an energy consumption arbitrarily close to the global minimum.
Simulation studies are provided to confirm the
theoretical results.
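The flavour of such an opportunistic sleep-active decision can be sketched as follows (illustrative only, not the paper's ESS algorithm; the weighting rule and the parameter V are our own choices): a MaxWeight-style rule trades backlog-weighted service against the energy price of waking up.

```python
def wake_up(queue_len, channel_rate, e_active, e_switch, asleep, V=10.0):
    """MaxWeight-style sleep/active decision (illustrative only): go active
    when the backlog-weighted throughput outweighs the V-scaled energy cost,
    including the extra switching cost paid when leaving sleep mode."""
    energy = e_active + (e_switch if asleep else 0.0)
    return queue_len * channel_rate > V * energy

# A long queue and a good channel justify paying the switching cost:
wake_up(50, 2.0, 1.0, 5.0, asleep=True)   # 100 > 60
# A short queue does not:
wake_up(3, 2.0, 1.0, 5.0, asleep=True)    # 6 > 60
```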
|
1103.3624
|
Analyzing biosignals using the R freeware (open source) tool
|
physics.data-an cs.CE physics.bio-ph
|
For researchers in electromyography (EMG) and similar biosignals, signal
processing is naturally an essential topic. There are a number of excellent
tools available. To these one may add the freely available open source
statistical software package R, which is in fact also a programming language.
It is becoming one of the standard tools for scientists to visualize and process
data. A large number of additional packages are continually contributed by an
active community. The purpose of this paper is to alert biomechanics
researchers to the usefulness of this versatile tool. We discuss a set of basic
signal processing methods and their realizations with R which are provided in
the supplementary material. The data used in the examples are EMG and force
plate data acquired during a quiet standing test.
|
1103.3641
|
On the Pseudocodeword Redundancy of Binary Linear Codes
|
cs.IT math.IT
|
The AWGNC, BSC, and max-fractional pseudocodeword redundancies of a binary
linear code are defined to be the smallest number of rows in a parity-check
matrix such that the corresponding minimum pseudoweight is equal to the minimum
Hamming distance of the code. It is shown that most codes do not have a finite
pseudocodeword redundancy. Also, upper bounds on the pseudocodeword redundancy
for some families of codes, including codes based on designs, are provided. The
pseudocodeword redundancies for all codes of small length (at most 9) are
computed. Furthermore, comprehensive results are provided on the cases of
cyclic codes of length at most 250 for which the eigenvalue bound of Vontobel
and Koetter is sharp.
|
1103.3673
|
Buffers Improve the Performance of Relay Selection
|
cs.IT math.IT
|
We show that the performance of relay selection can be improved by employing
relays with buffers. Under the idealized assumption that no buffer is full or
empty, the best source-relay and the best relay-destination channels can be
simultaneously exploited by selecting the corresponding relays for reception
and transmission, respectively. The resulting relay selection scheme is
referred to as max-max relay selection (MMRS). Since for finite buffer sizes,
empty and full buffers are practically unavoidable if MMRS is employed, we
propose a hybrid relay selection (HRS) scheme, which is a combination of
conventional best relay selection (BRS) and MMRS. We analyze the outage
probabilities of MMRS and HRS and show that both schemes achieve the same
diversity gain as conventional BRS and a superior coding gain. Furthermore, our
results show that for moderate buffer sizes (e.g. 30 packets) HRS closely
approaches the performance of idealized MMRS and the performance gain compared
to BRS approaches 3 dB as the number of relays increases.
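A small Monte Carlo sketch (a toy model of our own, with Rayleigh fading and the idealized never-empty/never-full buffer assumption, not the paper's analysis) shows why MMRS improves on conventional BRS: on every realization, the independently chosen best hops are at least as good as any single relay's weaker hop.

```python
import numpy as np

rng = np.random.default_rng(1)

def outage(selector, snr_db=10.0, rate=1.0, relays=3, trials=10000):
    """Monte Carlo outage probability over Rayleigh-faded relay links."""
    snr = 10 ** (snr_db / 10)
    thr = 2 ** (2 * rate) - 1           # two hops per end-to-end packet
    fails = 0
    for _ in range(trials):
        g_sr = rng.exponential(size=relays)   # source->relay power gains
        g_rd = rng.exponential(size=relays)   # relay->destination power gains
        fails += not selector(snr * g_sr, snr * g_rd, thr)
    return fails / trials

def brs(sr, rd, thr):
    """Conventional best relay selection: one relay carries both hops."""
    k = np.argmax(np.minimum(sr, rd))
    return sr[k] >= thr and rd[k] >= thr

def mmrs(sr, rd, thr):
    """Max-max selection (ideal buffers): each hop's best relay is chosen."""
    return sr.max() >= thr and rd.max() >= thr
```

Since mmrs succeeds on every realization where brs does, its simulated outage probability is strictly smaller.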
|
1103.3687
|
Cost Based Satisficing Search Considered Harmful
|
cs.AI
|
Recently, several researchers have found that cost-based satisficing search
with A* often runs into problems. Although some "work arounds" have been
proposed to ameliorate the problem, there has not been any concerted effort to
pinpoint its origin. In this paper, we argue that the origins can be traced
back to the wide variance in action costs that is observed in most planning
domains. We show that such cost variance misleads A* search, and that this is
no trifling detail or accidental phenomenon, but a systemic weakness of the
very concept of "cost-based evaluation functions + systematic search +
combinatorial graphs". We show that satisficing search with size-based
evaluation functions is largely immune to this problem.
|
1103.3698
|
Super-resolution in map-making based on a physical instrument model and
regularized inversion. Application to SPIRE/Herschel
|
astro-ph.CO astro-ph.IM cs.CE
|
We investigate super-resolution methods for image reconstruction from data
provided by a family of scanning instruments like the Herschel observatory. To
do this, we constructed a model of the instrument that faithfully reflects the
physical reality, accurately taking the acquisition process into account to
explain the data in a reliable manner. The inversion, i.e., the image
reconstruction process, is based on a linear approach resulting from a
quadratic regularized criterion and numerical optimization tools. The
application concerns the reconstruction of maps for the SPIRE instrument of the
Herschel observatory. The numerical evaluation uses simulated and real data to
compare the standard tool (coaddition) and the proposed method. The inversion
approach is capable of restoring spatial frequencies over a bandwidth four
times that possible with coaddition, and thus of correctly showing details
invisible on
standard maps. The approach is also applied to real data with significant
improvement in spatial resolution.
|
1103.3719
|
Diversity-Multiplexing Tradeoff in the Multiaccess Relay Channel with
Finite Block Length
|
cs.IT math.IT
|
The Dynamic Decode-and-Forward (DDF) protocol and the Hybrid DDF and
Amplified-and-Forward (HDAF) protocol for the multiple-access relay channel
(MARC) with quasi static fading are evaluated using the Zheng-Tse
diversity-multiplexing tradeoff (DMT). We assume that there are two users, one
half-duplex relay, and a common destination, each equipped with a single antenna.
For the Rayleigh fading, the DDF protocol is well known and has been analyzed
in terms of the DMT with infinite block length. By carefully dealing with
properties specific to finite block length, we characterize the finite block
length DMT which takes into account the fact that the event of decoding error
at the relay causes the degradation in error performance when the block length
is finite. Furthermore, we consider the situation where the destination does
not have a priori knowledge of the relay decision time at which the relay
switches from listening to transmitting. By introducing a decision rejection
criterion such that the relay forwards message only when its decision is
reliable, and the generalized likelihood ratio test (GLRT) at the destination
that jointly decodes the relay decision time and the information message, our
analysis shows that the optimal DMT is achievable as if there were no decoding
error at the relay and the relay decision time is known at the destination.
Therefore, infinite block length and additional overhead for communicating the
decision time are not needed for the DDF to achieve the optimal DMT. To further
improve the DMT, we propose the HDAF protocol, which takes advantage of both the
DDF and the Amplified-and-Forward protocols by judiciously choosing which
protocol to use. Our result shows that the HDAF protocol outperforms the
original DDF from the DMT perspective. Finally, a variant of the HDAF protocol
with lower implementation complexity without sacrificing the DMT performance is
devised.
|
1103.3735
|
Refining Recency Search Results with User Click Feedback
|
cs.IR cs.AI cs.LG
|
Traditional machine-learned ranking systems for web search are often trained
to capture stationary relevance of documents to queries, which has limited
ability to track non-stationary user intention in a timely manner. In recency
search, for instance, the relevance of documents to a query on breaking news
often changes significantly over time, requiring effective adaptation to user
intention. In this paper, we focus on recency search and study a number of
algorithms to improve ranking results by leveraging user click feedback. Our
contributions are three-fold. First, we use real search sessions collected in a
random exploration bucket for \emph{reliable} offline evaluation of these
algorithms, which provides an unbiased comparison across algorithms without
online bucket tests. Second, we propose a re-ranking approach to improve search
results for recency queries using user clicks. Third, our empirical comparison
of a dozen algorithms on real-life search data suggests importance of a few
algorithmic choices in these applications, including generalization across
different query-document pairs, specialization to popular queries, and
real-time adaptation of user clicks.
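A minimal sketch of the flavour of click-based re-ranking (the smoothing constants, blending weight and names here are illustrative choices of ours, not the algorithms compared in the paper): a Beta-smoothed click-through rate is blended with the base ranker's score, so a document with strong recent click evidence can overtake a stale high-scoring one.

```python
def rerank(docs, base_score, clicks, views, alpha=1.0, beta=10.0, w=0.5):
    """Blend the base ranker's score with a Beta(alpha, beta)-smoothed CTR
    estimated from user click feedback (illustrative sketch)."""
    def score(d):
        ctr = (clicks.get(d, 0) + alpha) / (views.get(d, 0) + alpha + beta)
        return (1 - w) * base_score[d] + w * ctr
    return sorted(docs, key=score, reverse=True)

base = {"a": 0.9, "b": 0.8, "c": 0.1}
clicks = {"b": 50, "a": 2}          # "b" attracts most clicks for this query
views = {"b": 60, "a": 50, "c": 5}
order = rerank(["a", "b", "c"], base, clicks, views)   # "b" rises to the top
```

The Beta prior keeps rarely shown documents ("c") from being promoted on a handful of impressions.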
|
1103.3737
|
MDS Array Codes with Optimal Rebuilding
|
cs.IT cs.DC math.IT
|
MDS array codes are widely used in storage systems to protect data against
erasures. We address the \emph{rebuilding ratio} problem, namely, in the case
of erasures, what is the fraction of the remaining information that needs
to be accessed in order to rebuild \emph{exactly} the lost information? It is
clear that when the number of erasures equals the maximum number of erasures
that an MDS code can correct then the rebuilding ratio is 1 (access all the
remaining information). However, the interesting (and more practical) case is
when the number of erasures is smaller than the erasure correcting capability
of the code. For example, consider an MDS code that can correct two erasures:
What is the smallest amount of information that one needs to access in order to
correct a single erasure? Previous work showed that the rebuilding ratio is
bounded between 1/2 and 3/4, however, the exact value was left as an open
problem. In this paper, we solve this open problem and prove that for the case
of a single erasure with a 2-erasure correcting code, the rebuilding ratio is
1/2. In general, we construct a new family of $r$-erasure correcting MDS array
codes that has optimal rebuilding ratio of $\frac{1}{r}$ in the case of a
single erasure. Our array codes have efficient encoding and decoding algorithms
(for the case $r=2$ they use a finite field of size 3) and an optimal update
property.
|
1103.3742
|
The key exchange cryptosystem used with higher order Diophantine
equations
|
cs.IT math.IT
|
One-way functions are widely used for encrypting the secret in public key
cryptography, although they are regarded as plausibly one-way but have not been
proven so. Here we discuss the public key cryptosystem based on the system of
higher order Diophantine equations. In this system those Diophantine equations
are used as public keys for sender and recipient, and sender can recover the
secret from the Diophantine equation returned from recipient with a trapdoor.
In general, a system of Diophantine equations is hard to solve when it is
positive-dimensional, which implies that the Diophantine equations in this
cryptosystem can work as a possible one-way function. We also discuss some
implementation problems, which arise from the additional complexity needed to
construct Diophantine equations that resist attacks by tamperers.
|
1103.3745
|
The AllDifferent Constraint with Precedences
|
cs.AI
|
We propose AllDiffPrecedence, a new global constraint that combines together
an AllDifferent constraint with precedence constraints that strictly order
given pairs of variables. We identify a number of applications for this global
constraint including instruction scheduling and symmetry breaking. We give an
efficient propagation algorithm that enforces bounds consistency on this global
constraint. We show how to implement this propagator using a decomposition that
extends the bounds consistency enforcing decomposition proposed for the
AllDifferent constraint. Finally, we prove that enforcing domain consistency on
this global constraint is NP-hard in general.
|
1103.3746
|
Using a Secret Key to Foil an Eavesdropper
|
cs.CR cs.IT math.IT
|
This work addresses private communication with distributed systems in mind.
We consider how to best use secret key resources and communication to transmit
signals across a system so that an eavesdropper is least capable to act on the
signals. One of the key assumptions is that the private signals are publicly
available with a delay---in this case a delay of one. We find that even if the
source signal (information source) is memoryless, the design and performance of
the optimal system has a strong dependence on which signals are assumed to be
available to the eavesdropper with delay.
Specifically, we consider a distributed system with two components where
information is known to only one component and communication resources are
limited. Instead of measuring secrecy by "equivocation," we define a value
function for the system, based on the actions of the system and the adversary,
and characterize the optimal performance of the system, as measured by the
average value obtained against the worst adversary. The resulting optimal
rate-payoff region is expressed with information theoretic inequalities, and
the optimal communication methods are not standard source coding techniques but
instead are methods that stem from synthesizing a memoryless channel.
|
1103.3753
|
On the Scalability of Multidimensional Databases
|
cs.DB
|
It is commonly accepted in the practice of on-line analytical processing of
databases that the multidimensional database organization is less scalable than
the relational one. It is easy to see that the size of the multidimensional
organization may increase very quickly. For example, if we introduce one
additional dimension, then the total number of possible cells will be at least
doubled. However, this reasoning does not take into account the fact that the
multidimensional organization can be compressed. There are compression
techniques, which can remove all or at least a part of the empty cells from the
multidimensional organization, while maintaining a good retrieval performance.
Relational databases often use B-tree indices to speed up the access to given
rows of tables. It can be proven, under some reasonable assumptions, that the
total size of the table and the B-tree index is bigger than a compressed
multidimensional representation. This implies that the compressed array results
in a smaller database and faster access at the same time. This paper compares
several compression techniques and shows when we should and should not apply
compressed arrays instead of relational tables.
|
1103.3787
|
Pattern-recalling processes in quantum Hopfield networks far from
saturation
|
cond-mat.dis-nn cs.LG physics.bio-ph
|
As a mathematical model of associative memories, the Hopfield model is now
well established, and many studies of the pattern-recalling process have been
carried out from various different approaches. As is well known, a single
neuron is itself an uncertain, noisy unit with a finite, non-negligible error
in the input-output relation. To model the situation artificially, a kind of 'heat
bath' that surrounds neurons is introduced. The heat bath, which is a source of
noise, is specified by the 'temperature'. Several studies concerning the
pattern-recalling processes of the Hopfield model governed by the
Glauber-dynamics at finite temperature were already reported. However, we might
extend the 'thermal noise' to the quantum-mechanical variant. In this paper, in
terms of the stochastic process of quantum-mechanical Markov chain Monte Carlo
method (the quantum MCMC), we analytically derive macroscopically deterministic
equations of order parameters such as 'overlap' in a quantum-mechanical variant
of the Hopfield neural networks (let us call "quantum Hopfield model" or
"quantum Hopfield networks"). For the case in which non-extensive number $p$ of
patterns are embedded via asymmetric Hebbian connections, namely, $p/N \to 0$
for the number of neurons $N \to \infty$ ('far from saturation'), we evaluate
the recalling processes for one of the built-in patterns under the influence of
quantum-mechanical noise.
|
1103.3794
|
Improved QPP Interleavers for LTE Standard
|
cs.IT math.IT
|
This paper proposes and proves a theorem which stipulates sufficient
conditions the coefficients of two quadratic permutation polynomials (QPP) must
satisfy, so that the permutations generated by them are identical. The result
is used to reduce the search time of QPP interleavers with lengths given by
the Long Term Evolution (LTE) standard up to 512, by improving the distance
spectrum over the set of polynomials with the largest spreading factor.
Polynomials that lead to better performance than the LTE standard are found
for several lengths. Simulations show that coding gains of 0.5 dB can be
obtained compared to the LTE standard.
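The phenomenon the theorem characterizes is easy to observe numerically. One simple sufficient condition (an illustration of ours, weaker than the paper's theorem): for even $N$, adding $N/2$ to both coefficients leaves the permutation unchanged, because $(N/2)(x + x^2) \equiv 0 \pmod{N}$ as $x + x^2$ is always even.

```python
def qpp(f1, f2, N):
    """Permutation generated by the QPP f(x) = f1*x + f2*x^2 (mod N)."""
    return [(f1 * x + f2 * x * x) % N for x in range(N)]

# Interleaver length N = 40 (the smallest LTE length), pair (f1, f2) = (3, 10):
p1 = qpp(3, 10, 40)
p2 = qpp(23, 30, 40)     # both coefficients shifted by N/2 = 20
assert p1 == p2          # identical permutation
assert sorted(p1) == list(range(40))   # and it really is a permutation
```

Pruning such duplicate coefficient pairs is what shortens the exhaustive interleaver search.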
|
1103.3799
|
Relaxed Belief Propagation for MIMO Detection
|
cs.IT math.IT
|
In this paper, relaxed belief propagation (RBP) based detectors are proposed
for multiple-input multiple-output (MIMO) systems. The factor graph is
leveraged to represent the MIMO channels, and our algorithms are developed on
this basis. Unlike the existing complicated standard belief propagation (SBP)
detector, which considers all the edges of the factor graph when updating
messages, the proposed RBP focuses on a subset of the edges, which largely
reduces computational complexity. In particular, a relax degree is introduced
to determine how many edges are selected, so RBP is a generalized
edge-selection-based BP method and SBP is the special case of RBP with the
smallest relax degree.
Moreover, we propose a novel Gaussian approximation with feedback information
mechanism to enable the proposed RBP detector. In order to further improve the
detection performance, we also propose to cascade a minimum mean square error
(MMSE) detector before the RBP detector, from which pseudo a priori
information is judiciously exploited.
numerical simulation results, verify that the proposed RBP outperforms other
BP methods of similar complexity, and that the MMSE-cascaded RBP even
outperforms SBP at the largest relax degree in large MIMO systems.
|
1103.3801
|
Two methods for solving optimization problems arising in electronic
measurements and electrical engineering
|
math.NA cs.CE cs.NA math.OC physics.comp-ph
|
In this paper we introduce a common problem in electronic measurements and
electrical engineering: finding the first root from the left of an equation in
the presence of some initial conditions. We present examples of
electrotechnical devices (analog signal filtering), where it is necessary to
solve it. Two new methods for solving this problem, based on global
optimization ideas, are introduced. The first uses the exact a priori given
global Lipschitz constant for the first derivative. The second method
adaptively estimates local Lipschitz constants during the search. Both
algorithms either find the first root from the left or determine the global
minimizers (in the case when the objective function has no roots). Sufficient
conditions for convergence of the new methods to the desired solution are
established in both cases. The results of numerical experiments for real
problems and a set of test functions are also presented.
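The flavour of such Lipschitz-based first-root search is easy to sketch. The variant below is a simplification of ours: it assumes a global bound L on |f'| (i.e., f itself is Lipschitz), whereas the paper's methods exploit Lipschitz information about the first derivative. If |f(x)| = v and |f'| <= L, no root can lie within v/L of x, so the left-to-right scan can safely jump ahead by that amount.

```python
import math

def first_root_left(f, a, b, L, tol=1e-6):
    """Scan [a, b] left to right. With |f'| <= L, f cannot reach zero
    within |f(x)|/L of x, so that much of the interval can be skipped."""
    x = a
    while x <= b:
        fx = f(x)
        if abs(fx) <= tol:
            return x
        x += max(abs(fx) / L, tol)   # safe jump; tol floor guarantees progress
    return None   # f stays bounded away from zero on [a, b]

root = first_root_left(math.sin, 2.0, 8.0, L=1.0)   # first root from the left: pi
```

When f has no root on the interval, the scan terminates at b and reports failure, mirroring the global-minimizer fallback of the paper's methods.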
|
1103.3837
|
Transmission Selection Schemes using Sum Rate Analysis in Distributed
Antenna Systems
|
cs.IT math.IT
|
In this paper, we study single cell multi-user downlink distributed antenna
systems (DAS) where the antenna ports are geographically separated in a cell.
First, we derive an expression of the ergodic sum rate for DAS in the presence
of pathloss. Then, we propose a transmission selection scheme based on the
derived expressions to maximize the overall ergodic sum rate. Utilizing the
knowledge of distance information from a user to each distributed antenna (DA)
port, we consider the pairings of each DA port and its supporting user to
optimize the system performance. Then, we compute the ergodic sum rate for
various transmission mode candidates and adopt a transmission selection scheme
which chooses the best mode maximizing the ergodic sum rate among the mode
candidates. In our proposed scheme, the number of mode candidates is greatly
reduced compared to that of the ideal mode selection. Through Monte Carlo
simulations, we will show the accuracy of our derivation for the ergodic sum
rate expression. Moreover, simulation results with the pathloss modeling
confirm that the proposed transmission selection scheme produces the average
sum rate identical to the ideal mode selection with significantly reduced
selection candidates.
|
1103.3843
|
A Simple Sampling Method for Metric Measure Spaces
|
cs.IT math.IT math.MG
|
We introduce a new, simple metric method of sampling metric measure spaces,
based on a well-known "snowflaking operator", and we show that, as a
consequence of a classical result of Assouad, the sampling of doubling metric
spaces is bilipschitz equivalent to that of subsets of some $\mathbb{R}^N$.
Moreover, we compare this new method with two other approaches, in particular
to one that represents a direct application of our triangulation method of
metric measure spaces satisfying a generalized Ricci curvature condition.
|
1103.3846
|
The Performance of PCM Quantization Under Tight Frame Representations
|
math.NA cs.IT math.FA math.IT
|
In this paper, we study the performance of the PCM scheme with linear
quantization rule for quantizing finite unit-norm tight frame expansions for
$\R^d$ and derive the PCM quantization error without the White Noise
Hypothesis. We prove that for the class of unit norm tight frames derived from
uniform frame paths the quantization error has an upper bound of
$O(\delta^{3/2})$ regardless of the frame redundancy. This is achieved using
some of the techniques developed by G\"{u}nt\"{u}rk in his study of Sigma-Delta
quantization. Using tools of harmonic analysis we show that this upper bound is
sharp for $d=2$. A consequence of this result is that, unlike with Sigma-Delta
quantization, the error for PCM quantization in general does not diminish to
zero as one increases the frame redundancy. We extend the result to high
dimension and show that the PCM quantization error has an upper bound
$O(\delta^{(d+1)/2})$ for asymptotically equidistributed unit-norm tight
frames of $\mathbb{R}^{d}$.
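The setting is easy to reproduce numerically for d = 2 (a small experiment of our own with equispaced unit frame vectors; it illustrates the objects involved, not the paper's sharp bound): linear PCM rounds each frame coefficient to a multiple of delta, and linear reconstruction keeps the error of order delta regardless of the redundancy N.

```python
import numpy as np

def pcm_error(x, N, delta):
    """PCM-quantize the unit-norm tight frame expansion of x in R^2 and
    return the linear-reconstruction error (frame: N equispaced unit vectors)."""
    k = np.arange(N)
    E = np.stack([np.cos(2 * np.pi * k / N), np.sin(2 * np.pi * k / N)], axis=1)
    c = E @ x                          # frame coefficients <x, e_k>
    q = delta * np.round(c / delta)    # linear (PCM) quantization
    x_hat = (2.0 / N) * (E.T @ q)      # tight-frame reconstruction (bound N/2)
    return np.linalg.norm(x_hat - x)

x = np.array([0.3, 0.7])
err = pcm_error(x, N=16, delta=0.05)   # crude triangle-inequality bound: err <= delta
```

Since each coefficient moves by at most delta/2, the error is bounded by delta for every N; it does not decay as N grows, which is consistent with the abstract's point that redundancy alone does not drive the PCM error to zero.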
|
1103.3857
|
Difference Sequence Compression of Multidimensional Databases
|
cs.DB
|
Multidimensional databases often use compression techniques in order to
decrease the size of the database. This paper introduces a new method called
difference sequence compression. Under some conditions, this new technique
creates a smaller multidimensional database than other techniques such as
single count header compression, logical position compression or base-offset
compression. Keywords: compression, multidimensional database, On-line
Analytical Processing, OLAP.
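The underlying idea, as we read it, can be sketched as follows (an illustrative guess at the flavour of the technique; the paper's actual encoding details may differ): instead of storing the absolute logical position of each nonempty cell, store the sequence of gaps between consecutive positions, which are typically small and cheap to encode.

```python
def to_differences(positions):
    """Replace sorted logical positions of nonempty cells by their gaps."""
    out, prev = [], 0
    for p in positions:
        out.append(p - prev)
        prev = p
    return out

def from_differences(diffs):
    """Recover absolute logical positions by a running sum."""
    out, acc = [], 0
    for d in diffs:
        acc += d
        out.append(acc)
    return out

positions = [3, 4, 11, 12, 40]
diffs = to_differences(positions)          # small gaps instead of large positions
assert from_differences(diffs) == positions
```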
|
1103.3863
|
Multidimensional or Relational? / How to Organize an On-line Analytical
Processing Database
|
cs.DB
|
In the past few years, the number of OLAP applications increased quickly.
These applications use two significantly different DB structures:
multidimensional (MD) and table-based. One can show that the traditional model
of relational databases cannot distinguish between these two structures;
another model is necessary to make the differences visible. One such
difference is the speed of the system. It can be proven that the
multidimensional DB organization results in shorter response times, and this
is crucial, since a manager may become impatient if he or she has to wait,
say, more than 20 seconds for the next screen. On the other hand, we have to
pay for the speed with a bigger DB
size. Why does the size of MD databases grow so quickly? The reason is the
sparsity of data: The MD matrix contains many empty cells. Efficient handling
of sparse matrices is indispensable in an OLAP application. One way to handle
sparsity is to take the structure closer to the table-based one. Thus the DB
size decreases, while the application gets slower. Therefore, other methods are
needed. This paper deals with the comparison of the two DB structures and the
limits of their usage. The new results of the paper: (1) It gives a
constructive proof that all relations can be represented in MD arrays. (2) It
also shows when the MD array representation is quicker than the table-based
one. (3) The MD representation results in smaller DB size under some
conditions. One such sufficient condition is proved in the paper. (4) A
variation of the single count header compression scheme is described with an
algorithm, which creates the compressed array from the ordered table without
materializing the uncompressed array. (5) The speed of the two different
database organizations is tested with experiments, as well. The tests are done
on benchmark as well as real life data. The experiments support the theoretical
results.
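Result (4) above, building the compressed array directly from the ordered table, can be sketched as follows (a simplified single-count-header-style scheme of our own; the paper's variant differs in detail). Each maximal run of consecutive nonempty cells gets one header entry, and a row's logical position is computed from its dimension indices on the fly, so the uncompressed MD array is never materialized.

```python
from bisect import bisect_right

STRIDES = [12, 4, 1]   # row-major strides for a toy MD array of shape (2, 3, 4)

def compress_schc(ordered_rows, strides):
    """Build the dense value list plus a run header straight from the ordered
    table: one (logical start, physical start) entry per nonempty run."""
    values, header = [], []
    prev = None
    for idx, val in ordered_rows:                 # sorted by logical position
        pos = sum(i * s for i, s in zip(idx, strides))
        if prev is None or pos != prev + 1:
            header.append((pos, len(values)))     # a new run starts here
        values.append(val)
        prev = pos
    return values, header

def lookup_schc(values, header, idx, strides):
    """Map a cell's dimension indices to its stored value, or None if empty."""
    pos = sum(i * s for i, s in zip(idx, strides))
    k = bisect_right(header, (pos, float("inf"))) - 1
    if k < 0:
        return None
    run_log, run_phys = header[k]
    end = header[k + 1][1] if k + 1 < len(header) else len(values)
    i = run_phys + (pos - run_log)
    return values[i] if i < end else None

rows = [((0, 0, 0), 'a'), ((0, 0, 1), 'b'), ((0, 2, 3), 'c'), ((1, 0, 0), 'd')]
vals, hdr = compress_schc(rows, STRIDES)
```

Only the four nonempty cells and a two-entry header are stored, while empty cells cost nothing.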
|
1103.3866
|
Multibeam Satellite Frequency/Time Duality Study and Capacity
Optimization
|
cs.IT cs.NI math.IT
|
In this paper, we investigate two new candidate transmission schemes,
Non-Orthogonal Frequency Reuse (NOFR) and Beam-Hopping (BH). They operate in
different domains (frequency and time/space, respectively), and we want to know
which domain shows overall best performance. We propose a novel formulation of
the Signal-to-Interference plus Noise Ratio (SINR) which allows us to prove the
frequency/time duality of these schemes. Further, we propose two novel capacity
optimization approaches assuming per-beam SINR constraints in order to use the
satellite resources (e.g. power and bandwidth) more efficiently. Moreover, we
develop a general methodology to include technological constraints due to
realistic implementations, obtain the main factors that prevent the two
technologies from being dual of each other in practice, and formulate the
technological
gap between them. The Shannon capacity (upper bound) and current
state-of-the-art coding and modulations are analyzed in order to quantify the
gap and to evaluate the performance of the two candidate schemes. Simulation
results show significant improvements in terms of power gain, spectral
efficiency and traffic matching ratio when comparing with conventional systems,
which are designed based on uniform bandwidth and power allocation. The results
also show that the BH system has a less complex design and performs better
than the NOFR system, especially for non-real-time services.
|
1103.3872
|
Probability Bracket Notation, Term Vector Space, Concept Fock Space and
Induced Probabilistic IR Models
|
cs.IR math-ph math.MP math.PR
|
After a brief introduction to Probability Bracket Notation (PBN) for discrete
random variables in time-independent probability spaces, we apply both PBN and
Dirac notation to investigate probabilistic modeling for information retrieval
(IR). We derive the expressions of relevance of document to query (RDQ) for
various probabilistic models, induced by Term Vector Space (TVS) and by Concept
Fock Space (CFS). The inference network model (INM) formula is symmetric and
can be used to evaluate relevance of document to document (RDD); the
CFS-induced models contain ingredients of all three classical IR models. The
relevance formulas are tested and compared on different scenarios against a
famous textbook example.
|
1103.3882
|
A Transform Approach to Linear Network Coding for Acyclic Networks with
Delay
|
cs.IT math.IT
|
The algebraic formulation for linear network coding in acyclic networks with
the links having integer delay is well known. Based on this formulation, for a
given set of connections over an arbitrary acyclic network with integer delay
assumed for the links, the output symbols at the sink nodes, at any given time
instant, are an $\mathbb{F}_{q}$-linear combination of the input symbols
across different generations, where $\mathbb{F}_{q}$ denotes the field over
which the network operates. We use the finite-field discrete Fourier transform
(DFT) to convert the output symbols at the sink nodes, at any given time
instant, into an $\mathbb{F}_{q}$-linear combination of the input symbols
generated during the same generation. We refer to this as transforming the
acyclic network with delay
into {\em $n$-instantaneous networks} ($n$ is sufficiently large). We show that
under certain conditions, there exists a network code satisfying sink demands
in the usual (non-transform) approach if and only if there exists a network
code satisfying sink demands in the transform approach. Furthermore, we show
that the transform method (along with the use of alignment strategies) can be
employed to achieve half the rate corresponding to the individual
source-destination min-cut (which are assumed to be equal to 1) for some
classes of three-source three-destination unicast networks with delay, when the
zero-interference conditions are not satisfied.
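The basic tool, a DFT over a finite field, can be sketched for a prime field (a minimal naive implementation; the field size, root of unity and example are our own choices): with alpha a primitive n-th root of unity in GF(q), which requires n | q - 1, the transform and its inverse mirror the complex DFT.

```python
def ff_dft(x, alpha, q):
    """Naive DFT over GF(q), q prime: X[k] = sum_t x[t] * alpha^(k*t) mod q,
    where alpha is a primitive n-th root of unity in GF(q) (n | q - 1)."""
    n = len(x)
    return [sum(x[t] * pow(alpha, k * t, q) for t in range(n)) % q
            for k in range(n)]

def ff_idft(X, alpha, q):
    """Inverse transform: x[t] = n^{-1} * sum_k X[k] * alpha^(-k*t) mod q."""
    n = len(X)
    n_inv = pow(n, q - 2, q)     # n^{-1} in GF(q) via Fermat's little theorem
    a_inv = pow(alpha, q - 2, q) # alpha^{-1}
    return [n_inv * sum(X[k] * pow(a_inv, k * t, q) for k in range(n)) % q
            for t in range(n)]

# Example over GF(5) with n = 4: alpha = 2 has order 4 (2, 4, 3, 1 mod 5).
x = [1, 3, 0, 2]
X = ff_dft(x, 2, 5)
assert ff_idft(X, 2, 5) == x
```

In the network-coding setting, applying such a transform across n successive output symbols is what decouples the delayed network into n instantaneous ones.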
|
1103.3885
|
Feedback Reduction for Random Beamforming in Multiuser MIMO Broadcast
Channel
|
cs.IT math.IT
|
For the multiuser multiple-input multiple-output (MIMO) downlink channel, the
users feed back their channel state information (CSI) to help the base station
(BS) schedule users and improve the system sum rate. However, this incurs a
large aggregate feedback bandwidth which grows linearly with the number of
users. In this paper, we propose a novel scheme to reduce the feedback load in
a downlink orthogonal space division multiple access (SDMA) system with
zero-forcing receivers by allowing the users to dynamically determine the
number of feedback bits to use according to multiple decision thresholds.
Through theoretical analysis, we show that, while keeping the aggregate
feedback load of the entire system constant regardless of the number of users,
the proposed scheme almost achieves the optimal asymptotic sum rate scaling
with respect to the number of users (also known as the multiuser diversity).
Specifically, given the number of thresholds, the proposed scheme can achieve a
constant portion of the optimal sum rate achievable only by the system where
all the users always feed back, and the remaining portion (referred to as the
sum rate loss) decreases exponentially to zero as the number of thresholds
increases. By deriving a tight upper bound for the sum rate loss, the minimum
number of thresholds for a given tolerable sum rate loss is determined. In
addition, a fast bit allocation method is discussed for the proposed scheme,
and the simulation results show that the sum rate performances with the complex
optimal bit allocation method and with the fast algorithm are almost the same.
We compare our multi-threshold scheme to some previously proposed feedback
schemes. Through simulation, we demonstrate that the proposed scheme can reduce
the feedback load and utilize the limited feedback bandwidth more effectively
than the existing feedback methods.
|
1103.3904
|
Informed Heuristics for Guiding Stem-and-Cycle Ejection Chains
|
cs.AI cs.DM
|
The state of the art in local search for the Traveling Salesman Problem is
dominated by ejection chain methods utilising the Stem-and-Cycle reference
structure. Though effective, such algorithms employ very little information in
their successor selection strategy, typically seeking only to minimise the cost
of a move. We propose an alternative approach inspired by the AI literature
and show how an admissible heuristic can be used to guide successor selection.
We undertake an empirical analysis and demonstrate that this technique often
produces better results than less informed strategies, albeit at the cost of
running in higher polynomial time.
|
1103.3915
|
LDPC Code Design for the BPSK-constrained Gaussian Wiretap Channel
|
cs.IT math.IT
|
A coding scheme based on irregular low-density parity-check (LDPC) codes is
proposed to send secret messages from a source over the Gaussian wiretap
channel to a destination in the presence of a wiretapper, with the restriction
that the source can send only binary phase-shift keyed (BPSK) symbols. The
secrecy performance of the proposed coding scheme is measured by the secret
message rate through the wiretap channel as well as the equivocation rate about
the message at the wiretapper. A code search procedure is suggested to obtain
irregular LDPC codes that achieve good secrecy performance in this context.
|
1103.3933
|
Product Constructions for Perfect Lee Codes
|
cs.IT math.IT
|
A well known conjecture of Golomb and Welch is that the only nontrivial
perfect codes in the Lee and Manhattan metrics have length two or minimum
distance three. This problem and related topics have been the subject of
extensive research over the last forty years. In this paper, two product
constructions for perfect Lee codes and diameter perfect Lee codes are
presented. These
constructions yield a large number of nonlinear perfect codes and nonlinear
diameter perfect codes in the Lee and Manhattan metrics. A short survey and
other related problems on perfect codes in the Lee and the Manhattan metrics
are also discussed.
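The Lee metric underlying these constructions is straightforward to compute; the following is an illustrative sketch (not code from the paper), where the distance between two words over Z_q sums, per coordinate, the shorter way around a cycle of length q:

```python
def lee_distance(x, y, q):
    """Lee distance between two equal-length words over Z_q:
    each coordinate pair contributes the shorter of the two
    ways around the cycle 0, 1, ..., q-1."""
    total = 0
    for a, b in zip(x, y):
        d = abs(a - b) % q
        total += min(d, q - d)
    return total

# For q = 2 and q = 3 the Lee metric coincides with the Hamming
# metric; for larger q it behaves like a wrapped Manhattan metric.
```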
|
1103.3949
|
A Goal-Directed Implementation of Query Answering for Hybrid MKNF
Knowledge Bases
|
cs.AI
|
Ontologies and rules are usually loosely coupled in knowledge representation
formalisms. In fact, ontologies use open-world reasoning while the leading
semantics for rules use non-monotonic, closed-world reasoning. One exception is
the tightly-coupled framework of Minimal Knowledge and Negation as Failure
(MKNF), which allows statements about individuals to be jointly derived via
entailment from an ontology and inferences from rules. Nonetheless, the
practical usefulness of MKNF has not always been clear, although recent work
has formalized a general resolution-based method for querying MKNF when rules
are taken to have the well-founded semantics, and the ontology is modeled by a
general oracle. That work leaves open what algorithms should be used to relate
the entailments of the ontology and the inferences of rules. In this paper we
provide such algorithms, and describe the implementation of a query-driven
system, CDF-Rules, for hybrid knowledge bases combining both (non-monotonic)
rules under the well-founded semantics and a (monotonic) ontology, represented
by a CDF Type-1 (ALQ) theory. To appear in Theory and Practice of Logic
Programming (TPLP).
|
1103.3952
|
Mixing, Ergodic, and Nonergodic Processes with Rapidly Growing
Information between Blocks
|
cs.IT cs.CL math.IT
|
We construct mixing processes over an infinite alphabet and ergodic processes
over a finite alphabet for which Shannon mutual information between adjacent
blocks of length $n$ grows as $n^\beta$, where $\beta\in(0,1)$. The processes
are a modification of nonergodic Santa Fe processes, which were introduced in
the context of natural language modeling. The rates of mutual information for
the latter processes are similar and are also established in this paper. As an
auxiliary result, it is shown that infinite direct products of mixing processes
are also mixing.
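The quantity studied here, mutual information between adjacent blocks, can be estimated empirically from a sample path; the following plug-in estimator is an illustrative sketch (the function and its details are not taken from the paper):

```python
from collections import Counter
from math import log2

def block_mutual_information(seq, n):
    """Plug-in estimate of the Shannon mutual information I(X;Y)
    between adjacent length-n blocks X and Y, using empirical
    frequencies of non-overlapping 2n-blocks of the sequence."""
    pairs = [(tuple(seq[i:i + n]), tuple(seq[i + n:i + 2 * n]))
             for i in range(0, len(seq) - 2 * n + 1, 2 * n)]
    pxy = Counter(pairs)                     # joint block frequencies
    px = Counter(x for x, _ in pairs)        # marginal of first block
    py = Counter(y for _, y in pairs)        # marginal of second block
    total = len(pairs)
    return sum((c / total) * log2((c / total) /
               ((px[x] / total) * (py[y] / total)))
               for (x, y), c in pxy.items())
```

Plotting this estimate against n on a log-log scale is one way to probe the n^beta growth discussed in the abstract.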
|
1103.3954
|
BoolVar/PB v1.0, a java library for translating pseudo-Boolean
constraints into CNF formulae
|
cs.AI
|
BoolVar/PB is an open source java library dedicated to the translation of
pseudo-Boolean constraints into CNF formulae. Input constraints can be
categorized with tags. Several encoding schemes are implemented in a way that
each input constraint can be translated using one or several encoders,
according to the related tags. The library can be easily extended by adding new
encoders and/or new output formats.
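To illustrate what such a translation looks like (a generic sketch, not BoolVar/PB's actual API), the simplest pseudo-Boolean constraint x1 + ... + xn <= 1 admits a naive pairwise CNF encoding:

```python
def at_most_one_pairwise(variables):
    """Pairwise encoding of the pseudo-Boolean constraint
    x1 + ... + xn <= 1 as CNF: for every pair (xi, xj), add the
    clause (-xi OR -xj). Literals are nonzero integers following
    the DIMACS convention; a clause is a list of literals."""
    return [[-variables[i], -variables[j]]
            for i in range(len(variables))
            for j in range(i + 1, len(variables))]
```

This encoding uses O(n^2) clauses and no auxiliary variables; libraries like BoolVar/PB exist precisely because more compact encodings (sequential counters, BDD-based, etc.) trade clauses for auxiliary variables, and the best choice depends on the constraint.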
|
1103.4007
|
Multiple Access Channel with Partial and Controlled Cribbing Encoders
|
cs.IT math.IT
|
In this paper we consider a multiple access channel (MAC) with partial
cribbing encoders. This means that each of two encoders obtains a deterministic
function of the other encoder output with or without delay. The partial
cribbing scheme is especially motivated by the additive noise Gaussian MAC
since perfect cribbing results in the degenerate case of full cooperation
between the encoders and requires an infinite entropy link. We derive a single
letter characterization of the capacity of the MAC with partial cribbing for
the cases of causal and strictly causal partial cribbing. Several numerical
examples, such as quantized cribbing, are presented. We further consider and
derive the capacity region where the cribbing depends on actions that are
functions of the previous cribbed observations. In particular, we consider a
scenario where the action is "to crib or not to crib" and show that a naive
time-sharing strategy is not optimal.
|
1103.4012
|
On the accuracy of language trees
|
physics.soc-ph cs.CL q-bio.QM
|
Historical linguistics aims at inferring the most likely language
phylogenetic tree starting from information concerning the evolutionary
relatedness of languages. The available information typically consists of lists of
homologous (lexical, phonological, syntactic) features or characters for many
different languages.
From this perspective, the reconstruction of language trees is an example of
an inverse problem: starting from present-day information, which is incomplete
and often noisy, one aims to infer the most likely past evolutionary history. A
fundamental issue in inverse problems is the evaluation of the inference made.
A standard way of dealing with this question is to generate data with
artificial models in order to have full access to the evolutionary process one
is going to infer. This procedure presents an intrinsic limitation: when
dealing with real data sets, one typically does not know which model of
evolution is the most suitable for them. A possible way out is to compare
algorithmic inference with expert classifications. This is the point of view we
take here by conducting a thorough survey of the accuracy of reconstruction
methods as compared with the Ethnologue expert classifications. We focus in
particular on state-of-the-art distance-based methods for phylogeny
reconstruction using worldwide linguistic databases.
In order to assess the accuracy of the inferred trees we introduce and
characterize two generalizations of standard definitions of distances between
trees. Based on these scores we quantify the relative performances of the
distance-based algorithms considered. Further we quantify how the completeness
and the coverage of the available databases affect the accuracy of the
reconstruction. Finally we draw some conclusions about where the accuracy of
the reconstructions in historical linguistics stands and about the leading
directions to improve it.
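As a baseline for the tree-distance scores mentioned above, the standard Robinson-Foulds distance between rooted trees can be sketched as follows (an illustrative sketch; the paper's generalized scores are not reproduced here):

```python
def rf_distance(clades_a, clades_b):
    """Robinson-Foulds distance between two rooted trees, each
    represented as the set of its clades (the frozenset of leaves
    below each internal node): the size of the symmetric difference
    of the two clade sets."""
    return len(clades_a ^ clades_b)

# Example: two three-language trees that disagree on one grouping.
tree_a = {frozenset({'en', 'de'}), frozenset({'en', 'de', 'fr'})}
tree_b = {frozenset({'de', 'fr'}), frozenset({'en', 'de', 'fr'})}
```

Expert classifications such as Ethnologue's can be converted to clade sets in the same way, making inferred and reference trees directly comparable.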
|
1103.4039
|
Left invertibility of discrete-time output-quantized systems: the linear
case with finite inputs
|
math.OC cs.SY math.DS
|
This paper studies left invertibility of discrete-time linear
output-quantized systems. Quantized outputs are generated according to a given
partition of the state-space, while inputs are sequences on a finite alphabet.
Left invertibility, i.e., injectivity of the I/O map, is reduced to left
D-invertibility, under suitable conditions. While left invertibility takes into
account membership to sets of a given partition, left D-invertibility considers
only membership to a single set, and is much easier to detect. The condition
under which left invertibility and left D-invertibility are equivalent is that
the elements of the dynamic matrix of the system form an algebraically
independent set. Our main result is a method to compute left D-invertibility
for all linear systems with no eigenvalue of modulus one. Therefore we are able
to check left invertibility of output-quantized linear systems for a full
measure set of matrices. Some examples are presented to show the application of
the proposed method.
|
1103.4059
|
Modeling the dynamical interaction between epidemics on overlay networks
|
physics.soc-ph cond-mat.stat-mech cs.SI q-bio.PE
|
Epidemics seldom occur as isolated phenomena. Typically, two or more viral
agents spread within the same host population and may interact dynamically with
each other. We present a general model where two viral agents interact via an
immunity mechanism as they propagate simultaneously on two networks connecting
the same set of nodes. Exploiting a correspondence between the propagation
dynamics and a dynamical process performing progressive network generation, we
develop an analytic approach that accurately captures the dynamical interaction
between epidemics on overlay networks. The formalism allows for overlay
networks with arbitrary joint degree distribution and overlap. To illustrate
the versatility of our approach, we consider a hypothetical delayed
intervention scenario in which an immunizing agent is disseminated in a host
population to hinder the propagation of an undesirable agent (e.g. the spread
of preventive information in the context of an emerging infectious disease).
|
1103.4065
|
Probabilistically Safe Vehicle Control in a Hostile Environment
|
cs.SY cs.RO math.OC
|
In this paper we present an approach to control a vehicle in a hostile
environment with static obstacles and moving adversaries. The vehicle is
required to satisfy a mission objective expressed as a temporal logic
specification over a set of properties satisfied at regions of a partitioned
environment. We model the movements of adversaries between regions of the
environment as Poisson processes. Furthermore, we assume that the time it takes
for the vehicle to traverse between two facets of each region is
exponentially distributed, and we obtain the rate of this exponential
distribution from a simulator of the environment. We capture the motion of the
vehicle and the vehicle's updates of the adversaries' distributions as a Markov
Decision Process. Using tools in Probabilistic Computational Tree Logic, we
find a control strategy for the vehicle that maximizes the probability of
accomplishing the mission objective. We demonstrate our approach with
illustrative case studies.
|
1103.4072
|
Modularity functions maximization with nonnegative relaxation
facilitates community detection in networks
|
physics.soc-ph cs.SI
|
We show here that the problem of maximizing a family of quantitative
functions, encompassing both the modularity (Q-measure) and modularity density
(D-measure), for community detection can be uniformly understood as a
combinatorial optimization involving the trace of a matrix called the modularity
Laplacian. Instead of using traditional spectral relaxation, we apply an
additional nonnegativity constraint to this graph clustering problem and design
efficient algorithms to optimize the new objective. With the explicit
nonnegative constraint, our solutions are very close to the ideal community
indicator matrix and can directly assign nodes into communities. The
near-orthogonal columns of the solution can be reformulated as the posterior
probability of the corresponding node belonging to each community. Therefore, the
proposed method can be exploited to identify the fuzzy or overlapping
communities and thus facilitates the understanding of the intrinsic structure
of networks. Experimental results show that our new algorithm consistently,
sometimes significantly, outperforms the traditional spectral relaxation
approaches.
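For reference, the Q-measure this family of functions encompasses is Newman's modularity; a minimal sketch of evaluating it for a given partition (illustrative only, not the authors' optimization code):

```python
def modularity(adj, communities):
    """Newman's modularity Q for an undirected, unweighted graph.
    adj: dict mapping node -> set of neighbours;
    communities: dict mapping node -> community label.
    Q = (1/2m) * sum_{ij} [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    m2 = sum(len(nbrs) for nbrs in adj.values())  # equals 2m
    q = 0.0
    for i in adj:
        for j in adj:
            if communities[i] != communities[j]:
                continue  # delta(c_i, c_j) = 0
            a_ij = 1.0 if j in adj[i] else 0.0
            q += a_ij - len(adj[i]) * len(adj[j]) / m2
    return q / m2
```

Maximizing this Q over all partitions is the combinatorial problem the abstract recasts as a trace optimization with a nonnegativity constraint.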
|