| id | title | categories | abstract |
|---|---|---|---|
1004.1158
|
New MDS Self-Dual Codes over Large Finite Fields
|
cs.IT math.IT
|
We construct MDS Euclidean and Hermitian self-dual codes over large finite
fields of odd and even characteristics. Our codes arise from cyclic and
negacyclic duadic codes.
|
1004.1184
|
Circulant Arrays on Cyclic Subgroups of Finite Fields: Rank Analysis and
Construction of Quasi-Cyclic LDPC Codes
|
cs.IT math.IT
|
This paper consists of three parts. The first part presents a large class of
new binary quasi-cyclic (QC)-LDPC codes with girth of at least 6 whose
parity-check matrices are constructed based on cyclic subgroups of finite
fields. Experimental results show that the codes constructed perform well over
the binary-input AWGN channel with iterative decoding using the sum-product
algorithm (SPA). The second part analyzes the ranks of the parity-check
matrices of codes constructed based on finite fields with characteristic of 2
and gives combinatorial expressions for these ranks. The third part identifies
a subclass of constructed QC-LDPC codes that have large minimum distances.
Decoding of codes in this subclass with the SPA converges very fast.
|
1004.1195
|
Ergodic Capacity Analysis in Cognitive Radio Systems under Channel
Uncertainty
|
cs.IT math.IT
|
In this paper, pilot-symbol-assisted transmission in cognitive radio systems
over time-selective flat-fading channels is studied. It is assumed that causal
and noncausal Wiener filter estimators are used at the secondary receiver with
the aid of training symbols to obtain the channel side information (CSI) under
an interference power constraint. The cognitive radio model is described
together with the detection and false-alarm probabilities, determined using a
Neyman-Pearson detector for channel sensing. Subsequently, for both filters, the
variances of the estimation errors are calculated from the Doppler power
spectrum of the channel, and achievable rate expressions are provided for the
scenarios resulting from channel sensing. Numerical results are obtained for
Gauss-Markov modeled channels; the achievable rates obtained with the causal
and noncausal filters are compared, and the difference is shown to decrease
with increasing signal-to-noise ratio (SNR). Moreover, the optimal
probability of detection and false alarm values are shown, and the tradeoff
between these two parameters is discussed. Finally, optimal power distributions
are provided.
|
1004.1198
|
Structured LDPC Codes from Permutation Matrices Free of Small Trapping
Sets
|
cs.IT math.IT
|
This paper introduces a class of structured low-density parity-check (LDPC)
codes whose parity-check matrices are arrays of permutation matrices. The
permutation matrices are obtained from Latin squares and form a finite field
under some matrix operations. They are chosen so that the Tanner graphs do not
contain subgraphs harmful to iterative decoding algorithms. The construction of
column-weight-three codes is presented. Although the codes are optimized for
the Gallager A/B algorithm over the binary symmetric channel (BSC), their error
performance is very good on the additive white Gaussian noise channel (AWGNC)
as well.
|
1004.1215
|
Regularized Richardson-Lucy Algorithm for Sparse Reconstruction of
Poissonian Images
|
cs.CV
|
Restoration of digital images from their degraded measurements has always
been a problem of great theoretical and practical importance in numerous
applications of imaging sciences. A specific solution to the problem of image
restoration is generally determined by the nature of degradation phenomenon as
well as by the statistical properties of measurement noises. The present study
is concerned with the case in which the images of interest are corrupted by
convolutional blurs and Poisson noises. To deal with such problems, there
exists a range of solution methods which are based on the principles
originating from the fixed-point algorithm of Richardson and Lucy (RL). In this
paper, we provide conceptual and experimental proof that such methods tend to
converge to sparse solutions, which makes them applicable only to those images
which can be represented by a relatively small number of non-zero samples in
the spatial domain. Unfortunately, the set of such images is relatively small,
which restricts the applicability of RL-type methods. On the other hand,
virtually all practical images admit sparse representations in the domain of a
properly designed linear transform. To take advantage of this fact, it is
therefore tempting to modify the RL algorithm so as to make it recover
representation coefficients, rather than the values of their associated image.
Such a modification is introduced in this paper. Apart from the generality of its
assumptions, the proposed method is also superior to many established
reconstruction approaches in terms of estimation accuracy and computational
complexity. This and other conclusions of this study are validated through a
series of numerical experiments.
|
1004.1218
|
The Noise-Sensitivity Phase Transition in Compressed Sensing
|
math.ST cs.IT math.IT stat.TH
|
Consider the noisy underdetermined system of linear equations: $y = Ax_0 + z_0$,
with an $n \times N$ measurement matrix $A$, $n < N$, and Gaussian white noise
$z_0 \sim N(0, \sigma^2 I)$. Both $y$ and $A$ are known, both $x_0$ and $z_0$
are unknown, and we seek an approximation to $x_0$. When $x_0$ has few nonzeros,
useful approximations are obtained by $\ell_1$-penalized $\ell_2$ minimization,
in which the reconstruction $\hat{x}_\lambda$ solves
$\min_x \|y - Ax\|_2^2/2 + \lambda \|x\|_1$.
  Evaluate performance by mean-squared error
($\mathrm{MSE} = E\|\hat{x}_\lambda - x_0\|_2^2/N$). Consider matrices $A$ with
iid Gaussian entries and a large-system limit in which $n, N \to \infty$ with
$n/N \to \delta$ and $k/n \to \rho$. Call the ratio $\mathrm{MSE}/\sigma^2$ the
noise sensitivity. We develop formal expressions for the MSE of
$\hat{x}_\lambda$ and evaluate its worst-case formal noise sensitivity over all
types of $k$-sparse signals. The phase space $0 < \delta, \rho < 1$ is
partitioned by the curve $\rho = \rho_{\mathrm{MSE}}(\delta)$ into two regions.
Formal noise sensitivity is bounded throughout the region
$\rho < \rho_{\mathrm{MSE}}(\delta)$ and is unbounded throughout the region
$\rho > \rho_{\mathrm{MSE}}(\delta)$. The phase boundary
$\rho = \rho_{\mathrm{MSE}}(\delta)$ is identical to the previously known phase
transition curve for the equivalence of $\ell_1$ and $\ell_0$ minimization in
the $k$-sparse noiseless case. Hence a single phase boundary describes the
fundamental phase transitions for both the noiseless and noisy cases. Extensive
computational experiments validate the predictions of this formalism, including
the existence of game-theoretical structures underlying it. Underlying our
formalism is the AMP algorithm introduced earlier by the authors. Other papers
by the authors detail expressions for the formal MSE of AMP and its close
connection to $\ell_1$-penalized reconstruction. Here we derive the minimax
formal MSE of AMP and then read out results for $\ell_1$-penalized
reconstruction.
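As a concrete illustration of the $\ell_1$-penalized reconstruction described in this abstract, here is a minimal iterative soft-thresholding (ISTA) solver on a toy instance. This is a generic sketch, not the paper's AMP algorithm; the problem sizes and penalty value are invented for illustration.

```python
import numpy as np

def ista(A, y, lam, n_iters=500):
    """Minimize ||y - Ax||_2^2 / 2 + lam * ||x||_1 by iterative
    soft-thresholding (a simple proximal-gradient sketch)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# toy k-sparse instance with an iid Gaussian measurement matrix
rng = np.random.default_rng(0)
n, N, k = 50, 100, 5
A = rng.normal(size=(n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[:k] = 1.0
y = A @ x0 + 0.01 * rng.normal(size=n)
xhat = ista(A, y, lam=0.02)
mse = np.mean((xhat - x0) ** 2)            # per-coordinate MSE, as in the abstract
```

On this well-conditioned toy instance the recovered `xhat` is close to `x0`; the paper's analysis characterizes exactly how such MSE behaves in the large-system limit.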
|
1004.1227
|
Signature Recognition using Multi Scale Fourier Descriptor And Wavelet
Transform
|
cs.CV
|
This paper presents a novel off-line signature recognition method based on
a multi-scale Fourier descriptor and the wavelet transform. The main steps of
constructing a signature recognition system are discussed, and experiments on
real data sets show that the average error rate can reach 1%. Finally, we
compare 8 distance measures between feature vectors with respect to
recognition performance.
Key words: signature recognition; Fourier Descriptor; Wavelet transform;
personal verification
|
1004.1229
|
Feature-Based Adaptive Tolerance Tree (FATT): An Efficient Indexing
Technique for Content-Based Image Retrieval Using Wavelet Transform
|
cs.MM cs.DB
|
This paper introduces a novel indexing and access method, called the Feature-
Based Adaptive Tolerance Tree (FATT), which uses the wavelet transform to
organize large image data sets efficiently and to support popular image access
mechanisms like Content-Based Image Retrieval (CBIR). Conventional database
systems are designed for managing textual and numerical data, and retrieving
such data is often based on simple comparisons of text or numerical values.
However, this method is no longer adequate for images, since the digital
representation of images does not convey the reality of images. Retrieval of
images becomes difficult when the database is very large. This paper addresses
such problems and presents a novel indexing technique, the Feature-Based
Adaptive Tolerance Tree (FATT), which is designed to bring an effective
solution especially for indexing large databases. The proposed indexing scheme
is then used along with query by image content, in order to achieve the
ultimate goal from the user's point of view, that is, retrieval of all relevant
images. In the FATT indexing technique, features of the image are extracted
using the 2-dimensional discrete wavelet transform (2D-DWT), and an index code
is generated from the determinant value of the features. Multiresolution
analysis using the 2D-DWT can decompose the image into components at different
scales, so that the coarsest-scale components carry the global approximation
information while the finer-scale components contain the detailed information.
Experimental results show that the FATT outperforms the M-tree by up to 200%,
the Slim-tree by up to 120%, and the HCT by up to 89%. The FATT indexing
technique is adopted to increase the efficiency of data storage and retrieval.
|
1004.1230
|
Ontology-supported processing of clinical text using medical knowledge
integration for multi-label classification of diagnosis coding
|
cs.LG cs.AI
|
This paper discusses the knowledge integration of clinical information
extracted from a distributed medical ontology in order to improve a machine
learning-based multi-label coding assignment system. The proposed approach is
implemented using a decision-tree-based cascade hierarchical technique on
university hospital data for patients with Coronary Heart Disease (CHD). The
preliminary results obtained are satisfactory.
|
1004.1236
|
On Describing the Routing Capacity Regions of Networks
|
math.OC cs.IT cs.NI math.IT
|
The routing capacity region of networks with multiple unicast sessions can be
characterized using Farkas' lemma as an infinite set of linear inequalities. In
this paper this result is sharpened by exploiting properties of the solution
satisfied by each rate-tuple on the boundary of the capacity region, and a
finite description of the routing capacity region which depends on network
parameters is offered. For the special case of undirected ring networks,
additional results on the complexity of the description are provided.
|
1004.1249
|
Semi-Automatic Index Tuning: Keeping DBAs in the Loop
|
cs.DB
|
To obtain good system performance, a DBA must choose a set of indices that is
appropriate for the workload. The system can aid in this challenging task by
providing recommendations for the index configuration. We propose a new index
recommendation technique, termed semi-automatic tuning, that keeps the DBA "in
the loop" by generating recommendations that use feedback about the DBA's
preferences. The technique also works online, which avoids the limitations of
commercial tools that require the workload to be known in advance. The
foundation of our approach is the Work Function Algorithm, which can solve a
wide variety of online optimization problems with strong competitive
guarantees. We present an experimental analysis that validates the benefits of
semi-automatic tuning in a wide variety of conditions.
|
1004.1257
|
A Survey on Preprocessing Methods for Web Usage Data
|
cs.IR
|
The World Wide Web is a huge repository of web pages and links. It provides
an abundance of information for Internet users. The growth of the web is
tremendous, as approximately one million pages are added daily. Users' accesses
are recorded in web logs. Because of the tremendous usage of the web, the web
log files are growing at a fast rate and their size is becoming huge. Web data
mining is the application of data mining techniques to web data. Web usage
mining applies mining techniques to log data to extract the behavior of users,
which is used in various applications like personalized services, adaptive web
sites, customer profiling, prefetching, and creating attractive web sites. Web
usage mining consists of three phases: preprocessing, pattern discovery, and
pattern analysis. Web log data is usually noisy and ambiguous, and
preprocessing is an important step before mining. For discovering patterns,
sessions have to be constructed efficiently. This paper reviews existing work
on the preprocessing stage. A brief overview of various data mining techniques
for discovering patterns and of pattern analysis is given. Finally, a glimpse
of various applications of web usage mining is also presented.
|
1004.1277
|
Closed-Form Expressions for Relay Selection with Secrecy Constraints
|
cs.IT math.IT
|
Opportunistic relay selection based on instantaneous knowledge of the channels
is considered to increase security against eavesdroppers. Closed-form
expressions are derived for the average secrecy rates and the outage
probability when the cooperative networks use the Decode-and-Forward (DF) or
Amplify-and-Forward (AF) strategy. These techniques are demonstrated
analytically and with simulation results.
|
1004.1379
|
Index coding via linear programming
|
cs.IT math.CO math.IT
|
Index Coding has received considerable attention recently motivated in part
by real-world applications and in part by its connection to Network Coding. The
basic setting of Index Coding encodes the problem input as an undirected graph
and the fundamental parameter is the broadcast rate $\beta$, the average
communication cost per bit for sufficiently long messages (i.e. the non-linear
vector capacity). Recent nontrivial bounds on $\beta$ were derived from the
study of other Index Coding capacities (e.g. the scalar capacity $\beta_1$) by
Bar-Yossef et al (2006), Lubetzky and Stav (2007) and Alon et al (2008).
However, these indirect bounds shed little light on the behavior of $\beta$:
there was no known polynomial-time algorithm for approximating $\beta$ in a
general network to within a nontrivial (i.e. $o(n)$) factor, and the exact
value of $\beta$ remained unknown for any graph where Index Coding is
nontrivial.
Our main contribution is a direct information-theoretic analysis of the
broadcast rate $\beta$ using linear programs, in contrast to previous
approaches that compared $\beta$ with graph-theoretic parameters. This allows
us to resolve the aforementioned two open questions. We provide a
polynomial-time algorithm with a nontrivial approximation ratio for computing
$\beta$ in a general network along with a polynomial-time decision procedure
for recognizing instances with $\beta=2$. In addition, we pinpoint $\beta$
precisely for various classes of graphs (e.g. for various Cayley graphs of
cyclic groups) thereby simultaneously improving the previously known upper and
lower bounds for these graphs. Via this approach we construct graphs where the
difference between $\beta$ and its trivial lower bound is linear in the number
of vertices, and others where $\beta$ is uniformly bounded while its upper
bound derived from the naive encoding scheme is polynomially worse.
|
1004.1399
|
A note on the entropy of repetitive sequences of symmetry group
permutations
|
cs.IT math.IT
|
The paper makes the observation that all orders of information entropy are
equal in signals composed of repeating units of distinct symbols, where the
units can be classified as members of a symmetry group. This leads to an
improved metric for measuring the information content of higher-order entropies
in data such as text, signals, or genetics, and to another measure of
similarity for comparing the incremental information content across entropy
orders when comparing data of different sizes and symbol sets or when comparing
entire sequences.
|
1004.1423
|
Strong Secrecy and Reliable Byzantine Detection in the Presence of an
Untrusted Relay
|
cs.IT math.IT
|
We consider a Gaussian two-hop network where the source and the destination
can communicate only via a relay node that is both an eavesdropper and a
Byzantine adversary. Both the source and the destination nodes are allowed to
transmit, and the relay receives a superposition of their transmitted signals.
We propose a new coding scheme that satisfies two requirements simultaneously:
the transmitted message must be kept secret from the relay node, and the
destination must be able to detect, reliably and quickly, any Byzantine attack
that the relay node might launch. The three main components of the scheme are
nested lattice codes, privacy amplification, and algebraic manipulation
detection (AMD) codes. Specifically, for the Gaussian two-hop network, we show
that lattice coding can successfully pair with AMD codes, enabling their first
application to a noisy channel model. We prove, using this new coding scheme,
that the probability that the Byzantine attack goes undetected decreases
exponentially fast with respect to the number of channel uses, while the loss
in the secrecy rate, compared to the rate achievable when the relay is honest,
can be made arbitrarily small. In addition, in contrast with prior work in
Gaussian channels, the notion of secrecy provided here is strong secrecy.
|
1004.1447
|
The Total s-Energy of a Multiagent System
|
nlin.AO cs.MA math.OC
|
We introduce the "total s-energy" of a multiagent system with time-dependent
links. This provides a new analytical lens on bidirectional agreement dynamics,
which we use to bound the convergence rates of dynamical systems for
synchronization, flocking, opinion dynamics, and social epistemology.
|
1004.1503
|
A New Construction for Constant Weight Codes
|
cs.IT math.IT
|
A new construction for constant weight codes is presented. The codes are
constructed from $k$-dimensional subspaces of the vector space
$\mathbb{F}_q^n$. These subspaces form a constant dimension code in the
Grassmannian space $\mathcal{G}_q(n,k)$. Some of the constructed codes are
optimal constant weight codes with parameters not known before. An efficient
error-correction algorithm is given for the constructed codes. If the constant
dimension code has efficient encoding and decoding algorithms, then so does the
constructed constant weight code.
|
1004.1511
|
Bounds for codes for a non-symmetric ternary channel
|
cs.IT math.IT
|
We provide bounds for codes for a non-symmetric ternary channel or,
equivalently, for ternary codes with the Manhattan distance.
|
1004.1540
|
Importance of Sources using the Repeated Fusion Method and the
Proportional Conflict Redistribution Rules #5 and #6
|
cs.AI
|
We present in this paper some examples of how to compute by hand the PCR5
fusion rule for three sources, so the reader will better understand its
mechanism. We also take into consideration the importance of sources, which is
different from the classical discounting of sources.
|
1004.1564
|
Polymatroids with Network Coding
|
cs.IT math.IT
|
The problem of network coding for multicasting a single source to multiple
sinks was first studied by Ahlswede, Cai, Li and Yeung in 2000, in which
they established the celebrated max-flow min-cut theorem on non-physical
information flow over a network of independent channels. On the other hand, in
1980, Han studied the case with correlated multiple sources and a single
sink from the viewpoint of polymatroidal functions, in which a necessary and
sufficient condition was demonstrated for reliable transmission over the
network. This paper presents an attempt to unify both cases, leading to
a necessary and sufficient condition for reliable transmission over a
noisy network for multicasting all the correlated multiple sources to all the
multiple sinks. Furthermore, we also address the problem of transmitting
"independent" sources over a multiple-access type of network as well as over a
broadcast type of network, which reveals that (co-)polymatroidal
structures are intrinsically involved in these types of network coding.
|
1004.1569
|
A Streaming Approximation Algorithm for Klee's Measure Problem
|
cs.DS cs.DB
|
The efficient estimation of frequency moments of a data stream in one pass,
using limited space and time per item, is one of the most fundamental problems
in data stream processing. An especially important task is to estimate the
number of distinct elements in a data stream, which is generally referred to as
the zeroth frequency moment and denoted by $F_0$. In this paper, we consider
streams of rectangles defined over a discrete space, and the task is to compute
the total number of distinct points covered by the rectangles. This is known as
Klee's measure problem in 2 dimensions. We present and analyze a randomized
streaming approximation algorithm which gives an $(\epsilon,
\delta)$-approximation of $F_0$ for the total area in Klee's measure problem in
2 dimensions. Our algorithm achieves the following complexity bounds: (a) the
amortized processing time per rectangle is $O(\frac{1}{\epsilon^4}\log^3
n\log\frac{1}{\delta})$; (b) the space complexity is
$O(\frac{1}{\epsilon^2}\log n \log\frac{1}{\delta})$ bits; and (c) the time to
answer a query for $F_0$ is $O(\log\frac{1}{\delta})$. To our
knowledge, this is the first streaming approximation for Klee's measure
problem that achieves sub-polynomial bounds.
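For intuition about the quantity $F_0$ being approximated here, the number of distinct covered grid points can be computed exactly by brute force on tiny inputs; the streaming algorithm estimates this in small space for large streams. The following is a hypothetical toy baseline, not the paper's algorithm:

```python
def klee_f0(rects):
    """Exact count of distinct grid points covered by half-open axis-aligned
    rectangles [x1, x2) x [y1, y2); brute force, tiny inputs only."""
    covered = set()
    for x1, x2, y1, y2 in rects:
        for x in range(x1, x2):
            for y in range(y1, y2):
                covered.add((x, y))
    return len(covered)

# two 3x3 rectangles overlapping in a 2x2 block: 9 + 9 - 4 = 14
print(klee_f0([(0, 3, 0, 3), (1, 4, 1, 4)]))  # 14
```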
|
1004.1586
|
Belief Propagation for Min-cost Network Flow: Convergence and
Correctness
|
cs.DM cs.AI
|
Message passing type algorithms such as the so-called Belief Propagation
algorithm have recently gained a lot of attention in the statistics, signal
processing and machine learning communities as attractive algorithms for
solving a variety of optimization and inference problems. As a decentralized,
easy-to-implement and empirically successful algorithm, BP deserves attention
from the theoretical standpoint, yet little is known about it at the present
stage. In order to fill this gap, we consider the performance of the BP
algorithm in the context of the capacitated minimum-cost network flow problem,
a classical problem in the operations research field. We prove that BP
converges to the optimal solution in pseudo-polynomial time, provided that
the optimal solution of the underlying problem is unique and the problem input
is integral. Moreover, we present a simple modification of the BP algorithm
which gives a fully polynomial-time randomized approximation scheme (FPRAS) for
the same problem, which no longer requires the uniqueness of the optimal
solution. This is the first instance where BP is proved to have
fully-polynomial running time. Our results thus provide a theoretical
justification for the viability of BP as an attractive method to solve an
important class of optimization problems.
|
1004.1614
|
PROBER: Ad-Hoc Debugging of Extraction and Integration Pipelines
|
cs.DB
|
Complex information extraction (IE) pipelines assembled by plumbing together
off-the-shelf operators, specially customized operators, and operators re-used
from other text processing pipelines are becoming an integral component of most
text processing frameworks. A critical task faced by the IE pipeline user is to
run a post-mortem analysis on the output. Due to the diverse nature of
extraction operators (often implemented by independent groups), it is time
consuming and error-prone to describe operator semantics formally or
operationally to a provenance system. We introduce the first system that helps
IE users analyze pipeline semantics and infer provenance interactively while
debugging. This allows the effort to be proportional to the need, and to focus
on the portions of the pipeline under the greatest suspicion. We present a
generic debugger for running post-execution analysis of any IE pipeline
consisting of arbitrary types of operators. We propose an effective provenance
model for IE pipelines which captures a variety of operator types, ranging from
those for which full or no specifications are available. We present a suite of
algorithms to effectively build provenance and facilitate debugging. Finally,
we present an extensive experimental study on large-scale real-world
extractions from an index of ~500 million Web documents.
|
1004.1675
|
Fuzzy Logic of Speed and Steering Control System for Three Dimensional
Line Following of an Autonomous Vehicle
|
cs.RO
|
... This paper describes exploratory research on the design of a modular
autonomous mobile robot controller. The controller incorporates a fuzzy logic
[8] [9] approach for steering and speed control [37], a FL approach for
ultrasound sensing, and an overall expert system for guidance. The advantages
of a modular system are related to portability and transportability, i.e. any
vehicle can become autonomous with minimal modifications. A mobile robot test
bed has been constructed at the University of Cincinnati using a golf cart
base. This cart has full speed control, with guidance provided by a vision
system and obstacle avoidance using ultrasonic sensors. The speed and steering
fuzzy logic controller is supervised through a multi-axis motion controller.
The obstacle avoidance system is based on a microcontroller interfaced with
ultrasonic transducers. This microcontroller independently handles all timing
and distance calculations and sends distance information back to the fuzzy
logic controller via the serial line. This design yields a portable independent
system in which high-speed computer communication is not necessary. Vision
guidance has been accomplished with the use of CCD cameras judging the current
position of the robot. [34] [35] [36] The system generates a good image to
reduce erroneous commands in ground coordinates, to tackle the parameter
uncertainties of the system, and to obtain good WMR dynamic response. [1] Here
we apply a 3D line-following methodology. It transforms from 3D to 2D and also
maps the image coordinates and vice versa, leading to improved accuracy of the
WMR position. ...
|
1004.1677
|
Mining The Data From Distributed Database Using An Improved Mining
Algorithm
|
cs.DB
|
Association rule mining is an active data mining research area and most ARM
algorithms cater to a centralized environment. Centralized data mining to
discover useful patterns in distributed databases isn't always feasible because
merging data sets from different sites incurs huge network communication costs.
In this paper, an improved algorithm with good performance for data
mining is proposed. At local sites, it runs the application based on the
improved LMatrix algorithm, which is used to calculate local support counts.
The local sites also select a center site to manage every message exchange, so
as to obtain all globally frequent item sets. The algorithm also reduces the
time needed to scan the partition database by using LMatrix, which increases
its performance. The goal of this research is thus a distributed algorithm for
geographically distributed data sets that offers lower communication costs,
better running efficiency, and stronger scalability than direct application of
a sequential algorithm in distributed databases.
|
1004.1679
|
A Robust Fuzzy Clustering Technique with Spatial Neighborhood
Information for Effective Medical Image Segmentation
|
cs.CV
|
Medical image segmentation demands a segmentation algorithm that is efficient
and robust against noise. The conventional fuzzy c-means (FCM) algorithm is an
efficient clustering algorithm that is used in medical image segmentation. But
FCM is highly vulnerable to noise, since it uses only intensity values for
clustering the images. This paper aims to develop a novel and efficient fuzzy
spatial c-means clustering algorithm which is robust to noise. The proposed
clustering algorithm uses fuzzy spatial information to calculate membership
values. The input image is clustered using the proposed ISFCM algorithm. A
comparative study has been made between the conventional FCM and the proposed
ISFCM. The proposed approach is found to outperform the conventional FCM.
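The conventional FCM baseline this abstract builds on can be sketched in a few lines; the paper's spatial variant additionally incorporates neighborhood information into the memberships, which is not reproduced here. The data, initialization, and parameter choices below are illustrative only.

```python
import math

def fcm(points, c, m=2.0, iters=50, init=None):
    """Conventional fuzzy c-means: alternate membership and center updates.
    m > 1 is the fuzzifier; init optionally fixes the starting centers.
    A minimal sketch of plain FCM, not the paper's spatial variant."""
    centers = [list(p) for p in (init if init is not None else points[:c])]
    dim = len(points[0])
    U = []
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        U = []
        for p in points:
            d = [math.dist(p, v) + 1e-12 for v in centers]
            U.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c)) for j in range(c)])
        # center update: mean of the points weighted by u_ij^m
        for j in range(c):
            w = [U[i][j] ** m for i in range(len(points))]
            s = sum(w)
            centers[j] = [sum(w[i] * points[i][t] for i in range(len(points))) / s
                          for t in range(dim)]
    return centers, U

# two well-separated 2-D blobs; one center converges into each blob
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0),
       (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
centers, U = fcm(pts, 2, init=[pts[0], pts[3]])
```

On image data each pixel's intensity (or intensity plus spatial features, as in the spatial variant) plays the role of a point.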
|
1004.1686
|
New Clustering Algorithm for Vector Quantization using Rotation of Error
Vector
|
cs.CV cs.IT math.IT
|
The paper presents a new clustering algorithm. The proposed algorithm gives
less distortion compared to the well-known Linde-Buzo-Gray (LBG) algorithm and
Kekre's Proportionate Error (KPE) algorithm. In LBG, a constant error is added
every time to split the clusters, resulting in cluster formation in one
direction, which is 135° in the 2-dimensional case. Because of this,
clustering is inefficient, resulting in high MSE in LBG. To overcome this
drawback of LBG, a proportionate error is added to change the cluster
orientation in KPE. Though the cluster orientation in KPE is changed, its
variation is limited to ±45° around 135°. The proposed algorithm takes care of
this problem by introducing a new orientation every time the clusters are
split. The proposed method improves PSNR by 2 dB to 5 dB for codebook sizes 128
to 1024 with respect to LBG.
|
1004.1707
|
A Survey on Space-Time Turbo Codes
|
cs.IT math.IT
|
As wireless communication systems look to make the transition
from voice communication to interactive Internet data, achieving higher bit
rates becomes both increasingly desirable and challenging. Space-time coding
(STC) is a communications technique for wireless systems that employ multiple
transmit antennas and single or multiple receive antennas. Space-time codes
take advantage of both the spatial diversity provided by multiple
antennas and the temporal diversity available with time-varying fading.
Space-time codes can be divided into block codes and trellis codes. Space-time
trellis coding merges signal processing at the receiver with coding techniques
appropriate to multiple transmit antennas. The advantages of space-time codes
(STC) make them extremely attractive for high-rate wireless applications.
Initial STC research efforts focused on narrowband flat-fading channels. The
decoding complexity of space-time turbo codes (STTC) increases exponentially as
a function of the diversity level and transmission rate. This paper provides an
overview of various techniques used for the design of space-time turbo codes.
The paper also discusses the techniques used by researchers to build the
encoder and decoder sections for multiple transmit and receive antennas. In
addition, a future enhancement outlines the improvement and development of
various codes, which will involve implementing a Viterbi decoder with soft
decoding in a multi-antenna scenario. Finally, the space-time codes may be
analyzed using some of the available metrics and simulated for different
receive antenna configurations.
|
1004.1729
|
On the bias of BFS
|
cs.DM cs.DS cs.NI cs.SI stat.ME
|
Breadth First Search (BFS) and other graph traversal techniques are widely
used for measuring large unknown graphs, such as online social networks. It has
been empirically observed that an incomplete BFS is biased toward high degree
nodes. In contrast to more studied sampling techniques, such as random walks,
the precise bias of BFS has not been characterized to date. In this paper, we
quantify the degree bias of BFS sampling. In particular, we calculate the node
degree distribution expected to be observed by BFS as a function of the
fraction of covered nodes, in a random graph $RG(p_k)$ with a given degree
distribution $p_k$. Furthermore, we also show that, for $RG(p_k)$, all commonly
used graph traversal techniques (BFS, DFS, Forest Fire, and Snowball Sampling)
lead to the same bias, and we show how to correct for this bias. To give a
broader perspective, we compare this class of exploration techniques to random
walks that are well-studied and easier to analyze. Next, we study by simulation
the effect of graph properties not captured directly by our model. We find that
the bias gets amplified in graphs with strong positive assortativity. Finally,
we demonstrate the above results by sampling the Facebook social network, and
we provide some practical guidelines for graph sampling in practice.
|
1004.1743
|
An Analytical Study on Behavior of Clusters Using K Means, EM and K*
Means Algorithm
|
cs.LG cs.IR
|
Clustering is an unsupervised learning method that constitutes a cornerstone
of an intelligent data analysis process. It is used for the exploration of
inter-relationships among a collection of patterns, by organizing them into
homogeneous clusters. Clustering has been dynamically applied to a variety of
tasks in the field of Information Retrieval (IR). Clustering has become one of
the most active areas of research and development. Clustering attempts to
discover the set of consequential groups where those within each group are more
closely related to one another than those assigned to different groups. The
resultant clusters can provide a structure for organizing large bodies of text
for efficient browsing and searching. A wide variety of clustering algorithms
has been intensively studied for the clustering problem. Among the most common
and effectual of these, iterative optimization clustering algorithms have
demonstrated reasonable performance for clustering, e.g. the Expectation
Maximization (EM) algorithm and its variants, and the well-known K-means
algorithm. This paper presents an analysis of how the partition-based
clustering techniques EM, K-means and K*-means work on the heartspect dataset
with respect to the following measures: purity, entropy, CPU time,
cluster-wise analysis, mean value analysis and inter-cluster distance.
Finally, the paper provides experimental results on the dataset for five
clusters, which strengthen the finding that the cluster quality of the EM
algorithm is far better than that of the K-means and K*-means algorithms.
|
1004.1747
|
Mobile Database System: Role of Mobility on the Query Processing
|
cs.DB
|
The rapidly expanding technology of mobile communication gives mobile users
the capability of accessing information from anywhere and at any time.
Wireless technology has made it possible to achieve continuous connectivity in
a mobile environment. When a query is specified as continuous, the requesting
mobile user can obtain a continuously changing result. In order to provide
accurate and timely outcomes to the requesting mobile user, the locations of
moving objects have to be closely monitored. The objective of this paper is to
discuss problems related to the role of personal and terminal mobility in
query processing in the mobile environment.
|
1004.1749
|
Capacity Achieving Low Density Parity Check Lattices
|
cs.IT math.IT
|
The concept and existence of sphere-bound-achieving and capacity-achieving
lattices has been explained on AWGN channels by Forney. LDPC lattices,
introduced by Sadeghi, perform very well under iterative decoding. In
this work, we focus on an ensemble of regular LDPC lattices. We produce and
investigate an ensemble of LDPC lattices with known properties. It is shown
that these lattices are sphere-bound-achieving and capacity-achieving. As
byproducts we find the minimum distance, coding gain, kissing number and an
upper bound for probability of error for this special ensemble of regular LDPC
lattices.
|
1004.1752
|
Improved Two-Point Codes on Hermitian Curves
|
cs.IT math.AG math.IT math.NT
|
One-point codes on the Hermitian curve produce long codes with excellent
parameters. Feng and Rao introduced a modified construction that improves the
parameters while still using one-point divisors. A separate improvement of the
parameters was introduced by Matthews considering the classical construction
but with two-point divisors. Those two approaches are combined to describe an
elementary construction of two-point improved codes. Upon analysis of their
minimum distance and redundancy, it is observed that they improve on the
previous constructions for a large range of designed distances.
|
1004.1768
|
A New Approach to Lung Image Segmentation using Fuzzy Possibilistic
C-Means Algorithm
|
cs.CV
|
Image segmentation is a vital part of image processing. Segmentation is widely
applied to medical images in order to diagnose diseases. Medical images can be
segmented manually, but segmentation algorithms achieve higher accuracy than
manual segmentation. In the field of medical diagnosis an extensive diversity
of imaging techniques is presently available, such as radiography, computed
tomography (CT) and magnetic resonance imaging (MRI). Medical image
segmentation is an essential step for most subsequent image analysis tasks.
Although the original FCM algorithm yields good results for segmenting
noise-free images, it fails to segment images corrupted by noise, outliers and
other imaging artifacts. This paper presents an image segmentation approach
using a Modified Fuzzy C-Means (FCM) algorithm and the Fuzzy Possibilistic
C-Means (FPCM) algorithm. This approach is a generalized version of the
standard Fuzzy C-Means clustering (FCM) algorithm. The limitation of the
conventional FCM technique is eliminated by modifying the standard technique.
The Modified FCM algorithm is formulated by modifying the distance measurement
of the standard FCM algorithm to permit the labeling of a pixel to be
influenced by other pixels and to restrain the noise effect during
segmentation. Instead of having one term in the objective function, a second
term is included, forcing the membership to be as high as possible without a
maximum limit constraint of one. Experiments are conducted on real images to
investigate the performance of the proposed Modified FCM technique in
segmenting medical images. Standard FCM, Modified FCM and the Fuzzy
Possibilistic C-Means (FPCM) algorithm are compared to explore the accuracy of
our proposed approach.
|
1004.1772
|
Terrorism Event Classification Using Fuzzy Inference Systems
|
cs.AI
|
Terrorism has led to many problems in Thai societies, not only property
damage but also civilian casualties. Predicting terrorism activities in advance
can help prepare and manage risk from sabotage by these activities. This paper
proposes a framework focusing on event classification in terrorism domain using
fuzzy inference systems (FISs). Each FIS is a decision-making model combining
fuzzy logic and approximate reasoning. It consists of five main parts: the
input interface, the fuzzification interface, the knowledge base unit, the
decision-making unit and the output defuzzification interface. The adaptive
neuro-fuzzy inference system (ANFIS) is a FIS model adapted by combining fuzzy
logic and a neural network. The ANFIS supports automatic identification of
fuzzy logic rules and adjustment of membership functions (MFs). Moreover, the
neural network can learn directly from the data set to construct the fuzzy
logic rules and MFs used in various applications. FIS settings are evaluated
based on two comparisons. The first evaluation is the comparison between
unstructured and structured events using the same FIS setting. The second
comparison is between the FIS and ANFIS model settings for classifying
structured events. The data set consists of news articles related to terrorism
events in three southern provinces of Thailand. The experimental results show
that the classification performance of the FIS on structured events achieves
satisfactory accuracy and is better than that on unstructured events. In
addition, the classification of structured events using ANFIS gives higher
performance than using the FIS alone in the prediction of terrorism events.
|
1004.1789
|
SAR Image Segmentation using Vector Quantization Technique on Entropy
Images
|
cs.MM cs.CV
|
The development and application of various remote sensing platforms result in
the production of huge amounts of satellite image data. Therefore, there is an
increasing need for effective querying and browsing in these image databases.
In order to take advantage of and make good use of satellite image data, we
must be able to extract meaningful information from the imagery. Hence we
propose a new algorithm for SAR image segmentation using a vector quantization
technique on the entropy image. Initially, we obtain the entropy image, and in
the second step we use Kekre's Fast Codebook Generation (KFCG) algorithm to
segment it. A codebook of size 128 is generated for the entropy image. These
code vectors are further clustered into 8 clusters using the same KFCG
algorithm and converted into 8 images, which are displayed as the result. This
approach leads to neither over-segmentation nor under-segmentation. We
compared these results with the well-known Gray Level Co-occurrence Matrix
approach; the proposed algorithm gives better segmentation with less
complexity.
|
1004.1794
|
Probabilistic Semantic Web Mining Using Artificial Neural Analysis
|
cs.AI
|
Most web users' requirements are short search or navigation time and correctly
matched results. These constraints can be satisfied with some additional
modules attached to the existing search engines and web servers. This paper
proposes a powerful architecture for search engines under the title of
Probabilistic Semantic Web Mining, named after the methods used. With the
growth of ever-larger collections of various data resources on the World Wide
Web (WWW), web mining has become one of the most important requirements for
web users. Web servers store various formats of data including text, image,
audio, video etc., but servers cannot identify the contents of the data. These
search techniques can be improved by adding special techniques, including
semantic web mining and probabilistic analysis, to get more accurate results.
The semantic web mining technique can provide meaningful search of data
resources by eliminating useless information during the mining process. In
this technique, web servers maintain meta-information about each data resource
available on that particular web server. This helps the search engine retrieve
information that is relevant to the user's input string. This paper proposes
combining these two techniques, semantic web mining and probabilistic
analysis, for efficient and accurate web mining search results. The SPF can be
calculated by considering both the semantic accuracy and the syntactic
accuracy of the data with respect to the input string; this is the deciding
factor for producing results.
|
1004.1796
|
Document Clustering using Sequential Information Bottleneck Method
|
cs.IR
|
This paper illustrates the Principal Direction Divisive Partitioning (PDDP)
algorithm, describes its drawbacks, and introduces a combinatorial framework
for the PDDP algorithm. It then describes a simplified version of the EM
algorithm called the spherical Gaussian EM (sGEM) algorithm, and the
Information Bottleneck (IB) method, a technique for analyzing accuracy,
complexity, and time and space requirements. The PDDP algorithm recursively
splits the data samples into two sub-clusters using the hyperplane normal to
the principal direction derived from the covariance matrix, which is the
central logic of the algorithm. However, the PDDP algorithm can yield poor
results, especially when clusters are not well separated from one another. To
improve the quality of the clustering results, this problem is resolved by
reallocating new cluster memberships using the IB algorithm with different
settings. The IB method gives accuracy, but its time consumption is higher.
Furthermore, based on the theoretical background of the sGEM algorithm and the
sequential Information Bottleneck (sIB) method, it is straightforward to
extend the framework to cover the problem of estimating the number of clusters
using the Bayesian Information Criterion. Experimental results are given to
show the effectiveness of the proposed algorithm in comparison with the
existing algorithm.
|
1004.1821
|
Phase Transitions for Greedy Sparse Approximation Algorithms
|
cs.IT math.IT
|
A major enterprise in compressed sensing and sparse approximation is the
design and analysis of computationally tractable algorithms for recovering
sparse, exact or approximate, solutions of underdetermined linear systems of
equations. Many such algorithms have now been proven to have optimal-order
uniform recovery guarantees using the ubiquitous Restricted Isometry Property
(RIP). However, it is unclear when the RIP-based sufficient conditions on the
algorithm are satisfied. We present a framework in which this task can be
achieved; translating these conditions for Gaussian measurement matrices into
requirements on the signal's sparsity level, length, and number of
measurements. We illustrate this approach on three of the state-of-the-art
greedy algorithms: CoSaMP, Subspace Pursuit (SP), and Iterative Hard
Thresholding (IHT). Designed to allow a direct comparison of existing theory,
our framework implies that, according to the best known bounds, IHT requires
the fewest compressed sensing measurements and has the lowest per-iteration
computational cost of the three algorithms compared here.
|
1004.1854
|
Contribution Games in Social Networks
|
cs.GT cs.DS cs.MA
|
We consider network contribution games, where each agent in a social network
has a budget of effort that he can contribute to different collaborative
projects or relationships. Depending on the contribution of the involved agents
a relationship will flourish or drown, and to measure the success we use a
reward function for each relationship. Every agent is trying to maximize the
reward from all relationships that it is involved in. We consider pairwise
equilibria of this game, and characterize the existence, computational
complexity, and quality of equilibrium based on the types of reward functions
involved. For example, when all reward functions are concave, we prove that the
price of anarchy is at most 2. For convex functions the same only holds under
some special but very natural conditions. A special focus of the paper is
minimum effort games, where the reward of a relationship depends only on the
minimum effort of any of the participants. Finally, we show tight bounds for
approximate equilibria and convergence of dynamics in these games.
|
1004.1886
|
Feature Level Fusion of Face and Palmprint Biometrics by Isomorphic
Graph-based Improved K-Medoids Partitioning
|
cs.CV
|
This paper presents a feature level fusion approach which uses the improved
K-medoids clustering algorithm and isomorphic graph for face and palmprint
biometrics. Partitioning around medoids (PAM) algorithm is used to partition
the set of n invariant feature points of the face and palmprint images into k
clusters. By partitioning the face and palmprint images with scale invariant
features SIFT points, a number of clusters is formed on both the images. Then
on each cluster, an isomorphic graph is drawn. In the next step, the most
probable pair of graphs is searched using iterative relaxation algorithm from
all possible isomorphic graphs for a pair of corresponding face and palmprint
images. Finally, graphs are fused by pairing the isomorphic graphs into
augmented groups in terms of addition of invariant SIFT points and in terms of
combining pair of keypoint descriptors by concatenation rule. Experimental
results obtained from the extensive evaluation show that the proposed feature
level fusion with the improved K-medoids partitioning algorithm increases the
performance of the system, achieving a high level of accuracy.
|
1004.1887
|
Maximized Posteriori Attributes Selection from Facial Salient Landmarks
for Face Recognition
|
cs.CV
|
This paper presents a robust and dynamic face recognition technique based on
the extraction and matching of devised probabilistic graphs drawn on SIFT
features related to independent face areas. The face matching strategy is based
on matching individual salient facial graph characterized by SIFT features as
connected to facial landmarks such as the eyes and the mouth. In order to
reduce the face matching errors, the Dempster-Shafer decision theory is applied
to fuse the individual matching scores obtained from each pair of salient
facial features. The proposed algorithm is evaluated with the ORL and the IITK
face databases. The experimental results demonstrate the effectiveness and
potential of the proposed face recognition technique, even in the case of
partially occluded faces.
|
1004.1938
|
On Optimal Anticodes over Permutations with the Infinity Norm
|
cs.IT math.IT
|
Motivated by the set-antiset method for codes over permutations under the
infinity norm, we study anticodes under this metric. For half of the parameter
range we classify all the optimal anticodes, which is equivalent to finding the
maximum permanent of certain $(0,1)$-matrices. For the rest of the cases we
show constraints on the structure of optimal anticodes.
|
1004.1955
|
An Achievable Rate for the MIMO Individual Channel
|
cs.IT math.IT
|
We consider the problem of communicating over a multiple-input
multiple-output (MIMO) real valued channel for which no mathematical model is
specified, and achievable rates are given as a function of the channel input
and output sequences known a-posteriori. This paper extends previous results
regarding individual channels by presenting a rate function for the MIMO
individual channel, and showing its achievability in a fixed transmission rate
communication scenario.
|
1004.1982
|
State-Space Dynamics Distance for Clustering Sequential Data
|
cs.LG
|
This paper proposes a novel similarity measure for clustering sequential
data. We first construct a common state-space by training a single
probabilistic model with all the sequences in order to get a unified
representation for the dataset. Then, distances are obtained attending to the
transition matrices induced by each sequence in that state-space. This approach
solves some of the usual overfitting and scalability issues of existing
semi-parametric techniques that rely on training a model for each sequence.
Empirical studies on both synthetic and real-world datasets illustrate the
advantages of the proposed similarity measure for clustering sequences.
|
1004.1997
|
An optimized recursive learning algorithm for three-layer feedforward
neural networks for mimo nonlinear system identifications
|
cs.NE cs.DC cs.LG
|
Back-propagation with gradient method is the most popular learning algorithm
for feed-forward neural networks. However, it is critical to determine a proper
fixed learning rate for the algorithm. In this paper, an optimized recursive
algorithm is presented for online learning, derived analytically from matrix
operations and optimization methods, which avoids the trouble of selecting a
proper learning rate for the gradient method. A proof of weak convergence of
the proposed algorithm is also given. Although this approach is proposed for
three-layer feed-forward neural networks, it could be extended to multi-layer
feed-forward neural networks. The effectiveness of the proposed algorithm,
applied to the identification of the behavior of a two-input two-output
nonlinear dynamic system, is demonstrated by simulation experiments.
|
1004.1999
|
Towards a mathematical theory of meaningful communication
|
cs.IT math.IT nlin.AO q-bio.OT
|
Despite its obvious relevance, meaning has been outside most theoretical
approaches to information in biology. As a consequence, functional responses
based on an appropriate interpretation of signals have been replaced by a
probabilistic description of correlations between emitted and received symbols.
This assumption leads to potential paradoxes, such as the presence of a maximum
information associated to a channel that would actually create completely wrong
interpretations of the signals. Game-theoretic models of language evolution use
this view of Shannon's theory, but other approaches considering embodied
communicating agents show that the correct (meaningful) match resulting from
agent-agent exchanges is always achieved and natural systems obviously solve
the problem correctly. How can Shannon's theory be expanded in such a way that
meaning -at least, in its minimal referential form- is properly incorporated?
Inspired by the concept of {\em duality of the communicative sign} stated by
the Swiss linguist Ferdinand de Saussure, here we present a complete
description of the minimal system necessary to measure the amount of
information that is consistently decoded. Several consequences of our
developments are investigated, such as the uselessness of an amount of information
properly transmitted for communication among autonomous agents.
|
1004.2003
|
The Socceral Force
|
cs.AI cs.SE
|
We have an audacious dream, we would like to develop a simulation and virtual
reality system to support the decision making in European football (soccer). In
this review, we summarize the efforts we have made so far to fulfil this
dream. In addition, an introductory version of FerSML (Footballer and
Football Simulation Markup Language) is presented in this paper.
|
1004.2008
|
Matrix Coherence and the Nystrom Method
|
cs.AI
|
The Nystrom method is an efficient technique to speed up large-scale learning
applications by generating low-rank approximations. Crucial to the performance
of this technique is the assumption that a matrix can be well approximated by
working exclusively with a subset of its columns. In this work we relate this
assumption to the concept of matrix coherence and connect matrix coherence to
the performance of the Nystrom method. Making use of related work in the
compressed sensing and the matrix completion literature, we derive novel
coherence-based bounds for the Nystrom method in the low-rank setting. We then
present empirical results that corroborate these theoretical bounds. Finally,
we present more general empirical results for the full-rank setting that
convincingly demonstrate the ability of matrix coherence to measure the degree
to which information can be extracted from a subset of columns.
|
1004.2027
|
Dynamic Policy Programming
|
cs.LG cs.AI cs.SY math.OC stat.ML
|
In this paper, we propose a novel policy iteration method, called dynamic
policy programming (DPP), to estimate the optimal policy in the
infinite-horizon Markov decision processes. We prove the finite-iteration and
asymptotic l\infty-norm performance-loss bounds for DPP in the presence of
approximation/estimation error. The bounds are expressed in terms of the
l\infty-norm of the average accumulated error as opposed to the l\infty-norm of
the error in the case of the standard approximate value iteration (AVI) and the
approximate policy iteration (API). This suggests that DPP can achieve a better
performance than AVI and API since it averages out the simulation noise caused
by Monte-Carlo sampling throughout the learning process. We examine these
theoretical results numerically by comparing the performance of the
approximate variants of DPP with existing reinforcement learning (RL) methods
on different problem domains. Our results show that, in all cases, DPP-based
algorithms outperform other RL methods by a wide margin.
|
1004.2079
|
Bargaining dynamics in exchange networks
|
cs.GT cs.MA
|
We consider a one-sided assignment market or exchange network with
transferable utility and propose a model for the dynamics of bargaining in such
a market. Our dynamical model is local, involving iterative updates of 'offers'
based on estimated best alternative matches, in the spirit of pairwise Nash
bargaining. We establish that when a balanced outcome (a generalization of the
pairwise Nash bargaining solution to networks) exists, our dynamics converges
rapidly to such an outcome. We extend our results to the cases of (i) general
agent 'capacity constraints', i.e., an agent may be allowed to participate in
multiple matches, and (ii) 'unequal bargaining powers' (where we also find a
surprising change in rate of convergence).
|
1004.2102
|
Distributed anonymous discrete function computation
|
math.OC cs.DC cs.SY
|
We propose a model for deterministic distributed function computation by a
network of identical and anonymous nodes. In this model, each node has bounded
computation and storage capabilities that do not grow with the network size.
Furthermore, each node only knows its neighbors, not the entire graph. Our goal
is to characterize the class of functions that can be computed within this
model. In our main result, we provide a necessary condition for computability
which we show to be nearly sufficient, in the sense that every function that
satisfies this condition can at least be approximated. The problem of computing
suitably rounded averages in a distributed manner plays a central role in our
development; we provide an algorithm that solves it in time that grows
quadratically with the size of the network.
|
1004.2104
|
Sum Capacity of K User Gaussian Degraded Interference Channels
|
cs.IT math.IT
|
This paper studies a family of genie-MAC (multiple access channel) outer
bounds for K-user Gaussian interference channels. This family is inspired by
existing genie-aided bounding mechanisms, but differs from current approaches
in its optimization problem formulation and application. The fundamental idea
behind these bounds is to create a group of genie receivers that form multiple
access channels that can decode a subset of the original interference channel's
messages. The MAC sum capacity of each of the genie receivers provides an outer
bound on the sum of rates for this subset. The genie-MAC outer bounds are used
to derive new sum-capacity results. In particular, this paper derives
sum-capacity in closed-form for the class of K-user Gaussian degraded
interference channels. The sum-capacity achieving scheme is shown to be a
successive interference cancellation scheme. This result generalizes a known
result for two-user channels to K-user channels.
|
1004.2131
|
A New Full-diversity Criterion and Low-complexity STBCs with Partial
Interference Cancellation Decoding
|
cs.IT math.IT
|
Recently, Guo and Xia gave sufficient conditions for an STBC to achieve full
diversity when a PIC (Partial Interference Cancellation) or a PIC-SIC (PIC with
Successive Interference Cancellation) decoder is used at the receiver. In this
paper, we give alternative conditions for an STBC to achieve full diversity
with PIC and PIC-SIC decoders, which are equivalent to Guo and Xia's
conditions, but are much easier to check. Using these conditions, we construct
a new class of full diversity PIC-SIC decodable codes, which contain the
Toeplitz codes and a family of codes recently proposed by Zhang, Xu et al. as
proper subclasses. With the help of the new criteria, we also show that a class
of PIC-SIC decodable codes recently proposed by Zhang, Shi et al. can be
decoded with much lower complexity than what is reported, without compromising
on full diversity.
|
1004.2155
|
Constraint-based Query Distribution Framework for an Integrated Global
Schema
|
cs.DB cs.DC cs.IR
|
Distributed heterogeneous data sources need to be queried uniformly using a
global schema. A query on the global schema is reformulated so that it can be
executed on the local data sources. Constraints in the global schema and
mappings are used for source selection, query optimization, and querying
partitioned and replicated data sources. The system is entirely XML-based: it
poses queries in XML form, and transforms and integrates local results into an
XML document. Contributions include the use of constraints in our existing
global schema, which help in source selection and query optimization, and a
global query distribution framework for querying distributed heterogeneous
data sources.
|
1004.2222
|
What a Difference a Tag Cloud Makes: Effects of Tasks and Cognitive
Abilities on Search Results Interface Use
|
cs.HC cs.IR
|
The goal of this study is to expand our understanding of the relationships
between selected tasks, cognitive abilities and search result interfaces. The
underlying objective is to understand how to select search results presentation
for tasks and user contexts. Twenty-three participants conducted four search
tasks of two types and used two interfaces (List and Overview) to refine and
examine search results. Clickthrough data were recorded. This controlled study
employed a mixed model design with two within-subject factors (task and
interface) and two between-subject factors (two cognitive abilities: memory
span and verbal closure). Quantitative analyses were carried out by means of
the statistical package SPSS. Specifically, multivariate analysis of variance
with repeated measures and non-parametric tests were performed on the collected
data. The overview of search results appeared to have benefited searchers in
several ways. It made them faster; it facilitated formulation of more effective
queries and helped to assess search results. Searchers with higher cognitive
abilities were faster in the Overview interface and in less demanding
situations (on simple tasks), while at the same time they issued about the same
number of queries as lower-ability searchers. In more demanding situations (on
complex tasks and in the List interface), the higher ability searchers expended
more search effort, although they were not significantly slower than the lower
ability people in these situations. The higher search effort, however, did not
result in a measurable improvement of task outcomes for high-ability searchers.
These findings have implications for the design of search interfaces. They
suggest benefits of providing result overviews. They also suggest the
importance of considering cognitive abilities in the design of search results'
presentation and interaction.
|
1004.2242
|
Group Leaders Optimization Algorithm
|
cs.NE math.OC
|
We present a new global optimization algorithm in which the influence of the
leaders in social groups is used as an inspiration for the evolutionary
technique which is designed into a group architecture. To demonstrate the
efficiency of the method, a standard suite of single and multidimensional
optimization functions along with the energies and the geometric structures of
Lennard-Jones clusters are given as well as the application of the algorithm on
quantum circuit design problems. We show that as an improvement over previous
methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of
N-particles. In addition, an efficient circuit design is shown for the
two-qubit Grover search algorithm, a quantum algorithm providing quadratic
speed-up over its classical counterpart.
|
1004.2280
|
XOR at a Single Vertex -- Artificial Dendrites
|
cs.NE q-bio.NC
|
New to neuroscience with implications for AI, the exclusive OR, or any other
Boolean gate may be biologically accomplished within a single region where
active dendrites merge. This is demonstrated below using dynamic circuit
analysis. Medical knowledge aside, this observation points to the possibility
of specially coated conductors to accomplish artificial dendrites.
|
1004.2285
|
A Majorization-Minimization Approach to Design of Power Transmission
Networks
|
math.OC cs.CE
|
We propose an optimization approach to design cost-effective electrical power
transmission networks. That is, we aim to select both the network structure and
the line conductances (line sizes) so as to optimize the trade-off between
network efficiency (low power dissipation within the transmission network) and
the cost to build the network. We begin with a convex optimization method based
on the paper ``Minimizing Effective Resistance of a Graph'' [Ghosh, Boyd \&
Saberi]. We show that this (DC) resistive network method can be adapted to the
context of AC power flow. However, that does not address the combinatorial
aspect of selecting network structure. We approach this problem as selecting a
subgraph within an over-complete network, posed as minimizing the (convex)
network power dissipation plus a non-convex cost on line conductances that
encourages sparse networks where many line conductances are set to zero. We
develop a heuristic approach to solve this non-convex optimization problem
using: (1) a continuation method to interpolate from the smooth, convex problem
to the (non-smooth, non-convex) combinatorial problem, (2) the
majorization-minimization algorithm to perform the necessary intermediate
smooth but non-convex optimization steps. Ultimately, this involves solving a
sequence of convex optimization problems in which we iteratively reweight a
linear cost on line conductances to fit the actual non-convex cost. Several
examples are presented which suggest that the overall method is a good
heuristic for network design. We also consider how to obtain sparse networks
that are still robust against failures of lines and/or generators.
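The reweighting idea can be sketched on a toy model (our illustration, not the authors' solver): assume, purely for illustration, that each candidate line i carries a fixed flow p_i, so the per-line subproblem min_{g>0} p_i^2/g + lam*w_i*g has the closed form g_i = |p_i|/sqrt(lam*w_i); updating w_i = 1/(g_i + eps) emulates a log-like non-convex cost that drives lightly used lines toward zero conductance:

```python
import math

def sparse_conductances(flows, lam=1.0, eps=1e-3, iters=30):
    """Iteratively reweighted line sizing on a toy decoupled model (illustration
    only): each pass solves min_{g>0} p_i^2/g + lam*w_i*g in closed form, then
    reweights w_i = 1/(g_i + eps) so lightly used lines shrink toward zero."""
    w = [1.0] * len(flows)
    g = [0.0] * len(flows)
    for _ in range(iters):
        g = [abs(p) / math.sqrt(lam * wi) for p, wi in zip(flows, w)]
        w = [1.0 / (gi + eps) for gi in g]
    return g

sizes = sparse_conductances([10.0, 0.01, 5.0, 0.0])
# heavily loaded lines keep substantial conductance; negligible flows shrink to ~0
```

In the paper's setting the per-iteration subproblem is a genuine convex network dissipation minimization rather than this decoupled closed form, but the reweighted linear cost on conductances plays the same sparsifying role.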
|
1004.2299
|
An Optimal Coding Strategy for the Binary Multi-Way Relay Channel
|
cs.IT math.IT
|
We derive the capacity of the binary multi-way relay channel, in which
multiple users exchange messages at a common rate through a relay. The capacity
is achieved using a novel functional-decode-forward coding strategy. In the
functional-decode-forward coding strategy, the relay decodes functions of the
users' messages without needing to decode individual messages. The functions to
be decoded by the relay are defined such that when the relay broadcasts the
functions back to the users, every user is able to decode the messages of all
other users.
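The binary intuition behind this strategy can be sketched as follows (our illustration; the paper's construction is the authoritative one): with messages as bit strings, the relay decodes the pairwise XORs of adjacent users' messages, and broadcasting these L-1 functions lets each user, who knows its own message, chain the XORs to recover everyone else's:

```python
import random

def relay_functions(messages):
    """The relay decodes XORs of adjacent users' messages, not the messages."""
    return [bytes(a ^ b for a, b in zip(messages[i], messages[i + 1]))
            for i in range(len(messages) - 1)]

def user_decode(own_index, own_message, functions):
    """Each user chains the broadcast XORs with its own message."""
    n = len(functions) + 1
    recovered = [None] * n
    recovered[own_index] = own_message
    for i in range(own_index - 1, -1, -1):   # walk left: m_i = f_i XOR m_{i+1}
        recovered[i] = bytes(a ^ b for a, b in zip(functions[i], recovered[i + 1]))
    for i in range(own_index + 1, n):        # walk right: m_i = f_{i-1} XOR m_{i-1}
        recovered[i] = bytes(a ^ b for a, b in zip(functions[i - 1], recovered[i - 1]))
    return recovered

msgs = [bytes(random.randrange(256) for _ in range(8)) for _ in range(4)]
fns = relay_functions(msgs)
assert all(user_decode(k, msgs[k], fns) == msgs for k in range(4))
```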
|
1004.2300
|
Capacity Theorems for the AWGN Multi-Way Relay Channel
|
cs.IT math.IT
|
The L-user additive white Gaussian noise multi-way relay channel is
considered, where multiple users exchange information through a single relay at
a common rate. Existing coding strategies, i.e., complete-decode-forward and
compress-forward are shown to be bounded away from the cut-set upper bound at
high signal-to-noise ratios (SNR). It is known that the gap between the
compress-forward rate and the capacity upper bound is a constant at high SNR,
and that between the complete-decode-forward rate and the upper bound increases
with SNR at high SNR. In this paper, a functional-decode-forward coding
strategy is proposed. It is shown that for L >= 3, complete-decode-forward
achieves the capacity when SNR <= 0 dB, and functional-decode-forward achieves
the capacity when SNR >= 0 dB. For L=2, functional-decode-forward achieves the
capacity asymptotically as SNR increases.
|
1004.2303
|
The Binary-Symmetric Parallel-Relay Network
|
cs.IT math.IT
|
We present capacity results of the binary-symmetric parallel-relay network,
where there is one source, one destination, and K relays in parallel. We show
that forwarding relays, where the relays merely transmit their received
signals, achieve the capacity in two ways: with coded transmission at the
source and a finite number of relays, or uncoded transmission at the source and
a sufficiently large number of relays. On the other hand, decoding relays,
where the relays decode the source message, re-encode, and forward it to the
destination, achieve the capacity when the number of relays is small.
|
1004.2304
|
Spatio-Temporal Graphical Model Selection
|
stat.ML cs.AI
|
We consider the problem of estimating the topology of spatial interactions in
a discrete state, discrete time spatio-temporal graphical model where the
interactions affect the temporal evolution of each agent in a network. Among
other models, the susceptible-infected-recovered (SIR) model for
interaction events falls into this framework. We pose the problem as a structure
learning problem and solve it using an $\ell_1$-penalized likelihood convex
program. We evaluate the solution on a simulated spread of infection over a
complex network. Our topology estimates outperform those of a standard spatial
Markov random field graphical model selection using $\ell_1$-regularized
logistic regression.
|
1004.2316
|
Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable
Information Criterion in Singular Learning Theory
|
cs.LG
|
In regular statistical models, the leave-one-out cross-validation is
asymptotically equivalent to the Akaike information criterion. However, since
many learning machines are singular statistical models, the asymptotic behavior
of the cross-validation remains unknown. In previous studies, we established
the singular learning theory and proposed a widely applicable information
criterion, the expectation value of which is asymptotically equal to the
average Bayes generalization loss. In the present paper, we theoretically
compare the Bayes cross-validation loss and the widely applicable information
criterion and prove two theorems. First, the Bayes cross-validation loss is
asymptotically equivalent to the widely applicable information criterion as a
random variable. Therefore, model selection and hyperparameter optimization
using these two values are asymptotically equivalent. Second, the sum of the
Bayes generalization error and the Bayes cross-validation error is
asymptotically equal to $2\lambda/n$, where $\lambda$ is the real log canonical
threshold and $n$ is the number of training samples. Therefore the relation
between the cross-validation error and the generalization error is determined
by the algebraic geometrical structure of a learning machine. We also clarify
that the deviance information criteria are different from the Bayes
cross-validation and the widely applicable information criterion.
|
1004.2342
|
Mean field for Markov Decision Processes: from Discrete to Continuous
Optimization
|
cs.AI cs.PF cs.SY math.OC math.PR
|
We study the convergence of Markov Decision Processes made of a large number
of objects to optimization problems on ordinary differential equations (ODE).
We show that the optimal reward of such a Markov Decision Process, satisfying a
Bellman equation, converges to the solution of a continuous
Hamilton-Jacobi-Bellman (HJB) equation based on the mean field approximation of
the Markov Decision Process. We give bounds on the difference of the rewards,
and a constructive algorithm for deriving an approximating solution to the
Markov Decision Process from a solution of the HJB equations. Three examples,
pertaining respectively to investment strategies, population dynamics control,
and scheduling in queues, are developed. They are
used to illustrate and justify the construction of the controlled ODE and to
show the gain obtained by solving a continuous HJB equation rather than a large
discrete Bellman equation.
|
1004.2372
|
Learning Deterministic Regular Expressions for the Inference of Schemas
from XML Data
|
cs.DB cs.FL
|
Inferring an appropriate DTD or XML Schema Definition (XSD) for a given
collection of XML documents essentially reduces to learning deterministic
regular expressions from sets of positive example words. Unfortunately, there
is no algorithm capable of learning the complete class of deterministic regular
expressions from positive examples only, as we will show. The regular
expressions occurring in practical DTDs and XSDs, however, are such that every
alphabet symbol occurs only a small number of times. As such, in practice it
suffices to learn the subclass of deterministic regular expressions in which
each alphabet symbol occurs at most k times, for some small k. We refer to such
expressions as k-occurrence regular expressions (k-OREs for short). Motivated
by this observation, we provide a probabilistic algorithm that learns k-OREs
for increasing values of k, and selects the deterministic one that best
describes the sample based on a Minimum Description Length argument. The
effectiveness of the method is empirically validated both on real world and
synthetic data. Furthermore, the method is shown to be conservative over the
simpler classes of expressions considered in previous work.
|
1004.2392
|
On the optimal stacking of noisy observations
|
cs.IT math.IT
|
Observations where additive noise is present can for many models be grouped
into a compound observation matrix, adhering to the same type of model. There
are many ways the observations can be stacked, for instance vertically,
horizontally, or quadratically. An estimator for the spectrum of the underlying
model can be formulated for each stacking scenario in the case of Gaussian
noise. We compare these spectrum estimators for the different stacking
scenarios, and show that all kinds of stacking actually decrease the variance
when compared to just taking an average of the observations. We show that,
regardless of the number of observations, the variance of the estimator is
smallest when the compound observation matrix is made as square as possible.
When the number of observations grows, however, it is shown that the difference
between the estimators is marginal: Two stacking scenarios where the number of
columns and rows grow to infinity are shown to have the same variance
asymptotically, even if the asymptotic matrix aspect ratios differ. Only the
cases of vertical and horizontal stackings display different behaviour, giving
a higher variance asymptotically. Models where not all kinds of stackings are
possible are also discussed.
|
1004.2425
|
Bounds on Thresholds Related to Maximum Satisfiability of Regular Random
Formulas
|
cs.IT cs.CC cs.DM math.IT
|
We consider the regular balanced model of formula generation in conjunctive
normal form (CNF) introduced by Boufkhad, Dubois, Interian, and Selman. We say
that a formula is $p$-satisfying if there is a truth assignment satisfying
a $1-2^{-k}+p 2^{-k}$ fraction of clauses. Using the first moment method, we
determine an upper bound on the threshold clause density such that there are no
$p$-satisfying assignments with high probability above this upper bound. There
are two aspects in deriving the lower bound using the second moment method. The
first aspect is, given any $p \in (0,1)$ and $k$, evaluate the lower bound on
the threshold. This evaluation is numerical in nature. The second aspect is to
derive the lower bound as a function of $p$ for large enough $k$. We address
the first aspect and evaluate the lower bound on the $p$-satisfying threshold
using the second moment method. We observe that as $k$ increases the lower
bound seems to converge to the asymptotically derived lower bound for uniform
model of formula generation by Achlioptas, Naor, and Peres.
|
1004.2434
|
The Multi-way Relay Channel
|
cs.IT math.IT
|
The multiuser communication channel, in which multiple users exchange
information with the help of a relay terminal, termed the multi-way relay
channel (mRC), is introduced. In this model, multiple interfering clusters of
users communicate simultaneously, where the users within the same cluster wish
to exchange messages among themselves. It is assumed that the users cannot
receive each other's signals directly, and hence the relay terminal in this
model is the enabler of communication. In particular, restricted encoders,
which ignore the received channel output and use only the corresponding
messages for generating the channel input, are considered. Achievable rate
regions and an outer bound are characterized for the Gaussian mRC, and their
comparison is presented in terms of exchange rates in a symmetric Gaussian
network scenario. It is shown that the compress-and-forward (CF) protocol
achieves exchange rates within a constant bit offset of the exchange capacity
independent of the power constraints of the terminals in the network. A finite
bit gap between the exchange rates achieved by the CF and the
amplify-and-forward (AF) protocols is also shown. The two special cases of the
mRC, the full data exchange model, in which every user wants to receive
messages of all other users, and the pairwise data exchange model which
consists of multiple two-way relay channels, are investigated in detail. In
particular for the pairwise data exchange model, in addition to the proposed
random coding based achievable schemes, a nested lattice coding based scheme is
also presented and is shown to achieve exchange rates within a constant bit gap
of the exchange capacity.
|
1004.2484
|
Duality, Polite Water-filling, and Optimization for MIMO B-MAC
Interference Networks and iTree Networks
|
cs.IT math.IT
|
This paper gives the long-sought network version of water-filling, named
polite water-filling. Unlike in single-user MIMO channels, where no one uses
general purpose optimization algorithms in place of the simple and optimal
water-filling for transmitter optimization, the traditional water-filling is
generally far from optimal in networks as simple as MIMO multiaccess channels
(MAC) and broadcast channels (BC), where steepest ascent algorithms have been
used except for the sum-rate optimization. This is changed by the polite
water-filling that is optimal for all boundary points of the capacity regions
of MAC and BC and for all boundary points of a set of achievable regions of a
more general class of MIMO B-MAC interference networks, which is a combination
of multiple interfering broadcast channels, from the transmitter point of view,
and multiaccess channels, from the receiver point of view, including MAC, BC,
interference channels, X networks, and most practical wireless networks as
special cases. It is polite because it strikes an optimal balance between
reducing interference to others and maximizing a link's own rate. Employing it,
the related optimizations can be vastly simplified by taking advantage of the
structure of the problems. Deeply connected to the polite water-filling, the
rate duality is extended to the forward and reverse links of the B-MAC
networks. As a demonstration, weighted sum-rate maximization algorithms based
on polite water-filling and duality with superior performance and low
complexity are designed for B-MAC networks and are analyzed for Interference
Tree (iTree) Networks, a sub-class of the B-MAC networks that possesses
promising properties for further information theoretic study.
|
1004.2515
|
Nonnegative Decomposition of Multivariate Information
|
cs.IT math-ph math.IT math.MP physics.bio-ph physics.data-an q-bio.NC q-bio.QM
|
Of the various attempts to generalize information theory to multiple
variables, the most widely utilized, interaction information, suffers from the
problem that it is sometimes negative. Here we reconsider from first principles
the general structure of the information that a set of sources provides about a
given variable. We begin with a new definition of redundancy as the minimum
information that any source provides about each possible outcome of the
variable, averaged over all possible outcomes. We then show how this measure of
redundancy induces a lattice over sets of sources that clarifies the general
structure of multivariate information. Finally, we use this redundancy lattice
to propose a definition of partial information atoms that exhaustively
decompose the Shannon information in a multivariate system in terms of the
redundancy between synergies of subsets of the sources. Unlike interaction
information, the atoms of our partial information decomposition are never
negative and always support a clear interpretation as informational quantities.
Our analysis also demonstrates how the negativity of interaction information
can be explained by its confounding of redundancy and synergy.
|
1004.2519
|
Robust State Space Filtering under Incremental Model Perturbations
Subject to a Relative Entropy Tolerance
|
math.OC cs.IT cs.SY math.IT
|
This paper considers robust filtering for a nominal Gaussian state-space
model, when a relative entropy tolerance is applied to each time increment of a
dynamical model. The problem is formulated as a dynamic minimax game where the
maximizer adopts a myopic strategy. This game is shown to admit a saddle point
whose structure is characterized by applying and extending results presented
earlier in [1] for static least-squares estimation. The resulting minimax
filter takes the form of a risk-sensitive filter with a time varying risk
sensitivity parameter, which depends on the tolerance bound applied to the
model dynamics and observations at the corresponding time index. The
least-favorable model is constructed and used to evaluate the performance of
alternative filters. Simulations comparing the proposed risk-sensitive filter
to a standard Kalman filter show a significant performance advantage when
applied to the least-favorable model, and only a small performance loss for the
nominal model.
|
1004.2523
|
How Much Multiuser Diversity is Required for Energy Limited Multiuser
Systems?
|
cs.IT math.IT
|
Multiuser diversity (MUDiv) is one of the central concepts in multiuser (MU)
systems. In particular, MUDiv allows for scheduling among users in order to
eliminate the negative effects of unfavorable channel fading conditions of some
users on the system performance. Scheduling, however, consumes energy (e.g.,
for making users' channel state information available to the scheduler). This
extra usage of energy, which could potentially be used for data transmission,
can be very wasteful, especially if the number of users is large. In this
paper, we answer the question of how much MUDiv is required for energy limited
MU systems. Focusing on uplink MU wireless systems, we develop MU scheduling
algorithms which aim at maximizing the MUDiv gain. Toward this end, we
introduce a new realistic energy model which accounts for scheduling energy and
describes the distribution of the total energy between scheduling and data
transmission stages. Using the fact that such energy distribution can be
controlled by varying the number of active users, we optimize this number by
either (i) minimizing the overall system bit error rate (BER) for a fixed total
energy of all users in the system or (ii) minimizing the total energy of all
users for fixed BER requirements. We find that for a fixed number of available
users, the achievable MUDiv gain can be improved by activating only a subset of
users. Using asymptotic analysis and numerical simulations, we show that our
approach benefits from MUDiv gains higher than those achievable by a generic
greedy access algorithm, which is the optimal scheduling method for energy
unlimited systems.
|
1004.2542
|
Relay-Assisted Partial Packet Recovery with IDMA Method in CDMA Wireless
Network
|
cs.IT math.IT
|
Automatic Repeat Request (ARQ) is an effective technique for reliable
transmission of packets in wireless networks. In ARQ, however, only a few
erroneous bits in a packet will cause the entire packet to be discarded at the
receiver. In this case, it is wasteful to retransmit the correctly received bits
in the packet. Partial packet recovery retransmits only the unreliably decoded
bits in order to increase the throughput of the network. In addition, the
cooperative transmission based on Interleave-division multiple-access (IDMA)
can obtain diversity gains with multiple relays with different locations for
multiple sources simultaneously. By exploiting the diversity of the channel
between relay and destination, we propose a relay-assisted partial packet
recovery scheme for CDMA wireless networks to improve throughput. In
the proposed scheme, asynchronous IDMA iterative chip-by-chip multiuser
detection is utilized as the method of partial recovery for multiple users, and
can serve as a complement to current CDMA networks. The concept of confidence
values is applied to detect unreliably decoded bits. Based on the positions of
the unreliably decoded bits, we use a recursive algorithm based on cost
evaluation to decide the feedback strategy, from which the feedback request with
minimum cost is obtained. Simulation results show that throughput can be
significantly improved with our scheme compared with the traditional ARQ scheme;
an upper bound for our scheme is also provided in the simulations. Moreover, we
show how the relays' locations affect the performance.
|
1004.2616
|
Achievable Rate Regions for Dirty Tape Channels and "Joint Writing on
Dirty Paper and Dirty Tape"
|
cs.IT math.IT
|
We consider the Gaussian Dirty Tape Channel (DTC) Y=X+S+Z, where S is an
additive Gaussian interference known causally to the transmitter. The general
expression $\max_{P_U,\, f(\cdot):\, X=f(U,S)} I(U;Y)$ is presented for the
capacity of this channel. For a linear assignment to $f(\cdot)$, i.e.,
$X=U-\beta S$, this expression leads to the compensation strategy proposed
previously by Willems to obtain an achievable rate for the DTC. We show that
the linear assignment to $f(\cdot)$ is optimal, under the condition that there
exists a real number $\beta^*$ such that the pair $(X+\beta^* S, U)$ is
independent of the interference $S$. Furthermore, by applying a time-sharing
technique to the achievable rate derived by the linear assignment to
$f(\cdot)$, an improved lower bound on the capacity of the DTC is obtained.
We also consider the Gaussian multiple access channel with additive
interference, and study two different scenarios for this system. In the first
case, both transmitters know interference causally while in the second, one
transmitter has access to the interference noncausally and the other causally.
Achievable rate regions for these two scenarios are then established.
|
1004.2624
|
Symmetry within Solutions
|
cs.AI
|
We define the concept of an internal symmetry. This is a symmetry within a
solution of a constraint satisfaction problem. We compare this to solution
symmetry, which is a mapping between different solutions of the same problem.
We argue that we may be able to exploit both types of symmetry when finding
solutions. We illustrate the potential of exploiting internal symmetries on two
benchmark domains: Van der Waerden numbers and graceful graphs. By identifying
internal symmetries we are able to extend the state of the art in both cases.
|
1004.2626
|
Propagating Conjunctions of AllDifferent Constraints
|
cs.AI
|
We study propagation algorithms for the conjunction of two AllDifferent
constraints. Solutions of an AllDifferent constraint can be seen as perfect
matchings on the variable/value bipartite graph. Therefore, we investigate the
problem of finding simultaneous bipartite matchings. We present an extension of
Hall's famous theorem which characterizes when simultaneous bipartite
matchings exist. Unfortunately, finding such matchings is NP-hard in general.
However, we prove a surprising result that finding a simultaneous matching on a
convex bipartite graph takes just polynomial time. Based on this theoretical
result, we provide the first polynomial time bound consistency algorithm for
the conjunction of two AllDifferent constraints. We identify a pathological
problem on which this propagator is exponentially faster compared to existing
propagators. Our experiments show that this new propagator can offer
significant benefits over existing methods.
|
1004.2628
|
Lossy Source Compression of Non-Uniform Binary Sources Using GQ-LDGM
Codes
|
cs.IT math.IT
|
In this paper, we study the use of GF(q)-quantized LDGM codes for binary
source coding. By employing quantization, it is possible to obtain binary
codewords with a non-uniform distribution. The resulting statistics are hence
suitable for optimal, direct quantization of non-uniform Bernoulli sources. We
employ a message-passing algorithm combined with a decimation procedure in
order to perform compression. The experimental results based on GF(q)-LDGM
codes with regular degree distributions yield performances quite close to the
theoretical rate-distortion bounds.
|
1004.2648
|
Optimality and Approximate Optimality of Source-Channel Separation in
Networks
|
cs.IT math.IT
|
We consider the source-channel separation architecture for lossy source
coding in communication networks. It is shown that the separation approach is
optimal in two general scenarios, and is approximately optimal in a third
scenario. The two scenarios for which separation is optimal complement each
other: the first is when the memoryless sources at source nodes are arbitrarily
correlated, each of which is to be reconstructed at possibly multiple
destinations within certain distortions, but the channels in this network are
synchronized, orthogonal and memoryless point-to-point channels; the second is
when the memoryless sources are mutually independent, each of which is to be
reconstructed only at one destination within a certain distortion, but the
channels are general, including multi-user channels such as multiple access,
broadcast, interference and relay channels, possibly with feedback. The third
scenario, for which we demonstrate approximate optimality of source-channel
separation, generalizes the second scenario by allowing each source to be
reconstructed at multiple destinations with different distortions. For this
case, the loss from optimality by using the separation approach can be
upper-bounded when a "difference" distortion measure is taken, and in the
special case of quadratic distortion measure, this leads to universal constant
bounds.
|
1004.2683
|
Error Rates of Capacity-Achieving Codes Are Convex
|
cs.IT math.IT
|
Motivated by a wide-spread use of convex optimization techniques, convexity
properties of bit error rate of the maximum likelihood detector operating in
the AWGN channel are studied for arbitrary constellations and bit mappings,
which also includes coding under maximum-likelihood decoding. Under this
generic setting, the pairwise probability of error and bit error rate are shown
to be convex functions of the SNR and noise power in the high SNR/low noise
regime with explicitly-determined boundary. Any code, including
capacity-achieving ones, whose decision regions include the hardened noise
spheres (from the noise sphere hardening argument in the channel coding
theorem) satisfies this high SNR requirement and thus has convex error rates in
both SNR and noise power. We conjecture that all capacity-achieving codes have
convex error rates.
|
1004.2719
|
Is This a Good Title?
|
cs.IR
|
Missing web pages, URIs that return the 404 "Page Not Found" error or the
HTTP response code 200 but dereference unexpected content, are ubiquitous in
today's browsing experience. We use Internet search engines to relocate such
missing pages and provide means that help automate the rediscovery process. We
propose querying web pages' titles against search engines. We investigate the
retrieval performance of titles and compare them to lexical signatures which
are derived from the pages' content. Since titles naturally represent the
content of a document they intuitively change over time. We measure the edit
distance between current titles and titles of copies of the same pages obtained
from the Internet Archive and display their evolution. We further investigate
the correlation between title changes and content modifications of a web page
over time. Lastly we provide a predictive model for the quality of any given
web page title in terms of its discovery performance. Our results show that
titles return more than 60% of the URIs top-ranked, with further relevant
content returned in the top 10 results. We show that titles decay slowly but
are far more stable than the pages' content. We further distill stop titles
that can help identify insufficiently performing search engine queries.
|
1004.2757
|
Multi-User Cooperative Diversity through Network Coding Based on
Classical Coding Theory
|
cs.IT math.IT
|
In this work, we propose and analyze a generalized construction of
distributed network codes for a network consisting of $M$ users sending
different information to a common base station through independent block fading
channels. The aim is to increase the diversity order of the system without
reducing its throughput. The proposed scheme, called generalized
dynamic-network codes (GDNC), is a generalization of the dynamic-network codes
(DNC) recently proposed by Xiao and Skoglund. The design of the network codes
that maximize the diversity order is recognized as equivalent to the design of
linear block codes over a nonbinary finite field under the Hamming metric. We
prove that adopting a systematic generator matrix of a maximum distance
separable block code over a sufficiently large finite field as the network
transfer matrix is a sufficient condition for full diversity order under the
link failure model. The proposed generalization offers a much better tradeoff
between rate and diversity order compared to the DNC. An outage probability
analysis showing the improved performance is carried out, and computer
simulation results are shown to agree with the analytical results.
|
1004.2773
|
High-Rate and Full-Diversity Space-Time Block Codes with Low Complexity
Partial Interference Cancellation Group Decoding
|
cs.IT math.IT
|
In this paper, we propose a systematic design of space-time block codes
(STBC) which can achieve high rate and full diversity when the partial
interference cancellation (PIC) group decoding is used at receivers. The
proposed codes can be applied to any number of transmit antennas and admit a
low decoding complexity while achieving full diversity. For M transmit
antennas, in each codeword real and imaginary parts of PM complex information
symbols are parsed into P diagonal layers and then encoded, respectively. With
PIC group decoding, it is shown that the decoding complexity can be reduced to
a joint decoding of M/2 real symbols. In particular, for 4 transmit antennas,
the code has real symbol pairwise (i.e., single complex symbol) decoding that
achieves full diversity and the code rate is 4/3. Simulation results
demonstrate that the full diversity is offered by the newly proposed STBC with
the PIC group decoding.
|
1004.2795
|
An extension of Massey scheme for secret sharing
|
cs.IT cs.CR math.IT
|
We consider an extension of Massey's construction of secret sharing schemes
using linear codes. We describe the access structure of the scheme and show its
connection to the dual code. We use the $g$-fold weight enumerator and
invariant theory to study the access structure.
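A concrete instance of the underlying Massey construction (our illustration, not the paper's extension; the field size is arbitrary): with a Reed-Solomon code over a prime field, the scheme reduces to Shamir-style sharing, where the secret is the codeword coordinate at x=0, the shares are evaluations at nonzero points, and any k shares recover the secret by Lagrange interpolation:

```python
import random

P = 2**31 - 1  # a prime field size, chosen arbitrarily for illustration

def make_shares(secret, k, n):
    """Massey's scheme with a Reed-Solomon code: the secret is the codeword
    coordinate at x=0; the shares are the coordinates at x=1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 from any k shares recovers the secret."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P  # den^(P-2) = den^-1
    return total

shares = make_shares(secret=123456, k=3, n=6)
assert recover(shares[:3]) == 123456
assert recover(shares[2:5]) == 123456
```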
|
1004.2844
|
Minimizing the Complexity of Fast Sphere Decoding of STBCs
|
cs.IT math.IT
|
Decoding of linear space-time block codes (STBCs) with sphere-decoding (SD)
is well known. A fast-version of the SD known as fast sphere decoding (FSD) has
been recently studied by Biglieri, Hong and Viterbo. Viewing a linear STBC as a
vector space spanned by its defining weight matrices over the real number
field, we define a quadratic form (QF), called the Hurwitz-Radon QF (HRQF), on
this vector space and give a QF interpretation of the FSD complexity of a
linear STBC. It is shown that the FSD complexity is only a function of the
weight matrices defining the code and their ordering, and not of the channel
realization (even though the equivalent channel when SD is used depends on the
channel realization) or the number of receive antennas. It is also shown that
the FSD complexity is completely captured into a single matrix obtained from
the HRQF. Moreover, for a given set of weight matrices, an algorithm to obtain
a best ordering of them leading to the least FSD complexity is presented. The
well known classes of low FSD complexity codes (multi-group decodable codes,
fast decodable codes and fast group decodable codes) are presented in the
framework of HRQF.
|
1004.2854
|
Experimenting with Innate Immunity
|
cs.AI cs.NE
|
In a previous paper the authors argued the case for incorporating ideas from
innate immunity into artificial immune systems (AISs) and presented an outline
for a conceptual framework for such systems. A number of key general properties
observed in the biological innate and adaptive immune systems were highlighted,
and how such properties might be instantiated in artificial systems was
discussed in detail. The next logical step is to take these ideas and build a
software system with which AISs with these properties can be implemented and
experimentally evaluated. This paper reports on the results of that step - the
libtissue system.
|
1004.2860
|
Behavioural Correlation for Detecting P2P Bots
|
cs.AI cs.CR cs.NE
|
In the past few years, IRC bots, malicious programs which are remotely
controlled by the attacker through IRC servers, have become a major threat to
the Internet and its users. These bots can be used in different malicious ways,
such as issuing distributed denial-of-service attacks to shut down other
networks and services, keystroke logging, spamming, and traffic sniffing, all
of which cause serious disruption to networks and users. New bots that use
peer-to-peer (P2P) protocols have started to appear as the upcoming threat to
Internet security, since P2P bots have no centralized point to shut down or
trace back, making the detection of P2P bots a real challenge. In response to
these threats, we present an algorithm to detect an individual P2P bot running
on a system by correlating its activities. Our evaluation shows that
correlating different activities generated by P2P bots within a specified time
period can detect these kinds of bots.
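The correlation idea can be sketched as a sliding time window over host activity events. This is a minimal sketch only: the event types, window length, and threshold below are hypothetical stand-ins, not the paper's actual feature set.

```python
from collections import deque

def correlated_bot_alert(events, window=60.0, min_types=3):
    """Flag bot-like behaviour when several distinct suspicious activity
    types (here invented labels such as 'p2p', 'dns_fail', 'smtp_out')
    co-occur within a sliding time window.

    events: iterable of (timestamp_seconds, activity_type), time-ordered.
    Returns True if some window of length `window` contains at least
    `min_types` distinct activity types.
    """
    recent = deque()          # events inside the current window
    counts = {}               # activity_type -> count inside the window
    for t, kind in events:
        recent.append((t, kind))
        counts[kind] = counts.get(kind, 0) + 1
        # evict events that fell out of the window
        while recent and recent[0][0] < t - window:
            _, old = recent.popleft()
            counts[old] -= 1
            if counts[old] == 0:
                del counts[old]
        if len(counts) >= min_types:
            return True
    return False
```

The same activities spread far apart in time do not trigger an alert, which is the point of correlating within a specified time period.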
|
1004.2868
|
Inference with minimal Gibbs free energy in information field theory
|
astro-ph.IM cs.IT hep-th math.IT physics.data-an stat.ME
|
Non-linear and non-Gaussian signal inference problems are difficult to
tackle. Renormalization techniques permit us to construct good estimators for
the posterior signal mean within information field theory (IFT), but the
approximations and assumptions made are not very obvious. Here we introduce the
simple concept of minimal Gibbs free energy to IFT, and show that previous
renormalization results emerge naturally. They can be understood as the
Gaussian approximation to the full posterior probability, which has maximal
cross information with it. We derive optimized estimators for three
applications, to illustrate the usage of the framework: (i) reconstruction of a
log-normal signal from Poissonian data with background counts and point spread
function, as it is needed for gamma ray astronomy and for cosmography using
photometric galaxy redshifts, (ii) inference of a Gaussian signal with unknown
spectrum and (iii) inference of a Poissonian log-normal signal with unknown
spectrum, the combination of (i) and (ii). Finally we explain how Gaussian
knowledge states constructed by the minimal Gibbs free energy principle at
different temperatures can be combined into a more accurate surrogate of the
non-Gaussian posterior.
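The principle named above can be stated schematically (this is the standard thermodynamic form; the paper's exact functional is built from the model-specific information Hamiltonian and is not reproduced here):

```latex
% Gibbs free energy of an approximating Gaussian with mean m, covariance D:
%   U = internal energy, the expectation of the information Hamiltonian
%       over the Gaussian knowledge state,
%   S = Boltzmann entropy of that Gaussian,
%   T = temperature.
\[
  G(m, D) \;=\; U(m, D) \;-\; T\, S(D)
\]
```

Minimizing $G$ over $m$ and $D$ selects a Gaussian surrogate of the posterior; varying $T$ gives the family of knowledge states combined in the last step above.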
|
1004.2870
|
Nurse Rostering with Genetic Algorithms
|
cs.AI cs.NE
|
In recent years genetic algorithms have emerged as a useful tool for the
heuristic solution of complex discrete optimisation problems. In particular,
there has been considerable interest in their use in tackling problems arising
in the areas of scheduling and timetabling. However, the classical genetic
algorithm paradigm is not well equipped to handle constraints, and successful
implementations usually require some modification that enables the search to
exploit problem-specific knowledge in order to overcome this shortcoming.
This paper is concerned with the development of a family of genetic algorithms
for the solution of a nurse rostering problem at a major UK hospital. The
hospital is made up of wards of up to 30 nurses. Each ward has its own group of
nurses whose shifts have to be scheduled on a weekly basis. In addition to
fulfilling the minimum demand for staff over three daily shifts, nurses' wishes
and qualifications have to be taken into account. The schedules must also be
seen to be fair, in that unpopular shifts have to be spread evenly amongst all
nurses, and other restrictions, such as team nursing and special conditions for
senior staff, have to be satisfied. The basis of the family of genetic
algorithms is a classical genetic algorithm consisting of n-point crossover,
single-bit mutation and a rank-based selection. The solution space consists of
all schedules in which each nurse works the required number of shifts, but the
remaining constraints, both hard and soft, are relaxed and penalised in the
fitness function. The talk will start with a detailed description of the
problem and the initial implementation and will go on to highlight the
shortcomings of such an approach, in terms of the key element of balancing
feasibility, i.e. covering the demand and work regulations, and quality, as
measured by the nurses' preferences. A series of experiments involving
parameter adaptation, niching, intelligent weights, delta coding, local hill
climbing, migration and special selection rules will then be outlined and it
will be shown how a series of these enhancements were able to eradicate these
difficulties. Results based on several months' real data will be used to
measure the impact of each modification, and to show that the final algorithm
is able to compete with a tabu search approach currently employed at the
hospital. The talk will conclude with some observations as to the overall
quality of this approach to this and similar problems.
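The classical GA core named above (n-point crossover, single-bit mutation, rank-based selection) can be sketched on a toy maximisation problem. This is a sketch under stated assumptions: the OneMax fitness (count of ones) stands in for the penalised rostering fitness, and the elitism and parameter values are illustrative choices, not the paper's.

```python
import random

def rank_select(pop, fitness):
    """Rank-based selection: pick with probability proportional to rank."""
    ranked = sorted(pop, key=fitness)               # worst first, rank 1
    ranks = list(range(1, len(ranked) + 1))
    return random.choices(ranked, weights=ranks, k=1)[0]

def n_point_crossover(a, b, n=2):
    """Classical n-point crossover on equal-length bit lists."""
    points = sorted(random.sample(range(1, len(a)), n))
    child, src, prev = [], 0, 0
    for p in points + [len(a)]:
        child.extend((a if src == 0 else b)[prev:p])
        src ^= 1
        prev = p
    return child

def single_bit_mutation(g, rate=0.2):
    """With probability `rate`, flip one randomly chosen bit."""
    g = g[:]
    if random.random() < rate:
        i = random.randrange(len(g))
        g[i] ^= 1
    return g

def genetic_algorithm(fitness, length=20, pop_size=40, generations=150):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]                             # elitism keeps the best
        while len(nxt) < pop_size:
            a, b = rank_select(pop, fitness), rank_select(pop, fitness)
            nxt.append(single_bit_mutation(n_point_crossover(a, b)))
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

random.seed(1)
best = genetic_algorithm(sum)   # OneMax: fitness = number of ones
```

In the rostering setting, the bitstring would encode shift assignments and the fitness would subtract penalties for violated hard and soft constraints, as described above.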
|
1004.2880
|
GRASP for the Coalition Structure Formation Problem
|
cs.AI cs.MA
|
The coalition structure formation problem represents an active research area
in multi-agent systems. A coalition structure is defined as a partition of the
agents involved in a system into disjoint coalitions. The problem of finding
the optimal coalition structure is NP-complete. In order to find the optimal
solution in a combinatorial optimization problem, it is theoretically possible
to enumerate the solutions and evaluate each. But this approach is infeasible
since the number of solutions often grows exponentially with the size of the
problem. In this paper we present a greedy adaptive search procedure (GRASP) to
efficiently search the space of coalition structures in order to find an
optimal one. Experiments and comparisons to other algorithms prove the validity
of the proposed method in solving this hard combinatorial problem.
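GRASP alternates a greedy randomized construction (drawing from a restricted candidate list) with local search, keeping the best structure found. The skeleton below is a hedged sketch: the three-agent characteristic function `v` is hypothetical, invented only to make the skeleton runnable, and the single-agent-move neighbourhood is one common choice, not necessarily the paper's.

```python
import random

def cs_value(structure, v):
    """Value of a coalition structure: sum of its coalitions' values."""
    return sum(v.get(frozenset(c), 0) for c in structure)

def insertions(structure, agent):
    """All structures obtained by placing `agent` into an existing
    coalition or into a new singleton coalition."""
    out = []
    for i in range(len(structure) + 1):
        trial = [list(c) for c in structure]
        if i < len(structure):
            trial[i].append(agent)
        else:
            trial.append([agent])
        out.append(trial)
    return out

def construct(agents, v, alpha=0.5):
    """Greedy randomized construction: each agent's placement is drawn
    uniformly from a restricted candidate list (RCL) of good insertions."""
    structure = []
    for a in random.sample(agents, len(agents)):
        cands = [(cs_value(t, v), t) for t in insertions(structure, a)]
        hi = max(val for val, _ in cands)
        lo = min(val for val, _ in cands)
        cutoff = hi - alpha * (hi - lo)
        rcl = [t for val, t in cands if val >= cutoff]
        structure = random.choice(rcl)
    return structure

def local_search(structure, v):
    """Hill-climb by moving one agent at a time between coalitions."""
    while True:
        current = cs_value(structure, v)
        move = None
        for a in [x for c in structure for x in c]:
            base = [[x for x in c if x != a] for c in structure]
            base = [c for c in base if c]       # drop emptied coalitions
            for trial in insertions(base, a):
                if cs_value(trial, v) > current:
                    move = trial
                    break
            if move:
                break
        if move is None:
            return structure
        structure = move

def grasp(agents, v, iterations=20):
    best = None
    for _ in range(iterations):
        s = local_search(construct(agents, v), v)
        if best is None or cs_value(s, v) > cs_value(best, v):
            best = s
    return best

# Hypothetical characteristic function over three agents.
random.seed(0)
v = {frozenset({0}): 1, frozenset({1}): 1, frozenset({2}): 1,
     frozenset({0, 1}): 3, frozenset({0, 1, 2}): 2, frozenset({1, 2}): 1}
best = grasp([0, 1, 2], v)
```

On this toy instance the optimum is the partition {{0, 1}, {2}} with value 4; the randomized restarts are what let GRASP escape poor constructions without enumerating all partitions.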
|
1004.2926
|
Sparse Reconstruction via The Reed-Muller Sieve
|
cs.IT math.IT
|
This paper introduces the Reed-Muller Sieve, a deterministic measurement
matrix for compressed sensing. The columns of this matrix are obtained by
exponentiating codewords in the quaternary second-order Reed-Muller code of
length $N$. For $k=O(N)$, the Reed-Muller Sieve improves upon prior methods for
identifying the support of a $k$-sparse vector by removing the requirement that
the signal entries be independent. The Sieve also enables local detection; an
algorithm is presented with complexity $N^2 \log N$ that detects the presence
or absence of a signal at any given position in the data domain without
explicitly reconstructing the entire signal. Reconstruction is shown to be
resilient to noise in both the measurement and data domains; the $\ell_2 /
\ell_2$ error bounds derived in this paper are tighter than the $\ell_2 /
\ell_1$ bounds arising from random ensembles and the $\ell_1 /\ell_1$ bounds
arising from expander-based ensembles.
|
1004.3006
|
Microlocal Analysis of the Geometric Separation Problem
|
math.FA cs.IT math.IT math.NA
|
Image data are often composed of two or more geometrically distinct
constituents; in galaxy catalogs, for instance, one sees a mixture of pointlike
structures (galaxy superclusters) and curvelike structures (filaments). It
would be ideal to process a single image and extract two geometrically `pure'
images, each one containing features from only one of the two geometric
constituents. This seems to be a seriously underdetermined problem, but recent
empirical work achieved highly persuasive separations. We present a theoretical
analysis showing that accurate geometric separation of point and curve
singularities can be achieved by minimizing the $\ell_1$ norm of the
representing coefficients in two geometrically complementary frames: wavelets
and curvelets. Driving our analysis is a specific property of the ideal (but
unachievable) representation where each content type is expanded in the frame
best adapted to it. This ideal representation has the property that important
coefficients are clustered geometrically in phase space, and that at fine
scales, there is very little coherence between a cluster of elements in one
frame expansion and individual elements in the complementary frame. We formally
introduce notions of cluster coherence and clustered sparsity and use this
machinery to show that the underdetermined systems of linear equations can be
stably solved by $\ell_1$ minimization; microlocal phase space helps organize
the calculations that cluster coherence requires.
|
1004.3040
|
Online Sparse System Identification and Signal Reconstruction using
Projections onto Weighted $\ell_1$ Balls
|
cs.IT math.IT
|
This paper presents a novel projection-based adaptive algorithm for sparse
signal and system identification. The sequentially observed data are used to
generate an equivalent sequence of closed convex sets, namely hyperslabs. Each
hyperslab is the geometric equivalent of a cost criterion that quantifies
"data mismatch". Sparsity is imposed by the introduction of appropriately
designed weighted $\ell_1$ balls. The algorithm develops around projections
onto the sequence of the generated hyperslabs as well as the weighted $\ell_1$
balls. The resulting scheme exhibits a linear dependence of the number of
multiplications/additions on the unknown system's order, and an
$\mathcal{O}(L\log_2 L)$ dependence for sorting operations, where $L$ is the
length of the system/signal to be estimated. Numerical results are also given
to validate the performance of the proposed method against the LASSO algorithm
and two very recently developed sparse adaptive LMS- and LS-type
algorithms, which are considered to belong to the same algorithmic family.
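The projection onto an $\ell_1$ ball admits a well-known sort-and-threshold solution whose sorting step matches the $\mathcal{O}(L\log_2 L)$ cost quoted above. The sketch below handles the unweighted ball for brevity; the weighted case used in the paper generalizes the threshold search and is not reproduced here.

```python
def project_l1_ball(v, z):
    """Euclidean projection of vector v onto the l1 ball of radius z,
    via soft-thresholding with a threshold found by sorting the
    magnitudes (O(L log L) overall)."""
    if sum(abs(x) for x in v) <= z:
        return list(v)                      # already inside the ball
    u = sorted((abs(x) for x in v), reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        t = (cumsum - z) / i
        if ui > t:
            theta = t                       # threshold at the last valid i
    # soft-threshold each entry by theta, keeping its sign
    return [(1 if x >= 0 else -1) * max(abs(x) - theta, 0.0) for x in v]

p = project_l1_ball([3.0, -1.0, 0.5], z=2.0)
```

The result lies exactly on the ball's boundary when the input is outside it, and the thresholding is what induces sparsity in the projected vector.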
|
1004.3071
|
Subspace Methods for Joint Sparse Recovery
|
cs.IT math.IT
|
We propose robust and efficient algorithms for the joint sparse recovery
problem in compressed sensing, which simultaneously recover the supports of
jointly sparse signals from their multiple measurement vectors obtained through
a common sensing matrix. In a favorable situation, the unknown matrix, which
consists of the jointly sparse signals, has linearly independent nonzero rows.
In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally
proposed by Schmidt for the direction of arrival problem in sensor array
processing and later proposed and analyzed for joint sparse recovery by Feng
and Bresler, provides a guarantee with the minimum number of measurements. We
focus instead on the unfavorable but practically significant case of
rank defect or ill-conditioning. This situation arises with a limited number of
measurement vectors or with highly correlated signal components. In this case,
MUSIC fails, and in practice none of the existing methods can consistently
approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC),
which improves on MUSIC so that the support is reliably recovered under such
unfavorable conditions. Combined with subspace-based greedy algorithms also
proposed and analyzed in this paper, SA-MUSIC provides a computationally
efficient algorithm with a performance guarantee. The performance guarantees
are given in terms of a version of the restricted isometry property. In particular,
we also present a non-asymptotic perturbation analysis of the signal subspace
estimation, which was missing in previous studies of MUSIC.
|
1004.3085
|
Universal Coding of Ergodic Sources for Multiple Decoders with Side
Information
|
cs.IT math.IT
|
A multiterminal lossy coding problem, which includes various problems such as
the Wyner-Ziv problem and the complementary delivery problem as special cases,
is considered. It is shown that any point in the achievable rate-distortion
region can be attained even if the source statistics are not known.
|
1004.3147
|
Genetic Algorithms for Multiple-Choice Problems
|
cs.NE cs.AI cs.CE
|
This thesis investigates the use of problem-specific knowledge to enhance a
genetic algorithm approach to multiple-choice optimisation problems. It shows
that such information can significantly enhance performance, but that the
choice of information and the way it is included are important factors for
success. Two multiple-choice problems are considered. The first is constructing
a feasible nurse roster that considers as many requests as possible. In the
second problem, shops are allocated to locations in a mall subject to
constraints, maximising the overall income. Genetic algorithms are chosen for
their well-known robustness and ability to solve large and complex discrete
optimisation problems. However, a survey of the literature reveals room for
further research into generic ways to include constraints into a genetic
algorithm framework. Hence, the main theme of this work is to balance
feasibility and cost of solutions. In particular, co-operative co-evolution
with hierarchical sub-populations, problem-structure-exploiting repair schemes
and indirect genetic algorithms with self-adjusting decoder functions are
identified as promising approaches. The research starts by applying standard
genetic algorithms to the problems and explaining the failure of such
approaches due to epistasis. To overcome this, problem-specific information is
added in a variety of ways, some of which are designed to increase the number
of feasible solutions found whilst others are intended to improve the quality
of such solutions. As well as a theoretical discussion of the underlying
reasons for using each operator, extensive computational experiments are
carried out on a variety of data. These show that the indirect approach relies
less on problem structure and hence is easier to implement and superior in
solution quality.
|
1004.3165
|
The space complexity of recognizing well-parenthesized expressions in
the streaming model: the Index function revisited
|
cs.CC cs.IT math.IT quant-ph
|
We show an $\Omega(\sqrt{n}/T)$ lower bound for the space required by any
unidirectional constant-error randomized $T$-pass streaming algorithm that
recognizes whether an expression over two types of parentheses is
well-parenthesized. This proves a conjecture due to Magniez, Mathieu, and Nayak
(2009) and rigorously establishes that bidirectional streams are exponentially
more efficient in space usage as compared with unidirectional ones. We obtain
the lower bound by establishing the minimum amount of information that is
necessarily revealed by the players about their respective inputs in a
two-party communication protocol for a variant of the Index function, namely
Augmented Index. The information cost trade-off is obtained by a novel
application of conceptually simple and familiar ideas, such as average
encoding and the cut-and-paste property of randomized protocols.
Motivated by recent examples of exponential savings in space by streaming
quantum algorithms, we also study quantum protocols for Augmented Index.
Defining an appropriate notion of information cost for quantum protocols
involves a delicate balancing act between its applicability and the ease with
which we can analyze it. We define a notion of quantum information cost which
reflects some of the non-intuitive properties of quantum information and give a
trade-off for this notion. While this trade-off demonstrates the strength of
our proof techniques, it does not lead to a space lower bound for checking
parentheses. We leave such an implication for quantum streaming algorithms as
an intriguing open question.
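For contrast with the sublinear-space streaming bounds studied here, the natural one-pass algorithm keeps an explicit stack and so uses linear space in the worst case. A minimal sketch of that baseline:

```python
def well_parenthesized(s):
    """One-pass stack check for expressions over two parenthesis types.
    Worst-case space is linear in the input length, e.g. on '(((('."""
    match = {')': '(', ']': '['}
    stack = []
    for c in s:
        if c in '([':
            stack.append(c)
        elif c in match:
            if not stack or stack.pop() != match[c]:
                return False
        else:
            return False                  # not a parenthesis symbol
    return not stack                      # every opener must be closed
```

The lower bound above says that no unidirectional randomized streaming algorithm can do substantially better in space, whereas with bidirectional passes exponentially less space suffices.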
|
1004.3183
|
Statistical Physics for Natural Language Processing
|
cs.CL cond-mat.stat-mech cs.IR
|
This paper has been withdrawn by the author.
|
1004.3196
|
Introducing Dendritic Cells as a Novel Immune-Inspired Algorithm for
Anomaly Detection
|
cs.AI cs.NE
|
Dendritic cells are antigen presenting cells that provide a vital link
between the innate and adaptive immune system. Research into this family of
cells has revealed that they coordinate T-cell based immune responses, both
reactive responses and the generation of tolerance. We have derived
an algorithm based on the functionality of these cells, and have used the
signals and differentiation pathways to build a control mechanism for an
artificial immune system. We present our algorithmic details in addition to
some preliminary results, where the algorithm was applied for the purpose of
anomaly detection. We hope that this algorithm will eventually become the key
component within a large, distributed immune system, based on sound
immunological concepts.
|