| id | title | categories | abstract |
|---|---|---|---|
1005.5412
|
On Cooperative Beamforming Based on Second-Order Statistics of Channel
State Information
|
cs.IT math.IT
|
Cooperative beamforming in relay networks is considered, in which a source
transmits to its destination with the help of a set of cooperating nodes. The
source first transmits locally. The cooperating nodes that receive the source
signal retransmit a weighted version of it in an amplify-and-forward (AF)
fashion. Assuming knowledge of the second-order statistics of the channel state
information, beamforming weights are determined so that the signal-to-noise
ratio (SNR) at the destination is maximized subject to two different power
constraints, i.e., a total (source and relay) power constraint, and individual
relay power constraints. For the former constraint, the original problem is
transformed into a problem of one variable, which can be solved via Newton's
method. For the latter constraint, the original problem is transformed into a
homogeneous quadratically constrained quadratic programming (QCQP) problem. In
this case, it is shown that when the number of relays does not exceed three, the
global solution can always be constructed via semidefinite programming (SDP)
relaxation and the matrix rank-one decomposition technique. For the cases in
which the SDP relaxation does not generate a rank-one solution, two methods are
proposed to solve the problem: the first one is based on the coordinate descent
method, and the second one transforms the QCQP problem into an infinity norm
maximization problem in which a smooth finite norm approximation can lead to
the solution using the augmented Lagrangian method.
|
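As an illustration of the SDP-relaxation step described in the abstract above, the sketch below solves a small synthetic instance of the homogeneous QCQP (maximize a quadratic SNR form subject to per-relay power constraints) with cvxpy. The matrix R, the power budgets, and the rank-one check are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: SDP relaxation of a homogeneous QCQP
#   maximize  w^T R w   s.t.  w_i^2 <= p_i  (per-relay power),
# relaxed to  maximize tr(R W)  s.t.  W >= 0 (PSD), W_ii <= p_i.
# R and p are synthetic; the paper's actual SNR matrices differ.
import numpy as np
import cvxpy as cp

n = 3                                   # number of relays
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
R = B @ B.T                             # synthetic PSD "SNR" matrix
p = np.ones(n)                          # per-relay power budgets

W = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Maximize(cp.trace(R @ W)), [cp.diag(W) <= p])
prob.solve()

# If the relaxation is tight (rank-one W), recover the weights w.
vals, vecs = np.linalg.eigh(W.value)
if vals[-1] / vals.sum() > 0.999:       # effectively rank one
    w = np.sqrt(vals[-1]) * vecs[:, -1]
    print("beamforming weights:", w)
else:
    print("relaxation not rank one; apply rank-one decomposition")
```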
1005.5432
|
Attribute oriented induction with star schema
|
cs.DB
|
This paper proposes star schema attribute induction as a new attribute
induction paradigm that improves on current attribute-oriented induction. The
novel star schema attribute induction is compared against current
attribute-oriented induction based on characteristic rules, using a
non-rule-based concept hierarchy, by implementing both approaches. The star
schema approach introduces several improvements: the threshold number used as a
maximum-tuple control on the generalization result is eliminated, there is no
ANY as the most general concept, the concept hierarchy is replaced by a concept
tree, the generalization strategy steps are simplified, and the
attribute-oriented induction algorithm is eliminated. Star schema attribute
induction is more powerful than current attribute-oriented induction since it
produces a small number of final generalized tuples and no ANY values in the
results.
|
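To make the generalization step concrete, here is a minimal sketch of one attribute-oriented induction pass with a concept tree: each value climbs one level up a hypothetical tree and identical generalized tuples are merged with counts. The tree and tuples are invented for illustration; the paper's star-schema variant differs in how it controls generalization.

```python
# Hedged sketch of one attribute-generalization pass:
# climb each value one level up a concept tree, then merge duplicates.
from collections import Counter

# Hypothetical concept tree: child -> parent.
parent = {
    "Toronto": "Ontario", "Ottawa": "Ontario", "Ontario": "Canada",
    "Lyon": "France", "Paris": "France", "France": "Europe",
}

def generalize(tuples, attr_index):
    out = Counter()
    for t in tuples:
        t = list(t)
        t[attr_index] = parent.get(t[attr_index], t[attr_index])
        out[tuple(t)] += 1
    return out

tuples = [("Toronto", "MSc"), ("Ottawa", "MSc"), ("Lyon", "PhD")]
print(generalize(tuples, 0))
# Counter({('Ontario', 'MSc'): 2, ('France', 'PhD'): 1})
```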
1005.5433
|
A Data Warehouse Assistant Design System Based on Clover Model
|
cs.DB
|
Nowadays, the Data Warehouse (DW) plays a crucial role in the process of
decision making, yet DW design remains a delicate and difficult task for
experts and ordinary users alike. The goal of this paper is to propose a new
approach, based on the clover model, intended to assist users in designing a
DW. The proposed approach consists of two main steps. The first guides users in
choosing a DW schema model. The second finalizes the chosen model by offering
the designer views drawn from previous successful DW design experiences.
|
1005.5434
|
Efficient Support Coupled Frequent Pattern Mining Over Progressive
Databases
|
cs.DB
|
There have been many recent studies on sequential pattern mining. Sequential
pattern mining over progressive databases is relatively new: sequential
patterns are discovered progressively within a period of interest, a sliding
window that advances continuously as time goes by. As the window moves, new
items are added to the dataset of interest and obsolete items are removed,
keeping it up to date. In general, existing proposals do not fully explore
real-world scenarios, such as items associated with support in data stream
applications like market basket analysis. Mining important knowledge from
supported frequent items is therefore a non-trivial research issue. Our
proposed approach efficiently mines frequent sequential patterns coupled with
support using a progressive mining tree.
|
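A heavily simplified sketch of the progressive setting described above: item supports are maintained over a sliding period of interest, with obsolete transactions evicted as the window advances. The paper's progressive mining tree handles full sequential patterns; this toy version counts single items only.

```python
# Hedged simplification of progressive mining: maintain item supports
# over a sliding period of interest; the paper's progressive mining
# tree handles full sequential patterns, not just single items.
from collections import Counter, deque

def frequent_items(stream, window, min_support):
    """Yield, per time step, the items whose support in the current
    window of transactions meets min_support."""
    buf, counts = deque(), Counter()
    for transaction in stream:
        buf.append(transaction)
        counts.update(transaction)
        if len(buf) > window:              # evict obsolete transactions
            for item in buf.popleft():
                counts[item] -= 1
        yield {i for i, c in counts.items() if c >= min_support}

stream = [{"a", "b"}, {"a"}, {"b", "c"}, {"a", "c"}]
for t, freq in enumerate(frequent_items(stream, window=2, min_support=2)):
    print(t, sorted(freq))
```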
1005.5437
|
Content Based Image Retrieval Using Exact Legendre Moments and Support
Vector Machine
|
cs.CV
|
Content Based Image Retrieval (CBIR) systems based on shape using invariant
image moments, viz., Moment Invariants (MI) and Zernike Moments (ZM) are
available in the literature. MI and ZM are good at representing the shape
features of an image. However, non-orthogonality of MI and poor reconstruction
of ZM restrict their application in CBIR. Therefore, an efficient and
orthogonal moment based CBIR system is needed. Legendre Moments (LM) are
orthogonal, computationally faster, and can represent image shape features
compactly. A CBIR system using Exact Legendre Moments (ELM) for gray-scale
images is proposed in this work. The proposed system outperforms other
moment-based methods, viz., MI and ZM, in terms of retrieval efficiency and
retrieval time. Further, classification efficiency is improved by employing a
Support Vector Machine (SVM) classifier. Improved retrieval results are
obtained over an existing CBIR algorithm based on Stacked Euler Vector (SERVE)
combined with Modified Moment Invariants (MMI).
|
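For reference, a minimal sketch of computing low-order Legendre moments of a gray-scale image by direct summation. The paper's Exact Legendre Moments integrate the basis polynomials analytically over each pixel; this simpler Riemann-sum version only conveys the idea.

```python
# Hedged sketch: low-order Legendre moments by direct summation.
# Exact Legendre Moments integrate the basis analytically over each
# pixel area; this Riemann-sum approximation illustrates the idea.
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_moments(img, max_order):
    n, m = img.shape
    x = np.linspace(-1, 1, m)            # pixel centers mapped to [-1, 1]
    y = np.linspace(-1, 1, n)
    dx, dy = 2.0 / m, 2.0 / n
    L = {}
    for p in range(max_order + 1):
        Pp = Legendre.basis(p)(y)
        for q in range(max_order + 1 - p):
            Pq = Legendre.basis(q)(x)
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            L[(p, q)] = norm * dx * dy * (Pp[:, None] * Pq[None, :] * img).sum()
    return L

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0   # toy square "shape"
print(legendre_moments(img, max_order=2)[(0, 0)])
```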
1005.5438
|
Query Routing and Processing in Peer-To-Peer Data Sharing Systems
|
cs.DB
|
Sharing music files via the Internet was the essential motivation of early P2P
systems. Despite the great success of P2P file sharing systems, they support
only "simple" queries; the focus in such systems is how to carry out efficient
query routing in order to find the nodes storing a desired file. Recently,
several research efforts have sought to extend P2P systems so that they can
share data at a fine granularity (i.e. atomic attributes) and process queries
written in a highly expressive language (i.e. SQL). These works have led to the
emergence of P2P data sharing systems, which represent both a new generation of
P2P systems and a next stage in the long evolution of database research. The
characteristics of P2P systems (e.g. large scale, node autonomy, and
instability) make it impractical to maintain a global catalog, which is often
an essential component of traditional database systems. Usually, such a catalog
stores information about data, schemas, and data sources. Query routing and
query processing are two problems affected by the absence of a global catalog:
locating relevant data sources and generating a close-to-optimal execution plan
become more difficult. In this paper, we concentrate our study on proposed
solutions for both problems. Furthermore, selected case studies of major P2P
data sharing systems are analyzed and compared.
|
1005.5439
|
Detection of Bleeding in Wireless Capsule Endoscopy Images Using Range
Ratio Color
|
cs.CV
|
Wireless Capsule Endoscopy (WCE) is a device for detecting abnormalities in
the colon, esophagus, small intestine, and stomach. Distinguishing bleeding
from non-bleeding WCE images by human review is a hard and very time-consuming
job. Consequently, automating the classification of bleeding frames not only
expedites the process but also reduces the burden on doctors. Bleeding areas in
WCE images can be detected using the purity of the red color. However, various
intensities of red occur in different parts of the small intestine, so it is
not enough to rely on the red color feature alone. We select the RGB (Red,
Green, Blue) space because it takes raw-level values and is easy to use. In
this paper we define a range ratio color for each of R, G, and B. We divide
each image into multiple pixels and apply the range-ratio-color condition to
each pixel, then count the number of pixels that satisfy the condition. If the
number of such pixels is greater than zero, the frame is classified as
bleeding; otherwise, it is non-bleeding. Our experimental results show that
this method achieves very high accuracy in detecting bleeding images across
the different parts of the small intestine.
|
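A minimal sketch of the range-ratio-color classification rule described above; the threshold ranges below are invented placeholders, not the paper's tuned values.

```python
# Hedged sketch of range-ratio-color classification; the threshold
# ranges below are invented placeholders, not the paper's tuned values.
import numpy as np

def is_bleeding(rgb, rg_range=(1.5, 5.0), rb_range=(1.5, 5.0)):
    """Classify a WCE frame as bleeding if any pixel's R/G and R/B
    ratios both fall inside the given ranges."""
    r, g, b = (rgb[..., i].astype(float) + 1e-6 for i in range(3))
    rg, rb = r / g, r / b
    mask = ((rg_range[0] <= rg) & (rg <= rg_range[1]) &
            (rb_range[0] <= rb) & (rb <= rb_range[1]))
    return int(mask.sum()) > 0           # any matching pixel => bleeding

frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[4, 4] = (200, 60, 50)              # one reddish pixel
print(is_bleeding(frame))                # True
```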
1005.5448
|
Failover in cellular automata
|
cs.AI nlin.CG
|
A cellular automaton (CA) configuration is constructed that exhibits emergent
failover. The configuration is based on standard Game of Life rules. Gliders
and glider guns form the core messaging structure in the configuration. The
blinker serves as the basic computational unit, and it is shown how it can be
recreated in case of a failure. Stateless failover using a primary-backup
mechanism is demonstrated. The details of the CA components used in the
configuration and their operation are described, and a simulation of the
complete configuration is also presented.
|
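For context, a minimal sketch of one synchronous update under the standard Game of Life rules on which the failover configuration is built; the gliders, glider guns, and blinkers mentioned above are patterns evolved under this rule. The wraparound boundary is a simplification.

```python
# Hedged sketch: one synchronous update of Conway's Game of Life,
# the rule set underlying the failover configuration. Wraparound
# (toroidal) edges are a simplification for brevity.
import numpy as np

def life_step(grid):
    # Count the eight neighbors of every cell with wraparound edges.
    nbrs = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(int)

blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1                       # horizontal blinker
print(life_step(blinker))                 # oscillates to vertical
```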
1005.5462
|
On the clustering aspect of nonnegative matrix factorization
|
cs.LG
|
This paper provides a theoretical explanation of the clustering aspect of
nonnegative matrix factorization (NMF). We prove that even without imposing
orthogonality or sparsity constraints on the basis and/or coefficient matrix,
NMF can still give clustering results, thus providing theoretical support for
many works, e.g., Xu et al. [1] and Kim et al. [2], that show the superiority
of standard NMF as a clustering method.
|
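A minimal sketch of the clustering usage that the paper's theory supports: factor a nonnegative data matrix with standard NMF (no orthogonality or sparsity constraints) and read cluster labels off the coefficient matrix. The synthetic data and sklearn settings are illustrative assumptions.

```python
# Hedged sketch of NMF-based clustering: factor X ~ W @ H with no
# orthogonality/sparsity constraint and read cluster labels off the
# per-sample coefficients (dominant component = cluster).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Two synthetic nonnegative clusters of 4-dimensional samples.
X = np.vstack([rng.random((20, 4)) + [4, 4, 0, 0],
               rng.random((20, 4)) + [0, 0, 4, 4]])

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)               # samples x components
labels = W.argmax(axis=1)                # dominant component = cluster
print(labels)                            # first 20 vs last 20 separate
```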
1005.5466
|
Quantitative parametrization of texts written by Ivan Franko: An attempt
of the project
|
cs.CL
|
In this article, a project for the quantitative parametrization of all texts
by Ivan Franko is presented. It can be carried out only with modern computer
techniques, after frequency dictionaries for all of Franko's works have been
compiled. The paper describes the application areas, methodology, stages,
principles, and peculiarities of compiling a frequency dictionary for the
second half of the 19th century and the beginning of the 20th century. The
relations among the Ivan Franko frequency dictionary, the explanatory
dictionary of the writer's language, and the text corpus are discussed.
|
1005.5514
|
Managing Semantic Loss during Query Reformulation in Peer Data
Management Systems
|
cs.DB
|
In this paper we deal with the notion of semantic loss in Peer Data Management
Systems (PDMS) queries. We define this notion and give a mechanism that
discovers semantic loss in a PDMS network. Next, we propose an algorithm that
addresses the problem of restoring such a loss. Further evaluation of the
proposed algorithm is ongoing work.
|
1005.5516
|
On the Fly Query Entity Decomposition Using Snippets
|
cs.IR
|
One of the most important issues in Information Retrieval is inferring the
intents underlying users' queries. Thus, any tool that enriches or better
contextualizes queries can prove extremely valuable. Entity extraction,
provided it is done fast, can be one such tool. Such techniques usually rely on
a prior training phase involving large datasets. That training is costly,
especially in environments which are increasingly moving towards real-time
scenarios where the latency to retrieve fresh information should be minimal. In
this paper an `on-the-fly' query decomposition method is proposed. It uses
snippets which are mined by means of a na\"ive statistical algorithm. An
initial evaluation of the method is provided, in addition to a discussion of
its applicability to different scenarios.
|
1005.5543
|
Provenance Views for Module Privacy
|
cs.DB cs.DS
|
Scientific workflow systems increasingly store provenance information about
the module executions used to produce a data item, as well as the parameter
settings and intermediate data items passed between module executions. However,
authors/owners of workflows may wish to keep some of this information
confidential. In particular, a module may be proprietary, and users should not
be able to infer its behavior by seeing mappings between all data inputs and
outputs. The problem we address in this paper is the following: given a
workflow abstractly modeled by a relation R, a privacy requirement \Gamma, and
costs associated with data, the owner of the workflow decides which data
(attributes) to hide, and provides the user with a view R' which is the
projection of R over the attributes which have not been hidden. The goal is to
minimize the cost of hidden data while guaranteeing that individual modules are
\Gamma-private. We call this the "secure-view" problem. We formally define the
problem, study its complexity, and offer algorithmic solutions.
|
1005.5556
|
Empirical learning aided by weak domain knowledge in the form of feature
importance
|
cs.LG cs.AI cs.NE
|
Standard hybrid learners that use domain knowledge require strong knowledge
that is hard and expensive to acquire. Weaker domain knowledge, however, can
still provide the benefits of prior knowledge while being cost-effective. Weak
knowledge in the form of feature relative importance (FRI) is presented and
explained. Feature relative importance is a real-valued approximation of a
feature's importance provided by experts. The advantage of using this knowledge
is demonstrated by IANN, a modified multilayer neural network algorithm. IANN
is a very simple modification of the standard neural network algorithm, yet it
attains significant performance gains. Experimental results in the field of
molecular biology show higher performance over other empirical learning
algorithms, including standard backpropagation and support vector machines.
IANN's performance is even comparable to that of KBANN, a theory refinement
system that uses stronger domain knowledge. This shows that feature relative
importance can significantly improve the performance of existing empirical
learning algorithms with minimal effort.
|
1005.5574
|
Robust Beamforming for Amplify-and-Forward MIMO Relay Systems Based on
Quadratic Matrix Programming
|
cs.IT math.IT
|
In this paper, robust transceiver design based on minimum-mean-square-error
(MMSE) criterion for dual-hop amplify-and-forward MIMO relay systems is
investigated. The channel estimation errors are modeled as Gaussian random
variables, and their effect is incorporated into the robust transceiver design
under the Bayesian framework. An iterative algorithm is proposed to jointly
design the precoder at the source, the forward matrix at the relay and the
equalizer at the destination, and the joint design problem can be efficiently
solved by quadratic matrix programming (QMP).
|
1005.5577
|
Transceiver Design for Dual-Hop Non-regenerative MIMO-OFDM Relay Systems
Under Channel Uncertainties
|
cs.IT math.IT
|
In this paper, linear transceiver design for dual-hop non-regenerative
(amplify-and-forward (AF)) MIMO-OFDM systems under channel estimation errors is
investigated. Second-order moments of the channel estimation errors in the two
hops are first derived. Then, based on the Bayesian framework, a joint design
of the linear
forwarding matrix at the relay and equalizer at the destination under channel
estimation errors is proposed to minimize the total mean-square-error (MSE) of
the output signal at the destination. The optimal designs for both correlated
and uncorrelated channel estimation errors are considered. The relationship
with existing algorithms is also disclosed. Moreover, this design is extended
to the joint design involving source precoder design. Simulation results show
that the proposed design outperforms the design based on estimated channel
state information only.
|
1005.5581
|
Multi-View Active Learning in the Non-Realizable Case
|
cs.LG
|
The sample complexity of active learning under the realizability assumption
has been well-studied. The realizability assumption, however, rarely holds in
practice. In this paper, we theoretically characterize the sample complexity
of active learning in the non-realizable case under the multi-view setting. We
prove that, with unbounded Tsybakov noise, the sample complexity of multi-view
active learning can be $\widetilde{O}(\log\frac{1}{\epsilon})$, in contrast to
the single-view setting, where polynomial improvement is the best achievable.
We also prove that in the general multi-view setting the sample complexity of
active learning with unbounded Tsybakov noise is
$\widetilde{O}(\frac{1}{\epsilon})$, where the order of $1/\epsilon$ is
independent of the parameter in Tsybakov noise; this contrasts with previous
polynomial bounds, where the order of $1/\epsilon$ is related to the parameter
in Tsybakov noise.
|
1005.5582
|
A First-order Augmented Lagrangian Method for Compressed Sensing
|
math.OC cs.SY
|
We propose a first-order augmented Lagrangian algorithm (FAL) for solving the
basis pursuit problem. FAL computes a solution to this problem by inexactly
solving a sequence of L1-regularized least squares sub-problems. These
sub-problems are solved using an infinite memory proximal gradient algorithm
wherein each update reduces to "shrinkage" or constrained "shrinkage". We show
that FAL converges to an optimal solution of the basis pursuit problem whenever
the solution is unique, which is the case with very high probability for
compressed sensing problems. We construct a parameter sequence such that the
corresponding FAL iterates are eps-feasible and eps-optimal for all eps>0
within O(log(1/eps)) FAL iterations. Moreover, FAL requires at most O(1/eps)
matrix-vector multiplications of the form Ax or A^Ty to compute an
eps-feasible, eps-optimal solution. We show that FAL can be easily extended to
solve the basis pursuit denoising problem when there is a non-trivial level of
noise on the measurements. We report the results of numerical experiments
comparing FAL with the state-of-the-art algorithms for both noisy and noiseless
compressed sensing problems. A striking property of FAL that we observed in the
numerical experiments with randomly generated instances when there is no
measurement noise was that FAL always correctly identifies the support of the
target signal without any thresholding or post-processing, for moderately small
error tolerance values.
|
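To illustrate the "shrinkage" update at the heart of the sub-problem solver, here is plain ISTA applied to an L1-regularized least-squares problem of the kind FAL solves inexactly; it is not the paper's infinite-memory variant, and the instance is synthetic.

```python
# Hedged sketch: the "shrinkage" (soft-thresholding) update inside a
# proximal-gradient solve of the FAL subproblem
#   min_x  0.5*||Ax - b||^2 + lam*||x||_1.
# This is plain ISTA, not the paper's infinite-memory variant.
import numpy as np

def shrink(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = shrink(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100); x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
b = A @ x_true
print(np.nonzero(ista(A, b, lam=0.1).round(2))[0])  # approximate support
```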
1005.5591
|
On the minimum weight problem of permutation codes under Chebyshev
distance
|
cs.IT math.IT
|
A permutation code of length $n$ and distance $d$ is a set of permutations on
$n$ symbols in which the distance between any two elements is at least $d$.
Subgroup permutation codes are permutation codes whose elements are closed
under the operation of composition. In this paper, under the
$\ell_{\infty}$-norm (Chebyshev) distance metric, we prove that finding the
minimum-weight codeword of a subgroup permutation code is NP-complete.
Moreover, we show that it is NP-hard to approximate the minimum weight within
a factor of $7/6-\epsilon$ for any $\epsilon>0$.
|
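A toy sketch of the objects involved: the Chebyshev ($\ell_\infty$) weight of a permutation, and a brute-force minimum-weight search over a small subgroup generated by given permutations. Brute force is feasible only at toy sizes, consistent with the NP-completeness result above.

```python
# Hedged sketch: Chebyshev (l_inf) weight of a permutation, plus a
# brute-force minimum-weight search over a tiny subgroup -- feasible
# only for toy sizes, consistent with the problem being NP-complete.
def weight(pi):                           # pi is a tuple: i -> pi[i]
    return max(abs(pi[i] - i) for i in range(len(pi)))

def close_under_composition(gens):
    """BFS closure of the generators under composition (p o q)."""
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {tuple(p[j] for j in q)     # (p o q)(i) = p[q[i]]
               for p in group for q in frontier} - group
        group |= new
        frontier = new
    return group

gens = [(1, 2, 3, 0), (1, 0, 2, 3)]       # generators on 4 symbols
identity = (0, 1, 2, 3)
G = close_under_composition(gens)
print(min(weight(p) for p in G if p != identity))   # minimum weight
```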
1005.5596
|
A generic tool to generate a lexicon for NLP from Lexicon-Grammar tables
|
cs.CL
|
Lexicon-Grammar tables constitute a large-coverage syntactic lexicon but they
cannot be directly used in Natural Language Processing (NLP) applications
because they sometimes rely on implicit information. In this paper, we
introduce LGExtract, a generic tool for generating a syntactic lexicon for NLP
from the Lexicon-Grammar tables. It is based on a global table that contains
undefined information and on a unique extraction script including all
operations to be performed for all tables. We also present an experiment that
has been conducted to generate a new lexicon of French verbs and predicative
nouns.
|
1005.5603
|
On the Relation between Realizable and Nonrealizable Cases of the
Sequence Prediction Problem
|
cs.LG cs.IT math.IT math.ST stat.TH
|
A sequence $x_1,\dots,x_n,\dots$ of discrete-valued observations is generated
according to some unknown probabilistic law (measure) $\mu$. After observing
each outcome, one is required to give conditional probabilities of the next
observation. The realizable case is when the measure $\mu$ belongs to an
arbitrary but known class $\mathcal C$ of process measures. The non-realizable
case is when $\mu$ is completely arbitrary, but the prediction performance is
measured with respect to a given set $\mathcal C$ of process measures. We are
interested in the relations between these problems and between their solutions,
as well as in characterizing the cases when a solution exists and finding these
solutions. We show that if the quality of prediction is measured using the
total variation distance, then these problems coincide, while if it is measured
using the expected average KL divergence, then they are different. For some of
the formalizations we also show that when a solution exists, it can be obtained
as a Bayes mixture over a countable subset of $\mathcal C$. We also obtain
several characterizations of those sets $\mathcal C$ for which solutions to the
considered problems exist. As an illustration to the general results obtained,
we show that a solution to the non-realizable case of the sequence prediction
problem exists for the set of all finite-memory processes, but does not exist
for the set of all stationary processes.
It should be emphasized that the framework is completely general: the process
measures considered are not required to be i.i.d., mixing, stationary, or to
belong to any parametric family.
|
1005.5638
|
Distributed source identification for wave equations: an observer-based
approach (full paper)
|
math.OC cs.SY math.AP
|
In this paper, we consider the 1D wave equation where the spatial domain is a
bounded interval. Assuming the initial conditions to be known, we are here
interested in identifying an unknown source term, while we take the Neumann
derivative of the solution on one of the boundaries as the measurement output.
Applying a back-and-forth iterative scheme and constructing well-chosen
observers, we retrieve the source term from the measurement output in the
minimal observation time. We further provide an extension of the method to the
case of wave equations with an N-dimensional spatial domain.
|
1005.5697
|
Unbiased Estimation of a Sparse Vector in White Gaussian Noise
|
math.ST cs.IT math.IT stat.TH
|
We consider unbiased estimation of a sparse nonrandom vector corrupted by
additive white Gaussian noise. We show that while there are infinitely many
unbiased estimators for this problem, none of them has uniformly minimum
variance. Therefore, we focus on locally minimum variance unbiased (LMVU)
estimators. We derive simple closed-form lower and upper bounds on the variance
of LMVU estimators or, equivalently, on the Barankin bound (BB). Our bounds
allow an estimation of the threshold region separating the low-SNR and high-SNR
regimes, and they indicate the asymptotic behavior of the BB at high SNR. We
also develop numerical lower and upper bounds which are tighter than the
closed-form bounds and thus characterize the BB more accurately. Numerical
studies compare our characterization of the BB with established biased
estimation schemes, and demonstrate that while unbiased estimators perform
poorly at low SNR, they may perform better than biased estimators at high SNR.
An interesting conclusion of our analysis is that the high-SNR behavior of the
BB depends solely on the value of the smallest nonzero component of the sparse
vector, and that this type of dependence is also exhibited by the performance
of certain practical estimators.
|
1005.5718
|
Agent-based Social Psychology: from Neurocognitive Processes to Social
Data
|
physics.soc-ph cs.SI q-bio.NC
|
Moral Foundation Theory states that groups of different observers may rely on
partially dissimilar sets of moral foundations, thereby reaching different
moral valuations. The use of functional imaging techniques has revealed a
spectrum of cognitive styles with respect to the differential handling of novel
or corroborating information that is correlated to political affiliation. Here
we characterize the collective behavior of an agent-based model whose
inter-individual interactions, due to information exchange in the form of
opinions, are in qualitative agreement with experimental neuroscience data. The
main conclusion connects the existence of diversity in cognitive strategies
with the statistics of the sets of moral foundations, and suggests that this
connection arises from interactions between agents. Thus a simple
interacting agent model, whose interactions are in accord with empirical data
on conformity and learning processes, presents statistical signatures
consistent with moral judgment patterns of conservatives and liberals as
obtained by survey studies of social psychology.
|
1005.5732
|
A New Framework for Join Product Skew
|
cs.DB
|
Different types of data skew can result in load imbalance in the context of
parallel joins under the shared nothing architecture. We study one important
type of skew, join product skew (JPS). A static approach based on frequency
classes is proposed, which assumes that the data distribution of the join
attribute values is known. The approach stems from the observation that the
join selectivity can be expressed as a sum of products of frequencies of the
join attribute values. Consequently, an assignment of join sub-tasks that takes
into consideration the magnitude of the frequency products can alleviate join
product skew. Motivated by this observation, we propose an algorithm, called
Handling Join Product Skew (HJPS), to address it.
|
1005.5734
|
The Re-Encoding Transformation in Algebraic List-Decoding of
Reed-Solomon Codes
|
cs.IT math.IT
|
The main computational steps in algebraic soft-decoding, as well as
Sudan-type list-decoding, of Reed-Solomon codes are bivariate polynomial
interpolation and factorization. We introduce a computational technique, based
upon re-encoding and coordinate transformation, that significantly reduces the
complexity of the bivariate interpolation procedure. This re-encoding and
coordinate transformation converts the original interpolation problem into
another reduced interpolation problem, which is orders of magnitude smaller
than the original one. A rigorous proof is presented to show that the two
interpolation problems are indeed equivalent. An efficient factorization
procedure that applies directly to the reduced interpolation problem is also
given.
|
1006.0051
|
Image Characterization and Classification by Physical Complexity
|
cs.CC cs.IT math.IT
|
We present a method for estimating the complexity of an image based on
Bennett's concept of logical depth. Bennett identified logical depth as the
appropriate measure of organized complexity, and hence as being better suited
to the evaluation of the complexity of objects in the physical world. Its use
results in a different, and in some sense a finer characterization than is
obtained through the application of the concept of Kolmogorov complexity alone.
We use this measure to classify images by their information content. The method
provides a means for classifying and evaluating the complexity of objects by
way of their visual representations. To the authors' knowledge, the method and
application inspired by the concept of logical depth presented herein are being
proposed and implemented for the first time.
|
1006.0054
|
Anti-measurement Matrix Uncertainty Sparse Signal Recovery for
Compressive Sensing
|
cs.IT math.IT math.NA stat.AP
|
Compressive sensing (CS) is a technique for estimating a sparse signal from
the random measurements and the measurement matrix. Traditional sparse signal
recovery methods degrade seriously under measurement matrix uncertainty (MMU).
Here the MMU is modeled as a bounded additive error. An anti-uncertainty
constraint in the form of a mixed L2 and L1 norm is derived from the sparse
signal model with MMU. We then combine the sparsity constraint with the
anti-uncertainty constraint to obtain an anti-uncertainty sparse signal
recovery operator. Numerical simulations demonstrate that the proposed operator
achieves better reconstruction performance under MMU than traditional methods.
|
1006.0056
|
Inter-atom Interference Mitigation for Sparse Signal Reconstruction
Using Semi-blindly Weighted Minimum Variance Distortionless Response
|
cs.IT math.IT math.NA
|
The feasibility of sparse signal reconstruction depends heavily on the
inter-atom interference of the redundant dictionary. In this paper, a
semi-blindly weighted minimum variance distortionless response (SBWMVDR)
method is proposed to mitigate the inter-atom interference. Examples of
direction-of-arrival estimation are presented to show that orthogonal matching
pursuit (OMP) based on SBWMVDR performs better than the ordinary OMP
algorithm.
|
1006.0109
|
Results on Binary Linear Codes With Minimum Distance 8 and 10
|
cs.IT math.IT
|
All codes with minimum distance 8 and codimension up to 14 and all codes with
minimum distance 10 and codimension up to 18 are classified. Nonexistence of
codes with parameters [33,18,8] and [33,14,10] is proved. This leads to 8 new
exact bounds for binary linear codes. Primarily two algorithms considering the
dual codes are used, namely extension of dual codes with a proper coordinate,
and a fast algorithm for finding a maximum clique in a graph, which is modified
to find a maximum set of vectors with the right dependency structure.
|
1006.0153
|
Ivan Franko's novel Dlja domashnjoho ohnyshcha (For the Hearth) in the
light of the frequency dictionary
|
cs.CL
|
In the article, the methodology and the principles of the compilation of the
Frequency dictionary for Ivan Franko's novel Dlja domashnjoho ohnyshcha (For
the Hearth) are described. The following statistical parameters of the novel
vocabulary are obtained: variety, exclusiveness, concentration indexes,
correlation between word rank and text coverage, etc. The main quantitative
characteristics of Franko's novels Perekhresni stezhky (The Cross-Paths) and
Dlja domashnjoho ohnyshcha are compared on the basis of their frequency
dictionaries.
|
1006.0168
|
Perfusion Linearity and Its Applications
|
cs.CE
|
Perfusion analysis computes blood flow parameters (blood volume, blood flow,
mean transit time) from the observed flow of contrast agent, passing through
the patient's vascular system. Perfusion deconvolution has been widely accepted
as the principal numerical tool for perfusion analysis, and is used routinely
in clinical applications. This extensive use of perfusion in clinical
decision-making makes numerical stability and robustness of perfusion
computations vital for accurate diagnostics and patient safety. The main goal
of this paper is to propose a novel approach for validating numerical
properties of perfusion algorithms. The approach is based on Perfusion
Linearity Property (PLP), which we find in perfusion deconvolution, as well as
in many other perfusion techniques. PLP allows one to study perfusion values as
weighted averages of the original imaging data. This, in turn, uncovers hidden
problems with the existing deconvolution techniques, and may be used to suggest
more reliable computational approaches and methodology.
|
1006.0170
|
A Fast Generalized Minimum Distance Decoder for Reed-Solomon Codes Based
on the Extended Euclidean Algorithm
|
cs.IT math.IT
|
This paper presents a method to determine a set of basis polynomials from the
extended Euclidean algorithm that allows Generalized Minimum Distance decoding
of Reed-Solomon codes with a complexity of O(nd).
|
1006.0234
|
Inferring Networks of Diffusion and Influence
|
cs.DS cs.SI physics.soc-ph stat.ML
|
Information diffusion and virus propagation are fundamental processes taking
place in networks. While it is often possible to directly observe when nodes
become infected with a virus or adopt the information, observing individual
transmissions (i.e., who infects whom, or who influences whom) is typically
very difficult. Furthermore, in many applications, the underlying network over
which the diffusions and propagations spread is actually unobserved. We tackle
these challenges by developing a method for tracing paths of diffusion and
influence through networks and inferring the networks over which contagions
propagate. Given the times when nodes adopt pieces of information or become
infected, we identify the optimal network that best explains the observed
infection times. Since the optimization problem is NP-hard to solve exactly, we
develop an efficient approximation algorithm that scales to large datasets and
finds provably near-optimal networks.
We demonstrate the effectiveness of our approach by tracing information
diffusion in a set of 170 million blogs and news articles over a one year
period to infer how information flows through the online media space. We find
that the diffusion network of news for the top 1,000 media sites and blogs
tends to have a core-periphery structure with a small set of core media sites
that diffuse information to the rest of the Web. These sites tend to have
stable circles of influence with more general news media sites acting as
connectors between them.
|
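A heavily simplified sketch of one building block of such inference: within a single cascade, attribute each infection to its most likely earlier-infected parent under an assumed exponential transmission-time model. The paper's algorithm aggregates likelihoods over many cascades and greedily selects a near-optimal edge set; this shows only the per-cascade step.

```python
# Hedged, heavily simplified sketch of the core inference step: for one
# cascade, attribute each infection to the most likely earlier-infected
# parent under an exponential transmission-time model with rate alpha.
# The full algorithm aggregates such likelihoods over many cascades.
import math

def likely_parents(times, alpha=1.0):
    """times: {node: infection time}. Returns {node: best parent}."""
    parents = {}
    for v, tv in times.items():
        candidates = [(u, tu) for u, tu in times.items() if tu < tv]
        if not candidates:
            continue                     # cascade seed has no parent
        # Exponential density alpha * exp(-alpha * (tv - tu)) is
        # maximized by the latest earlier infection.
        parents[v] = max(candidates,
                         key=lambda ut: alpha * math.exp(-alpha * (tv - ut[1])))[0]
    return parents

cascade = {"a": 0.0, "b": 0.7, "c": 1.1, "d": 2.0}
print(likely_parents(cascade))           # {'b': 'a', 'c': 'b', 'd': 'c'}
```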
1006.0245
|
Improved compression of network coding vectors using erasure decoding
and list decoding
|
cs.IT math.IT
|
Practical random network coding based schemes for multicast include a header
in each packet that records the transformation between the sources and the
terminal. The header introduces an overhead that can be significant in certain
scenarios. In previous work, parity check matrices of error control codes along
with error decoding were used to reduce this overhead. In this work we propose
novel packet formats that allow us to use erasure decoding and list decoding.
Both schemes have a smaller overhead compared to the error decoding based
scheme, when the number of sources combined in a packet is not too small.
|
1006.0259
|
Methods for the Reconstruction of Parallel Turbo Codes
|
cs.IT math.IT
|
We present two new algorithms for the reconstruction of turbo codes from a
noisy intercepted bitstream. With these algorithms, we were able to reconstruct
various turbo codes with realistic parameter sizes. To the best of our
knowledge, these are the first algorithms able to recover the whole permutation
of a turbo code in the presence of high noise levels.
|
1006.0271
|
The Quality of Oscillations in Overdamped Networks
|
cond-mat.stat-mech cs.SI math-ph math.MP math.SP nlin.PS physics.bio-ph q-bio.MN
|
The second law of thermodynamics implies that no macroscopic system may
oscillate indefinitely without consuming energy. The questions of how many
oscillations can occur and how coherent those oscillations can be have remained
unanswered. This paper proves upper bounds on the number and quality of such
oscillations when the system in question is homogeneously driven and has a
discrete network of states. In a closed system, the maximum number of
oscillations is bounded by the number of states in the network. In open
systems, the size of the network bounds the quality factor of oscillation. This
work also explores how the quality factor of macrostate oscillations, such as
would be observed in chemical reactions, is bounded by the smallest equivalent
loop of the network rather than the size of the entire system. The consequences
of this limit are explored in the context of chemical clocks and limit cycles.
|
1006.0274
|
Learning Probabilistic Hierarchical Task Networks to Capture User
Preferences
|
cs.AI
|
We propose automatically learning probabilistic Hierarchical Task Networks
(pHTNs) in order to capture a user's preferences on plans, by observing only
the user's behavior. HTNs are a common choice of representation for a variety
of purposes in planning, including work on learning in planning. Our
contributions are (a) learning structure and (b) representing preferences. In
contrast, prior work employing HTNs considers learning method preconditions
(instead of structure) and representing domain physics or search control
knowledge (rather than preferences). Initially we will assume that the observed
distribution of plans is an accurate representation of user preference, and
then generalize to the situation where feasibility constraints frequently
prevent the execution of preferred plans. In order to learn a distribution on
plans we adapt an Expectation-Maximization (EM) technique from the discipline
of (probabilistic) grammar induction, taking the perspective of task reductions
as productions in a context-free grammar over primitive actions. To account for
the difference between the distributions of possible and preferred plans we
subsequently modify this core EM technique, in short, by rescaling its input.
|
1006.0277
|
The Limits of Error Correction with lp Decoding
|
cs.IT math.IT
|
An unknown vector f in R^n can be recovered from corrupted measurements y = Af
+ e, where A is an m x n (m > n) coding matrix, provided the unknown error
vector e is sparse. We investigate the relationship between the fraction of
errors and the recovery ability of lp-minimization (0 < p <= 1), which returns
a vector x minimizing the "lp-norm" of y - Ax. We give sharp thresholds on the
fraction of
errors that determine the successful recovery of f. If e is an arbitrary
unknown vector, the threshold strictly decreases from 0.5 to 0.239 as p
increases from 0 to 1. If e has fixed support and fixed signs on the support,
the threshold is 2/3 for all p in (0, 1), while the threshold is 1 for
l1-minimization.
|
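A minimal sketch of the p = 1 case: l1-decoding recovers f from y = Af + e by solving min_x ||y - Ax||_1 as a linear program. The instance is random and the error fraction (10%) is chosen well below the thresholds discussed above.

```python
# Hedged sketch of l1-decoding (the p = 1 case): recover f from
# y = A f + e by solving  min_x ||y - A x||_1  as the linear program
#   min 1^T t  s.t.  -t <= y - A x <= t,  over variables (x, t).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 60, 20, 6                     # k corrupted measurements (10%)
A = rng.standard_normal((m, n))
f = rng.standard_normal(n)
e = np.zeros(m)
e[rng.choice(m, k, replace=False)] = rng.standard_normal(k) * 5
y = A @ f + e

c = np.concatenate([np.zeros(n), np.ones(m)])
# y - A x <= t  and  A x - y <= t, rewritten as A_ub z <= b_ub:
A_ub = np.block([[-A, -np.eye(m)], [A, -np.eye(m)]])
b_ub = np.concatenate([-y, y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
print(np.abs(res.x[:n] - f).max())      # ~0 when decoding succeeds
```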
1006.0284
|
Asymptotic Optimality of Antidictionary Codes
|
cs.IT math.IT
|
An antidictionary code is a lossless compression algorithm that uses an
antidictionary, i.e., a set of minimal words that do not occur as substrings
of the input string. The code was proposed by Crochemore et al. in 2000, and
its asymptotic optimality had been proved only for a specific information
source, the balanced binary source, which is a binary Markov source in which
each state transition occurs with probability 1/2 or 1. In this paper, we prove
the optimality of both static and dynamic antidictionary codes with respect to
a stationary ergodic Markov source on a finite alphabet in which each state
transition occurs with probability $p$ $(0 < p \leq 1)$.
|
1006.0289
|
Methods for Feature Selection and Adjustment in the Spam Detection Problem
|
cs.IR cs.AI
|
The email is used daily by millions of people to communicate around the globe
and it is a mission-critical application for many businesses. Over the last
decade, unsolicited bulk email has become a major problem for email users. An
overwhelming amount of spam is flowing into users' mailboxes daily. In 2004, an
estimated 62% of all email was attributed to spam. Spam is not only frustrating
for most email users, it strains the IT infrastructure of organizations and
costs businesses billions of dollars in lost productivity. In recent years,
spam has evolved from an annoyance into a serious security threat, and is now a
prime medium for phishing of sensitive information, as well as the spread of
malicious software. This work presents a first approach to attacking the spam
problem. We propose an algorithm that will improve a classifier's results by
adjusting its training set data. It improves the document's vocabulary
representation by detecting good topic descriptors and discriminators.
|
1006.0304
|
On the stable recovery of the sparsest overcomplete representations in
presence of noise
|
cs.IT math.IT
|
Let x be a signal to be sparsely decomposed over a redundant dictionary A,
i.e., a sparse coefficient vector s has to be found such that x=As. It is known
that this problem is inherently unstable against noise, and to overcome this
instability, the authors of [Stable Recovery; Donoho et.al., 2006] have
proposed to use an "approximate" decomposition, that is, a decomposition
satisfying ||x - A s|| < \delta, rather than satisfying the exact equality x =
As. Then, they have shown that if there is a decomposition with ||s||_0 <
(1+M^{-1})/2, where M denotes the coherence of the dictionary, this
decomposition would be stable against noise. On the other hand, it is known
that a sparse decomposition with ||s||_0 < spark(A)/2 is unique. In other
words, although a decomposition with ||s||_0 < spark(A)/2 is unique, its
stability against noise has been proved only for the much more restrictive
decompositions satisfying ||s||_0 < (1+M^{-1})/2, because usually (1+M^{-1})/2
<< spark(A)/2.
This limitation may not have been very important before, because ||s||_0 <
(1+M^{-1})/2 is also the bound which guarantees that the sparse decomposition
can be found via minimizing the L1 norm, a classic approach for sparse
decomposition. However, with the availability of new algorithms for sparse
decomposition, namely SL0 and Robust-SL0, it would be important to know whether
or not unique sparse decompositions with (1+M^{-1})/2 < ||s||_0 < spark(A)/2
are stable. In this paper, we show that such decompositions are indeed stable.
In other words, we extend the stability bound from ||s||_0 < (1+M^{-1})/2 to
the whole uniqueness range ||s||_0 < spark(A)/2. In summary, we show that "all
unique sparse decompositions are stably recoverable". Moreover, we see that
sparser decompositions are "more stable".
|
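A small numeric illustration of the two bounds discussed above for a random unit-norm dictionary: the coherence bound (1+M^{-1})/2 and the uniqueness bound spark(A)/2. The spark is found here by brute force, which is feasible only at toy sizes.

```python
# Hedged sketch: the two sparsity bounds discussed above for a tiny
# random dictionary -- the coherence bound (1 + 1/M)/2 and the
# uniqueness bound spark(A)/2. Spark is found by brute force here;
# in general computing spark is combinatorial and intractable.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms

G = np.abs(A.T @ A)                     # Gram matrix
M = G[~np.eye(6, dtype=bool)].max()     # mutual coherence

def spark(A, tol=1e-10):
    m, n = A.shape
    for k in range(1, n + 1):           # smallest dependent column set
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    return n + 1

print("coherence bound:", (1 + 1 / M) / 2)
print("uniqueness bound:", spark(A) / 2)   # typically the larger range
```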
1006.0312
|
Markov Lemma for Countable Alphabets
|
cs.IT math.IT
|
Strong typicality and the Markov lemma have been used in the proofs of
several multiterminal source coding theorems. Since these two tools can be
applied to finite alphabets only, the results proved by them are subject to the
same limitation. Recently, a new notion of typicality, namely unified
typicality, has been defined. It can be applied to both finite and countably
infinite alphabets, and it retains the asymptotic equipartition property and
the structural properties of strong typicality. In this paper, unified
typicality is used to derive a version of the Markov lemma which works on both
finite and countably infinite alphabets, so that many results in multiterminal
source coding can readily be extended. Furthermore, a simple way to verify
whether some sequences are jointly typical is shown.
|
1006.0330
|
Soft-Output Sphere Decoder for Multiple-Symbol Differential Detection of
Impulse-Radio Ultra-Wideband
|
cs.IT math.IT
|
Power efficiency of noncoherent receivers for impulse-radio ultra-wideband
(IR-UWB) transmission systems can significantly be improved, on the one hand,
by employing multiple-symbol differential detection (MSDD), and, on the other
hand, by providing reliability information to the subsequent channel decoder.
In this paper, we combine these two techniques. Incorporating the computation
of the soft information into a single-tree-search sphere decoder (SD), the
application of this soft-output MSDD in a typical IR-UWB system imposes only a
moderate complexity increase at, however, improved performance over hard-output
MSDD, and in particular, over conventional symbol-by-symbol noncoherent
differential detection.
|
1006.0334
|
One-Shot Capacity of Discrete Channels
|
cs.IT math.IT
|
Shannon defined channel capacity as the highest rate at which there exists a
sequence of codes of block length $n$ such that the error probability goes to
zero as $n$ goes to infinity. In this definition, it is implicit that the block
length, which can be viewed as the number of available channel uses, is
unlimited. This is not the case when the transmission power must be
concentrated on a single transmission, most notably in military scenarios with
adversarial conditions or delay-tolerant networks with random short encounters.
A natural question arises: how much information can we transmit in a single use
of the channel? We give a precise characterization of the one-shot capacity of
discrete channels, defined as the maximum number of bits that can be
transmitted in a single use of a channel with an error probability that does
not exceed a prescribed value. This capacity definition is shown to be useful
and significantly different from the zero-error problem statement.
|
1006.0355
|
An algebraic approach to information theory
|
cs.IT math.IT
|
This work proposes an algebraic model for classical information theory. We
first give an algebraic model of probability theory. Information theoretic
constructs are based on this model. In addition to the theoretical insights
provided by our model, one obtains new computational and analytical tools.
Several important theorems of classical probability and information theory are
presented in the algebraic framework.
|
1006.0375
|
Information theoretic model validation for clustering
|
cs.IT cs.LG math.IT stat.ML
|
Model selection in clustering requires (i) to specify a suitable clustering
principle and (ii) to control the model order complexity by choosing an
appropriate number of clusters depending on the noise level in the data. We
advocate an information theoretic perspective where the uncertainty in the
measurements quantizes the set of data partitionings and, thereby, induces
uncertainty in the solution space of clusterings. A clustering model, which can
tolerate a higher level of fluctuations in the measurements than alternative
models, is considered to be superior provided that the clustering solution is
equally informative. This tradeoff between \emph{informativeness} and
\emph{robustness} is used as a model selection criterion. The requirement that
data partitionings should generalize from one data set to an equally probable
second data set gives rise to a new notion of structure induced information.
|
1006.0379
|
Adaptive Demodulation in Differentially Coherent Phase Systems: Design
and Performance Analysis
|
cs.IT math.IT
|
Adaptive Demodulation (ADM) is a newly proposed rate-adaptive system which
operates without requiring Channel State Information (CSI) at the transmitter
(unlike adaptive modulation) by using adaptive decision region boundaries at
the receiver and encoding the data with a rateless code. This paper addresses
the design and performance of an ADM scheme for two common differentially
coherent schemes: M-DPSK (M-ary Differential Phase Shift Keying) and M-DAPSK
(M-ary Differential Amplitude and Phase Shift Keying) operating over AWGN and
Rayleigh fading channels. The optimal method for determining the most reliable
bits for a given differential detection scheme is presented. In addition,
simple (near-optimal) implementations are provided for recovering the most
reliable bits from a received pair of differentially encoded symbols for
systems using 16-DPSK and 16- DAPSK. The new receivers offer the advantages of
a rate-adaptive system, without requiring CSI at the transmitter and a coherent
phase reference at the receiver. Bit error analysis for the ADM system in both
cases is presented along with numerical results of the spectral efficiency for
the rate-adaptive systems operating over a Rayleigh fading channel.
|
1006.0385
|
Brain-Like Stochastic Search: A Research Challenge and Funding
Opportunity
|
cs.AI
|
Brain-Like Stochastic Search (BLiSS) refers to this task: given a family of
utility functions U(u,A), where u is a vector of parameters or task
descriptors, maximize or minimize U with respect to u, using networks (Option
Nets) which input A and learn to generate good options u stochastically. This
paper discusses why this is crucial to brain-like intelligence (an area funded
by NSF) and to many applications, and discusses various possibilities for
network design and training. The appendix discusses recent research, relations
to work on stochastic optimization in operations research, and relations to
engineering-based approaches to understanding neocortex.
|
1006.0386
|
A Smart Approach for GPT Cryptosystem Based on Rank Codes
|
cs.IT cs.CR math.IT
|
The concept of a code-based public-key cryptosystem was introduced by
McEliece's cryptosystem. A public-key cryptosystem based on rank codes was
presented in 1991 by Gabidulin, Paramonov, and Trejtakov (GPT). The use of rank
codes in cryptographic applications is advantageous because combinatorial
decoding is practically impossible, which has enabled the use of public keys
of smaller size. Structural attacks against this system were proposed by
Gibson and, more recently, by Overbeck. Overbeck's attacks break many versions
of the GPT cryptosystem and turn out to be either polynomial or exponential,
depending on the parameters of the cryptosystem. In this paper, we introduce a
new approach, called the Smart approach, which is based on a proper choice of
the distortion matrix X. The Smart approach withstands all known attacks even
if the column scrambler matrix P is defined over the base field Fq.
|
1006.0392
|
Computing the speed of convergence of ergodic averages and pseudorandom
points in computable dynamical systems
|
cs.NA cs.CE cs.LO
|
A pseudorandom point in an ergodic dynamical system over a computable metric
space is a point which is computable but whose dynamics has the same
statistical behavior as a typical point of the system.
It was proved in [Avigad et al. 2010, Local stability of ergodic averages]
that in a system whose dynamics is computable the ergodic averages of
computable observables converge effectively. We give an alternative, simpler
proof of this result.
This implies that if the invariant measure is also computable, then the
pseudorandom points form a dense (hence nonempty) subset of the support of the
invariant measure.
|
1006.0397
|
Effective Capacity and Randomness of Closed Sets
|
cs.LO cs.IT math.IT math.LO
|
We investigate the connection between measure and capacity for the space of
nonempty closed subsets of {0,1}*. For any computable measure, a computable
capacity T may be defined by letting T(Q) be the measure of the family of
closed sets which have nonempty intersection with Q. We prove an effective
version of Choquet's capacity theorem by showing that every computable capacity
may be obtained from a computable measure in this way. We establish conditions
that characterize when the capacity of a random closed set equals zero or is
positive. For certain measures, we construct an effectively closed set with
positive
capacity and with Lebesgue measure zero.
|
1006.0408
|
A Mathematical Framework for Agent Based Models of Complex Biological
Networks
|
q-bio.QM cs.MA physics.bio-ph
|
Agent-based modeling and simulation is a useful method to study biological
phenomena in a wide range of fields, from molecular biology to ecology. Since
there is currently no agreed-upon standard way to specify such models it is not
always easy to use published models. Also, since model descriptions are not
usually given in mathematical terms, it is difficult to bring mathematical
analysis tools to bear, so that models are typically studied through
simulation. In order to address this issue, Grimm et al. proposed a protocol
for model specification, the so-called ODD protocol, which provides a standard
way to describe models. This paper proposes an addition to the ODD protocol
which allows the description of an agent-based model as a dynamical system,
which provides access to computational and theoretical tools for its analysis.
The mathematical framework is that of algebraic models, that is, time-discrete
dynamical systems with algebraic structure. It is shown by way of several
examples how this mathematical specification can help with model analysis.
|
1006.0448
|
Emergence of Complex-Like Cells in a Temporal Product Network with Local
Receptive Fields
|
cs.NE
|
We introduce a new neural architecture and an unsupervised algorithm for
learning invariant representations from temporal sequence of images. The system
uses two groups of complex cells whose outputs are combined multiplicatively:
one that represents the content of the image, constrained to be constant over
several consecutive frames, and one that represents the precise location of
features, which is allowed to vary over time but constrained to be sparse. The
architecture uses an encoder to extract features, and a decoder to reconstruct
the input from the features. The method was applied to patches extracted from
consecutive movie frames and produces orientation and frequency selective units
analogous to the complex cells in V1. An extension of the method is proposed to
train a network composed of units with local receptive field spread over a
large image of arbitrary size. A layer of complex cells, subject to sparsity
constraints, pool feature units over overlapping local neighborhoods, which
causes the feature units to organize themselves into pinwheel patterns of
orientation-selective receptive fields, similar to those observed in the
mammalian visual cortex. A feed-forward encoder efficiently computes the
feature representation of full images.
|
1006.0475
|
Prediction with Advice of Unknown Number of Experts
|
cs.LG
|
In the framework of prediction with expert advice, we consider a recently
introduced kind of regret bound: bounds that depend on the effective rather
than the nominal number of experts. In contrast to the NormalHedge bound,
which mainly depends on the effective number of experts and also weakly depends
on the nominal one, we obtain a bound that does not contain the nominal number
of experts at all. We use the defensive forecasting method and introduce an
application of defensive forecasting to multivalued supermartingales.
|
1006.0496
|
The diversity-multiplexing tradeoff of the MIMO Z interference channel
|
cs.IT math.IT
|
The fundamental generalized diversity-multiplexing tradeoff (GDMT) of the
quasi-static fading MIMO Z interference channel (Z-IC) is established for the
general Z-IC with an arbitrary number of antennas at each node under the
assumptions of full channel state information at the transmitters (CSIT) and a
short-term average power constraint. In the GDMT framework, the direct link
signal-to-noise ratios (SNR) and cross-link interference-to-noise ratio (INR)
are allowed to vary so that their ratios relative to a nominal SNR in the dB
scale, i.e., the SNR/INR exponents, are fixed. It is shown that a simple
Han-Kobayashi message-splitting/partial interference decoding scheme that uses
only partial CSIT -- in which the second transmitter's signal depends only on
its cross-link channel matrix and the first user's transmit signal doesn't need
any CSIT whatsoever -- can achieve the full-CSIT GDMT of the MIMO Z-IC. The
GDMT of the MIMO Z-IC under the No-CSIT assumption is also obtained for some
range of multiplexing gains. The size of this range depends on the numbers of
antennas at the four nodes and the SNR and INR exponents of the direct and
cross links, respectively. For certain classes of channels including those in
which the interfered receiver has more antennas than do the other nodes, or
when the INR exponent is greater than a certain threshold, the GDMT of the MIMO
Z-IC under the No-CSIT assumption is completely characterized.
|
1006.0542
|
Multicast Capacity Scaling of Wireless Networks with Multicast Outage
|
cs.IT math.IT
|
Multicast transmission has several distinctive traits as opposed to more
commonly studied unicast networks. Specifically, these include: (i) identical
packets must be delivered successfully to several nodes, (ii) outage can
simultaneously happen at different receivers, and (iii) the multicast rate is
dominated by the receiver with the weakest link in order to minimize outage and
retransmission. To capture these key traits, we utilize a Poisson cluster
process consisting of a distinct Poisson point process (PPP) for the
transmitters and receivers, and then define the multicast transmission capacity
(MTC) as the maximum achievable multicast rate times the number of multicast
clusters per unit volume, accounting for outages and retransmissions. Our main
result shows that if $\tau$ transmission attempts are allowed in a multicast
cluster, the MTC is $\Theta\left(\rho k^{x}\log(k)\right)$ where $\rho$ and $x$
are functions of $\tau$ depending on the network size and density, and $k$ is
the average number of the intended receivers in a cluster. We also show that an
appropriate number of retransmissions can significantly enhance the MTC.
|
1006.0544
|
Capacity scaling law by multiuser diversity in cognitive radio systems
|
cs.IT math.IT
|
This paper analyzes the multiuser diversity gain in a cognitive radio (CR)
system where secondary transmitters opportunistically utilize the spectrum
licensed to primary users only when it is not occupied by the primary users. To
protect the primary users from the interference caused by the missed detection
of primary transmissions in the secondary network, minimum average throughput
of the primary network is guaranteed by transmit power control at the secondary
transmitters. The traffic dynamics of a primary network are also considered in
our analysis. We derive the average achievable capacity of the secondary
network and analyze its asymptotic behaviors to characterize the multiuser
diversity gains in the CR system.
|
1006.0575
|
XQ2P: Efficient XQuery P2P Time Series Processing
|
cs.DB
|
In this demonstration, we propose a model for the management of XML time
series (TS), using the new XQuery 1.1 window operator. We argue that
centralized computation is slow, and demonstrate XQ2P, our prototype of
efficient XQuery P2P TS computation in the context of financial analysis of
large data sets (>1M values).
|
1006.0576
|
Efficient Time Series Management in P2P: Application to Technical Analysis
and the Study of Moving Objects
|
cs.DB
|
In this paper, we propose a simple generic model to manage time series. A
time series is composed of a calendar with a typed value for each calendar
entry. Although the model could support any kind of XML typed values, in this
paper we focus on real numbers, which are the usual application. We define
basic vector space operations (plus, minus, scale), and also relational-like
and application-oriented operators to manage time series. We demonstrate the
value of this generic model on two applications: (i) a stock investment
helper; (ii)
an ecological transport management system. Stock investment requires
window-based operations while trip management requires complex queries. The
model has been implemented and tested in PHP, Java, and XQuery. We show
benchmark results illustrating that computing 5,000 series of over 100,000
entries each - a common requirement for both applications - is difficult on
classical centralized PCs. In order to serve a community of users sharing time
series, we propose a P2P implementation of time series by dividing them into
segments and providing optimized algorithms for operator expression
computation.
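To make the generic model concrete, here is a toy sketch of calendar-aligned
time series with the vector-space operators described above; the class and
method names are ours, not the authors' API.

```python
from dataclasses import dataclass

@dataclass
class TimeSeries:
    data: dict  # calendar entry -> typed value (real numbers here)

    def _align(self, other):
        # Relational-like join: keep calendar entries present in both series.
        return self.data.keys() & other.data.keys()

    def plus(self, other):
        return TimeSeries({t: self.data[t] + other.data[t] for t in self._align(other)})

    def minus(self, other):
        return TimeSeries({t: self.data[t] - other.data[t] for t in self._align(other)})

    def scale(self, c):
        return TimeSeries({t: c * v for t, v in self.data.items()})

a = TimeSeries({"2010-06-01": 10.0, "2010-06-02": 12.5})
b = TimeSeries({"2010-06-01": 1.0, "2010-06-02": 2.0, "2010-06-03": 3.0})
print(a.plus(b.scale(2.0)).data)  # {'2010-06-01': 12.0, '2010-06-02': 16.5}
```

Window-based operators, the other ingredient the applications need, would be
built on top of the same calendar alignment.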
|
1006.0619
|
Spectrum Sharing in Cognitive Radio with Quantized Channel Information
|
cs.IT math.IT math.OC
|
We consider a wideband spectrum sharing system where a secondary user can
share a number of orthogonal frequency bands where each band is licensed to an
individual primary user. We address the problem of optimum secondary transmit
power allocation for its ergodic capacity maximization subject to an average
sum (across the bands) transmit power constraint and individual average
interference constraints on the primary users. The major contribution of our
work lies in considering quantized channel state information (CSI) (for the
vector channel space consisting of all secondary-to-secondary and
secondary-to-primary channels) at the secondary transmitter. It is assumed that
a band manager or a cognitive radio service provider has access to the full CSI
information from the secondary and primary receivers and designs (offline) an
optimal power codebook based on the statistical information (channel
distributions) of the channels and feeds back the index of the codebook to the
secondary transmitter for every channel realization in real-time, via a
delay-free noiseless limited feedback channel. A modified generalized
Lloyd-type algorithm (GLA) is designed for deriving the optimal power
codebook. An approximate quantized power allocation (AQPA) algorithm is also
presented, which performs very close to its GLA-based counterpart for a large
number of feedback bits and is significantly faster. We also present an
extension of the modified GLA-based quantized power codebook design algorithm
for the case when the feedback channel is noisy. Numerical studies illustrate
that with only 3-4 bits of feedback the modified GLA-based algorithms provide
secondary ergodic capacity very close to that achieved with full CSI, and that
with as little as 4 bits of feedback AQPA provides comparable performance,
making it an attractive choice for practical implementation.
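For intuition only, a simplified plain-Lloyd sketch of the codebook step: the
paper's modified GLA optimizes ergodic capacity under the interference
constraints, whereas this toy version merely quantizes an illustrative
water-filling power target into a 2^B-entry codebook.

```python
import numpy as np

rng = np.random.default_rng(1)
B, n = 3, 100_000                      # feedback bits, training realizations
g = rng.exponential(1.0, n)            # Rayleigh-fading power gains (assumed)
p_full = np.maximum(1.0 - 1.0 / g, 0)  # full-CSI water-filling target (assumed)

codebook = np.quantile(p_full, np.linspace(0.05, 0.95, 2 ** B))  # init
for _ in range(50):
    # Nearest-codeword partition, then centroid update (Lloyd iteration).
    idx = np.abs(p_full[:, None] - codebook[None, :]).argmin(axis=1)
    for j in range(2 ** B):
        if np.any(idx == j):
            codebook[j] = p_full[idx == j].mean()
print("power codebook:", np.round(codebook, 3))
```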
|
1006.0644
|
The Achievable Distortion Region of Bivariate Gaussian Source on
Gaussian Broadcast Channel
|
cs.IT math.IT
|
We provide a complete characterization of the achievable distortion region
for the problem of sending a bivariate Gaussian source over bandwidth-matched
Gaussian broadcast channels, where each receiver is interested in only one
component of the source. This setting naturally generalizes the simple single
Gaussian source bandwidth-matched broadcast problem for which the uncoded
scheme is known to be optimal. We show that a hybrid scheme can achieve the
optimum for the bivariate case, but neither an uncoded scheme alone nor a
separation-based scheme alone is sufficient. We further show that in this joint
source channel coding setting, the Gaussian setting is the worst scenario among
the sources and channel noises with the same covariances.
|
1006.0646
|
Irregular Turbo Codes in Block-Fading Channels
|
cs.IT math.IT
|
We study irregular binary turbo codes over non-ergodic block-fading channels.
We first propose an extension of channel multiplexers initially designed for
regular turbo codes. We then show that, using these multiplexers, irregular
turbo codes that exhibit a small decoding threshold over the ergodic
Gaussian-noise channel perform very close to the outage probability on
block-fading channels, from both density evolution and finite-length
perspectives.
|
1006.0659
|
EXIT Chart Approximations using the Role Model Approach
|
cs.IT math.IT
|
Extrinsic Information Transfer (EXIT) functions can be measured by
statistical methods if the message alphabet size is moderate or if messages are
true a-posteriori distributions. We propose an approximation we call mixed
information that constitutes a lower bound for the true EXIT function and can
be estimated by statistical methods even when the message alphabet is large and
histogram-based approaches are impractical, or when messages are not true
probability distributions and time-averaging approaches are not applicable. We
illustrate this with the hypothetical example of a rank-only message passing
decoder for which it is difficult to compute or measure EXIT functions in the
conventional way. We show that the role model approach (arXiv:0809.1300) can be
used to optimize post-processing for the decoder and that it coincides with
Monte Carlo integration in the non-parametric case. It is guaranteed to tend
towards the optimal Bayesian post-processing estimator and can be applied in a
blind setup with unknown code-symbols to optimize the check-node operation for
non-binary Low-Density Parity-Check (LDPC) decoders.
|
1006.0719
|
Why Gabor Frames? Two Fundamental Measures of Coherence and Their Role
in Model Selection
|
math.ST cs.IT math.IT stat.ML stat.TH
|
This paper studies non-asymptotic model selection for the general case of
arbitrary design matrices and arbitrary nonzero entries of the signal. In this
regard, it generalizes the notion of incoherence in the existing literature on
model selection and introduces two fundamental measures of coherence---termed
as the worst-case coherence and the average coherence---among the columns of a
design matrix. It utilizes these two measures of coherence to provide an
in-depth analysis of a simple, model-order agnostic one-step thresholding (OST)
algorithm for model selection and proves that OST is feasible for exact as well
as partial model selection as long as the design matrix obeys an easily
verifiable property. One of the key insights offered by the ensuing analysis in
this regard is that OST can successfully carry out model selection even when
methods based on convex optimization such as the lasso fail due to the rank
deficiency of the submatrices of the design matrix. In addition, the paper
establishes that if the design matrix has reasonably small worst-case and
average coherence then OST performs near-optimally when either (i) the energy
of any nonzero entry of the signal is close to the average signal energy per
nonzero entry or (ii) the signal-to-noise ratio in the measurement system is
not too high. Finally, two other key contributions of the paper are that (i) it
provides bounds on the average coherence of Gaussian matrices and Gabor frames,
and (ii) it extends the results on model selection using OST to low-complexity,
model-order agnostic recovery of sparse signals with arbitrary nonzero entries.
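For readers who want to experiment, a small sketch computing the two measures
for a Gaussian design with unit-norm columns, following the definitions as we
read the abstract (worst-case coherence: largest off-diagonal Gram entry in
magnitude; average coherence: largest per-column average of off-diagonal Gram
entries); consult the paper for the exact normalizations.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 64, 256
X = rng.standard_normal((m, p))
X /= np.linalg.norm(X, axis=0)      # unit-norm columns

G = X.T @ X
off = G - np.eye(p)                 # zero the diagonal
worst_case = np.abs(off).max()
average = np.abs(off.sum(axis=1)).max() / (p - 1)
print(f"worst-case coherence {worst_case:.3f}, average coherence {average:.4f}")
```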
|
1006.0741
|
Analysis of Collectivism and Egoism Phenomena within the Context of
Social Welfare
|
cs.MA cs.SI math.OC
|
Comparative benefits provided by the basic social strategies including
collectivism and egoism are investigated within the framework of democratic
decision-making. In particular, we study the mechanism of the growing
"snowball" of cooperation.
|
1006.0763
|
Good Codes From Generalised Algebraic Geometry Codes
|
cs.IT math.IT
|
Algebraic geometry codes, or Goppa codes, are defined with places of degree
one. In constructing generalised algebraic geometry codes, places of higher
degree are used. In this paper we present 41 new codes over GF(16) which
improve on the best known codes of the same length and rate. The construction
method uses places of small degree with a technique originally published over
10 years ago for the construction of generalised algebraic geometry codes.
|
1006.0778
|
The Two-Way Wiretap Channel: Achievable Regions and Experimental Results
|
cs.IT math.IT
|
This work considers the two-way wiretap channel in which two legitimate
users, Alice and Bob, wish to exchange messages securely in the presence of a
passive eavesdropper Eve. In the full-duplex scenario, where each node can
transmit and receive simultaneously, we obtain new achievable secrecy rate
regions based on the idea of allowing the two users to jointly optimize their
channel prefixing distributions and binning codebooks in addition to key
sharing. The new regions are shown to be strictly larger than the known ones
for a wide class of discrete memoryless and Gaussian channels. In the
half-duplex case, where a user can only transmit or receive on any given degree
of freedom, we introduce the idea of randomized scheduling and establish the
significant gain it offers in terms of the achievable secrecy sum-rate. We
further develop an experimental setup based on IEEE 802.15.4-enabled sensor
boards, and use this testbed to show that one can exploit the two-way nature
of
the communication, via appropriately randomizing the transmit power levels and
transmission schedule, to introduce significant ambiguity at a noiseless Eve.
|
1006.0795
|
Channel Decoding with a Bayesian Equalizer
|
cs.IT math.IT
|
Low-density parity-check (LDPC) decoders assume that the channel state
information (CSI) is known and that they have the true a posteriori
probability (APP) for each transmitted bit. But in most cases of interest, the
CSI needs to
be estimated with the help of a short training sequence and the LDPC decoder
has to decode the received word using faulty APP estimates. In this paper, we
study the uncertainty in the CSI estimate and how it affects the bit error rate
(BER) output by the LDPC decoder. To improve these APP estimates, we propose a
Bayesian equalizer that takes into consideration not only the uncertainty due
to the noise in the channel, but also the uncertainty in the CSI estimate,
reducing the BER after the LDPC decoder.
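A minimal numerical sketch of the idea for a single BPSK symbol on a scalar
Gaussian channel: instead of plugging a point CSI estimate into the APP,
average the likelihood over an (assumed Gaussian) posterior of the channel
gain. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                    # channel noise standard deviation (assumed)
h_hat, sigma_h = 0.9, 0.2      # CSI estimate and its uncertainty (assumed)
y = 0.7                        # received sample

def app_point(y, h):
    # APP of x = +1 with a fixed channel estimate plugged in.
    l_pos = np.exp(-(y - h) ** 2 / (2 * sigma**2))
    l_neg = np.exp(-(y + h) ** 2 / (2 * sigma**2))
    return l_pos / (l_pos + l_neg)

def app_bayes(y, n_samp=100_000):
    # APP of x = +1 marginalized over the CSI posterior via Monte Carlo.
    h = rng.normal(h_hat, sigma_h, n_samp)
    l_pos = np.exp(-(y - h) ** 2 / (2 * sigma**2)).mean()
    l_neg = np.exp(-(y + h) ** 2 / (2 * sigma**2)).mean()
    return l_pos / (l_pos + l_neg)

print(f"plug-in APP {app_point(y, h_hat):.3f}, Bayesian APP {app_bayes(y):.3f}")
```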
|
1006.0871
|
Capacity for Half-Duplex Line Networks with Two Sources
|
cs.IT math.IT
|
The focus is on noise-free half-duplex line networks with two sources where
the first node and either the second node or the second-last node in the
cascade act as sources. In both cases, we establish the capacity region of
rates at which both sources can transmit independent information to a common
sink. The achievability scheme presented for the first case is constructive
while the achievability scheme for the second case is based on a random coding
argument.
|
1006.0876
|
Building a Data Warehouse for National Social Security Fund of the
Republic of Tunisia
|
cs.DB
|
The amounts of data available to decision makers are increasingly large,
given network availability, low-cost storage, and the diversity of
applications.
To maximize the potential of these data within the National Social Security
Fund (NSSF) in Tunisia, we have built a data warehouse as a multidimensional
database, cleaned, homogenized, historicized and consolidated. We used Oracle
Warehouse Builder to extract, transform and load the source data into the Data
Warehouse, by applying the KDD process. We have implemented the Data Warehouse
as an Oracle OLAP. The knowledge extraction has been performed using the Oracle
Discoverer tool. This allowed users to take maximum advantage of knowledge as a
regular report or as ad hoc queries. We started by implementing the main topic
for this public institution, accounting for the movements of insured persons.
The great success that has followed the completion of this work has encouraged
the NSSF to complete the achievement of other topics of interest within the
NSSF. We suggest, in the near future, using Multidimensional Data Mining to
extract hidden knowledge that is not discoverable by OLAP.
|
1006.0888
|
Fundamental Limits of Wideband Localization - Part I: A General
Framework
|
cs.IT cs.NI math.IT
|
The availability of positional information is of great importance in many
commercial, public safety, and military applications. The coming years will see
the emergence of location-aware networks with sub-meter accuracy, relying on
accurate range measurements provided by wide bandwidth transmissions. In this
two-part paper, we determine the fundamental limits of localization accuracy of
wideband wireless networks in harsh multipath environments. We first develop a
general framework to characterize the localization accuracy of a given node
in this first part, and then extend our analysis to cooperative
location-aware networks in Part II.
In this paper, we characterize localization accuracy in terms of a
performance measure called the squared position error bound (SPEB), and
introduce the notion of equivalent Fisher information to derive the SPEB in a
succinct expression. This methodology provides insights into the essence of the
localization problem by unifying localization information from individual
anchors and information from a priori knowledge of the agent's position in a
canonical form. Our analysis begins with the received waveforms themselves
rather than utilizing only the signal metrics extracted from these waveforms,
such as time-of-arrival and received signal strength. Hence, our framework
exploits all the information inherent in the received waveforms, and the
resulting SPEB serves as a fundamental limit of localization accuracy.
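As a pointer, a hedged restatement of how the SPEB is built from the
equivalent Fisher information (our reading of the abstract; notation may
differ from the paper):

```latex
% With p the agent's position, J_e(p) the equivalent Fisher information
% matrix, and \hat{p} any unbiased estimator of p:
\mathbb{E}\{\lVert \hat{\mathbf p}-\mathbf p \rVert^{2}\}
  \;\geq\;
\mathcal{P}(\mathbf p)
  \;\triangleq\;
\operatorname{tr}\{\mathbf J_{\mathrm e}^{-1}(\mathbf p)\}
```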
|
1006.0890
|
Fundamental Limits of Wideband Localization - Part II: Cooperative
Networks
|
cs.IT cs.NI math.IT
|
The availability of positional information is of great importance in many
commercial, governmental, and military applications. Localization is commonly
accomplished through the use of radio communication between mobile devices
(agents) and fixed infrastructure (anchors). However, precise determination of
agent positions is a challenging task, especially in harsh environments due to
radio blockage or limited anchor deployment. In these situations, cooperation
among agents can significantly improve localization accuracy and reduce
localization outage probabilities. A general framework of analyzing the
fundamental limits of wideband localization has been developed in Part I of the
paper. Here, we build on this framework and establish the fundamental limits of
wideband cooperative location-aware networks. Our analysis is based on the
waveforms received at the nodes, in conjunction with Fisher information
inequality. We provide a geometrical interpretation of equivalent Fisher
information for cooperative networks. This approach allows us to succinctly
derive fundamental performance limits and their scaling behaviors, and to treat
anchors and agents in a unified way from the perspective of localization
accuracy. Our results yield important insights into how and when cooperation is
beneficial.
|
1006.0964
|
On Achievable Rate Regions for Half-Duplex Causal Cognitive Radio
Channels
|
cs.IT math.IT
|
Coding for the causal cognitive radio channel, with the cognitive source
subjected to a half-duplex constraint, is studied. A discrete memoryless
channel model incorporating the half-duplex constraint is presented, and a new
achievable rate region is derived for this channel. It is proved that this rate
region contains the previously known causal achievable rate region of
\cite{Devroye06} for Gaussian channels.
|
1006.0991
|
Variational Program Inference
|
cs.AI
|
We introduce a framework for representing a variety of interesting problems
as inference over the execution of probabilistic model programs. We represent a
"solution" to such a problem as a guide program which runs alongside the model
program and influences the model program's random choices, leading the model
program to sample from a different distribution than its priors. Ideally,
the guide program influences the model program to sample from the posteriors
given the evidence. We show how the KL-divergence between the true posterior
distribution and the distribution induced by the guided model program can be
efficiently estimated (up to an additive constant) by sampling multiple
executions of the guided model program. In addition, we show how to use the
guide program as a proposal distribution in importance sampling to
statistically prove lower bounds on the probability of the evidence and on the
probability of a hypothesis and the evidence. We can use the quotient of these
two bounds as an estimate of the conditional probability of the hypothesis
given the evidence. We thus turn the inference problem into a heuristic search
for better guide programs.
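An illustrative sketch, on a tiny conjugate example, of the two quantities
discussed above: the KL between the guide-induced distribution and the true
posterior up to an additive constant, and the matching importance-sampling
(Jensen) lower bound on the evidence. The model, guide, and all names are
ours, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)
y = 1.5                                  # observed evidence (assumed)

def log_joint(z):                        # model program: z ~ N(0,1), y ~ N(z,1)
    return -0.5 * z**2 - 0.5 * (y - z) ** 2 - np.log(2 * np.pi)

def log_guide(z, mu, s):                 # guide program: z ~ N(mu, s)
    return -0.5 * ((z - mu) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))

mu, s = 0.7, 0.8                         # guide parameters (assumed)
z = rng.normal(mu, s, 100_000)           # sample guided executions
elbo = np.mean(log_joint(z) - log_guide(z, mu, s))  # lower bound on log p(y)
# KL(guide || posterior) = log p(y) - ELBO, so the additive constant log p(y)
# cancels when comparing guides; here it is known in closed form (y ~ N(0,2)).
log_py = -0.5 * np.log(2 * np.pi * 2.0) - y**2 / 4.0
print(f"ELBO {elbo:.4f}, log p(y) {log_py:.4f}, KL {log_py - elbo:.4f}")
```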
|
1006.1024
|
A Low-Complexity Joint Detection-Decoding Algorithm for Nonbinary
LDPC-Coded Modulation Systems
|
cs.IT math.IT
|
In this paper, we present a low-complexity joint detection-decoding algorithm
for nonbinary LDPC-coded modulation systems. The algorithm combines
hard-decision decoding using the message-passing strategy with the signal
detector in an iterative manner. It requires low computational complexity,
offers good system performance and has a fast rate of decoding convergence.
Compared to the q-ary sum-product algorithm (QSPA), it provides an attractive
candidate for practical applications of q-ary LDPC codes.
|
1006.1029
|
Chi-square-based scoring function for categorization of MEDLINE
citations
|
cs.IR stat.AP stat.ML
|
Objectives: Text categorization has been used in biomedical informatics for
identifying documents containing relevant topics of interest. We developed a
simple method that uses a chi-square-based scoring function to determine the
likelihood of MEDLINE citations containing genetically relevant topics.
Methods: Our
procedure requires construction of a genetic and a nongenetic domain document
corpus. We used MeSH descriptors assigned to MEDLINE citations for this
categorization task. We compared the frequencies of MeSH descriptors between
the two corpora by applying the chi-square test. A MeSH descriptor was
considered to be a
positive indicator if its relative observed frequency in the genetic domain
corpus was greater than its relative observed frequency in the nongenetic
domain corpus. The output of the proposed method is a list of scores for all
the citations, with the highest score given to those citations containing MeSH
descriptors typical for the genetic domain. Results: Validation was done on a
set of 734 manually annotated MEDLINE citations. It achieved predictive
accuracy of 0.87 with 0.69 recall and 0.64 precision. We evaluated the method
by comparing it to three machine learning algorithms (support vector machines,
decision trees, na\"ive Bayes). Although the differences were not
statistically significant, the results showed that our chi-square scoring
performs as well as the compared machine learning algorithms. Conclusions: We
suggest that the
chi-square scoring is an effective solution to help categorize MEDLINE
citations. The algorithm is implemented in the BITOLA literature-based
discovery support system as a preprocessor for gene symbol disambiguation
process.
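A toy sketch of the scoring pipeline with stand-in corpora (the real method
uses MeSH descriptors from full genetic and nongenetic MEDLINE corpora):

```python
from collections import Counter
from scipy.stats import chi2_contingency

genetic = [["Mutation", "Genes", "Humans"], ["Genes", "Alleles"]]
nongenetic = [["Humans", "Therapy"], ["Therapy", "Surgery", "Humans"]]

def descriptor_weights(pos_corpus, neg_corpus):
    pos, neg = Counter(), Counter()
    for doc in pos_corpus: pos.update(doc)
    for doc in neg_corpus: neg.update(doc)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    weights = {}
    for term in set(pos) | set(neg):
        a, b = pos[term], neg[term]
        chi2, _, _, _ = chi2_contingency([[a, n_pos - a], [b, n_neg - b]])
        # Positive indicator iff relatively more frequent in the genetic corpus.
        sign = 1 if a / n_pos > b / n_neg else -1
        weights[term] = sign * chi2
    return weights

w = descriptor_weights(genetic, nongenetic)
citation = ["Genes", "Humans"]           # MeSH descriptors of one citation
print("score:", sum(w.get(t, 0.0) for t in citation))
```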
|
1006.1030
|
Rasch-based high-dimensionality data reduction and class prediction with
applications to microarray gene expression data
|
cs.AI stat.AP stat.ME stat.ML
|
Class prediction is an important application of microarray gene expression
data analysis. The high dimensionality of microarray data, where the number
of genes (variables) is very large compared to the number of samples
(observations), makes the application of many prediction techniques (e.g.,
logistic regression, discriminant analysis) difficult. An efficient way to
solve this problem is by using dimension-reduction statistical techniques.
Increasingly used in psychology-related applications, the Rasch model (RM)
provides an appealing framework for handling high-dimensional microarray data.
In this paper, we study the potential of RM-based modeling in dimensionality
reduction with binarized microarray gene expression data and investigate its
prediction accuracy in the context of class prediction using linear
discriminant analysis.
Two different publicly available microarray data sets are used to illustrate a
general framework of the approach. The performance of the proposed method is
assessed by a re-randomization scheme using principal component analysis (PCA)
as
a benchmark method. Our results show that RM-based dimension reduction is as
effective as PCA-based dimension reduction. The method is general and can be
applied to other high-dimensional data problems.
|
1006.1055
|
Shannon Revisited: Considering a More Tractable Expression to Measure
and Manage Intractability, Uncertainty, Risk, Ignorance, and Entropy
|
cs.IT math.IT
|
Building on Shannon's lead, let's consider a more malleable expression for
tracking uncertainty, and states of "knowledge available" vs. "knowledge
missing," to better practice innovation, improve risk management, and
successfully measure progress of intractable undertakings. Shannon's formula
and its common replacements (Renyi, Tsallis) register increased knowledge
whenever two competing choices, however marginal, exchange probability
measures. Such distortions, among others, are corrected by anchoring
knowledge to a
reference challenge. Entropy then expresses progress towards meeting that
challenge. We introduce an 'interval of interest' outside which all probability
changes should be ignored. The resultant formula for Missing Acquirable
Relevant Knowledge (MARK) serves as a means to optimize intractable activities
involving knowledge acquisition, such as research, development, risk
management, and opportunity exploitation.
|
1006.1057
|
On improving security of GPT cryptosystems
|
cs.CR cs.IT math.IT
|
The public key cryptosystem based on rank error correcting codes (the GPT
cryptosystem) was proposed in 1991. Use of rank codes in cryptographic
applications is advantageous since it is practically impossible to utilize
combinatoric decoding. This enabled using public keys of a smaller size.
Several attacks against this system were published, including Gibson's attacks
and, more recently, Overbeck's attacks. A few modifications withstanding
Gibson's attacks were proposed, but at least one of them was broken by the
stronger attacks of Overbeck. A tool to prevent Overbeck's attack is presented
in [12]. In this paper, we apply this approach to other variants of the GPT
cryptosystem.
|
1006.1080
|
The Dilated Triple
|
cs.AI
|
The basic unit of meaning on the Semantic Web is the RDF statement, or
triple, which combines a distinct subject, predicate and object to make a
definite assertion about the world. A set of triples constitutes a graph, to
which they give a collective meaning. It is upon this simple foundation that
the rich, complex knowledge structures of the Semantic Web are built. Yet the
very expressiveness of RDF, by inviting comparison with real-world knowledge,
highlights a fundamental shortcoming, in that RDF is limited to statements of
absolute fact, independent of the context in which a statement is asserted.
This is in stark contrast with the thoroughly context-sensitive nature of human
thought. The model presented here provides a particularly simple means of
contextualizing an RDF triple by associating it with related statements in the
same graph. This approach, in combination with a notion of graph similarity, is
sufficient to select only those statements from an RDF graph which are
subjectively most relevant to the context of the requesting process.
|
1006.1129
|
Predictive PAC learnability: a paradigm for learning from exchangeable
input data
|
cs.LG
|
Exchangeable random variables form an important and well-studied
generalization of i.i.d. variables; however, simple examples show that no
nontrivial concept or function classes are PAC learnable under general
exchangeable data inputs $X_1,X_2,\ldots$. Inspired by the work of Berti and
Rigo on a Glivenko--Cantelli theorem for exchangeable inputs, we propose a new
paradigm, adequate for learning from exchangeable data: predictive PAC
learnability. A learning rule $\mathcal L$ for a function class $\mathscr F$ is
predictive PAC if for every $\epsilon,\delta>0$ and each function $f\in
{\mathscr F}$, whenever $|\sigma|\geq s(\delta,\epsilon)$, we have with
confidence $1-\delta$ that the expected difference between $f(X_{n+1})$ and
the image of $f\vert\sigma$ under $\mathcal L$ does not exceed $\epsilon$
conditionally on $X_1,X_2,\ldots,X_n$. Thus, instead of learning the function
$f$ as such, we are learning to a given accuracy $\epsilon$ the predictive
behaviour of $f$ at the
future points $X_i(\omega)$, $i>n$ of the sample path. Using de Finetti's
theorem, we show that if a universally separable function class $\mathscr F$ is
distribution-free PAC learnable under i.i.d. inputs, then it is
distribution-free predictive PAC learnable under exchangeable inputs, with a
slightly worse sample complexity.
|
1006.1138
|
Online Learning via Sequential Complexities
|
cs.LG stat.ML
|
We consider the problem of sequential prediction and provide tools to study
the minimax value of the associated game. Classical statistical learning theory
provides several useful complexity measures to study learning with i.i.d. data.
Our proposed sequential complexities can be seen as extensions of these
measures to the sequential setting. The developed theory is shown to yield
precise learning guarantees for the problem of sequential prediction. In
particular, we show necessary and sufficient conditions for online learnability
in the setting of supervised learning. Several examples show the utility of our
framework: we can establish learnability without having to exhibit an explicit
online learning algorithm.
|
1006.1149
|
The diversity-multiplexing tradeoff of the symmetric MIMO 2-user
interference channel
|
cs.IT math.IT
|
The fundamental diversity-multiplexing tradeoff (DMT) of the quasi-static
fading, symmetric $2$-user MIMO interference channel (IC) with channel state
information at the transmitters (CSIT) and a short-term average power
constraint is obtained. The general case is considered where the
interference-to-noise ratio (INR) at each receiver scales differently from the
signal-to-noise ratio (SNR) at the receivers. The achievability of the DMT is
proved by showing that a simple Han-Kobayashi coding scheme can achieve a rate
region which is within a constant (independent of SNR) number of bits from a
set of upper bounds to the capacity region of the IC. In general, only part of
the DMT curve with CSIT can be achieved by coding schemes which do not use any
CSIT (No-CSIT). A result in this paper establishes a threshold for the INR
beyond which the DMT with CSIT coincides with that with No-CSIT. Our result
also settles one of the conjectures made in~\cite{EaOlCv}. Furthermore, the
fundamental DMT of a class of non-symmetric ICs with No-CSIT is also obtained
wherein the two receivers have different numbers of antennas.
|
1006.1162
|
MIMO ARQ with Multi-bit Feedback: Outage Analysis
|
cs.IT math.IT
|
We study the asymptotic outage performance of incremental redundancy
automatic repeat request (INR-ARQ) transmission over the multiple-input
multiple-output (MIMO) block-fading channels with discrete input
constellations. We first show that transmission with random codes using a
discrete signal constellation across all transmit antennas achieves the optimal
outage diversity given by the Singleton bound. We then analyze the optimal
SNR-exponent and outage diversity of INR-ARQ transmission over the MIMO
block-fading channel. We show that a significant gain in outage diversity is
obtained by providing more than one bit feedback at each ARQ round. Thus, the
outage performance of INR-ARQ transmission can be remarkably improved with
minimal additional overhead. A suboptimal feedback and power adaptation rule,
which achieves the optimal outage diversity, is proposed for MIMO INR-ARQ,
demonstrating the benefits provided by multi-bit feedback.
|
1006.1172
|
Distributed Rateless Codes with UEP Property
|
cs.IT math.IT
|
When multiple sources of data need to transmit their rateless coded symbols
through a single relay to a common destination, a distributed rateless code
instead of several separate conventional rateless codes can be employed to
encode the input symbols to increase the transmission efficiency and
flexibility.
In this paper, we propose DU-rateless codes, distributed rateless codes that
can provide unequal error protection (UEP) for distributed sources with
different data block lengths and different importance levels. We analyze our
proposed DU-rateless codes employing the And-Or tree analysis technique.
Next, we design
several sets of optimum DU-rateless codes for various setups employing
multi-objective genetic algorithms and evaluate their performances.
|
1006.1184
|
An Algorithm to Self-Extract Secondary Keywords and Their Combinations
Based on Abstracts Collected using Primary Keywords from Online Digital
Libraries
|
cs.IR
|
The high-level contribution of this paper is the development and
implementation of an algorithm to self-extract secondary keywords and their
combinations (combo words) based on abstracts collected using standard primary
keywords for research areas from reputed online digital libraries such as IEEE
Xplore and PubMed Central. Given a collection of N abstracts, we arbitrarily
select M abstracts (M << N; M/N as low as 0.15) and parse each of
the M abstracts, word by word. Upon the first-time appearance of a word, we
query the user for classifying the word into an Accept-List or non-Accept-List.
The effectiveness of the training approach is evaluated by measuring the
percentage of words for which the user is queried for classification when the
algorithm parses through the words of each of the M abstracts. We observed that
as M grows larger, the percentage of words for which the user is queried for
classification reduces drastically. After the list of acceptable words is built
by parsing the M abstracts, we now parse all the N abstracts, word by word, and
count the frequency of appearance of each of the words in Accept-List in these
N abstracts. We also construct a Combo-Accept-List comprising all possible
combinations of the single keywords in Accept-List and parse all the N
abstracts, two successive words (combo word) at a time, and count the frequency
of appearance of each of the combo words in the Combo-Accept-List in these N
abstracts.
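A condensed sketch of the procedure, with the interactive user query stubbed
out by a fixed set so it runs unattended; all names are illustrative.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

def classify(word):                       # stand-in for querying the user
    return word in {"wireless", "sensor", "network", "routing"}

def train(abstracts_m):
    accept, reject, queries = set(), set(), 0
    for text in abstracts_m:
        for w in text.lower().split():
            if w not in accept and w not in reject:
                queries += 1              # first appearance: ask the user
                (accept if classify(w) else reject).add(w)
    return accept, queries

def count(abstracts_n, accept):
    singles, combos = Counter(), Counter()
    for text in abstracts_n:
        words = text.lower().split()
        singles.update(w for w in words if w in accept)
        combos.update((u, v) for u, v in pairwise(words)
                      if u in accept and v in accept)
    return singles, combos

docs = ["Wireless sensor network routing", "Routing in a wireless sensor network"]
accept, queries = train(docs[:1])         # M = 1 training abstract
print("user queried", queries, "times")
print(count(docs, accept))                # frequencies over all N abstracts
```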
|
1006.1187
|
Biometric Authentication using Nonparametric Methods
|
cs.CV
|
Physiological and behavioral traits are employed to develop biometric
authentication systems. The proposed work deals with the authentication of iris
and signature based on minimum variance criteria. The iris patterns are
preprocessed based on the area of the connected components. The segmented image
used for authentication consists of the region with large variations in the
gray-level values. The image region is split into quadtree components. The
components with minimum variance are determined from the training samples. Hu
moments are applied to the components. The summation of the moment values
corresponding to the minimum-variance components is provided as the input
vector to k-means and fuzzy k-means classifiers. The best performance was
obtained for the MMU database, consisting of 45 subjects. The number of
subjects with zero False Rejection Rate [FRR] was 44, and the number of
subjects with zero False Acceptance Rate [FAR] was 45. This paper also
addresses the computational load reduction in off-line signature verification
based on minimal features using k-means, fuzzy k-means, k-nn, fuzzy k-nn, and
a novel average-max approach. An FRR of 8.13% and an FAR of 10% were achieved
using the k-nn classifier. The signature is a biometric in which variation
across genuine instances is a natural expectation: certain parts of a genuine
signature vary from one instance to another. The system aims to be simple,
fast, and robust while using fewer features than state-of-the-art works.
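A toy sketch of the quadtree step referenced above: split an image region
recursively into quadrants and keep the minimum-variance leaves, on which the
moment features would then be computed (e.g., Hu moments via OpenCV). Depth
and the number of retained components are illustrative choices.

```python
import numpy as np

def quadtree(img, depth=2):
    """Return (variance, block) leaves of a fixed-depth quadtree split."""
    if depth == 0 or min(img.shape) < 2:
        return [(float(np.var(img)), img)]
    h, w = img.shape[0] // 2, img.shape[1] // 2
    leaves = []
    for block in (img[:h, :w], img[:h, w:], img[h:, :w], img[h:, w:]):
        leaves += quadtree(block, depth - 1)
    return leaves

rng = np.random.default_rng(0)
iris = rng.integers(0, 256, (64, 64)).astype(float)  # stand-in iris region
leaves = sorted(quadtree(iris), key=lambda t: t[0])
low_var = [block for _, block in leaves[:4]]  # minimum-variance components
print([block.shape for block in low_var])
```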
|
1006.1190
|
Game Information System
|
cs.AI
|
In this information-system age, many organizations regard information systems
as their weapon for competing, for gaining competitive advantage, or, in the
case of non-profit organizations, for delivering the best services. A Game
Information System, which combines an information system with a game, is a
breakthrough for improving organizational performance. A Game Information
System runs the information system through a game and shows how a game can be
implemented to run the information system. A game is not only for fun and
entertainment; the challenge is to combine fun and entertainment with an
information system, running the information system with entertainment and
delivering the entertainment through the information system all at once. A
Game Information System can be implemented in as many sectors as the
information system itself, but from a different point of view: a game view in
which people can be joyful and happy and carry out their transactions as a
fun activity.
|
1006.1210
|
Full vectoring optimal power allocation in xDSL channels under per-modem
power constraints and spectral mask constraints
|
cs.IT math.IT
|
In xDSL systems, crosstalk can be separated into two categories, namely
in-domain crosstalk and out-of-domain crosstalk. In-domain crosstalk is also
referred to as self crosstalk. Out-of-domain crosstalk is crosstalk
originating from outside the multi-pair system and is also denoted as external
noise (alien crosstalk, radio frequency interference, ...). While self
crosstalk in itself can easily be canceled by a linear detector such as the
zero-forcing (ZF) detector, the presence of external noise requires more
advanced processing. Coordination between
transmitters and receivers enables the self crosstalk and the external noise to
be mitigated using MIMO signal processing, usually by means of a whitening
filter and SVD. In this paper, we investigate the problem of finding the
optimal power allocation in MIMO xDSL systems in the presence of self crosstalk
and external noise. Optimal Tx/Rx structures and power allocation algorithms
will be devised under practical limitations from xDSL systems, namely per-modem
total power constraints and/or spectral mask constraints, leading to a
generalized SVD-based transmission. Simulation results are given for bonded
VDSL2 systems with external noise coming from ADSL2+ or VDSL2 disturbing lines,
along with a comparison between algorithms with one-sided signal coordination
either only at the transmit side or the receive side.
|
1006.1213
|
Optimal power allocation for downstream xDSL with per-modem total power
constraints : Broadcast Channel Optimal Spectrum Balancing (BC-OSB)
|
cs.IT math.IT
|
Recently, the duality between Multiple Input Multiple Output (MIMO) Multiple
Access Channels (MAC) and MIMO Broadcast Channels (BC) has been established
under a total power constraint. The same set of rates for MAC can be achieved
in BC exploiting the MAC-BC duality formulas while preserving the total power
constraint. In this paper, we describe the BC optimal power allocation
applying this duality in a downstream x-Digital Subscriber Lines (xDSL)
context
under a total power constraint for all modems over all tones. Then, a new
algorithm called BC-Optimal Spectrum Balancing (BC-OSB) is devised for a more
realistic power allocation under per-modem total power constraints. The
capacity region of the primal BC problem under per-modem total power
constraints is found by the dual optimization problem for the BC under
per-modem total power constraints which can be rewritten as a dual optimization
problem in the MAC by means of a precoder matrix based on the Lagrange
multipliers. We show that the duality gap between the two problems is zero. The
multi-user power allocation problem has been solved for interference channels
and MAC using the OSB algorithm. In this paper we solve the problem of
multi-user power allocation for the BC case using the OSB algorithm as well and
we derive a computationally efficient algorithm that will be referred to as
BC-OSB. Simulation results are provided for two VDSL2 scenarios: the first one
with Differential-Mode (DM) transmission only and the second one with both DM
and Phantom-Mode (PM) transmissions.
|
1006.1288
|
Regression on fixed-rank positive semidefinite matrices: a Riemannian
approach
|
cs.LG
|
The paper addresses the problem of learning a regression model parameterized
by a fixed-rank positive semidefinite matrix. The focus is on the nonlinear
nature of the search space and on scalability to high-dimensional problems. The
mathematical developments rely on the theory of gradient descent algorithms
adapted to the Riemannian geometry that underlies the set of fixed-rank
positive semidefinite matrices. In contrast with previous contributions in the
literature, no restrictions are imposed on the range space of the learned
matrix. The resulting algorithms maintain a linear complexity in the problem
size and enjoy important invariance properties. We apply the proposed
algorithms to the problem of learning a distance function parameterized by a
positive semidefinite matrix. Good performance is observed on classical
benchmarks.
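In the spirit of the distance-learning application, a simple sketch that
learns W = G G^T of fixed rank r by gradient descent on the factor G. This
flat factorized parameterization is a stand-in for, not a reproduction of,
the paper's Riemannian algorithms; the step size is hand-tuned.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 10, 2, 2000
G_true = rng.standard_normal((d, r)) / np.sqrt(d)
X, Y = rng.standard_normal((n, d)), rng.standard_normal((n, d))
D = X - Y
t = ((D @ G_true) ** 2).sum(axis=1)     # target distances (x-y)^T W (x-y)

G = 0.1 * rng.standard_normal((d, r))   # rank-r factor of the learned W
lr = 0.02
for it in range(801):
    P = D @ G
    resid = (P**2).sum(axis=1) - t      # prediction error per pair
    G -= lr * 4.0 * D.T @ (resid[:, None] * P) / n  # grad of mean residual^2
    if it % 200 == 0:
        print(f"iter {it}: mse {np.mean(resid**2):.5f}")
```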
|
1006.1309
|
Using Grid Files for a Relational Database Management System
|
cs.DB
|
This paper describes our experience with using Grid files as the main storage
organization for a relational database management system. We primarily focus on
the following two aspects. (i) Strategies for implementing grid files
efficiently. (ii) Methods for efficiently evaluating queries posed to a database
organized using grid files.
|
1006.1328
|
Uncovering the Riffled Independence Structure of Rankings
|
cs.LG cs.AI stat.AP stat.ML
|
Representing distributions over permutations can be a daunting task due to
the fact that the number of permutations of $n$ objects scales factorially in
$n$. One recent way that has been used to reduce storage complexity has been to
exploit probabilistic independence, but as we argue, full independence
assumptions impose strong sparsity constraints on distributions and are
unsuitable for modeling rankings. We identify a novel class of independence
structures, called \emph{riffled independence}, encompassing a more expressive
family of distributions while retaining many of the properties necessary for
performing efficient inference and reducing sample complexity. In riffled
independence, one draws two permutations independently, then performs the
\emph{riffle shuffle}, common in card games, to combine the two permutations to
form a single permutation. Within the context of ranking, riffled independence
corresponds to ranking disjoint sets of objects independently, then
interleaving those rankings. In this paper, we provide a formal introduction to
riffled independence and present algorithms for using riffled independence
within Fourier-theoretic frameworks which have been explored by a number of
recent papers. Additionally, we propose an automated method for discovering
sets of items which are riffle independent from a training set of rankings. We
show that our clustering-like algorithms can be used to discover meaningful
latent coalitions from real preference ranking datasets and to learn the
structure of hierarchically decomposable models based on riffled independence.
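A small sketch of sampling from a riffle-independent model: rank two disjoint
item sets independently, then interleave them with a uniformly random riffle
(the general model allows non-uniform interleaving distributions).

```python
import random

def riffle(perm_a, perm_b):
    """Interleave two rankings, preserving the relative order within each."""
    slots = ["a"] * len(perm_a) + ["b"] * len(perm_b)
    random.shuffle(slots)                 # uniform interleaving pattern
    it_a, it_b = iter(perm_a), iter(perm_b)
    return [next(it_a) if s == "a" else next(it_b) for s in slots]

random.seed(0)
fruits, veggies = ["apple", "pear"], ["corn", "kale", "leek"]
ranking_a = random.sample(fruits, len(fruits))    # independent ranking of set A
ranking_b = random.sample(veggies, len(veggies))  # independent ranking of set B
print(riffle(ranking_a, ranking_b))
```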
|
1006.1343
|
Segmentation and Nodal Points in Narrative: Study of Multiple Variations
of a Ballad
|
cs.CL stat.ML
|
The Lady Maisry ballads afford us a framework within which to segment a
storyline into its major components. Segments and as a consequence nodal points
are discussed for nine different variants of the Lady Maisry story of a (young)
woman being burnt to death by her family, on account of her becoming pregnant
by a foreign personage. We motivate the importance of nodal points in textual
and literary analysis. We show too how the openings of the nine variants can be
analyzed comparatively, and also the conclusions of the ballads.
|
1006.1346
|
C-HiLasso: A Collaborative Hierarchical Sparse Modeling Framework
|
stat.ML cs.CV
|
Sparse modeling is a powerful framework for data analysis and processing.
Traditionally, encoding in this framework is performed by solving an
L1-regularized linear regression problem, commonly referred to as Lasso or
Basis Pursuit. In this work we combine the sparsity-inducing property of the
Lasso model at the individual feature level, with the block-sparsity property
of the Group Lasso model, where sparse groups of features are jointly encoded,
obtaining a hierarchically structured sparsity pattern. This results in the
Hierarchical Lasso (HiLasso), which shows important practical modeling
advantages. We then extend this approach to the collaborative case, where a set
of simultaneously coded signals share the same sparsity pattern at the higher
(group) level, but not necessarily at the lower (inside the group) level,
obtaining the collaborative HiLasso model (C-HiLasso). Such signals then share
the same active groups, or classes, but not necessarily the same active set.
This model is very well suited for applications such as source identification
and separation. An efficient optimization procedure, which guarantees
convergence to the global optimum, is developed for these new models. The
presentation of the new framework and optimization approach is
complemented with experimental examples and theoretical results regarding
recovery guarantees for the proposed models.
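A sketch of one way to solve a HiLasso-type problem: ISTA with the proximal
operator of the combined L1 + group-L2 penalty, which is elementwise
soft-thresholding followed by group-wise shrinkage. Group structure,
penalties, and step size are illustrative choices, not the paper's.

```python
import numpy as np

def prox_sparse_group(v, groups, t1, t2):
    """Prox of t1*||.||_1 + t2*sum_g ||._g||_2: soft-threshold, then shrink groups."""
    x = np.sign(v) * np.maximum(np.abs(v) - t1, 0.0)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        x[g] = 0.0 if nrm <= t2 else (1.0 - t2 / nrm) * x[g]
    return x

rng = np.random.default_rng(0)
m, p = 30, 40
A = rng.standard_normal((m, p)) / np.sqrt(m)
x_true = np.zeros(p); x_true[[2, 5]] = [1.5, -2.0]   # active inside one group
y = A @ x_true + 0.01 * rng.standard_normal(m)

groups = [np.arange(i, i + 10) for i in range(0, p, 10)]
x, step, lam1, lam2 = np.zeros(p), 0.1, 0.05, 0.1
for _ in range(500):                                 # ISTA iterations
    x = prox_sparse_group(x - step * (A.T @ (A @ x - y)), groups,
                          step * lam1, step * lam2)
print("recovered support:", np.nonzero(x)[0])
```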
|
1006.1377
|
Joint Bandwidth and Power Allocation with Admission Control in Wireless
Multi-User Networks With and Without Relaying
|
cs.IT math.IT
|
Equal allocation of bandwidth and/or power may not be efficient for wireless
multi-user networks with limited bandwidth and power resources. Joint bandwidth
and power allocation strategies for wireless multi-user networks with and
without relaying are proposed in this paper for (i) the maximization of the sum
capacity of all users; (ii) the maximization of the worst user capacity; and
(iii) the minimization of the total power consumption of all users. It is shown
that the proposed allocation problems are convex and, therefore, can be solved
efficiently. Moreover, the admission control based joint bandwidth and power
allocation is considered. A suboptimal greedy search algorithm is developed to
solve the admission control problem efficiently. The conditions under which the
greedy search is optimal are derived and shown to be mild. The performance
improvements offered by the proposed joint bandwidth and power allocation are
demonstrated by simulations. The advantages of the suboptimal greedy search
algorithm for admission control are also shown.
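A compact CVXPY sketch of the sum-capacity variant: maximize
sum_i w_i*log(1 + g_i*p_i/w_i) under total bandwidth and power budgets,
writing the concave perspective term through the kl_div atom
(w*log(1 + y/w) = y - kl_div(w, w + y)). Gains and budgets are made up, and an
exponential-cone-capable solver (e.g., the bundled SCS or ECOS) is assumed.

```python
import cvxpy as cp
import numpy as np

g = np.array([1.0, 0.5, 2.0, 0.8])   # normalized channel gains (assumed)
W_tot, P_tot = 1.0, 4.0              # bandwidth and power budgets (assumed)

w = cp.Variable(4, nonneg=True)      # bandwidth shares
p = cp.Variable(4, nonneg=True)      # transmit powers
y = cp.multiply(g, p)
rates = y - cp.kl_div(w, w + y)      # elementwise w*log(1 + g*p/w), in nats
prob = cp.Problem(cp.Maximize(cp.sum(rates)),
                  [cp.sum(w) <= W_tot, cp.sum(p) <= P_tot])
prob.solve()
print("bandwidth:", np.round(w.value, 3), "power:", np.round(p.value, 3))
```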
|
1006.1380
|
Pareto Region Characterization for Rate Control in Multi-User Systems
and Nash Bargaining
|
cs.GT cs.IT math.IT
|
The problem of rate control in multi-user multiple-input multiple-output
(MIMO) interference systems is formulated as a multicriteria optimization (MCO)
problem. The Pareto rate region of the MCO problem is characterized. It is
shown that for the convexity of the Pareto rate region it is sufficient that
the interference-plus-noise covariance matrices (INCMs) of multiple users with
conflicting objectives approach identity matrix. The latter can be achieved by
using either orthogonal signaling, time-sharing, or interference cancellation
strategies. In the case of high interference, the interference cancellation is
preferable in order to increase the Pareto boundary and guarantee the convexity
of the Pareto rate region. The Nash bargaining (NB) is applied to transform the
MCO problem into a single-objective one. The characteristics of the NB over
MIMO interference systems such as the uniqueness, existence of the NB solution,
and feasibility of the NB set are investigated. When the NB solution exists,
the sufficient condition for the corresponding single-objective problem to have
a unique solution is that the INCMs of users approach the identity matrix. A
simple
multi-stage interference cancellation scheme, which leads to a larger convex
Pareto rate region and, correspondingly, a unique NB solution with larger user
rates compared to the orthogonal and time-sharing signaling schemes, is
proposed. The convexity of the rate region, effectiveness of the proposed
interference cancellation technique, and existence of the NB solution for MIMO
interference systems are examined by means of numerical studies. The fairness
of the NB solution is also demonstrated. Finally, the special cases of
multi-input single-output (MISO) and single-input single-output (SISO)
interference systems are also considered.
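A two-user numerical sketch of the bargaining step: once the rate region is
convex, the NB solution maximizes the product of rate gains over the
disagreement point. The time-sharing region and all numbers are toy choices.

```python
import numpy as np
from scipy.optimize import minimize

a, b = 4.0, 6.0    # single-user rates spanning the region R1/a + R2/b <= 1
d1, d2 = 0.5, 0.5  # disagreement point (assumed)

res = minimize(lambda r: -(r[0] - d1) * (r[1] - d2),   # Nash product
               x0=[1.0, 1.0],
               bounds=[(d1, a), (d2, b)],
               constraints=[{"type": "ineq",
                             "fun": lambda r: 1.0 - r[0] / a - r[1] / b}])
print("NB rates:", np.round(res.x, 3))  # analytic optimum here: (25/12, 23/8)
```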
|
1006.1382
|
On Regret of Parametric Mismatch in Minimum Mean Square Error Estimation
|
cs.IT math.IT
|
This paper studies the effect of parametric mismatch in minimum mean square
error (MMSE) estimation. In particular, we consider the problem of estimating
the input signal from the output of an additive white Gaussian channel whose
gain is fixed, but unknown. The input distribution is known, and the estimation
process consists of two algorithms. First, a channel estimator blindly
estimates the channel gain using past observations. Second, a mismatched MMSE
estimator, optimized for the estimated channel gain, estimates the input
signal. We analyze the regret, i.e., the additional mean square error, that
arises in this process. We derive upper bounds on both absolute and relative
regrets. Bounds are expressed in terms of the Fisher information. We also study
regret for unbiased, efficient channel estimators, and derive a simple
trade-off between Fisher information and relative regret. This trade-off shows
that the product of a certain function of relative regret and Fisher
information equals the signal-to-noise ratio, independent of the input
distribution. The trade-off relation implies that higher Fisher information
results in smaller expected relative regret.
|
1006.1383
|
Efficient Symbol Sorting for High Intermediate Recovery Rate of LT Codes
|
cs.IT math.IT
|
LT codes are modern and efficient rateless forward error correction (FEC)
codes with close-to-capacity performance. Nevertheless, in the intermediate
range, where the number of received encoded symbols is less than the number of
source symbols, LT codes have very low recovery rates.
In this paper, we propose a novel algorithm which significantly increases the
intermediate recovery rate of LT codes, while preserving the codes'
close-to-capacity performance. To increase the intermediate recovery rate, our
proposed algorithm rearranges the transmission order of the encoded symbols
exploiting their structure, their transmission history, and an estimate of the
channel's erasure rate. We implement our algorithm for conventional LT codes,
and numerically evaluate its performance.
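A compact encoder-plus-peeling-decoder sketch for measuring the intermediate
recovery rate; the degree distribution is a toy stand-in for a soliton
distribution, and the proposed reordering algorithm itself is not reproduced
here.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 100                                    # number of source symbols

def sample_degree():
    # Toy degree distribution, mostly small degrees (soliton stand-in).
    return rng.choice([1, 2, 3, 4, 8], p=[0.1, 0.5, 0.2, 0.15, 0.05])

def peel(received):
    """Release degree-one symbols iteratively; return # of recovered sources."""
    recovered, progress = set(), True
    while progress:
        progress = False
        for neighbors in received:
            rest = neighbors - recovered
            if len(rest) == 1:             # degree one after substitution
                recovered |= rest
                progress = True
    return len(recovered)

encoded = [frozenset(rng.choice(k, size=sample_degree(), replace=False))
           for _ in range(k)]
for n in (40, 60, 80, 100):
    print(f"received {n}: recovered {peel(encoded[:n])}/{k} source symbols")
```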
|