| id | title | categories | abstract |
|---|---|---|---|
0906.0052
|
A Minimum Description Length Approach to Multitask Feature Selection
|
cs.LG cs.AI
|
Many regression problems involve not one but several response variables
(y's). Often the responses are suspected to share a common underlying
structure, in which case it may be advantageous to share information across
them; this is known as multitask learning. As a special case, we can use
multiple responses to better identify shared predictive features -- a project
we might call multitask feature selection.
This thesis is organized as follows. Section 1 introduces feature selection
for regression, focusing on ell_0 regularization methods and their
interpretation within a Minimum Description Length (MDL) framework. Section 2
proposes a novel extension of MDL feature selection to the multitask setting.
The approach, called the "Multiple Inclusion Criterion" (MIC), is designed to
borrow information across regression tasks by more easily selecting features
that are associated with multiple responses. We show in experiments on
synthetic and real biological data sets that MIC can reduce prediction error in
settings where features are at least partially shared across responses. Section
3 surveys hypothesis testing by regression with a single response, focusing on
the parallel between the standard Bonferroni correction and an MDL approach.
Mirroring the ideas in Section 2, Section 4 proposes a novel MIC approach to
hypothesis testing with multiple responses and shows that on synthetic data
with significant sharing of features across responses, MIC sometimes
outperforms standard FDR-controlling methods in terms of finding true positives
for a given level of false positives. Section 5 concludes.
|
0906.0060
|
A Walk in Facebook: Uniform Sampling of Users in Online Social Networks
|
cs.SI cs.NI physics.data-an physics.soc-ph stat.ME
|
Our goal in this paper is to develop a practical framework for obtaining a
uniform sample of users in an online social network (OSN) by crawling its
social graph. Such a sample allows us to estimate any user property, as well as
some topological properties. To this end, we first consider and compare
several candidate crawling techniques. Two approaches that can produce
approximately uniform samples are the Metropolis-Hasting random walk (MHRW) and
a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate
through a comparison to each other as well as to the "ground truth." In
contrast, using Breadth-First-Search (BFS) or an unadjusted Random Walk (RW)
leads to substantially biased results. Second, and in addition to offline
performance assessment, we introduce online formal convergence diagnostics to
assess sample quality during the data collection process. We show how these
diagnostics can be used to effectively determine when a random walk sample is
of adequate size and quality. Third, as a case study, we apply the above
methods to Facebook and we collect the first, to the best of our knowledge,
representative sample of Facebook users. We make it publicly available and
employ it to characterize several key properties of Facebook.
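The degree-correcting idea behind MHRW can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's implementation: the walker proposes a uniformly chosen neighbor and accepts the move with probability min(1, deg(u)/deg(v)), which makes the stationary distribution uniform over nodes rather than proportional to degree.

```python
import random

def mhrw_sample(adj, start, steps, seed=0):
    """Metropolis-Hastings random walk over an undirected graph given as an
    adjacency dict.  A uniformly chosen neighbor v of the current node u is
    accepted with probability min(1, deg(u)/deg(v)); otherwise the walk stays
    at u.  The resulting stationary distribution is uniform over nodes."""
    rng = random.Random(seed)
    u = start
    visits = {node: 0 for node in adj}
    for _ in range(steps):
        v = rng.choice(adj[u])
        if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v  # accept the proposed move
        visits[u] += 1
    return visits

# Toy graph: node 0 is a hub that a plain random walk would oversample.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
counts = mhrw_sample(adj, start=0, steps=200_000)
# Visit fractions should be close to 1/5 for every node, hub included.
```

An unadjusted random walk on the same graph would visit the hub roughly in proportion to its degree, which is exactly the bias the paper measures for BFS and RW.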
|
0906.0065
|
Managing Distributed MARF with SNMP
|
cs.DC cs.CV
|
This project focuses on researching and prototyping an extension of
Distributed MARF such that its services can be managed through the most
popular management protocol, SNMP. The rationale for SNMP over MARF's
proprietary management protocols is integration with common network service
and device management: administrators can manage MARF nodes via an already
familiar protocol, as well as monitor their performance, gather statistics,
and set desired configurations, perhaps using the same management tools
they have been using for other network devices and application servers.
|
0906.0080
|
Reverse method for labeling the information from semi-structured web
pages
|
cs.IR cs.DS
|
We propose a new technique to infer the structure of, and extract data
tokens from, semi-structured web sources that are generated from a
consistent template or layout with implicit regularities. The attributes
are extracted and labeled in reverse, starting from the region of interest
of the targeted contents. This contrasts with existing techniques, which
always generate the trees from the root. We argue and show that our
technique is simpler, more accurate, and especially effective at detecting
changes in the templates of targeted web pages.
|
0906.0211
|
Equations of States in Statistical Learning for a Nonparametrizable and
Regular Case
|
cs.LG
|
Many learning machines with hierarchical structure or hidden variables are
now being used in information science, artificial intelligence, and
bioinformatics. However, several learning machines used in such fields are
not regular but singular statistical models, so their generalization
performance remains unknown. To overcome this problem, in previous papers we
proved new equations in statistical learning by which the Bayes
generalization loss can be estimated from the Bayes training loss and the
functional variance, on the condition that the true distribution is a
singularity contained in the learning machine. In this paper, we prove that
the same equations hold even if the true distribution is not contained in
the parametric model. We also prove that the proposed equations in the
regular case are asymptotically equivalent to the Takeuchi information
criterion. Therefore, the proposed equations are applicable without any
condition on the unknown true distribution.
|
0906.0231
|
Solving $k$-Nearest Neighbor Problem on Multiple Graphics Processors
|
cs.IR cs.DS cs.NE
|
A recommendation system is a software system that predicts customers' unknown
preferences from known preferences. In a recommendation system, customers'
preferences are encoded into vectors, and finding the vectors nearest to each
vector is an essential part; this vector-searching part of the problem is
called a $k$-nearest neighbor problem. We give an effective algorithm to solve
this problem on multiple graphics processing units (GPUs).
Our algorithm consists of two parts: an $N$-body problem and a partial sort.
For the $N$-body part, we applied the idea of a known algorithm for the
$N$-body problem in physics, although an additional trick is needed to
overcome the small size of shared memory. For the partial sort, we give a
novel GPU algorithm that is effective for small $k$: a heap is accessed in
parallel by threads with a low cost of synchronization. Both parts of our
algorithm make maximal use of coalesced memory access, so that full bandwidth
is achieved.
Experiments show that when the problem size is large, an implementation of
the algorithm on two GPUs runs more than 330 times faster than a single-core
implementation on a recent CPU. We also show that our algorithm scales well
with the number of GPUs.
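The CPU analogue of the partial-sort idea (keeping only the k best candidates in a bounded heap instead of fully sorting all N distances) can be sketched as follows; the function and data are illustrative, not the paper's GPU code:

```python
import heapq

def knn(query, points, k):
    """Return the k nearest points to `query` as (squared distance, index)
    pairs, using a bounded max-heap of size k: each new distance is compared
    only against the current k-th best, so no full sort of all N distances
    is ever performed."""
    heap = []  # max-heap simulated by negating distances
    for idx, p in enumerate(points):
        d = sum((a - b) ** 2 for a, b in zip(query, p))
        if len(heap) < k:
            heapq.heappush(heap, (-d, idx))
        elif -d > heap[0][0]:  # closer than the current k-th nearest
            heapq.heapreplace(heap, (-d, idx))
    return sorted((-negd, idx) for negd, idx in heap)

pts = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0), (0.5, 0.0)]
nearest = knn((0.0, 0.0), pts, k=2)
# nearest -> [(0.0, 0), (0.25, 3)]
```

The paper's contribution is doing this with many threads sharing the heap under cheap synchronization; the sequential version above only shows why a heap of size k suffices for small k.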
|
0906.0247
|
Coded Modulation with Mismatched CSIT over Block-Fading Channels
|
cs.IT math.IT
|
Reliable communication over delay-constrained block-fading channels with
discrete inputs and mismatched (imperfect) channel state information at the
transmitter (CSIT) is studied. The CSIT mismatch is modeled as Gaussian random
variables, whose variances decay as a power of the signal-to-noise ratio (SNR).
A special focus is placed on the large-SNR decay of the outage probability when
power control with long-term power constraints is used. Without explicitly
characterizing the corresponding power allocation algorithms, we derive the
outage exponent as a function of the system parameters, including the CSIT
noise variance exponent and the exponent of the peak power constraint. It is
shown that CSIT, even if noisy, is always beneficial and leads to important
gains in terms of exponents. It is also shown that when multidimensional
rotations or precoders are used at the transmitter, further exponent gains can
be attained, but at the expense of larger decoding complexity.
|
0906.0249
|
Faster Projection in Sphere Decoding
|
cs.IT math.IT
|
Most of the calculations in standard sphere decoders are redundant, in the
sense that they either calculate quantities that are never used or calculate
some quantities more than once. A new method, which is applicable to lattices
as well as finite constellations, is proposed to avoid these redundant
calculations while still returning the same result. Pseudocode is given to
facilitate immediate implementation. Simulations show that the speed gain with
the proposed method increases linearly with the lattice dimension. At dimension
60, the new algorithms avoid about 75% of all floating-point operations.
|
0906.0252
|
Progressive Processing of Continuous Range Queries in Hierarchical
Wireless Sensor Networks
|
cs.DB
|
In this paper, we study the problem of processing continuous range queries in
a hierarchical wireless sensor network. Contrasted with the traditional
approach of building networks in a "flat" structure using sensor devices of the
same capability, the hierarchical approach deploys devices of higher capability
in a higher tier, i.e., a tier closer to the server. While query processing in
flat sensor networks has been widely studied, the study on query processing in
hierarchical sensor networks has been inadequate. In wireless sensor networks,
the main costs that should be considered are the energy for sending data and
the storage for storing queries. There is a trade-off between these two costs.
Based on this, we first propose a progressive processing method that
effectively processes a large number of continuous range queries in
hierarchical sensor networks. The proposed method uses the query merging
technique proposed by Xiang et al. as the basis and additionally considers the
trade-off between the two costs. More specifically, it works toward reducing
the storage cost at lower-tier nodes by merging more queries, and toward
reducing the energy cost at higher-tier nodes by merging fewer queries (thereby
reducing "false alarms"). We then present how to build a hierarchical sensor
network that is optimal with respect to the weighted sum of the two costs. It
allows for a cost-based systematic control of the trade-off based on the
relative importance between the storage and energy in a given network
environment and application. Experimental results show that the proposed method
achieves a near-optimal control between the storage and energy and reduces the
cost by 0.989~84.995 times compared with the cost achieved using the flat
(i.e., non-hierarchical) setup as in the work by Xiang et al.
|
0906.0298
|
Delay-Optimal Power and Precoder Adaptation for Multi-stream MIMO
Systems
|
cs.IT math.IT
|
In this paper, we consider delay-optimal MIMO precoder and power allocation
design for a MIMO Link in wireless fading channels. There are $L$ data streams
spatially multiplexed onto the MIMO link with heterogeneous packet arrivals and
delay requirements. The transmitter is assumed to have knowledge of the channel
state information (CSI) as well as the joint queue state information (QSI) of
the $L$ buffers. Using $L$-dimensional Markov Decision Process (MDP), we obtain
optimal precoding and power allocation policies for general delay regime, which
consists of an online solution and an offline solution. The online solution has
negligible complexity but the offline solution has worst case complexity ${\cal
O}((N+1)^L)$ where $N$ is the buffer size. Using {\em static sorting} of the
$L$ eigenchannels, we decompose the MDP into $L$ independent 1-dimensional
subproblems and obtain a low-complexity offline solution with linear
complexity order ${\cal O}(NL)$ and close-to-optimal performance.
|
0906.0311
|
Solar radiation forecasting using ad-hoc time series preprocessing and
neural networks
|
cs.AI cs.NA physics.data-an
|
In this paper, we present an application of neural networks in the renewable
energy domain. We have developed a methodology for the daily prediction of
global solar radiation on a horizontal surface. We use an ad-hoc time series
preprocessing and a Multi-Layer Perceptron (MLP) in order to predict solar
radiation at daily horizon. First results are promising with nRMSE < 21% and
RMSE < 998 Wh/m2. Our optimized MLP presents prediction similar to or even
better than conventional methods such as ARIMA techniques, Bayesian inference,
Markov chains, and k-Nearest-Neighbors approximators. Moreover, we found that
our data preprocessing approach can significantly reduce forecasting errors.
|
0906.0330
|
Information-Theoretic Inequalities on Unimodular Lie Groups
|
cs.IT math-ph math.IT math.MP
|
Classical inequalities used in information theory such as those of de Bruijn,
Fisher, and Kullback carry over from the setting of probability theory on
Euclidean space to that of unimodular Lie groups. These are groups that possess
integration measures that are invariant under left and right shifts, which
means that even in noncommutative cases they share many of the useful features
of Euclidean space. In practical engineering terms the rotation group and
Euclidean motion group are the unimodular Lie groups of most interest, and the
development of information theory applicable to these Lie groups opens up the
potential to study problems relating to image reconstruction from irregular or
random projection directions, information gathering in mobile robotics,
satellite attitude control, and bacterial chemotaxis and information
processing. Several definitions are extended from the Euclidean case to that of
Lie groups including the Fisher information matrix, and inequalities analogous
to those in classical information theory are derived and stated in the form of
fifteen small theorems. In all such inequalities, addition of random variables
is replaced with the group product, and the appropriate generalization of
convolution of probability densities is employed.
|
0906.0434
|
Total Variation, Adaptive Total Variation and Nonconvex Smoothly Clipped
Absolute Deviation Penalty for Denoising Blocky Images
|
cs.CV cs.NA stat.ME
|
The total variation-based image denoising model has been generalized and
extended in numerous ways, improving its performance in different contexts. We
propose a new penalty function motivated by the recent progress in the
statistical literature on high-dimensional variable selection. Using a
particular instantiation of the majorization-minimization algorithm, the
optimization problem can be efficiently solved and the computational procedure
realized is similar to the spatially adaptive total variation model. Our
two-pixel image model shows theoretically that the new penalty function solves
the bias problem inherent in the total variation model. The superior
performance of the new penalty is demonstrated through several experiments. Our
investigation is limited to "blocky" images which have small total variation.
|
0906.0470
|
An optimal linear separator for the Sonar Signals Classification task
|
cs.LG
|
The problem of classifying sonar signals from rocks and mines, first studied
by Gorman and Sejnowski, has become a benchmark against which many learning
algorithms have been tested. We show that both the training set and the test
set of this benchmark are linearly separable, although with different
hyperplanes. Moreover, the complete set of learning and test patterns together
is also linearly separable. We give the weights that separate these sets,
which may be used to compare results found by other algorithms.
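A hedged sketch of how linear separability can be certified in practice: by the perceptron convergence theorem, the plain perceptron terminates with zero training mistakes exactly when a separating hyperplane exists. The 2-D data below is an illustrative stand-in for the 60-dimensional sonar features, not the benchmark itself.

```python
def perceptron(samples, labels, epochs=100):
    """Plain perceptron training; labels are in {-1, +1}.  It reaches a full
    pass with zero mistakes if and only if the data are linearly separable,
    in which case (w, b) is a separating hyperplane."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(samples, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]  # update toward x
                b += y
                mistakes += 1
        if mistakes == 0:
            return w, b  # a clean pass: data separated
    return None

# Illustrative 2-D stand-in for the 60-dimensional sonar features.
X = [(1.0, 2.0), (2.0, 3.0), (-1.0, -1.0), (-2.0, -0.5)]
y = [1, 1, -1, -1]
result = perceptron(X, y)
```

If `result` is not `None`, the returned weights witness separability in exactly the sense the abstract claims for the sonar sets.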
|
0906.0531
|
Medium Access Control Protocols With Memory
|
cs.NI cs.IT math.IT
|
Many existing medium access control (MAC) protocols utilize past information
(e.g., the results of transmission attempts) to adjust the transmission
parameters of users. This paper provides a general framework to express and
evaluate distributed MAC protocols utilizing a finite length of memory for a
given form of feedback information. We define protocols with memory in the
context of a slotted random access network with saturated arrivals. We
introduce two performance metrics, throughput and average delay, and formulate
the problem of finding an optimal protocol. We first show that a TDMA outcome,
which is the best outcome in the considered scenario, can be obtained after a
transient period by a protocol with (N-1)-slot memory, where N is the total
number of users. Next, we analyze the performance of protocols with 1-slot
memory using a Markov chain and numerical methods. Protocols with 1-slot memory
can achieve throughput arbitrarily close to 1 (i.e., 100% channel utilization)
at the expense of large average delay, by correlating successful users in two
consecutive slots. Finally, we apply our framework to wireless local area
networks.
|
0906.0550
|
On linear completely regular codes with covering radius $\rho=1$.
Construction and classification
|
cs.IT math.IT
|
Completely regular codes with covering radius $\rho=1$ must have minimum
distance $d\leq 3$. For $d=3$, such codes are perfect and their parameters are
well known. In this paper, the cases $d=1$ and $d=2$ are studied and completely
characterized when the codes are linear. Moreover, it is proven that all these
codes are completely transitive.
|
0906.0612
|
Community detection in graphs
|
physics.soc-ph cond-mat.stat-mech cs.IR physics.bio-ph physics.comp-ph q-bio.QM
|
The modern science of networks has brought significant advances to our
understanding of complex systems. One of the most relevant features of graphs
representing real systems is community structure, or clustering, i.e., the
organization of vertices in clusters, with many edges joining vertices of the
same cluster and comparatively few edges joining vertices of different
clusters. Such clusters, or communities, can be considered fairly
independent compartments of a graph, playing a role similar to, e.g., the
tissues or the organs in the human body. Detecting communities is of great
importance in sociology, biology and computer science, disciplines where
systems are often represented as graphs. This problem is very hard and not yet
satisfactorily solved, despite the huge effort of a large interdisciplinary
community of scientists working on it over the past few years. We will attempt
a thorough exposition of the topic, from the definition of the main elements of
the problem, to the presentation of most methods developed, with a special
focus on techniques designed by statistical physicists, from the discussion of
crucial issues like the significance of clustering and how methods should be
tested and compared against each other, to the description of applications to
real networks.
|
0906.0651
|
Optimal Byzantine Resilient Convergence in Asynchronous Robot Networks
|
cs.DC cs.RO
|
We propose the first deterministic algorithm that tolerates up to $f$
Byzantine faults in $3f+1$-sized networks and performs in the asynchronous
CORDA model. Our solution matches the previously established lower bound for
the semi-synchronous ATOM model on the number of tolerated Byzantine robots.
Our algorithm works under bounded scheduling assumptions for oblivious robots
moving in a uni-dimensional space.
|
0906.0667
|
Quality assessment of the MPEG-4 scalable video CODEC
|
cs.MM cs.CV
|
In this paper, the performance of the emerging MPEG-4 SVC CODEC is evaluated.
In the first part, a brief introduction on the subject of quality assessment
and the development of the MPEG-4 SVC CODEC is given. After that, the test
methodologies used are described in detail, followed by an explanation of the
actual
test scenarios. The main part of this work concentrates on the performance
analysis of the MPEG-4 SVC CODEC - both objective and subjective.
|
0906.0675
|
Encoding models for scholarly literature
|
cs.CL
|
We examine the issue of digital formats for document encoding, archiving and
publishing, through the specific example of "born-digital" scholarly journal
articles. We will begin by looking at the traditional workflow of journal
editing and publication, and how these practices have made the transition into
the online domain. We will examine the range of different file formats in which
electronic articles are currently stored and published. We will argue strongly
that, despite the prevalence of binary and proprietary formats such as PDF and
MS Word, XML is a far superior encoding choice for journal articles. Next, we
look at the range of XML document structures (DTDs, Schemas) which are in
common use for encoding journal articles, and consider some of their strengths
and weaknesses. We will suggest that, despite the existence of specialized
schemas intended specifically for journal articles (such as NLM), and more
broadly-used publication-oriented schemas such as DocBook, there are strong
arguments in favour of developing a subset or customization of the Text
Encoding Initiative (TEI) schema for the purpose of journal-article encoding;
TEI is already in use in a number of journal publication projects, and the
scale and precision of the TEI tagset makes it particularly appropriate for
encoding scholarly articles. We will outline the document structure of a
TEI-encoded journal article, and look in detail at suggested markup patterns
for specific features of journal articles.
|
0906.0684
|
New Instability Results for High Dimensional Nearest Neighbor Search
|
cs.DB cs.IR
|
Consider a dataset of n(d) points generated independently from R^d according
to a common p.d.f. f_d with support(f_d) = [0,1]^d and sup{f_d([0,1]^d)}
growing sub-exponentially in d. We prove that: (i) if n(d) grows
sub-exponentially in d, then, for any query point q^d in [0,1]^d and any
epsilon>0, the ratio of the distance between any two dataset points and q^d is
less than 1+epsilon with probability -->1 as d-->infinity; (ii) if
n(d)>[4(1+epsilon)]^d for large d, then for all q^d in [0,1]^d (except a small
subset) and any epsilon>0, the distance ratio is less than 1+epsilon with
limiting probability strictly bounded away from one. Moreover, we provide
preliminary results along the lines of (i) when f_d=N(mu_d,Sigma_d).
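Result (i), the concentration of distances, is easy to observe empirically. The following sketch (illustrative, not from the paper) samples points uniformly from [0,1]^d and compares the max/min distance ratio to a random query point in low and high dimension:

```python
import math
import random

def distance_ratio(d, n=200, seed=0):
    """Draw n points and one query uniformly from [0,1]^d and return the
    ratio of the farthest to the nearest distance from the query."""
    rng = random.Random(seed)
    q = [rng.random() for _ in range(d)]
    dists = [math.dist([rng.random() for _ in range(d)], q) for _ in range(n)]
    return max(dists) / min(dists)

low_d, high_d = distance_ratio(2), distance_ratio(1000)
# In d=2 the ratio is large; in d=1000 it collapses toward 1, so "nearest"
# and "farthest" neighbors become nearly indistinguishable.
```

This is the instability phenomenon the theorem quantifies: unless n(d) grows exponentially in d, all points look roughly equidistant from the query.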
|
0906.0690
|
Thinning, Entropy and the Law of Thin Numbers
|
cs.IT math.IT math.PR
|
Renyi's "thinning" operation on a discrete random variable is a natural
discrete analog of the scaling operation for continuous random variables. The
properties of thinning are investigated in an information-theoretic context,
especially in connection with information-theoretic inequalities related to
Poisson approximation results. The classical Binomial-to-Poisson convergence
(sometimes referred to as the "law of small numbers") is seen to be a special
case of a thinning limit theorem for convolutions of discrete distributions. A
rate of convergence is provided for this limit, and nonasymptotic bounds are
also established. This development parallels, in part, the development of
Gaussian inequalities leading to the information-theoretic version of the
central limit theorem. In particular, a "thinning Markov chain" is introduced,
and it is shown to play a role analogous to that of the Ornstein-Uhlenbeck
process in connection to the entropy power inequality.
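The thinning operation itself is simple to state in code. Below is an illustrative simulation (not from the paper) of the "law of thin numbers": thinning the constant n (the n-fold convolution of a point mass at 1) with alpha = lam/n yields, for large n, a distribution whose mean and variance both approach the Poisson value lam.

```python
import random

def thin(x, alpha, rng):
    """Renyi's alpha-thinning of the integer x: keep each of the x counted
    units independently with probability alpha (a Binomial(x, alpha) draw)."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

rng = random.Random(0)
n, lam, trials = 500, 3.0, 20_000
samples = [thin(n, lam / n, rng) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
# Both the sample mean and the sample variance come out close to lam = 3,
# as they would for a Poisson(3) distribution.
```

The paper's contribution is the information-theoretic version of this convergence, with explicit rates and nonasymptotic bounds; the simulation only shows the limiting behavior the limit theorem describes.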
|
0906.0695
|
On network coding for sum-networks
|
cs.IT math.IT
|
A directed acyclic network is considered where all the terminals need to
recover the sum of the symbols generated at all the sources. We call such a
network a sum-network. It is shown that there exists a solvably (and linear
solvably) equivalent sum-network for any multiple-unicast network, and thus for
any directed acyclic communication network. It is also shown that there exists
a linear solvably equivalent multiple-unicast network for every sum-network. It
is shown that for any set of polynomials having integer coefficients, there
exists a sum-network which is scalar linear solvable over a finite field F if
and only if the polynomials have a common root in F. For any finite or cofinite
set of prime numbers, a network is constructed which has a vector linear
solution of any length if and only if the characteristic of the alphabet field
is in the given set. The insufficiency of linear network coding and
unachievability of the network coding capacity are proved for sum-networks by
using similar known results for communication networks. Under fractional vector
linear network coding, a sum-network and its reverse network are shown to be
equivalent. However, under non-linear coding, it is shown that there exists a
solvable sum-network whose reverse network is not solvable.
|
0906.0716
|
Size dependent word frequencies and translational invariance of books
|
cs.CL physics.soc-ph
|
It is shown that a real novel shares many characteristic features with a null
model in which the words are randomly distributed throughout the text. Such a
common feature is a certain translational invariance of the text. Another is
that the functional form of the word-frequency distribution of a novel depends
on the length of the text in the same way as the null model. This means that an
approximate power-law tail ascribed to the data will have an exponent which
changes with the size of the text-section which is analyzed. A further
consequence is that a novel cannot be described by text-evolution models like
the Simon model. The size-transformation of a novel is found to be well
described by a specific Random Book Transformation. This size transformation in
addition enables a more precise determination of the functional form of the
word-frequency distribution. The implications of the results are discussed.
|
0906.0739
|
Spectrum Sensing in Low SNR Regime via Stochastic Resonance
|
cs.IT math.IT
|
Spectrum sensing is essential in cognitive radio to enable dynamic spectrum
access. In many scenarios, primary user signal must be detected reliably in low
signal-to-noise ratio (SNR) regime under required sensing time. We propose to
use stochastic resonance, a nonlinear filter having certain resonance
frequency, to detect primary users when the SNR is very low. Both block and
sequential detection schemes are studied. Simulation results show that, under
the required false alarm rate, both detection probability and average detection
delay can be substantially improved. A few implementation issues are also
discussed.
|
0906.0744
|
Ergodic Fading Interference Channels: Sum-Capacity and Separability
|
cs.IT math.IT
|
The sum-capacity for specific sub-classes of ergodic fading Gaussian two-user
interference channels (IFCs) is developed under the assumption of perfect
channel state information at all transmitters and receivers. For the
sub-classes of uniformly strong (every fading state is strong) and ergodic very
strong two-sided IFCs (a mix of strong and weak fading states satisfying
specific fading averaged conditions) the optimality of completely decoding the
interference, i.e., converting the IFC to a compound multiple access channel
(C-MAC), is proved. It is also shown that this capacity-achieving scheme
requires encoding and decoding jointly across all fading states. As an
achievable scheme and also as a topic of independent interest, the capacity
region and the corresponding optimal power policies for an ergodic fading C-MAC
are developed. For the sub-class of uniformly weak IFCs (every fading state is
weak), genie-aided outer bounds are developed. The bounds are shown to be
achieved by treating interference as noise and by separable coding for
one-sided fading IFCs. Finally, for the sub-class of one-sided hybrid IFCs (a
mix of weak and strong states that do not satisfy ergodic very strong
conditions), an achievable scheme involving rate splitting and joint coding
across all fading states is developed and is shown to perform at least as well
as a separable coding scheme.
|
0906.0798
|
Single Neuron Memories and the Network's Proximity Matrix
|
cs.NE
|
This paper extends the treatment of single-neuron memories obtained by the
B-matrix approach. The spreading of the activity within the network is
determined by the network's proximity matrix which represents the separations
amongst the neurons through the neural pathways.
|
0906.0840
|
Soft-Input Soft-Output Single Tree-Search Sphere Decoding
|
cs.IT math.IT
|
Soft-input soft-output (SISO) detection algorithms form the basis for
iterative decoding. The computational complexity of SISO detection often poses
significant challenges for practical receiver implementations, in particular in
the context of multiple-input multiple-output (MIMO) wireless communication
systems. In this paper, we present a low-complexity SISO sphere-decoding
algorithm, based on the single tree-search paradigm proposed originally for
soft-output MIMO detection in Studer, et al., IEEE J-SAC, 2008. The new
algorithm incorporates clipping of the extrinsic log-likelihood ratios (LLRs)
into the tree-search, which results in significant complexity savings and
makes it possible to cover a large performance/complexity tradeoff region by
adjusting a
single parameter. Furthermore, we propose a new method for correcting
approximate LLRs resulting from sub-optimal detectors, which often
significantly improves detection performance at low additional computational
complexity.
|
0906.0861
|
Using Genetic Algorithms for Texts Classification Problems
|
cs.LG cs.NE
|
The avalanche of information produced by mankind has led to the concept of
automated knowledge extraction, known as Data Mining ([1]). This field covers
a wide spectrum of problems, from fuzzy-set recognition to the creation of
search engines. An important component of Data Mining is the processing of
textual information. Such problems rest on the concepts of classification and
clustering ([2]). Classification consists in assigning some element (a text)
to one of several classes created in advance. Clustering means splitting a
set of elements (texts) into clusters, whose number is determined by the
localization of the elements of the given set in the vicinities of certain
natural cluster centers. Any realization of a classification task should
initially rest on given postulates, the most basic of which are the a priori
information about the primary set of texts and a measure of affinity between
elements and classes.
|
0906.0872
|
Fast Weak Learner Based on Genetic Algorithm
|
cs.LG cs.NE
|
An approach to accelerating the boosting of parametric weak classifiers is
proposed. A weak classifier is called parametric if it has a fixed number of
parameters and can therefore be represented as a point in a multidimensional
space. A genetic algorithm is used instead of exhaustive search to learn the
parameters of such a classifier. The proposed approach also takes into
account cases where an effective algorithm exists for learning some of the
classifier parameters. Experiments confirm that such an approach can
dramatically decrease classifier training time while keeping both training
and test errors small.
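A minimal sketch of the idea, under assumed details (real-valued genome, elitist selection, blend crossover with Gaussian mutation): the GA searches the classifier's parameter space instead of enumerating it. The "parametric weak classifier" here is a hypothetical one-parameter decision stump, not the paper's classifier.

```python
import random

def ga_fit(fitness, dim, pop_size=30, gens=40, seed=0):
    """Bare-bones real-valued genetic algorithm: keep the best half of the
    population (elitism), and refill the rest with blend-crossover children
    perturbed by Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(x + y) / 2 + rng.gauss(0, 0.1)
                             for x, y in zip(a, b)])
        pop = elite + children
    return max(pop, key=fitness)

# Hypothetical one-parameter weak classifier: a decision stump sign(x - t).
data = [(-0.9, -1), (-0.4, -1), (0.3, 1), (0.8, 1)]
def accuracy(params):
    t = params[0]
    return sum((1 if x > t else -1) == y for x, y in data) / len(data)

best = ga_fit(accuracy, dim=1)
```

Replacing exhaustive search over t with this evolutionary search is the speed-up mechanism the abstract describes, scaled here down to a toy fitness function.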
|
0906.0885
|
Mining Compressed Repetitive Gapped Sequential Patterns Efficiently
|
cs.DB cs.AI
|
Mining frequent sequential patterns from sequence databases has been a
central research topic in data mining and various efficient mining sequential
patterns algorithms have been proposed and studied. Recently, in many problem
domains (e.g, program execution traces), a novel sequential pattern mining
research, called mining repetitive gapped sequential patterns, has attracted
the attention of many researchers, considering not only the repetition of
sequential pattern in different sequences but also the repetition within a
sequence is more meaningful than the general sequential pattern mining which
only captures occurrences in different sequences. However, the number of
repetitive gapped sequential patterns generated by even these closed mining
algorithms may be too large to understand for users, especially when support
threshold is low. In this paper, we propose and study the problem of
compressing repetitive gapped sequential patterns. Inspired by the ideas of
summarizing frequent itemsets, RPglobal, we develop an algorithm, CRGSgrow
(Compressing Repetitive Gapped Sequential pattern grow), including an efficient
pruning strategy, SyncScan, and an efficient representative pattern checking
scheme, -dominate sequential pattern checking. The CRGSgrow is a two-step
approach: in the first step, we obtain all closed repetitive sequential
patterns as the candidate set of representative repetitive sequential patterns,
and at the same time get the most of representative repetitive sequential
patterns; in the second step, we only spend a little time in finding the
remaining the representative patterns from the candidate set. An empirical
study with both real and synthetic data sets clearly shows that the CRGSgrow
has good performance.
|
0906.0910
|
On the Challenges of Collaborative Data Processing
|
cs.DB cs.HC
|
The last 30 years have seen the creation of a variety of electronic
collaboration tools for science and business. Some of the best-known
collaboration tools support text editing (e.g., wikis). Wikipedia's success
shows that large-scale collaboration can produce highly valuable content.
Meanwhile much structured data is being collected and made publicly available.
We have never had access to more powerful databases and statistical packages.
Is large-scale collaborative data analysis now possible? Using a quantitative
analysis of Web 2.0 data visualization sites, we find evidence that at least
moderate open collaboration occurs. We then explore some of the limiting
factors of collaboration over data.
|
0906.0958
|
On a Generalized Foster-Lyapunov Type Criterion for the Stability of
Multidimensional Markov chains with Applications to the Slotted-Aloha
Protocol with Finite Number of Queues
|
cs.IT cs.NI math.IT
|
In this paper, we generalize a positive recurrence criterion for
multidimensional discrete-time Markov chains over countable state spaces due to
Rosberg (JAP, Vol. 17, No. 3, 1980). We revisit the stability analysis of the
well-known slotted-Aloha protocol with a finite number of queues. Under standard
modeling assumptions, we derive a sufficient condition for stability by
applying our positive recurrence criterion. Our sufficient condition for
stability is linear in arrival rates and does not require knowledge of the
stationary joint statistics of queue lengths. We believe that the technique
reported here could be useful in analyzing other stability problems in
countable space Markovian settings. Toward the end, we derive some sufficient
conditions for instability of the protocol.
|
0906.0964
|
On Sparse Channel Estimation
|
cs.IT math.IT
|
Channel Estimation is an essential component in applications such as radar
and data communication. In multipath, time-varying environments, it is
necessary to estimate time-shifts, scale-shifts (the wideband equivalent of
Doppler-shifts), and the gains/phases of each of the multiple paths. With
recent advances in sparse estimation (or "compressive sensing"), new estimation
techniques have emerged which yield more accurate estimates of these channel
parameters than traditional strategies. These estimation strategies, however,
restrict potential estimates of time-shifts and scale-shifts to a finite set of
values separated by a choice of grid spacing. A small grid spacing increases
the number of potential estimates, thus lowering the quantization error, but
also increases complexity and estimation time. Conversely, a large grid spacing
lowers the number of potential estimates, thus lowering the complexity and
estimation time, but increases the quantization error. In this thesis, we
derive an expression which relates the choice of grid spacing to the
mean-squared quantization error. Furthermore, we consider the case when
scale-shifts are approximated by Doppler-shifts, and derive a similar
expression relating the choice of the grid spacing and the quantization error.
Using insights gained from these expressions, we further explore the effects of
the choice of grid spacing, and examine when a wideband model can be well
approximated by a narrowband model.
|
0906.0997
|
Division Algebras and Wireless Communication
|
math.RA cs.IT math.IT math.NT
|
We survey the recent use of division algebras in wireless communication.
|
0906.1079
|
Modified Frame Reconstruction Algorithm for Compressive Sensing
|
cs.IT math.IT
|
Compressive sensing is a technique to sample signals well below the Nyquist
rate using linear measurement operators. In this paper we present an algorithm
for signal reconstruction given such a set of measurements. This algorithm
generalises and extends previous iterative hard thresholding algorithms and we
give sufficient conditions for successful reconstruction of the original data
signal. In addition we show that by underestimating the sparsity of the data
signal we can increase the success rate of the algorithm.
We also present a number of modifications to this algorithm: the
incorporation of a least squares step, polynomial acceleration and an adaptive
method for choosing the step-length. These modified algorithms converge to the
correct solution under similar conditions to the original un-modified
algorithm. Empirical evidence shows that these modifications dramatically
increase both the success rate and the rate of convergence, and can outperform
other algorithms previously used for signal reconstruction in compressive
sensing.
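The iterative hard thresholding template that this algorithm generalises can be sketched as follows (a minimal instance with an illustrative Gaussian measurement matrix, step size, and problem sizes of our own choosing; the least-squares, acceleration, and adaptive step-length modifications described above are omitted):

```python
import math
import random

random.seed(1)
m, n, s = 18, 24, 2  # measurements, ambient dimension, sparsity (toy sizes)

# Random Gaussian measurement matrix, scaled so columns have roughly unit norm.
A = [[random.gauss(0, 1) / math.sqrt(m) for _ in range(n)] for _ in range(m)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def transpose_matvec(M, v):
    return [sum(M[i][j] * v[i] for i in range(len(v))) for j in range(len(M[0]))]

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    keep = set(sorted(range(len(x)), key=lambda j: -abs(x[j]))[:s])
    return [xj if j in keep else 0.0 for j, xj in enumerate(x)]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

# Ground-truth s-sparse signal and its noiseless measurements y = A x.
x_true = [0.0] * n
x_true[3], x_true[17] = 1.5, -2.0
y = matvec(A, x_true)

def iht(y, A, s, step=0.15, iters=300):
    """Basic iterative hard thresholding: x <- H_s(x + step * A^T (y - A x))."""
    x = [0.0] * len(A[0])
    for _ in range(iters):
        residual = [yi - zi for yi, zi in zip(y, matvec(A, x))]
        grad = transpose_matvec(A, residual)
        x = hard_threshold([xi + step * gi for xi, gi in zip(x, grad)], s)
    return x

x_hat = iht(y, A, s)
final_residual = norm([yi - zi for yi, zi in zip(y, matvec(A, x_hat))])
print("final residual:", final_residual)
```

The fixed small step keeps the objective from increasing; the modifications surveyed in the abstract (least-squares step, polynomial acceleration, adaptive step-length) all replace or augment the plain gradient step inside the loop.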
|
0906.1148
|
Collaborative filtering based on multi-channel diffusion
|
cs.IR
|
In this paper, by applying a diffusion process, we propose a new index to
quantify the similarity between two users in a user-object bipartite graph. To
deal with the discrete ratings on objects, we use a multi-channel
representation where each object is mapped to several channels with the number
of channels being equal to the number of different ratings. Each channel
represents a certain rating, and a user who has rated an object is connected
to the channel corresponding to that rating. A diffusion process taking place on
such a user-channel bipartite graph gives a new similarity measure for user
pairs, which is further demonstrated to be more accurate than the classical
Pearson correlation coefficient under the standard collaborative filtering
framework.
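The multi-channel construction can be sketched concretely. The toy ratings and the plain two-step mass-diffusion normalization below are our own illustrative assumptions and may differ in detail from the paper's process:

```python
from collections import defaultdict

# Toy ratings: users rate objects on a discrete scale (made-up data).
ratings = {
    "alice": {"o1": 5, "o2": 3},
    "bob":   {"o1": 5, "o2": 3},   # identical ratings to alice
    "carol": {"o1": 1, "o2": 4},   # disagrees with alice on both objects
}

# Each (object, rating) pair becomes one channel node of the bipartite graph.
user_channels = {u: {(o, r) for o, r in rs.items()} for u, rs in ratings.items()}
channel_users = defaultdict(set)
for u, chans in user_channels.items():
    for c in chans:
        channel_users[c].add(u)

def diffusion_similarity(u, v):
    """Two-step mass diffusion u -> channels -> v on the user-channel graph:
    u's unit resource is split equally over u's channels, then each channel
    splits its share equally over the users connected to it."""
    received = 0.0
    for c in user_channels[u]:
        if v in channel_users[c]:
            received += (1.0 / len(user_channels[u])) * (1.0 / len(channel_users[c]))
    return received

print(diffusion_similarity("alice", "bob"))    # shares both channels with alice
print(diffusion_similarity("alice", "carol"))  # shares no channel with alice
```

Because carol gave different ratings, she lands on different channels than alice even though both rated the same objects, which is exactly the distinction the multi-channel representation is built to capture.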
|
0906.1166
|
Comparison of Galled Trees
|
q-bio.PE cs.CE cs.DM q-bio.QM
|
Galled trees, directed acyclic graphs that model evolutionary histories with
isolated hybridization events, have become very popular due to both their
biological significance and the existence of polynomial time algorithms for
their reconstruction. In this paper we establish to what extent several
distance measures for the comparison of evolutionary networks are metrics for
galled trees, and hence when they can be safely used to evaluate galled tree
reconstruction methods.
|
0906.1182
|
The CIFF Proof Procedure for Abductive Logic Programming with
Constraints: Theory, Implementation and Experiments
|
cs.AI cs.LO
|
We present the CIFF proof procedure for abductive logic programming with
constraints, and we prove its correctness. CIFF is an extension of the IFF
proof procedure for abductive logic programming, relaxing the original
restrictions over variable quantification (allowedness conditions) and
incorporating a constraint solver to deal with numerical constraints as in
constraint logic programming. Finally, we describe the CIFF system, comparing
it with state of the art abductive systems and answer set solvers and showing
how to use it to program some applications. (To appear in Theory and Practice
of Logic Programming - TPLP).
|
0906.1189
|
On the Throughput/Bit-Cost Tradeoff in CSMA Based Cooperative Networks
|
cs.IT cs.NI math.IT
|
Wireless local area networks (WLAN) still suffer from a severe performance
discrepancy between different users in the uplink. This is because of the
spatially varying channel conditions provided by the wireless medium.
Cooperative medium access control (MAC) protocols as for example CoopMAC were
proposed to mitigate this problem. In this work, it is shown that cooperation
implies for cooperating nodes a tradeoff between throughput and bit-cost, which
is the energy needed to transmit one bit. The tradeoff depends on the degree of
cooperation. For carrier sense multiple access (CSMA) based networks, the
throughput/bit-cost tradeoff curve is theoretically derived. A new distributed
CSMA protocol called fairMAC is proposed and it is theoretically shown that
fairMAC can asymptotically achieve any operating point on the tradeoff curve
when the packet lengths go to infinity. The theoretical results are validated
through Monte Carlo simulations.
|
0906.1244
|
Generalised Pinsker Inequalities
|
cs.IT math.IT
|
We generalise the classical Pinsker inequality which relates variational
divergence to Kullback-Leibler divergence in two ways: we consider arbitrary
f-divergences in place of KL divergence, and we assume knowledge of a sequence
of values of generalised variational divergences. We then develop a best
possible inequality for this doubly generalised situation. Specialising our
result to the classical case provides a new and tight explicit bound relating
KL to variational divergence (solving a problem posed by Vajda some 40 years
ago). The solution relies on exploiting a connection between divergences and
the Bayes risk of a learning problem via an integral representation.
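For reference, the classical inequality being generalised can be checked numerically. The sketch below uses the convention V(p, q) = sum_i |p_i - q_i|, under which Pinsker's inequality reads KL(p||q) >= V(p, q)^2 / 2 in nats; it illustrates only the classical bound, not the paper's generalisation:

```python
import math
import random

random.seed(0)

def kl(p, q):
    """Kullback-Leibler divergence in nats (assumes q_i > 0 wherever p_i > 0)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def variational(p, q):
    """Variational divergence V(p, q) = sum_i |p_i - q_i| (ranges over [0, 2])."""
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

def random_dist(k):
    w = [random.random() + 1e-9 for _ in range(k)]
    t = sum(w)
    return [wi / t for wi in w]

# Pinsker: KL(p||q) >= V(p,q)^2 / 2, checked on random distribution pairs.
for _ in range(1000):
    p, q = random_dist(5), random_dist(5)
    assert kl(p, q) >= variational(p, q) ** 2 / 2 - 1e-12
print("Pinsker inequality held on 1000 random pairs")
```

The paper's contribution is the best possible (tight) version of this relation for arbitrary f-divergences and sequences of generalised variational divergences; the loose classical bound above is only the starting point.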
|
0906.1339
|
Error Exponents for Broadcast Channels with Degraded Message Sets
|
cs.IT math.IT
|
We consider a broadcast channel with a degraded message set, in which a
single transmitter sends a common message to two receivers and a private
message to one of the receivers only. The main goal of this work is to find new
lower bounds on the error exponents of the strong user, the one that should
decode both messages, and of the weak user, who should decode only the common
message. Unlike previous works, where suboptimal decoders were used, the
exponents we derive in this work pertain to optimal decoding and depend on both
rates. We take two different approaches.
The first approach is based, in part, on variations of Gallager-type bounding
techniques that were presented in a much earlier work on error exponents for
erasure/list decoding. The resulting lower bounds are quite simple to
understand and to compute.
The second approach is based on a technique that is rooted in statistical
physics, and it is exponentially tight from the initial step and onward. This
technique is based on analyzing the statistics of certain enumerators.
Numerical results show that the bounds obtained by this technique are tighter
than those obtained by the first approach and previous results. The derivation,
however, is more complex than the first approach and the retrieved exponents
are harder to compute.
|
0906.1360
|
On the effectiveness of a binless entropy estimator for generalised
entropic forms
|
cs.IT cs.DS math.IT math.NA
|
In this manuscript we discuss the effectiveness of the Kozachenko-Leonenko
entropy estimator when generalised to cope with entropic forms customarily
applied to study systems evincing asymptotic scale invariance and dependence
(either linear or non-linear). We show that when the variables are
independently and identically distributed the estimator is only reliable over
the whole domain if the data follow the uniform distribution, whereas for other
distributions the estimator is only effective in the limit of the
Boltzmann-Gibbs-Shannon entropic form. We also analyse the influence of the
dependence (linear and non-linear) between variables on the accuracy of the
estimator. As expected, in the latter case the estimator loses
efficiency for the Boltzmann-Gibbs-Shannon entropic form as well.
|
0906.1467
|
Syntax is from Mars while Semantics from Venus! Insights from Spectral
Analysis of Distributional Similarity Networks
|
physics.data-an cs.CL
|
We study the global topology of the syntactic and semantic distributional
similarity networks for English through the technique of spectral analysis. We
observe that while the syntactic network has a hierarchical structure with
strong communities and their mixtures, the semantic network has several tightly
knit communities along with a large core without any such well-defined
community structure.
|
0906.1487
|
The Physics of Compressive Sensing and the Gradient-Based Recovery
Algorithms
|
cs.IT math.IT
|
The physics of compressive sensing (CS) and the gradient-based recovery
algorithms are presented. First, the different forms for CS are summarized.
Second, the physical meanings of coherence and measurement are given. Third,
the gradient-based recovery algorithms and their geometry explanations are
provided. Finally, we conclude the report and give some suggestions for future
work.
|
0906.1538
|
On "A Novel Maximum Likelihood Decoding Algorithm for Orthogonal
Space-Time Block Codes"
|
cs.IT math.IT
|
The computational complexity of the Maximum Likelihood decoding algorithm in
[1], [2] for orthogonal space-time block codes is smaller than specified.
|
0906.1565
|
Correcting a Fraction of Errors in Nonbinary Expander Codes with Linear
Programming
|
cs.IT math.IT
|
A linear-programming decoder for \emph{nonbinary} expander codes is
presented. It is shown that the proposed decoder has the maximum-likelihood
certificate properties. It is also shown that this decoder corrects any pattern
of errors of a relative weight up to approximately 1/4 \delta_A \delta_B (where
\delta_A and \delta_B are the relative minimum distances of the constituent
codes).
|
0906.1593
|
On Defining 'I' "I logy"
|
cs.AI cs.LO
|
Could we define I? Throughout this article we give a negative answer to this
question. More exactly, we show that there is no definition of I in a certain
sense. This negative answer, however, depends on our definition of definability;
here we try to formulate a sufficiently general definition of definability. In
the middle of the paper a paradox arises which forces us to modify the way we
use the concepts of property and definability.
|
0906.1599
|
Bits Through Deterministic Relay Cascades with Half-Duplex Constraint
|
cs.IT math.IT
|
Consider a relay cascade, i.e. a network where a source node, a sink node and
a certain number of intermediate source/relay nodes are arranged on a line and
where adjacent node pairs are connected by error-free (q+1)-ary pipes. Suppose
the source and a subset of the relays wish to communicate independent
information to the sink under the condition that each relay in the cascade is
half-duplex constrained. A coding scheme is developed which transfers
information by an information-dependent allocation of the transmission and
reception slots of the relays. The coding scheme requires synchronization on
the symbol level through a shared clock. The coding strategy achieves capacity
for a single source. Numerical values for the capacity of cascades of various
lengths are provided, and the capacities are significantly higher than the
rates which are achievable with a predetermined time-sharing approach. If the
cascade includes a source and a certain number of relays with their own
information, the strategy achieves the cut-set bound when the rates of the
relay sources fall below certain thresholds. For cascades composed of an
infinite number of half-duplex constrained relays and a single source, we
derive an explicit capacity expression. Remarkably, the capacity in bits/use
for q=1 is equal to the logarithm of the golden ratio, and the capacity for q=2
is 1 bit/use.
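The golden-ratio value for q=1 can be reproduced numerically if one reads the half-duplex constraint as a (1, infinity) run-length-limited binary constraint (our interpretation, for illustration only): the capacity is then the base-2 logarithm of the largest eigenvalue of the constraint's 2x2 transfer matrix.

```python
import math

# Transfer matrix of the (1, infinity) run-length-limited binary constraint:
# state 0 may be followed by 0 or 1, state 1 only by 0.
T = [[1.0, 1.0], [1.0, 0.0]]

def largest_eigenvalue(M, iters=200):
    """Power iteration on a 2x2 non-negative matrix (max-norm normalization)."""
    v = [1.0, 1.0]
    lam = 0.0
    for _ in range(iters):
        w = [M[0][0] * v[0] + M[0][1] * v[1],
             M[1][0] * v[0] + M[1][1] * v[1]]
        lam = max(abs(w[0]), abs(w[1]))
        v = [w[0] / lam, w[1] / lam]
    return lam

phi = (1 + math.sqrt(5)) / 2
lam = largest_eigenvalue(T)
print(lam, math.log2(lam))  # ~1.618 (golden ratio), capacity ~0.694 bits/use
```

The dominant eigenvalue of T is the golden ratio, so the constrained capacity is log2(phi), matching the value stated in the abstract.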
|
0906.1603
|
Multiaccess Channels with State Known to One Encoder: Another Case of
Degraded Message Sets
|
cs.IT math.IT
|
We consider a two-user state-dependent multiaccess channel in which only one
of the encoders is informed, non-causally, of the channel states. Two
independent messages are transmitted: a common message transmitted by both the
informed and uninformed encoders, and an individual message transmitted by only
the uninformed encoder. We derive inner and outer bounds on the capacity region
of this model in the discrete memoryless case as well as the Gaussian case.
Further, we show that the bounds for the Gaussian case are tight in some
special cases.
|
0906.1618
|
On the Statistics of Cognitive Radio Capacity in Shadowing and Fast
Fading Environments (Journal Version)
|
cs.IT math.IT
|
In this paper we consider the capacity of the cognitive radio channel in
different fading environments under a low interference regime. First we derive
the probability that the low interference regime holds under shadow fading as
well as Rayleigh and Rician fast fading conditions. We demonstrate that this is
the dominant case, especially in practical cognitive radio deployment
scenarios. The capacity of the cognitive radio channel depends critically on a
power loss parameter, $\alpha$, which governs how much transmit power the
cognitive radio dedicates to relaying the primary message. We derive a simple,
accurate approximation to $\alpha$ in Rayleigh and Rician fading environments
which gives considerable insight into system capacity. We also investigate the
effects of system parameters and propagation environment on $\alpha$ and the
cognitive radio capacity. In all cases, the use of the approximation is shown
to be extremely accurate.
|
0906.1673
|
Knowledge Management in Economic Intelligence with Reasoning on Temporal
Attributes
|
cs.AI
|
People have to make important decisions within a time frame. Hence, it is
imperative to employ means or strategy to aid effective decision making.
Consequently, Economic Intelligence (EI) has emerged as a field to aid
strategic and timely decision making in an organization. In the course of
attaining this goal, it is indispensable to provide for the conservation of the
intellectual resources invested in the process of decision making. These
intellectual resources are nothing other than the knowledge of the actors as
well as that of the various processes involved in decision making. Knowledge has
been recognized as a strategic economic resource for enhancing productivity and
a key to innovation in any organization or community. Thus, its adequate
management, with cognizance of its temporal properties, is indispensable.
Temporal properties of knowledge refer to the date and time (known as the
timestamp) at which such knowledge is created, as well as the duration or
interval between related pieces of knowledge.
for a user-centered knowledge management approach as well as exploitation of
associated temporal properties. Our perspective of knowledge is with respect to
decision-problems projects in EI. Our hypothesis is that the possibility of
reasoning about temporal properties in exploitation of knowledge in EI projects
should foster timely decision making through generation of useful inferences
from available and reusable knowledge for a new project.
|
0906.1677
|
Outage Behavior of Discrete Memoryless Channels (DMCs) Under Channel
Estimation Errors
|
cs.IT cs.DM math.IT
|
Communication systems are usually designed by assuming perfect channel state
information (CSI). However, in many practical scenarios, only a noisy estimate
of the channel is available, which may strongly differ from the true channel.
This imperfect CSI scenario is addressed by introducing the notion of
estimation-induced outage (EIO) capacity. We derive a single-letter
characterization of the maximal EIO rate and prove an associated coding theorem
and its strong converse for discrete memoryless channels (DMCs). The
transmitter and the receiver rely on the channel estimate and the statistics of
the estimate to construct codes that guarantee reliable communication with a
certain outage probability. This ensures that in the non-outage case the
transmission meets the target rate with small error probability, irrespective
of the quality of the channel estimate. Applications of the EIO capacity to a
single-antenna (non-ergodic) Ricean fading channel are considered. The EIO
capacity for this case is compared to the EIO rates of a communication system
in which the receiver decodes by using a mismatched ML decoder. The effects of
rate-limited feedback to provide the transmitter with quantized CSI are also
investigated.
|
0906.1694
|
Toward a Category Theory Design of Ontological Knowledge Bases
|
cs.AI
|
I discuss (ontologies_and_ontological_knowledge_bases /
formal_methods_and_theories) duality and its category theory extensions as a
step toward a solution to Knowledge-Based Systems Theory. In particular I focus
on the example of the design of elements of ontologies and ontological
knowledge bases for the following three electronic courses: Foundations of Research
Activities, Virtual Modeling of Complex Systems and Introduction to String
Theory.
|
0906.1713
|
Feature Reinforcement Learning: Part I: Unstructured MDPs
|
cs.LG cs.AI cs.IT math.IT
|
General-purpose, intelligent, learning agents cycle through sequences of
observations, actions, and rewards that are complex, uncertain, unknown, and
non-Markovian. On the other hand, reinforcement learning is well-developed for
small finite state Markov decision processes (MDPs). Up to now, extracting the
right state representations out of bare observations, that is, reducing the
general agent setup to the MDP framework, is an art that involves significant
effort by designers. The primary goal of this work is to automate the reduction
process and thereby significantly expand the scope of many existing
reinforcement learning algorithms and the agents that employ them. Before we
can think of mechanizing this search for suitable MDPs, we need a formal
objective criterion. The main contribution of this article is to develop such a
criterion. I also integrate the various parts into one learning algorithm.
Extensions to more realistic dynamic Bayesian networks are developed in Part
II. The role of POMDPs is also considered there.
|
0906.1763
|
Segmentation of Facial Expressions Using Semi-Definite Programming and
Generalized Principal Component Analysis
|
cs.CV
|
In this paper, we use semi-definite programming and generalized principal
component analysis (GPCA) to distinguish between two or more different facial
expressions. In the first step, semi-definite programming is used to reduce the
dimension of the image data and "unfold" the manifold which the data points
(corresponding to facial expressions) reside on. Next, GPCA is used to fit a
series of subspaces to the data points and associate each data point with a
subspace. Data points that belong to the same subspace are claimed to belong to
the same facial expression category. An example is provided.
|
0906.1814
|
Large-Margin kNN Classification Using a Deep Encoder Network
|
cs.LG cs.AI
|
KNN is one of the most popular classification methods, but it often fails to
work well with inappropriate choice of distance metric or due to the presence
of numerous class-irrelevant features. Linear feature transformation methods
have been widely applied to extract class-relevant information to improve kNN
classification, but such linear transformations are very limited in many
applications. Kernels have been
used to learn powerful non-linear feature transformations, but these methods
fail to scale to large datasets. In this paper, we present a scalable
non-linear feature mapping method based on a deep neural network pretrained
with restricted Boltzmann machines for improving kNN classification in a
large-margin framework, which we call DNet-kNN. DNet-kNN can be used for both
classification and for supervised dimensionality reduction. The experimental
results on two benchmark handwritten digit datasets show that DNet-kNN has much
better performance than large-margin kNN using a linear mapping and kNN based
on a deep autoencoder pretrained with restricted Boltzmann machines.
|
0906.1835
|
Secret-Key Generation using Correlated Sources and Channels
|
cs.IT cs.CR math.IT
|
We study the problem of generating a shared secret key between two terminals
in a joint source-channel setup -- the sender communicates to the receiver over
a discrete memoryless wiretap channel and additionally the terminals have
access to correlated discrete memoryless source sequences. We establish lower
and upper bounds on the secret-key capacity. These bounds coincide,
establishing the capacity, when the underlying channel consists of independent,
parallel and reversely degraded wiretap channels. In the lower bound, the
equivocation terms of the source and channel components are functionally
additive. The secret-key rate is maximized by optimally balancing the
source and channel contributions. This tradeoff is illustrated in detail for
the Gaussian case where it is also shown that Gaussian codebooks achieve the
capacity. When the eavesdropper also observes a source sequence, the secret-key
capacity is established when the sources and channels of the eavesdropper are a
degraded version of the legitimate receiver. Finally the case when the
terminals also have access to a public discussion channel is studied. We
propose generating separate keys from the source and channel components and
establish the optimality of this approach when the channel outputs of
the receiver and the eavesdropper are conditionally independent given the
input.
|
0906.1842
|
Managing Requirement Volatility in an Ontology-Driven Clinical LIMS
Using Category Theory. International Journal of Telemedicine and Applications
|
cs.AI cs.MA
|
Requirement volatility is an issue in software engineering in general, and in
Web-based clinical applications in particular, which often originates from an
incomplete knowledge of the domain of interest. With advances in the health
science, many features and functionalities need to be added to, or removed
from, existing software applications in the biomedical domain. At the same
time, the increasing complexity of biomedical systems makes them more difficult
to understand, and consequently it is more difficult to define their
requirements, which contributes considerably to their volatility. In this
paper, we present a novel agent-based approach for analyzing and managing
volatile and dynamic requirements in an ontology-driven laboratory information
management system (LIMS) designed for Web-based case reporting in medical
mycology. The proposed framework is empowered with ontologies and formalized
using category theory to provide a deep and common understanding of the
functional and nonfunctional requirement hierarchies and their interrelations,
and to trace the effects of a change on the conceptual framework.
|
0906.1845
|
Towards Improving Validation, Verification, Crash Investigations, and
Event Reconstruction of Flight-Critical Systems with Self-Forensics
|
cs.SE cs.AI
|
This paper introduces a novel concept of self-forensics to complement the
standard autonomic self-CHOP properties of self-managed systems, to be
specified in the Forensic Lucid language. We argue that self-forensics, with
the forensics taken out of the cybercrime domain, is applicable to
"self-dissection" for the purpose of verifying the autonomous software and
hardware of flight-critical systems, enabling automated incident and anomaly
analysis and event reconstruction by engineering teams in a variety of
incident scenarios during design and testing as well as on actual flight data.
|
0906.1900
|
How to deal with discrete data for the reduction of simulation models
using a neural network
|
cs.NE
|
Simulation is useful for the evaluation of a Master Production/distribution
Schedule (MPS). The goal of this paper is the study of the design of a
simulation model with reduced complexity. In accordance with the theory of
constraints, we want to build reduced models composed exclusively of
bottlenecks and a neural network; in particular, a multilayer perceptron is
used. The structure of the network is determined by using a pruning procedure.
This work focuses on the impact of discrete data on the results and compares
different approaches to dealing with these data. The approach is applied to a
sawmill's internal supply chain.
|
0906.1905
|
The VOISE Algorithm: a Versatile Tool for Automatic Segmentation of
Astronomical Images
|
astro-ph.IM astro-ph.EP cs.CV physics.data-an stat.AP
|
The auroras on Jupiter and Saturn can be studied with a high sensitivity and
resolution by the Hubble Space Telescope (HST) ultraviolet (UV) and
far-ultraviolet (FUV) Space Telescope spectrograph (STIS) and Advanced Camera
for Surveys (ACS) instruments. We present results of automatic detection and
segmentation of Jupiter's auroral emissions as observed by HST ACS instrument
with VOronoi Image SEgmentation (VOISE). VOISE is a dynamic algorithm for
partitioning the underlying pixel grid of an image into regions according to a
prescribed homogeneity criterion. The algorithm consists of an iterative
procedure that dynamically constructs a tessellation of the image plane based
on a Voronoi Diagram, until the intensity of the underlying image within each
region is classified as homogeneous. The computed tessellations allow the
extraction of quantitative information about the auroral features such as mean
intensity, latitudinal and longitudinal extents and length scales. These
outputs thus represent a more automated and objective method of characterising
auroral emissions than manual inspection.
|
0906.1980
|
On Maximum a Posteriori Estimation of Hidden Markov Processes
|
cs.AI cond-mat.stat-mech cs.IT math.IT physics.data-an stat.ML
|
We present a theoretical analysis of Maximum a Posteriori (MAP) sequence
estimation for binary symmetric hidden Markov processes. We reduce the MAP
estimation to the energy minimization of an appropriately defined Ising spin
model, and focus on the performance of MAP as characterized by its accuracy and
the number of solutions corresponding to a typical observed sequence. It is
shown that for a finite range of sufficiently low noise levels, the solution is
uniquely related to the observed sequence, while the accuracy degrades linearly
with increasing noise strength. For intermediate noise values, the accuracy
is nearly noise-independent, but now there are exponentially many solutions to
the estimation problem, which is reflected in non-zero ground-state entropy for
the Ising model. Finally, for even larger noise intensities, the number of
solutions reduces again, but the accuracy is poor. It is shown that these
regimes are different thermodynamic phases of the Ising model that are related
to each other via first-order phase transitions.
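MAP sequence estimation for such a process is computed by the Viterbi algorithm; a minimal sketch follows (the switching probability, flip probability, and observed sequence are illustrative assumptions, not values from the paper):

```python
import math

def viterbi_binary(obs, p_switch, p_flip):
    """MAP state sequence for a binary symmetric Markov chain (switch
    probability p_switch) observed through a binary symmetric channel
    with flip probability p_flip, via the Viterbi algorithm."""
    def log_trans(a, b):
        return math.log(p_switch if a != b else 1 - p_switch)

    def log_emit(s, o):
        return math.log(p_flip if s != o else 1 - p_flip)

    # score[s] = best log-probability of any state path ending in state s.
    score = {s: math.log(0.5) + log_emit(s, obs[0]) for s in (0, 1)}
    back = []
    for o in obs[1:]:
        new, ptr = {}, {}
        for s in (0, 1):
            cand = {prev: score[prev] + log_trans(prev, s) for prev in (0, 1)}
            ptr[s] = max(cand, key=cand.get)
            new[s] = cand[ptr[s]] + log_emit(s, o)
        score, back = new, back + [ptr]
    # Trace back the best path from the best final state.
    state = max(score, key=score.get)
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Low noise regime: the MAP estimate reproduces a clean observed sequence.
obs = [0, 0, 0, 1, 1, 1, 0, 0]
print(viterbi_binary(obs, p_switch=0.2, p_flip=0.05))
```

In the low-noise regime described in the abstract the MAP path simply follows the observations; raising p_flip makes the transition term dominate, which is the energy-minimization trade-off the Ising-model mapping formalizes.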
|
0906.2027
|
Matrix Completion from Noisy Entries
|
cs.LG stat.ML
|
Given a matrix M of low-rank, we consider the problem of reconstructing it
from noisy observations of a small, random subset of its entries. The problem
arises in a variety of applications, from collaborative filtering (the `Netflix
problem') to structure-from-motion and positioning. We study a low complexity
algorithm introduced by Keshavan et al.(2009), based on a combination of
spectral techniques and manifold optimization, that we call here OptSpace. We
prove performance guarantees that are order-optimal in a number of
circumstances.
|
0906.2032
|
Mapping Equivalence for Symbolic Sequences: Theory and Applications
|
cs.IT cs.NA math.FA math.IT
|
Processing of symbolic sequences represented by mapping of symbolic data into
numerical signals is commonly used in various applications. It is a
particularly popular approach in genomic and proteomic sequence analysis.
Numerous mappings of symbolic sequences have been proposed for various
applications. It is unclear, however, whether the results of processing symbolic
data are an artifact of the numerical mapping or an inherent property of the
symbolic data. This issue has long been ignored in the engineering and
scientific literature. It is possible that many of the results obtained in
symbolic signal processing could be a byproduct of the mapping and might not
shed any light on the underlying properties embedded in the data. Moreover, in
many applications, conflicting conclusions may arise due to the choice of the
mapping used for numerical representation of symbolic data. In this paper, we
present a novel framework for the analysis of the equivalence of the mappings
used for numerical representation of symbolic data. We present strong and weak
equivalence properties and rely on signal correlation to characterize
equivalent mappings. We derive theoretical results which establish conditions
for consistency among numerical mappings of symbolic data. Furthermore, we
introduce an abstract mapping model for symbolic sequences and extend the
notion of equivalence to an algebraic framework. Finally, we illustrate our
theoretical results by application to DNA sequence analysis.
|
0906.2061
|
On the Minimum Distance of Non Binary LDPC Codes
|
cs.IT math.IT
|
Minimum distance is an important parameter of a linear error correcting code.
For improved performance of binary Low Density Parity Check (LDPC) codes, we
need to have the minimum distance grow fast with n, the codelength. However,
the best we can hope for is a linear growth in dmin with n. For binary LDPC
codes, the necessary and sufficient conditions on the LDPC ensemble parameters
to ensure linear growth of minimum distance are well established. In the case of
non-binary LDPC codes, the structure of logarithmic-weight codewords is
different from that of binary codes. We have carried out a preliminary study on
the logarithmic bound on the minimum distance of non-binary LDPC code
ensembles. In particular, we have investigated certain configurations which
would lead to low weight codewords. A set of simulations are performed to
identify some of these configurations. Finally, we have provided a bound on the
logarithmic minimum distance of nonbinary codes, using a strategy similar to
the girth bound for binary codes. This bound has the same asymptotic behaviour
as that of binary codes.
|
0906.2154
|
From formulas to cirquents in computability logic
|
cs.LO cs.AI cs.CC math.LO
|
Computability logic (CoL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a
recently introduced semantical platform and ambitious program for redeveloping
logic as a formal theory of computability, as opposed to the formal theory of
truth that logic has more traditionally been. Its expressions represent
interactive computational tasks seen as games played by a machine against the
environment, and "truth" is understood as existence of an algorithmic winning
strategy. With logical operators standing for operations on games, the
formalism of CoL is open-ended, and has already undergone a series of extensions.
This article extends the expressive power of CoL in a qualitatively new way,
generalizing formulas (to which the earlier languages of CoL were limited) to
circuit-style structures termed cirquents. The latter, unlike formulas, are
able to account for subgame/subtask sharing between different parts of the
overall game/task. Among the many advantages offered by this ability is that it
allows us to capture, refine and generalize the well known
independence-friendly logic which, after the present leap forward, naturally
becomes a conservative fragment of CoL, just as classical logic had been known
to be a conservative fragment of the formula-based version of CoL. Technically,
this paper is self-contained, and can be read without any prior familiarity
with CoL.
|
0906.2228
|
Characterising equilibrium logic and nested logic programs: Reductions
and complexity
|
cs.LO cs.AI
|
Equilibrium logic is an approach to nonmonotonic reasoning that extends the
stable-model and answer-set semantics for logic programs. In particular, it
includes the general case of nested logic programs, where arbitrary Boolean
combinations are permitted in heads and bodies of rules, as special kinds of
theories. In this paper, we present polynomial reductions of the main reasoning
tasks associated with equilibrium logic and nested logic programs into
quantified propositional logic, an extension of classical propositional logic
where quantifications over atomic formulas are permitted. We provide reductions
not only for decision problems, but also for the central semantical concepts of
equilibrium logic and nested logic programs. In particular, our encodings map a
given decision problem into some formula such that the latter is valid
precisely in case the former holds. The basic tasks we deal with here are the
consistency problem, brave reasoning, and skeptical reasoning. Additionally, we
also provide encodings for testing equivalence of theories or programs under
different notions of equivalence, viz. ordinary, strong, and uniform
equivalence. For all considered reasoning tasks, we analyse their computational
complexity and give strict complexity bounds.
|
0906.2252
|
Dirty Paper Coding for the MIMO Cognitive Radio Channel with Imperfect
CSIT
|
cs.IT math.IT
|
A Dirty Paper Coding (DPC) based transmission scheme for the Gaussian
multiple-input multiple-output (MIMO) cognitive radio channel (CRC) is studied
when there is imperfect and perfect channel knowledge at the transmitters
(CSIT) and the receivers, respectively. In particular, the problem of
optimizing the sum-rate of the MIMO CRC over the transmit covariance matrices
is dealt with. Such an optimization, under the DPC-based transmission strategy,
needs to be performed jointly with an optimization over the inflation factor.
To this end, first the problem of determination of inflation factor over the
MIMO channel $Y=H_1 X + H_2 S + Z$ with imperfect CSIT is investigated. For
this problem, two iterative algorithms, which generalize the corresponding
algorithms proposed for the channel $Y=H(X+S)+Z$, are developed. Later, the
necessary conditions for maximizing the sum-rate of the MIMO CRC over the
transmit covariances for a given choice of inflation factor are derived. Using
these necessary conditions and the algorithms for the determination of the
inflation factor, an iterative, numerical algorithm for the joint optimization
is proposed. Some interesting observations are made from the numerical results
obtained from the algorithm. Furthermore, the high-SNR sum-rate scaling factor
achievable over the CRC with imperfect CSIT is obtained.
|
0906.2274
|
A Neural Network Classifier of Volume Datasets
|
cs.GR cs.AI
|
Many state-of-the-art visualization techniques must be tailored to the
specific type of dataset, its modality (CT, MRI, etc.), the recorded object or
anatomical region (head, spine, abdomen, etc.) and other parameters related to
the data acquisition process. While parts of the information (imaging modality
and acquisition sequence) may be obtained from the meta-data stored with the
volume scan, there is important information which is not stored explicitly
(anatomical region, tracing compound). Also, meta-data might be incomplete,
inappropriate or simply missing.
This paper presents a novel and simple method of determining the type of
dataset from previously defined categories. 2D histograms based on intensity
and gradient magnitude of datasets are used as input to a neural network, which
classifies it into one of several categories it was trained with. The proposed
method is an important building block for visualization systems to be used
autonomously by non-experts. The method has been tested on 80 datasets, divided
into 3 classes and a "rest" class.
A significant result is the ability of the system to classify datasets into a
specific class after being trained with only one dataset of that class. Other
advantages of the method are its easy implementation and its high computational
performance.
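As a rough illustration (the function name, bin count, and normalization here are our own assumptions, not details from the paper), a 2D intensity/gradient-magnitude histogram of a volume can be computed as the kind of fixed-size feature that could feed a small classifier:

```python
import numpy as np

def intensity_gradient_histogram(vol, bins=16):
    """2D histogram over (intensity, gradient magnitude) of a volume,
    normalized to sum to one -- a fixed-size feature vector regardless
    of the volume's resolution."""
    gx, gy, gz = np.gradient(vol.astype(float))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    hist, _, _ = np.histogram2d(vol.ravel(), gmag.ravel(), bins=bins)
    return hist / hist.sum()

# Toy volume; a real pipeline would use a CT/MRI scan.
vol = np.random.default_rng(1).random((16, 16, 16))
feat = intensity_gradient_histogram(vol)
```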
|
0906.2369
|
Properties of quasi-alphabetic tree bimorphisms
|
cs.CL cs.FL
|
We study the class of quasi-alphabetic relations, i.e., tree transformations
defined by tree bimorphisms with two quasi-alphabetic tree homomorphisms and a
regular tree language. We present a canonical representation of these
relations; as an immediate consequence, we get the closure under union. Also,
we show that they are not closed under intersection and complement, and do not
preserve most common operations on trees (branches, subtrees, v-product,
v-quotient, f-top-catenation). Moreover, we prove that the translations defined
by quasi-alphabetic tree bimorphisms are exactly products of context-free string
languages. We conclude by presenting the connections between quasi-alphabetic
relations, alphabetic relations and classes of tree transformations defined by
several types of top-down tree transducers. Furthermore, we get that
quasi-alphabetic relations preserve the recognizable and algebraic tree
languages.
|
0906.2372
|
Bounds on the Rate of 2-D Bit-Stuffing Encoders
|
cs.IT math.IT
|
A method for bounding the rate of bit-stuffing encoders for 2-D constraints
is presented. Instead of considering the original encoder, we consider a
related one which is quasi-stationary. We use the quasi-stationary property in
order to formulate linear requirements that must hold on the probabilities of
the constrained arrays that are generated by the encoder. These requirements
are used as part of a linear program. The minimum and maximum of the linear
program bound the rate of the encoder from below and from above, respectively.
A lower bound on the rate of an encoder is also a lower bound on the capacity
of the corresponding constraint. For some constraints, our results lead to
tighter lower bounds than what was previously known.
|
0906.2415
|
Without a 'doubt'? Unsupervised discovery of downward-entailing
operators
|
cs.CL
|
An important part of textual inference is making deductions involving
monotonicity, that is, determining whether a given assertion entails
restrictions or relaxations of that assertion. For instance, the statement 'We
know the epidemic spread quickly' does not entail 'We know the epidemic spread
quickly via fleas', but 'We doubt the epidemic spread quickly' entails 'We
doubt the epidemic spread quickly via fleas'. Here, we present the first
algorithm for the challenging lexical-semantics problem of learning linguistic
constructions that, like 'doubt', are downward entailing (DE). Our algorithm is
unsupervised, resource-lean, and effective, accurately recovering many DE
operators that are missing from the hand-constructed lists that
textual-inference systems currently use.
|
0906.2459
|
Exact Indexing for Massive Time Series Databases under Time Warping
Distance
|
cs.DB cs.AI cs.IR
|
Among many existing distance measures for time series data, Dynamic Time
Warping (DTW) distance has been recognized as one of the most accurate and
suitable distance measures due to its flexibility in sequence alignment.
However, DTW distance calculation is computationally intensive. Especially in
very large time series databases, sequential scan through the entire database
is definitely impractical, even with random access that exploits some index
structures since high dimensionality of time series data incurs extremely high
I/O cost. More specifically, a sequential structure incurs high CPU cost but low
I/O cost, while an index structure requires low CPU cost but high I/O cost. In
this work, we therefore propose a novel indexed sequential structure called
TWIST (Time Warping in Indexed Sequential sTructure), which benefits from both
sequential access and an index structure. When a query sequence is issued, TWIST
calculates lower-bounding distances between groups of candidate sequences and
the query sequence, and then identifies the data access order in advance, hence
greatly reducing the number of both sequential and random accesses. This speeds
up query processing by a few orders of magnitude. In addition, our method shows superiority
over existing rival methods in terms of query processing time, number of page
accesses, and storage requirement with no false dismissal guaranteed.
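The paper's particular lower bound is not reproduced here; as an illustration of the general pruning idea, the widely used LB_Keogh bound gives a cheap O(n) lower bound on DTW distance, so a candidate can be skipped whenever the bound already exceeds the best DTW distance found so far:

```python
import numpy as np

def lb_keogh(query, candidate, w):
    """LB_Keogh lower bound on DTW distance with a Sakoe-Chiba band of width w.

    Sums the squared distance of each candidate point to the query's
    upper/lower envelope; points inside the envelope contribute zero.
    """
    n = len(query)
    total = 0.0
    for i, c in enumerate(candidate):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        upper = query[lo:hi].max()
        lower = query[lo:hi].min()
        if c > upper:
            total += (c - upper) ** 2
        elif c < lower:
            total += (c - lower) ** 2
    return total

q = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
c = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
```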
|
0906.2509
|
On $[[n,n-4,3]]_{q}$ Quantum MDS Codes for odd prime power $q$
|
cs.IT math.IT
|
For each odd prime power $q$, let $4 \leq n\leq q^{2}+1$. Hermitian
self-orthogonal $[n,2,n-1]$ codes over $GF(q^{2})$ with dual distance three are
constructed by using finite field theory. Hence, $[[n,n-4,3]]_{q}$ quantum MDS
codes for $4 \leq n\leq q^{2}+1$ are obtained.
|
0906.2511
|
Robust Rate-Adaptive Wireless Communication Using ACK/NAK-Feedback
|
cs.IT cs.NI math.IT
|
To combat the detrimental effects of the variability in wireless channels, we
consider cross-layer rate adaptation based on limited feedback. In particular,
based on limited feedback in the form of link-layer acknowledgements (ACK) and
negative acknowledgements (NAK), we maximize the physical-layer transmission
rate subject to an upper bound on the expected packet error rate. We take a
robust approach in that we do not assume any particular prior
distribution on the channel state. We first analyze the fundamental limitations
of such systems and derive an upper bound on the achievable rate for signaling
schemes based on uncoded QAM and random Gaussian ensembles. We show that, for
channel estimation based on binary ACK/NAK feedback, it may be preferable to
use a separate training sequence at high error rates, rather than to exploit
low-error-rate data packets themselves. We also develop an adaptive recursive
estimator, which is provably asymptotically optimal and asymptotically
efficient.
|
0906.2530
|
Observed Universality of Phase Transitions in High-Dimensional Geometry,
with Implications for Modern Data Analysis and Signal Processing
|
math.ST cs.IT math.IT physics.data-an stat.CO stat.TH
|
We review connections between phase transitions in high-dimensional
combinatorial geometry and phase transitions occurring in modern
high-dimensional data analysis and signal processing. In data analysis, such
transitions arise as abrupt breakdown of linear model selection, robust data
fitting or compressed sensing reconstructions, when the complexity of the model
or the number of outliers increases beyond a threshold. In combinatorial
geometry these transitions appear as abrupt changes in the properties of face
counts of convex polytopes when the dimensions are varied. The thresholds in
these very different problems appear in the same critical locations after
appropriate calibration of variables.
These thresholds are important in each subject area: for linear modelling,
they place hard limits on the degree to which the now-ubiquitous
high-throughput data analysis can be successful; for robustness, they place
hard limits on the degree to which standard robust fitting methods can tolerate
outliers before breaking down; for compressed sensing, they define the sharp
boundary of the undersampling/sparsity tradeoff in undersampling theorems.
Existing derivations of phase transitions in combinatorial geometry assume
the underlying matrices have independent and identically distributed (iid)
Gaussian elements. In applications, however, it often seems that Gaussianity is
not required. We conducted an extensive computational experiment and formal
inferential analysis to test the hypothesis that these phase transitions are
{\it universal} across a range of underlying matrix ensembles. The experimental
results are consistent with an asymptotic large-$n$ universality across matrix
ensembles; finite-sample universality can be rejected.
|
0906.2547
|
Superactivation of the Asymptotic Zero-Error Classical Capacity of a
Quantum Channel
|
quant-ph cs.IT math.IT
|
The zero-error classical capacity of a quantum channel is the asymptotic rate
at which it can be used to send classical bits perfectly, so that they can be
decoded with zero probability of error. We show that there exist pairs of
quantum channels, neither of which individually have any zero-error capacity
whatsoever (even if arbitrarily many uses of the channels are available), but
such that access to even a single copy of both channels allows classical
information to be sent perfectly reliably. In other words, we prove that the
zero-error classical capacity can be superactivated. This result is the first
example of superactivation of a classical capacity of a quantum channel.
|
0906.2582
|
Strongly Secure Privacy Amplification Cannot Be Obtained by Encoder of
Slepian-Wolf Code
|
cs.IT math.IT
|
The privacy amplification is a technique to distill a secret key from a
random variable by a function so that the distilled key and eavesdropper's
random variable are statistically independent. There are three kinds of
security criteria for the key distilled by the privacy amplification: the
normalized divergence criterion, which is also known as the weak security
criterion, the variational distance criterion, and the divergence criterion,
which is also known as the strong security criterion. As a technique to distill
a secret key, it is known that the encoder of a Slepian-Wolf (the source coding
with full side-information at the decoder) code can be used as a function for
the privacy amplification if we employ the weak security criterion. In this
paper, we show that the encoder of a Slepian-Wolf code cannot be used as a
function for the privacy amplification if we employ the criteria other than the
weak one.
|
0906.2603
|
Hybrid Coding for Gaussian Broadcast Channels with Gaussian Sources
|
cs.IT math.IT
|
This paper considers a degraded Gaussian broadcast channel over which
Gaussian sources are to be communicated. When the sources are independent, this
paper shows that hybrid coding achieves the optimal distortion region, the same
as that of separate source and channel coding. It also shows that uncoded
transmission is not optimal for this setting. For correlated sources, the paper
shows that a hybrid coding strategy has a better distortion region than
separate source-channel coding below a certain signal-to-noise ratio threshold.
Thus, hybrid coding is a good choice for Gaussian broadcast channels with
correlated Gaussian sources.
|
0906.2609
|
Concatenate and Boost for Multiple Measurement Vector Problems
|
cs.IT math.IT
|
The multiple measurement vector (MMV) problem addresses the recovery of a set of
sparse signal vectors that share a common non-zero support, and has emerged as an
important topic in compressed sensing. Even though the fundamental performance
limit on the recoverable sparsity level has been formally derived, conventional
algorithms still exhibit significant performance gaps from the theoretical
bound. The main contribution of this paper is a novel concatenate-MMV-and-boost
(CoMBo) algorithm that achieves the theoretical bound. More specifically, the
algorithm concatenates the MMV problem into a larger-dimensional single
measurement vector (SMV) problem and boosts it by multiplying by random
orthonormal matrices. Extensive simulation results demonstrate that CoMBo
outperforms all existing methods and achieves the theoretical bound as the
number of measurement vectors increases.
|
0906.2635
|
Bayesian History Reconstruction of Complex Human Gene Clusters on a
Phylogeny
|
cs.LG
|
Clusters of genes that have evolved by repeated segmental duplication present
difficult challenges throughout genomic analysis, from sequence assembly to
functional analysis. Improved understanding of these clusters is of utmost
importance, since they have been shown to be the source of evolutionary
innovation, and have been linked to multiple diseases, including HIV and a
variety of cancers. Previously, Zhang et al. (2008) developed an algorithm for
reconstructing parsimonious evolutionary histories of such gene clusters, using
only human genomic sequence data. In this paper, we propose a probabilistic
model for the evolution of gene clusters on a phylogeny, and an MCMC algorithm
for reconstruction of duplication histories from genomic sequences in multiple
species. Several projects are underway to obtain high quality BAC-based
assemblies of duplicated clusters in multiple species, and we anticipate that
our method will be useful in analyzing these valuable new data sets.
|
0906.2667
|
The use of dynamic distance potential fields for pedestrian flow around
corners
|
cs.MA physics.soc-ph
|
This contribution investigates situations in pedestrian dynamics, where
trying to walk the shortest path leads to largely different results than trying
to walk the quickest path. A heuristic one-shot method to model the influence
of the will to walk the quickest path is introduced.
|
0906.2716
|
Maximal digital straight segments and convergence of discrete geometric
estimators
|
cs.CV cs.CG cs.DM
|
Discrete geometric estimators approach geometric quantities on digitized
shapes without any knowledge of the continuous shape. A classical yet difficult
problem is to show that an estimator asymptotically converges toward the true
geometric quantity as the resolution increases. We study here the convergence
of local estimators based on Digital Straight Segment (DSS) recognition. It is
closely linked to the asymptotic growth of maximal DSS, for which we show
bounds both about their number and sizes. These results not only give better
insights about digitized curves but indicate that curvature estimators based on
local DSS recognition are not likely to converge. We indeed invalidate a
hypothesis which was essential in the only known convergence theorem of a
discrete curvature estimator. The proof involves results from arithmetic
properties of digital lines, digital convexity, combinatorics, continued
fractions and random polytopes.
|
0906.2756
|
Norms and Commitment for iOrgs(TM) Information Systems: Direct Logic(TM)
and Participatory Grounding Checking
|
cs.MA cs.LO cs.SE
|
The fundamental assumption of the Event Calculus is overly simplistic when it
comes to organizations in which time-varying properties have to be actively
maintained and managed in order to continue to hold, and termination by another
action is not required for a property to no longer hold; i.e., if active
measures are not taken, then things will go haywire by default. Similarly,
extension and revision are required for Grounding Checking properties of
systems based on a set of ground inferences. Previously, Model Checking has
been performed using the model of nondeterministic automata based on states
determined by time-points. These nondeterministic automata are not suitable for
iOrgs, which are highly structured and operate asynchronously with only loosely
bounded nondeterminism. iOrgs Information Systems have been developed as a
technology in which organizations have people that are tightly integrated with
information technology that enables them to function organizationally. iOrgs
formalize existing practices to provide a framework for addressing issues of
authority, accountability, scalability, and robustness using methods that are
analogous to human organizations. In general, iOrgs are a natural extension of
Web Services, which are the standard for distributed computing and software
application interoperability in large-scale Organizational Computing. iOrgs are
structured by Organizational Commitment, a special case of Physical Commitment,
which is defined to be information pledged. iOrgs norms are used to illustrate
the following: even a very simple microtheory for normative reasoning can
engender inconsistency, and in practice it is impossible to verify the
consistency of a theory for a practical domain; moreover, safety in reasoning
can be improved, since it is not safe to use classical logic and probability
theory in practical reasoning.
|
0906.2767
|
Coding cells of digital spaces: a framework to write generic digital
topology algorithms
|
cs.DM cs.CV
|
This paper proposes a concise coding of the cells of n-dimensional finite
regular grids. It induces a simple, generic and efficient framework for
implementing classical digital topology data structures and algorithms.
Discrete subsets of multidimensional images (e.g. regions, digital surfaces,
cubical cell complexes) have then a common and compact representation.
Moreover, algorithms have a straightforward and efficient implementation, which
is independent from the dimension or sizes of digital images. We illustrate
that point with generic hypersurface boundary extraction algorithms by scanning
or tracking. This framework has been implemented and basic operations as well
as the presented applications have been benchmarked.
|
0906.2770
|
Combinatorial pyramids and discrete geometry for energy-minimizing
segmentation
|
cs.CV
|
This paper defines the basis of a new hierarchical framework for segmentation
algorithms based on energy minimization schemes. This new framework is based on
two formal tools. First, a combinatorial pyramid efficiently encodes a hierarchy
of partitions. Second, discrete geometric estimators precisely measure some
important geometric parameters of the regions. These measures, combined with
photometrical and topological features of the partition, allow the design of
energy terms based on discrete measures. Our segmentation framework exploits
these energies to build a pyramid of image partitions with a minimization
scheme. Some experiments illustrating our framework are shown and discussed.
|
0906.2812
|
Partial randomness and dimension of recursively enumerable reals
|
cs.CC cs.IT math.IT math.LO
|
A real \alpha is called recursively enumerable ("r.e." for short) if there
exists a computable, increasing sequence of rationals which converges to
\alpha. It is known that the randomness of an r.e. real \alpha can be
characterized in various ways using each of the following notions: program-size
complexity, Martin-L\"{o}f test, Chaitin \Omega number, the domination and
\Omega-likeness of \alpha, the universality of a computable, increasing
sequence of rationals which converges to \alpha, and universal probability. In
this paper, we generalize these characterizations of randomness over the notion
of partial randomness by parameterizing each of the notions above by a real T
in (0,1], where the notion of partial randomness is a stronger representation
of the compression rate by means of program-size complexity. As a result, we
present ten equivalent characterizations of the partial randomness of an r.e.
real. The resultant characterizations of partial randomness are powerful and
have many important applications. One of them is to present equivalent
characterizations of the dimension of an individual r.e. real. The equivalence
between the notion of Hausdorff dimension and compression rate by program-size
complexity (or partial randomness) has been established at present by a series
of works of many researchers over the last two decades. We present ten
equivalent characterizations of the dimension of an individual r.e. real.
|
0906.2819
|
Disjoint LDPC Coding for Gaussian Broadcast Channels
|
cs.IT math.IT
|
Low-density parity-check (LDPC) codes have been used for communication over a
two-user Gaussian broadcast channel. It has been shown in the literature that
the optimal decoding of such a system requires joint decoding of both user
messages at each user. Also, a joint code design procedure should be performed.
We propose a method which uses a novel labeling strategy and is based on the
idea behind the bit-interleaved coded modulation. This method does not require
joint decoding and/or joint code optimization. Thus, it reduces the overall
complexity of near-capacity coding in broadcast channels. For different rate
pairs on the boundary of the capacity region, pairs of LDPC codes are designed
to demonstrate the success of this technique.
|
0906.2820
|
Equalization for Non-Coherent UWB Systems with Approximate Semi-Definite
Programming
|
cs.IT math.IT
|
In this paper, we propose an approximate semi-definite programming framework
for demodulation and equalization of non-coherent ultra-wide-band communication
systems with inter-symbol-interference. It is assumed that the communication
systems follow non-linear second-order Volterra models. We formulate the
demodulation and equalization problems as semi-definite programming problems.
We propose an approximate algorithm for solving the formulated semi-definite
programming problems. Compared with the existing non-linear equalization
approaches, the proposed semi-definite programming formulation and approximate
solving algorithm have low computational complexity and storage requirements.
Simulation results show that the proposed algorithm has satisfactory error
probability performance. The proposed non-linear equalization
approach can be adopted for a wide spectrum of non-coherent ultra-wide-band
systems, due to the fact that most non-coherent ultra-wide-band systems with
inter-symbol-interference follow non-linear second-order Volterra signal
models.
|
0906.2824
|
What Does Artificial Life Tell Us About Death?
|
cs.AI cs.OH
|
A short philosophical essay.
|
0906.2835
|
Employing Wikipedia's Natural Intelligence For Cross Language
Information Retrieval
|
cs.IR cs.CL
|
In this paper we present a novel method for retrieving information in
languages other than that of the query. We use this technique in combination
with existing traditional Cross Language Information Retrieval (CLIR)
techniques to improve their results. This method has a number of advantages
over traditional techniques that rely on machine translation to translate the
query and then search the target document space with the translated query.
This method is not limited by the availability of a machine translation
algorithm for the desired language, and it uses already existing sources of
readily available translated information on the internet as a "middle-man"
approach. In this paper we use Wikipedia; however, any similar multilingual,
cross-referenced body of documents can be used. For evaluation and comparison
purposes we implemented both a traditional machine translation approach and
the Wikipedia approach separately.
|
0906.2864
|
Discussion of Twenty Questions Problem
|
cs.IT math.IT
|
We discuss several tricks for solving the twenty questions problem, which in
this paper is depicted as a guessing game: a player tries to find a ball among
twenty boxes by asking as few questions as possible, and these questions are
answered only by "Yes" or "No". Throughout the discussion, the demonstration of
source coding methods is the main concern.
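The source-coding view in brief: with a uniform prior over the 20 boxes, each yes/no answer is worth at most one bit, so ceil(log2 20) = 5 questions suffice and some box requires that many. A halving (binary search) strategy achieves this bound:

```python
def guess_ball(hidden, n=20):
    """Locate the box holding the ball with yes/no questions that halve
    the candidate set; returns (box found, number of questions asked)."""
    lo, hi = 0, n - 1
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        # Question asked: "Is the ball in a box with index <= mid?"
        if hidden <= mid:
            hi = mid
        else:
            lo = mid + 1
    return lo, questions
```

Since 2^4 = 16 < 20 <= 2^5, the worst case over all boxes is exactly 5 questions.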
|
0906.2895
|
Entropy Message Passing
|
cs.LG cs.IT math.IT
|
The paper proposes a new message passing algorithm for cycle-free factor
graphs. The proposed "entropy message passing" (EMP) algorithm may be viewed as
sum-product message passing over the entropy semiring, which has previously
appeared in automata theory. The primary use of EMP is to compute the entropy
of a model. However, EMP can also be used to compute expressions that appear in
expectation maximization and in gradient descent algorithms.
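A small sketch of the entropy semiring referred to above: elements are pairs (p, r) with addition (p1, r1) + (p2, r2) = (p1 + p2, r1 + r2) and multiplication (p1, r1) * (p2, r2) = (p1*p2, p1*r2 + p2*r1). Lifting each probability p to (p, -p log p) makes sum-product accumulate the entropy of the model; the two-state chain below is our own toy example, not one from the paper:

```python
import math

# Entropy semiring operations on pairs (p, r).
def splus(a, b):
    return (a[0] + b[0], a[1] + b[1])

def stimes(a, b):
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def lift(p):
    # Lift a probability into the semiring; r carries -p*log(p).
    return (p, -p * math.log(p) if p > 0 else 0.0)

# Entropy of the joint distribution of a 2-state, 1-step Markov chain,
# computed by summing semiring products over all paths.
init = [0.5, 0.5]
trans = [[0.9, 0.1], [0.2, 0.8]]

total = (0.0, 0.0)
direct = 0.0
for s0 in range(2):
    for s1 in range(2):
        total = splus(total, stimes(lift(init[s0]), lift(trans[s0][s1])))
        p = init[s0] * trans[s0][s1]
        direct += -p * math.log(p)
```

The first component of `total` is the partition sum (here 1, since the model is normalized) and the second equals the entropy computed directly.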
|
0906.2935
|
AG codes on certain maximal curves
|
math.AG cs.IT math.IT
|
Algebraic Geometric codes associated to a recently discovered class of
maximal curves are investigated. As a result, some linear codes with better
parameters than the previously known ones are discovered, and 70
improvements on MinT's tables are obtained.
|
0906.2997
|
The Jewett-Krieger Construction for Tilings
|
math.DS cs.IT math.IT math.PR
|
Given a random distribution of impurities on a periodic crystal, an
equivalent uniquely ergodic tiling space is built, made of aperiodic,
repetitive tilings with finite local complexity, and with configurational
entropy close to the entropy of the impurity distribution. The construction is
the tiling analog of the Jewett-Krieger theorem.
|
0906.3036
|
Mnesors for automatic control
|
cs.AI
|
Mnesors are defined as elements of a semimodule over the min-plus integers.
This two-sorted structure is able to merge graduation properties of vectors and
idempotent properties of boolean numbers, which makes it appropriate for hybrid
systems. We apply it to the control of an inverted pendulum and design a fully
logical controller, that is, one without the usual algebra of real numbers.
|
0906.3068
|
Deformable Model with a Complexity Independent from Image Resolution
|
cs.CV
|
We present a parametric deformable model which recovers image components with
a complexity independent from the resolution of input images. The proposed
model also automatically changes its topology and remains fully compatible with
the general framework of deformable models. More precisely, the image space is
equipped with a metric that expands salient image details according to their
strength and their curvature. During the whole evolution of the model, the
sampling of the contour is kept regular with respect to this metric. In this
way, the vertex density is reduced along most parts of the curve while a high
quality of shape representation is preserved. The complexity of the deformable
model is thus improved and is no longer influenced by feature-preserving
changes in the resolution of input images. Building the metric requires a prior
estimation of contour curvature. It is obtained using a robust estimator which
investigates the local variations in the orientation of image gradient.
Experimental results on both computer generated and biomedical images are
presented to illustrate the advantages of our approach.
|
0906.3085
|
Poset representation and similarity comparisons of systems in IR
|
cs.IR
|
In this paper we use the poset representation to describe the complex answers
given by IR systems after clustering and ranking processes. The answers
considered may be given by cartographical representations or by thematic
sub-lists of documents. The poset representation, together with graph theory
and the relational representation, opens many perspectives for defining new
similarity measures capable of taking into account both the clustering and
ranking processes. We present a general method for constructing new similarity
measures and give several examples. These measures can be used for semi-ordered
partitions; moreover, in the comparison of two sets of answers, the
corresponding similarity indicator is an increasing function of the ranks of
presentation of common answers.
|
0906.3112
|
Object-Relational Database Representations for Text Indexing
|
cs.IR cs.DB
|
One of the distinctive features of Information Retrieval systems compared to
Database Management systems is that they offer better compression for posting
lists, resulting in better I/O performance and thus faster query evaluation. In
this paper, we introduce database representations of the index that reduce the
size (and thus the disk I/Os) of the posting lists. This is not achieved by
redesigning the DBMS, but by exploiting the non-1NF features that existing
Object-Relational DBMSs (ORDBMSs) already offer. Specifically, four
different database representations are described and detailed experimental
results for one million pages are reported. Three of these representations are
one order of magnitude more space efficient and faster (in query evaluation)
than the plain relational representation.
|