| id | title | categories | abstract |
|---|---|---|---|
cs/0511086
|
Energy-Efficient Resource Allocation in Time Division Multiple-Access
over Fading Channels
|
cs.IT math.IT
|
We investigate energy-efficiency issues and resource allocation policies for
time-division multiple-access (TDMA) over fading channels in the power-limited
regime. Supposing that the channels are frequency-flat block-fading and
transmitters have full or quantized channel state information (CSI), we first
minimize power under a weighted sum-rate constraint and show that the optimal
rate and time allocation policies can be obtained by water-filling over
realizations of convex envelopes of the minima for cost-reward functions. We
then address a related minimization under individual rate constraints and
derive the optimal allocation policies via greedy water-filling. Using
water-filling across frequencies and fading states, we also extend our results
to frequency-selective channels. Our approaches not only provide fundamental
power limits when each user can support an infinite number of
capacity-achieving codebooks, but also yield guidelines for practical designs
where users can only support a finite number of adaptive modulation and coding
(AMC) modes with prescribed symbol error probabilities, and also for systems
where only discrete-time allocations are allowed.
|
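The water-filling policies in the abstract above build on a scalar single-user primitive. The sketch below is a minimal illustration of that primitive only, assuming parallel Gaussian sub-channels with gains `g_i` and a total power budget; it is not the paper's cost-reward construction:

```python
def water_filling(gains, total_power, tol=1e-9):
    """Allocate power p_i = max(0, mu - 1/g_i) across sub-channels with
    gains g_i so that sum(p_i) equals total_power (classic water-filling)."""
    # The water level mu is found by bisection: used power is
    # nondecreasing in mu, so the search is well defined.
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

With gains `[1.0, 0.5]` and a unit power budget, all power goes to the stronger sub-channel, matching the intuition that weak fading states are left unused in the power-limited regime.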
cs/0511087
|
Robust Inference of Trees
|
cs.LG cs.AI cs.IT math.IT
|
This paper is concerned with the reliable inference of optimal
tree-approximations to the dependency structure of an unknown distribution
generating data. The traditional approach to the problem measures the
dependency strength between random variables by mutual information. In this
paper, reliability is achieved by Walley's imprecise Dirichlet model, which
generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise
Dirichlet model yields a posterior interval expectation for mutual
information, and a set of plausible trees consistent
with the data. Reliable inference about the actual tree is achieved by focusing
on the substructure common to all the plausible trees. We develop an exact
algorithm that infers the substructure in time O(m^4), m being the number of
random variables. The new algorithm is applied to a set of data sampled from a
known distribution. The method is shown to reliably infer edges of the actual
tree even when the data are very scarce, unlike the traditional approach.
Finally, we provide lower and upper credibility limits for mutual information
under the imprecise Dirichlet model. These enable the previous developments to
be extended to a full inferential method for trees.
|
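The "traditional approach" this abstract contrasts with is Chow-Liu-style tree learning: score each pair of variables by empirical mutual information and take a maximum-weight spanning tree. A stdlib-only sketch of that baseline (without the imprecise-Dirichlet robustness the paper adds; the function names are illustrative) might be:

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def chow_liu_tree(data):
    """data: list of equal-length tuples, one column per random variable.
    Returns the edges of a maximum-weight spanning tree under empirical MI,
    built greedily (Kruskal) with a union-find over the variables."""
    m = len(data[0])
    cols = list(zip(*data))
    weights = {(i, j): mutual_information(cols[i], cols[j])
               for i, j in combinations(range(m), 2)}
    parent = list(range(m))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    edges = []
    for (i, j) in sorted(weights, key=weights.get, reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:          # keep the edge only if it joins two components
            parent[ri] = rj
            edges.append((i, j))
    return edges
```

On data where variable 0 determines variable 1 and variable 2 is independent, the learned tree keeps the (0, 1) edge, as expected.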
cs/0511088
|
Bounds on Query Convergence
|
cs.LG
|
The problem of finding an optimum using noisy evaluations of a smooth cost
function arises in many contexts, including economics, business, medicine,
experiment design, and foraging theory. We derive an asymptotic bound E[ (x_t -
x*)^2 ] >= O(1/sqrt(t)) on the rate of convergence of a sequence (x_0, x_1,
...) generated by an unbiased feedback process observing noisy evaluations of
an unknown quadratic function maximised at x*. The bound is tight, as the proof
leads to a simple algorithm which meets it. We further establish a bound on the
total regret, E[ sum_{i=1..t} (x_i - x*)^2 ] >= O(sqrt(t)). These bounds may
impose practical limitations on an agent's performance, as O(eps^-4) queries
are made before the queries converge to x* with eps accuracy.
|
cs/0511089
|
Continued Fraction Expansion as Isometry: The Law of the Iterated
Logarithm for Linear, Jump, and 2--Adic Complexity
|
cs.IT math.IT
|
In the cryptanalysis of stream ciphers and pseudorandom sequences, the
notions of linear, jump, and 2-adic complexity arise naturally to measure the
(non)randomness of a given string. We define an isometry K on F_q^\infty that
is the precise equivalent to Euclid's algorithm over the reals to calculate the
continued fraction expansion of a formal power series. The continued fraction
expansion allows us to deduce the linear and jump complexity profiles of the input
sequence. Since K is an isometry, the resulting F_q^\infty-sequence is i.i.d.
for i.i.d. input. Hence the linear and jump complexity profiles may be modelled
via Bernoulli experiments (for F_2: coin tossing), and we can apply the very
precise bounds as collected by Revesz, among others the Law of the Iterated
Logarithm.
The second topic is the 2-adic span and complexity, as defined by Goresky and
Klapper. We again derive an isometry, this time on the dyadic integers Z_2,
which induces an isometry A on F_2^\infty. The corresponding jump complexity
behaves on average exactly like coin tossing.
Index terms:
Formal power series, isometry, linear complexity, jump complexity, 2-adic
complexity, 2-adic span, law of the iterated logarithm, Levy classes, stream
ciphers, pseudorandom sequences
|
cs/0511090
|
Integration of Declarative and Constraint Programming
|
cs.PL cs.AI
|
Combining a set of existing constraint solvers into an integrated system of
cooperating solvers is a useful and economic principle to solve hybrid
constraint problems. In this paper we show that this approach can also be used
to integrate different language paradigms into a unified framework.
Furthermore, we study the syntactic, semantic and operational impacts of this
idea for the amalgamation of declarative and constraint programming.
|
cs/0511091
|
Evolution of Voronoi based Fuzzy Recurrent Controllers
|
cs.AI
|
A fuzzy controller is usually designed by formulating the knowledge of a
human expert into a set of linguistic variables and fuzzy rules. Among the most
successful methods for automating the fuzzy controller design process are
evolutionary algorithms. In this work, we propose the Recurrent Fuzzy Voronoi
(RFV) model, a representation for recurrent fuzzy systems. It is an extension
of the FV model proposed by Kavka and Schoenauer that extends the application
domain to include temporal problems. The FV model is a representation for fuzzy
controllers based on Voronoi diagrams that can represent fuzzy systems with
synergistic rules, fulfilling the $\epsilon$-completeness property and
providing a simple way to introduce a priori knowledge. In the proposed
representation, the temporal relations are embedded by including internal units
that provide feedback by connecting outputs to inputs. These internal units act
as memory elements. In the RFV model, the semantics of the internal units can be
specified together with the a priori rules. The geometric interpretation of the
rules allows the use of geometric variational operators during the evolution.
The representation and the algorithms are validated in two problems in the area
of system identification and evolutionary robotics.
|
cs/0511093
|
Artificial Agents and Speculative Bubbles
|
cs.GT cs.AI
|
Pertaining to Agent-based Computational Economics (ACE), this work presents
two models for the rise and downfall of speculative bubbles through an exchange
price fixing based on double auction mechanisms. The first model is based on a
finite time horizon context, where the expected dividends decrease along time.
The second model follows the {\em greater fool} hypothesis; the agent behaviour
depends on the comparison of the estimated risk with the greater fool's.
Simulations shed some light on the influential parameters and the necessary
conditions for the emergence of speculative bubbles in an asset market within
the considered framework.
|
cs/0511095
|
Carbon Copying Onto Dirty Paper
|
cs.IT math.IT
|
A generalization of the problem of writing on dirty paper is considered in
which one transmitter sends a common message to multiple receivers. Each
receiver experiences on its link an additive interference (in addition to the
additive noise), which is known noncausally to the transmitter but not to any
of the receivers. Applications range from wireless multi-antenna multicasting
to robust dirty paper coding.
We develop results for memoryless channels in Gaussian and binary special
cases. In most cases, we observe that the availability of side information at
the transmitter increases capacity relative to systems without such side
information, and that the lack of side information at the receivers decreases
capacity relative to systems with such side information.
For the noiseless binary case, we establish the capacity when there are two
receivers. When there are many receivers, we show that the transmitter side
information provides a vanishingly small benefit. When the interference is
large and independent across the users, we show that time sharing is optimal.
For the Gaussian case we present a coding scheme and establish its optimality
in the high signal-to-interference-plus-noise ratio limit when there are two
receivers. When the interference is large and independent across users we show
that time-sharing is again optimal. Connections to the problem of robust dirty
paper coding are also discussed.
|
cs/0511096
|
A Single-letter Upper Bound for the Sum Rate of Multiple Access Channels
with Correlated Sources
|
cs.IT math.IT
|
The capacity region of the multiple access channel with arbitrarily
correlated sources remains an open problem. Cover, El Gamal and Salehi gave an
achievable region in the form of single-letter entropy and mutual information
expressions, without a single-letter converse. Cover, El Gamal and Salehi also
gave a converse in terms of some n-letter mutual informations, which are
incomputable. In this paper, we derive an upper bound for the sum rate of this
channel in a single-letter expression by using spectrum analysis. The
incomputability of the sum rate of the Cover-El Gamal-Salehi scheme comes from
the difficulty of characterizing the possible joint distributions for the
n-letter channel inputs. Here we introduce a new data processing inequality,
which leads to a single-letter necessary condition for these possible joint
distributions. We develop a single-letter upper bound for the sum rate by using
this single-letter necessary condition on the possible joint distributions.
|
cs/0511098
|
Information and Stock Prices: A Simple Introduction
|
cs.CY cs.IT math.IT nlin.AO physics.soc-ph
|
This article summarizes recent research in financial economics about why
information, such as earnings announcements, moves stock prices. The article
does not presume any prior exposure to finance beyond what you might read in
newspapers.
|
cs/0511100
|
Density Evolution, Thresholds and the Stability Condition for Non-binary
LDPC Codes
|
cs.IT math.IT
|
We derive the density evolution equations for non-binary low-density
parity-check (LDPC) ensembles when transmission takes place over the binary
erasure channel. We introduce ensembles defined with respect to the general
linear group over the binary field. For these ensembles the density evolution
equations can be written compactly. The density evolution for the general
linear group helps us in understanding the density evolution for codes defined
with respect to finite fields. We compute thresholds for different alphabet
sizes for various LDPC ensembles. Surprisingly, the threshold is not a
monotonic function of the alphabet size. We state the stability condition for
non-binary LDPC ensembles over any binary memoryless symmetric channel. We also
give upper bounds on the MAP thresholds for various non-binary ensembles based
on EXIT curves and the area theorem.
|
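For the binary special case, the density-evolution recursion on the BEC and the resulting threshold can be computed directly. The sketch below is the standard binary baseline (not the paper's non-binary general-linear-group ensembles): it bisects for the largest channel erasure probability at which the recursion drives the erasure fraction to zero.

```python
def bec_threshold(lam, rho, iters=2000, tol=1e-6):
    """Binary density evolution on the BEC: the erasure probability evolves as
    x <- eps * lam(1 - rho(1 - x)).  Returns (to within tol) the largest eps
    for which the fixed-point iteration converges to 0 -- the threshold."""
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = eps * lam(1 - rho(1 - x))
            if x < 1e-12:
                return True
        return False
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Regular (3,6) ensemble: lambda(z) = z^2, rho(z) = z^5.
eps_star = bec_threshold(lambda z: z**2, lambda z: z**5)
```

For the regular (3,6) ensemble this recovers the well-known threshold of about 0.43, against the BEC capacity limit of 0.5 at rate 1/2.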
cs/0511103
|
An Infeasibility Result for the Multiterminal Source-Coding Problem
|
cs.IT math.IT
|
We prove a new outer bound on the rate-distortion region for the
multiterminal source-coding problem. This bound subsumes the best outer bound
in the literature and improves upon it strictly in some cases. The improved
bound enables us to obtain a new, conclusive result for the binary erasure
version of the "CEO problem." The bound recovers many of the converse results
that have been established for special cases of the problem, including the
recent one for the Gaussian version of the CEO problem.
|
cs/0511104
|
Channel Model and Upper Bound on the Information Capacity of the Fiber
Optical Communication Channel Based on the Effects of XPM Induced
Nonlinearity
|
cs.IT math.IT
|
An upper bound to the information capacity of a wavelength-division
multiplexed (WDM) optical fiber communication system is derived in a model incorporating
the nonlinear propagation effects of cross-phase modulation (XPM). This work is
based on the paper by Mitra et al., finding lower bounds to the channel
capacity, in which physical models for propagation are used to calculate
statistical properties of the conditional probability distribution relating
input and output in a single WDM channel. In this paper we present a tractable
channel model incorporating the effects of cross-phase modulation. Using this
model we find an upper bound to the information capacity of the fiber optical
communication channel at high SNR. The results provide physical insight into
the manner in which nonlinearities degrade the information capacity.
|
cs/0511105
|
The Signed Distance Function: A New Tool for Binary Classification
|
cs.LG cs.CG
|
From a geometric perspective, most nonlinear binary classification algorithms,
including state-of-the-art versions of Support Vector Machine (SVM) and Radial
Basis Function Network (RBFN) classifiers, are based on the idea of
reconstructing indicator functions. We propose instead to use reconstruction of
the signed distance function (SDF) as a basis for binary classification. We
discuss properties of the signed distance function that can be exploited in
classification algorithms. We develop simple versions of such classifiers and
test them on several linear and nonlinear problems. On linear tests accuracy of
the new algorithm exceeds that of standard SVM methods, with an average of 50%
fewer misclassifications. Performance of the new methods also matches or
exceeds that of standard methods on several nonlinear problems including
classification of benchmark diagnostic micro-array data sets.
|
cs/0511106
|
Benefits of InterSite Pre-Processing and Clustering Methods in
E-Commerce Domain
|
cs.DB
|
This paper presents our preprocessing and clustering analysis on the
clickstream dataset proposed for the ECMLPKDD 2005 Discovery Challenge. The
main contributions of this article are twofold. First, after presenting the
clickstream dataset, we show how we build a rich data warehouse based on
advanced preprocessing. We take into account the intersite aspects of the given
e-commerce domain, which offer an interesting data structure. A preliminary
statistical analysis based on time-period clickstreams is given, emphasizing
the importance of intersite user visits in such a context. Second, we describe
our crossed-clustering method, which is applied to data generated from our data
warehouse. Our preliminary results are promising, illustrating the benefits of
our WUM methods, even though more investigation is needed on the same dataset.
|
cs/0511108
|
Parameter Estimation of Hidden Diffusion Processes: Particle Filter vs.
Modified Baum-Welch Algorithm
|
cs.DS cs.LG
|
We propose a new method for the estimation of parameters of hidden diffusion
processes. Based on parametrization of the transition matrix, the Baum-Welch
algorithm is improved. The algorithm is compared to the particle filter in
application to noisy periodic systems. It is shown that the modified
Baum-Welch algorithm is capable of estimating the system parameters with better
accuracy than particle filters.
|
cs/0512002
|
On Self-Regulated Swarms, Societal Memory, Speed and Dynamics
|
cs.NE cs.AI
|
We propose a Self-Regulated Swarm (SRS) algorithm which hybridizes the
advantageous characteristics of Swarm Intelligence as the emergence of a
societal environmental memory or cognitive map via collective pheromone laying
in the landscape (properly balancing the exploration/exploitation nature of our
dynamic search strategy), with a simple Evolutionary mechanism that, through a
direct reproduction procedure linked to local environmental features, is able to
self-regulate the above exploratory swarm population, speeding it up globally.
In order to test its adaptive response and robustness, we resorted to
different dynamic multimodal complex functions as well as to Dynamic
Optimization Control problems, measuring reaction speeds and performance. Final
comparisons were made with standard Genetic Algorithms (GAs), Bacterial
Foraging strategies (BFOA), as well as with recent Co-Evolutionary approaches.
SRS's were able to demonstrate quick adaptive responses, while outperforming
the results obtained by the other approaches. Additionally, some successful
behaviors were found. One of the most interesting illustrates that the present
SRS collective swarm of bio-inspired ant-like agents is able to track about 65%
of moving peaks traveling up to ten times faster than the velocity of a single
individual composing that precise swarm tracking system.
|
cs/0512003
|
Societal Implicit Memory and his Speed on Tracking Extrema over Dynamic
Environments using Self-Regulatory Swarms
|
cs.MA cs.AI
|
In order to overcome difficult dynamic optimization and environment-extrema
tracking problems, we propose a Self-Regulated Swarm (SRS) algorithm which
hybridizes the advantageous characteristics of Swarm Intelligence as the
emergence of a societal environmental memory or cognitive map via collective
pheromone laying in the landscape (properly balancing the
exploration/exploitation nature of our dynamic search strategy), with a simple
Evolutionary mechanism that, through a direct reproduction procedure linked to
local environmental features, is able to self-regulate the above exploratory
swarm population, speeding it up globally. In order to test its adaptive
response and robustness, we resorted to different dynamic multimodal
complex functions as well as to Dynamic Optimization Control problems,
measuring reaction speeds and performance. Final comparisons were made with
standard Genetic Algorithms (GAs), Bacterial Foraging strategies (BFOA), as
well as with recent Co-Evolutionary approaches. SRS's were able to demonstrate
quick adaptive responses, while outperforming the results obtained by the other
approaches. Additionally, some successful behaviors were found. One of the most
interesting illustrates that the present SRS collective swarm of bio-inspired
ant-like agents is able to track about 65% of moving peaks traveling up to ten
times faster than the velocity of a single individual composing that precise
swarm tracking system.
|
cs/0512004
|
Self-Regulated Artificial Ant Colonies on Digital Image Habitats
|
cs.MA cs.AI
|
Artificial life models, swarm intelligence and evolutionary computation
algorithms are usually built on fixed size populations. Some studies indicate
however that varying the population size can increase the adaptability of these
systems and their capability to react to changing environments. In this paper
we present an extended model of an artificial ant colony system designed to
evolve on digital image habitats. We will show that the present swarm can adapt
its population size according to the type of image on which it is
evolving, reacting faster to changing images and thus converging more rapidly
to the new desired regions, regulating the number of its image-foraging agents.
Finally, we will show evidence that the model can be associated with the
Mathematical Morphology Watershed algorithm to improve the segmentation of
digital grey-scale images. KEYWORDS: Swarm Intelligence, Perception and Image
Processing, Pattern Recognition, Mathematical Morphology, Social Cognitive
Maps, Social Foraging, Self-Organization, Distributed Search.
|
cs/0512006
|
Capacity-Achieving Ensembles of Accumulate-Repeat-Accumulate Codes for
the Erasure Channel with Bounded Complexity
|
cs.IT math.IT
|
The paper introduces ensembles of accumulate-repeat-accumulate (ARA) codes
which asymptotically achieve capacity on the binary erasure channel (BEC) with
{\em bounded complexity}, per information bit, of encoding and decoding. It
also introduces symmetry properties which play a central role in the
construction of capacity-achieving ensembles for the BEC with bounded
complexity. The results here improve on the tradeoff between performance and
complexity provided by previous constructions of capacity-achieving ensembles
of codes defined on graphs. The superiority of ARA codes with moderate to large
block length is exemplified by computer simulations which compare their
performance with those of previously reported capacity-achieving ensembles of
LDPC and IRA codes. The ARA codes also have the advantage of being systematic.
|
cs/0512007
|
Entangled messages
|
cs.CR cs.IR
|
It is sometimes necessary to send copies of the same email to different
parties, but it is impossible to ensure that if one party reads the message,
the other parties are bound to read it. We propose an entanglement-based scheme
in which, if one party reads the message, the other party is bound to read it
simultaneously.
|
cs/0512010
|
A geometry of information, I: Nerves, posets and differential forms
|
cs.AI cs.GR
|
The main theme of this workshop (Dagstuhl seminar 04351) is `Spatial
Representation: Continuous vs. Discrete'. Spatial representation has two
contrasting but interacting aspects: (i) representation of spaces and (ii)
representation by spaces. In this paper, we will examine two aspects that are
common to both interpretations of the theme, namely nerve constructions and
refinement. Representations change, data changes, spaces change. We will
examine the possibility of a `differential geometry' of spatial representations
of both types, and in the sequel give an algebra of differential forms that has
the potential to handle the dynamical aspect of such a geometry. We will
discuss briefly a conjectured class of spaces, generalising the Cantor set
which would seem ideal as a test-bed for the set of tools we are developing.
|
cs/0512013
|
The Water-Filling Game in Fading Multiple Access Channels
|
cs.IT math.IT
|
We adopt a game theoretic approach for the design and analysis of distributed
resource allocation algorithms in fading multiple access channels. The users
are assumed to be selfish, rational, and limited by average power constraints.
We show that the sum-rate optimal point on the boundary of the multiple-access
channel capacity region is the unique Nash equilibrium of the corresponding
water-filling game. This result sheds new light on the opportunistic
communication principle and argues for the fairness of the sum-rate optimal
point, at least from a game theoretic perspective. The base-station is then
introduced as a player interested in maximizing a weighted sum of the
individual rates. We propose a Stackelberg formulation in which the
base-station is the designated game leader. In this set-up, the base-station
announces first its strategy defined as the decoding order of the different
users, in the successive cancellation receiver, as a function of the channel
state. In the second stage, the users compete conditioned on this particular
decoding strategy. We show that this formulation allows for achieving all the
corner points of the capacity region, in addition to the sum-rate optimal
point. On the negative side, we prove the non-existence of a base-station
strategy in this formulation that achieves the rest of the boundary points. To
overcome this limitation, we present a repeated game approach which achieves
the capacity region of the fading multiple access channel. Finally, we extend
our study to vector channels highlighting interesting differences between this
scenario and the scalar channel case.
|
cs/0512014
|
A Game-Theoretic Approach to Energy-Efficient Power Control in
Multi-Carrier CDMA Systems
|
cs.IT math.IT
|
A game-theoretic model for studying power control in multi-carrier CDMA
systems is proposed. Power control is modeled as a non-cooperative game in
which each user decides how much power to transmit over each carrier to
maximize its own utility. The utility function considered here measures the
number of reliable bits transmitted over all the carriers per Joule of energy
consumed and is particularly suitable for networks where energy efficiency is
important. The multi-dimensional nature of users' strategies and the
non-quasiconcavity of the utility function make the multi-carrier problem much
more challenging than the single-carrier or throughput-based-utility case. It
is shown that, for all linear receivers including the matched filter, the
decorrelator, and the minimum-mean-square-error (MMSE) detector, a user's
utility is maximized when the user transmits only on its "best" carrier. This
is the carrier that requires the least amount of power to achieve a particular
target signal-to-interference-plus-noise ratio (SINR) at the output of the
receiver. The existence and uniqueness of Nash equilibrium for the proposed
power control game are studied. In particular, conditions are given that must
be satisfied by the channel gains for a Nash equilibrium to exist, and the
distribution of the users among the carriers at equilibrium is also
characterized. In addition, an iterative and distributed algorithm for reaching
the equilibrium (when it exists) is presented. It is shown that the proposed
approach results in significant improvements in the total utility achieved at
equilibrium compared to a single-carrier system and also to a multi-carrier
system in which each user maximizes its utility over each carrier
independently.
|
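The "best carrier" rule in the abstract above has a simple form: for a given target SINR, each carrier's required transmit power scales as noise over channel gain, and the user picks the minimizer. A minimal sketch, assuming a single-user matched-filter link where the required power on carrier k is `target_sinr * noise_k / gain_k` (multiuser interference is ignored here):

```python
def best_carrier(gains, noise_powers, target_sinr):
    """Return (index, power) of the carrier needing the least transmit power
    to hit target_sinr at the receiver output, under the simplifying
    single-user assumption power_k = target_sinr * noise_k / gain_k."""
    required = [target_sinr * n / g for g, n in zip(gains, noise_powers)]
    k = min(range(len(required)), key=required.__getitem__)
    return k, required[k]
```

For example, with gains `[1.0, 4.0, 2.0]`, unit noise, and a target SINR of 2, the user transmits only on carrier 1, which needs power 0.5.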
cs/0512015
|
Joint fixed-rate universal lossy coding and identification of
continuous-alphabet memoryless sources
|
cs.IT cs.LG math.IT
|
The problem of joint universal source coding and identification is considered
in the setting of fixed-rate lossy coding of continuous-alphabet memoryless
sources. For a wide class of bounded distortion measures, it is shown that any
compactly parametrized family of $\R^d$-valued i.i.d. sources with absolutely
continuous distributions satisfying appropriate smoothness and
Vapnik--Chervonenkis learnability conditions, admits a joint scheme for
universal lossy block coding and parameter estimation, such that when the block
length $n$ tends to infinity, the overhead per-letter rate and the distortion
redundancies converge to zero as $O(n^{-1}\log n)$ and $O(\sqrt{n^{-1}\log
n})$, respectively. Moreover, the active source can be determined at the
decoder up to a ball of radius $O(\sqrt{n^{-1} \log n})$ in variational
distance, asymptotically almost surely. The system has finite memory length
equal to the block length, and can be thought of as blockwise application of a
time-invariant nonlinear filter with initial conditions determined from the
previous block. Comparisons are presented with several existing schemes for
universal vector quantization, which do not include parameter estimation
explicitly, and an extension to unbounded distortion measures is outlined.
Finally, finite mixture classes and exponential families are given as explicit
examples of parametric sources admitting joint universal compression and
modeling schemes of the kind studied here.
|
cs/0512016
|
A linear-time algorithm for finding the longest segment which scores
above a given threshold
|
cs.DS cs.CE
|
This paper describes a linear-time algorithm that finds the longest stretch
in a sequence of real numbers (``scores'') in which the sum exceeds an input
parameter. The algorithm also solves the problem of finding the longest
interval in which the average of the scores is above a fixed threshold. The
problem originates from molecular sequence analysis: for instance, the
algorithm can be employed to identify long GC-rich regions in DNA sequences.
The algorithm can also be used to trim low-quality ends of shotgun sequences in
a preprocessing step of whole-genome assembly.
|
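The averaging version of the problem above reduces to the sum version: subtract the threshold from every score, and the longest interval with average above the threshold becomes the longest window with nonnegative sum. One linear-time realization of that reduction (a sketch of the standard prefix-sum technique, not necessarily the paper's exact algorithm):

```python
def longest_segment(scores, threshold):
    """Length of the longest interval whose average is >= threshold.
    Linear time: reduce to the longest window with nonnegative sum of
    (score - threshold), via prefix sums and a stack of prefix minima."""
    n = len(scores)
    prefix = [0.0] * (n + 1)
    for i, s in enumerate(scores):
        prefix[i + 1] = prefix[i] + (s - threshold)
    # Indices where the prefix sum hits a new strict minimum: only these
    # can be the left endpoint of an optimal window.
    stack = []
    for i in range(n + 1):
        if not stack or prefix[i] < prefix[stack[-1]]:
            stack.append(i)
    best = 0
    # Scan right endpoints from the right; each candidate left endpoint is
    # popped at most once, so the whole pass is O(n).
    for j in range(n, -1, -1):
        while stack and prefix[j] >= prefix[stack[-1]]:
            best = max(best, j - stack.pop())
    return best
```

With a GC-content threshold encoded as, say, scores of 1 for G/C and 0 for A/T and `threshold=0.5`, this returns the length of the longest GC-rich stretch.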
cs/0512017
|
Approximately Universal Codes over Slow Fading Channels
|
cs.IT math.IT
|
Performance of reliable communication over a coherent slow fading channel at
high SNR is succinctly captured as a fundamental tradeoff between diversity and
multiplexing gains. We study the problem of designing codes that optimally
tradeoff the diversity and multiplexing gains. Our main contribution is a
precise characterization of codes that are universally tradeoff-optimal, i.e.,
they optimally tradeoff the diversity and multiplexing gains for every
statistical characterization of the fading channel. We denote this
characterization as one of approximate universality where the approximation is
in the connection between error probability and outage capacity with diversity
and multiplexing gains, respectively. The characterization of approximate
universality is then used to construct new coding schemes as well as to show
optimality of several schemes proposed in the space-time coding literature.
|
cs/0512018
|
DAMNED: A Distributed and Multithreaded Neural Event-Driven simulation
framework
|
cs.NE cs.LG
|
In a Spiking Neural Network (SNN), spike emissions are sparsely and
irregularly distributed both in time and in the network architecture. Since a
current feature of SNNs is a low average activity, efficient implementations of
SNNs are usually based on an Event-Driven Simulation (EDS). On the other hand,
simulations of large scale neural networks can take advantage of distributing
the neurons on a set of processors (either workstation cluster or parallel
computer). This article presents DAMNED, a large scale SNN simulation framework
able to gather the benefits of EDS and parallel computing. Two levels of
parallelism are combined: Distributed mapping of the neural topology, at the
network level, and local multithreaded allocation of resources for simultaneous
processing of events, at the neuron level. Based on the causality of events, a
distributed solution is proposed for solving the complex problem of scheduling
without synchronization barrier.
|
cs/0512019
|
Amazing geometry of genetic space or are genetic algorithms convergent?
|
cs.NE cs.DM cs.SE
|
There is no proof yet of the convergence of Genetic Algorithms. We do not supply
one either. Instead, we present some thoughts and arguments to convince the Reader
that Genetic Algorithms are essentially bound for success. For this purpose, we
consider only the crossover operators, single- or multiple-point, together with
the selection procedure. We also prove that soft selection is superior
to other selection schemes.
|
cs/0512020
|
A Practical Approach to Joint Network-Source Coding
|
cs.IT math.IT
|
We are interested in how best to communicate a real-valued source to a number
of destinations (sinks) over a network with capacity constraints, under a
collective fidelity metric over all the sinks, a problem we call joint
network-source coding. It is demonstrated that multiple description codes along
with proper diversity routing provide a powerful solution to joint
network-source coding. A systematic optimization approach is proposed. It
consists of optimizing the network routing given a multiple description code
and designing optimal multiple description code for the corresponding optimized
routes.
|
cs/0512023
|
Perfect Space-Time Codes with Minimum and Non-Minimum Delay for Any
Number of Antennas
|
cs.IT math.IT
|
Perfect space-time codes were first introduced by Oggier et al. to be the
space-time codes that have full rate, full diversity-gain, non-vanishing
determinant for increasing spectral efficiency, uniform average transmitted
energy per antenna and good shaping of the constellation. These defining
conditions jointly correspond to optimality with respect to the Zheng-Tse D-MG
tradeoff, independent of channel statistics, as well as to near optimality in
maximizing mutual information. All the above traits endow the code with error
performance that is currently unmatched. Yet perfect space-time codes have been
constructed only for 2, 3, 4 and 6 transmit antennas. We construct minimum and
non-minimum delay perfect codes for all channel dimensions.
|
cs/0512024
|
A bound on Grassmannian codes
|
cs.IT math.IT math.MG
|
We give a new asymptotic upper bound on the size of a code in the
Grassmannian space. The bound is better than the upper bounds known previously
in the entire range of distances except very large values.
|
cs/0512025
|
Spectral approach to linear programming bounds on codes
|
cs.IT math.CO math.IT
|
We give new proofs of asymptotic upper bounds of coding theory obtained
within the frame of Delsarte's linear programming method. The proofs rely on
the analysis of eigenvectors of some finite-dimensional operators related to
orthogonal polynomials. The examples of the method considered in the paper
include binary codes, binary constant-weight codes, spherical codes, and codes
in the projective spaces.
|
cs/0512027
|
The Physical Foundation of Human Mind and a New Theory of Investment
|
cs.IT math.IT
|
This paper consists of two parts. In the first part, we develop a new
information theory in which it is no coincidence that information and physical
entropy share the same mathematical formula: information processing is an
adaptation of the mind that aids the search for resources. We then show that
psychological patterns
either reflect the constraints of physical laws or are evolutionary adaptations
to efficiently process information and to increase the chance of survival in
the environment of our evolutionary past. In the second part, we demonstrate
that the new information theory provides the foundation to understand market
behavior. One fundamental result from the information theory is that
information is costly. In general, information with higher value is more
costly. Another fundamental result from the information theory is that the
amount of information one can receive is the amount of information generated
minus equivocation. The level of equivocation, which is the measure of
information asymmetry, is determined by the correlation between the source of
information and the receiver of information. In general, how much information
one can receive depends on the background knowledge of the receiver. The
difference in the cost different investors are willing to pay for information
and the difference in their background knowledge about a particular piece of
information cause heterogeneity in information processing by the investing
public, which is the main reason for the price and volume patterns observed in
the market. Many assumptions in some of the recent models in behavioral finance
can be derived naturally from this theory.
|
cs/0512028
|
Approximately universal optimality over several dynamic and non-dynamic
cooperative diversity schemes for wireless networks
|
cs.IT math.IT
|
In this work we explicitly provide the first ever optimal, with respect to
the Zheng-Tse diversity multiplexing gain (D-MG) tradeoff, cooperative
diversity schemes for wireless relay networks. The schemes are based on
variants of perfect space-time codes and are optimal for any number of users
and all statistically symmetric (and in some cases, asymmetric) fading
distributions.
We deduce that, with respect to the D-MG tradeoff, channel knowledge at the
intermediate relays and infinite delay are unnecessary. We also show that the
non-dynamic selection decode and forward strategy, the non-dynamic amplify and
forward, the non-dynamic receive and forward, the dynamic amplify and forward
and the dynamic receive and forward cooperative diversity strategies allow for
exactly the same D-MG optimal performance.
|
cs/0512029
|
New model for rigorous analysis of LT-codes
|
cs.IT math.IT
|
We present a new model for LT codes which simplifies the analysis of the
error probability of decoding by belief propagation. For any given degree
distribution, we provide the first rigorous expression for the limiting error
probability as the length of the code goes to infinity via recent results in
random hypergraphs [Darling-Norris 2005]. For a code of finite length, we
provide an algorithm for computing the probability of error of the decoder.
This algorithm improves on that of [Karp-Luby-Shokrollahi 2004] by a linear
factor.
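The peeling form of the belief-propagation decoder discussed above can be sketched in a few lines (a toy illustration with made-up symbols, not the paper's model or analysis):

```python
# Peeling decoder for an LT-like code: repeatedly find an output symbol
# with exactly one unresolved input neighbour, recover that input by
# XOR-ing out the already-recovered neighbours, and iterate until no
# degree-one output symbol remains.

def peel(encoded):
    """encoded: list of (set_of_input_indices, xor_value).
    Returns a dict {input_index: value}; partial if decoding stalls."""
    recovered = {}
    progress = True
    while progress:
        progress = False
        for nbrs, val in encoded:
            live = nbrs - recovered.keys()
            if len(live) == 1:             # degree-one output symbol
                (i,) = live
                x = val
                for j in nbrs - {i}:       # strip known neighbours
                    x ^= recovered[j]
                recovered[i] = x
                progress = True
    return recovered

# Three input symbols 5, 9, 12 encoded into three output symbols:
out = peel([({0}, 5), ({0, 1}, 5 ^ 9), ({1, 2}, 9 ^ 12)])
```

Decoding succeeds here because a degree-one symbol is always available; the paper's analysis concerns exactly when this process stalls as the code length grows.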
|
cs/0512030
|
Uncertainty Principles for Signal Concentrations
|
cs.IT math.IT
|
Uncertainty principles for concentration of signals into truncated subspaces
are considered. The ``classic'' uncertainty principle is explored as a special
case of a more general operator framework. The time-bandwidth concentration
problem is shown to be a similar special case. A spatial concentration of radio
signals example is provided, and it is shown that an uncertainty principle
exists for concentration of single-frequency signals for regions in space. We
show that the uncertainty is related to the volumes of the spatial regions.
|
cs/0512032
|
A Software Framework for Vehicle-Infrastructure Cooperative Applications
|
cs.IR
|
A growing category of vehicle-infrastructure cooperative (VIC) applications
requires telematics software components distributed between an
infrastructure-based management center and a number of vehicles. This article
presents an approach based on a software framework, focusing on a Telematic
Management System (TMS), a component suite intended to run inside an
infrastructure-based operations center, in some cases interacting with legacy
systems such as Advanced Traffic Management Systems or Vehicle Relationship
Management. The TMS framework provides support for the modular, flexible
prototyping and implementation of VIC applications. This work has received the
support of the European Commission in the context of the projects REACT and
CyberCars.
|
cs/0512037
|
Evolving Stochastic Learning Algorithm Based on Tsallis Entropic Index
|
cs.NE cs.AI
|
In this paper, inspired by our previous algorithm, which was based on the
theory of Tsallis statistical mechanics, we develop a new evolving stochastic
learning algorithm for neural networks. The new algorithm combines
deterministic and stochastic search steps by employing a different adaptive
stepsize for each network weight, and applies a form of noise that is
characterized by the nonextensive entropic index q, regulated by a weight decay
term. The behavior of the learning algorithm can be made more stochastic or
deterministic depending on the trade-off between the temperature T and the q
values. This is achieved by introducing a formula that defines a
time-dependent relationship between these two important learning parameters.
Our experimental study verifies that there are indeed improvements in the
convergence speed of this new evolving stochastic learning algorithm, which
makes learning faster than using the original Hybrid Learning Scheme (HLS). In
addition, experiments are conducted to explore the influence of the entropic
index q and temperature T on the convergence speed and stability of the
proposed method.
|
cs/0512038
|
Capacity of Differential versus Non-Differential Unitary Space-Time
Modulation for MIMO channels
|
cs.IT cond-mat.stat-mech math-ph math.IT math.MP
|
Differential Unitary Space-Time Modulation (DUSTM) and its earlier
nondifferential counterpart, USTM, permit high-throughput MIMO communication
entirely without the possession of channel state information (CSI) by either
the transmitter or the receiver. For an isotropically random unitary input we
obtain the exact closed-form expression for the probability density of the
DUSTM received signal, which permits the straightforward Monte Carlo evaluation
of its mutual information. We compare the performance of DUSTM and USTM through
both numerical computations of mutual information and through the analysis of
low- and high-SNR asymptotic expressions. In our comparisons the symbol
durations of the equivalent unitary space-time signals are both equal to T, as
are the number of receive antennas N. For DUSTM the number of transmit antennas
is constrained by the scheme to be M = T/2, while USTM has no such constraint.
If DUSTM and USTM utilize the same number of transmit antennas, then at high
SNRs the normalized mutual information of the differential and the
nondifferential schemes expressed in bits/sec/Hz are asymptotically equal, with
the differential scheme performing somewhat better, while at low SNRs the
normalized mutual information of DUSTM is asymptotically twice the normalized
mutual information of USTM. If, instead, USTM utilizes the optimum number of
transmit antennas, then USTM can outperform DUSTM at sufficiently low SNRs.
|
cs/0512045
|
Branch-and-Prune Search Strategies for Numerical Constraint Solving
|
cs.AI
|
When solving numerical constraints such as nonlinear equations and
inequalities, solvers often exploit pruning techniques, which remove redundant
value combinations from the domains of variables, at pruning steps. To find the
complete solution set, most of these solvers alternate the pruning steps with
branching steps, which split each problem into subproblems. This forms the
so-called branch-and-prune framework, well known among the approaches for
solving numerical constraints. The basic branch-and-prune search strategy that
uses domain bisections in place of the branching steps is called the bisection
search. In general, the bisection search works well when (i) the solutions
are isolated, but it can be improved further when (ii) there are continua
of solutions (this often occurs when inequalities are involved). In this paper,
we propose a new branch-and-prune search strategy along with several variants,
which not only allow yielding better branching decisions in the latter case,
but also work as well as the bisection search does in the former case. These
new search algorithms enable us to employ various pruning techniques in the
construction of inner and outer approximations of the solution set. Our
experiments show that these algorithms speed up the solving process, often by
one order of magnitude or more, when solving problems with continua of
solutions, while keeping the same performance as the bisection search when the
solutions are isolated.
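The basic branch-and-prune loop with bisection branching can be sketched as follows (a minimal illustration on the single equation x^2 - 2 = 0 over [0, 2], with a hand-written interval evaluation standing in for a real pruning operator; this is not the authors' algorithm):

```python
# Branch-and-prune with bisection: prune boxes whose interval image of
# f excludes zero, bisect the rest, and accept boxes below a width
# threshold as an outer approximation of the solution set.

def f_range(lo, hi):
    """Interval image of f(x) = x^2 - 2 on [lo, hi], assuming lo >= 0
    (f is increasing there, so the endpoints give the exact range)."""
    return lo * lo - 2, hi * hi - 2

def branch_and_prune(lo, hi, eps=1e-6):
    boxes, solutions = [(lo, hi)], []
    while boxes:
        a, b = boxes.pop()
        flo, fhi = f_range(a, b)
        if flo > 0 or fhi < 0:        # prune: no root in this box
            continue
        if b - a < eps:               # small enough: accept the box
            solutions.append((a, b))
            continue
        m = (a + b) / 2               # branch: bisect the box
        boxes.extend([(a, m), (m, b)])
    return solutions

sols = branch_and_prune(0.0, 2.0)
```

Here the accepted boxes enclose sqrt(2); the search strategies proposed in the paper replace the blind bisection step with better-informed branching decisions.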
|
cs/0512047
|
Processing Uncertainty and Indeterminacy in Information Systems success
mapping
|
cs.AI
|
IS success is a complex concept, and its evaluation is complicated,
unstructured and not readily quantifiable. Numerous scientific publications
address the issue of success in the IS field as well as in other fields, but
little effort has been made to process indeterminacy and uncertainty in
success research. This paper presents a formal method for mapping success
using the Neutrosophic Success Map, an emerging tool for processing
indeterminacy and uncertainty in success research. EIS success has been
analyzed using this tool.
|
cs/0512048
|
Spatial Precoder Design for Space-Time Coded MIMO Systems: Based on
Fixed Parameters of MIMO Channels
|
cs.IT math.IT
|
In this paper, we introduce the novel use of linear spatial precoding based
on fixed and known parameters of multiple-input multiple-output (MIMO) channels
to improve the performance of space-time coded MIMO systems. We derive linear
spatial precoding schemes for both coherent (channel is known at the receiver)
and non-coherent (channel is unknown at the receiver) space-time coded MIMO
systems. Antenna spacing and antenna placement (geometry) are considered as
fixed parameters of MIMO channels, which are readily known at the transmitter.
These precoding schemes exploit the antenna placement information at both ends
of the MIMO channel to ameliorate the effect of non-ideal antenna placement on
the performance of space-time coded systems. In these schemes, the precoder is
fixed for given transmit and receive antenna configurations, and the
transmitter does not require any feedback of channel state information (partial
or full) from the receiver. Closed-form solutions for both precoding schemes are
presented for systems with up to three receiver antennas. A generalized method
is proposed for more than three receiver antennas. We use the coherent
space-time block codes (STBC) and differential space-time block codes to
analyze the performance of proposed precoding schemes. Simulation results show
that at low SNRs, both precoders give significant performance improvement over
a non-precoded system for small antenna aperture sizes.
|
cs/0512050
|
Preference Learning in Terminology Extraction: A ROC-based approach
|
cs.LG
|
A key data preparation step in Text Mining, Term Extraction selects the
terms, or collocations of words, attached to specific concepts. In this paper,
the task of extracting relevant collocations is achieved through a supervised
learning algorithm, exploiting a few collocations manually labelled as
relevant/irrelevant. The candidate terms are described by 13 standard
statistical criteria. From these examples, an evolutionary learning
algorithm termed Roger, based on the optimization of the Area under the ROC
curve criterion, extracts an order on the candidate terms. The robustness of
the approach is demonstrated on two real-world domain applications, considering
different domains (biology and human resources) and different languages
(English and French).
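The Area under the ROC curve criterion that such a learner optimizes can be computed directly as the fraction of correctly ordered (relevant, irrelevant) pairs (a minimal sketch with hypothetical scores, not the Roger system itself):

```python
# AUC as a ranking statistic: the probability that a randomly chosen
# relevant candidate is scored above a randomly chosen irrelevant one,
# counting ties as one half.

def auc(scores, labels):
    """AUC of `scores` against binary `labels` (1 = relevant)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores for five candidate terms:
example = auc([0.9, 0.8, 0.4, 0.3, 0.1], [1, 1, 0, 1, 0])
```

For the five hypothetical terms above, five of the six (relevant, irrelevant) pairs are ordered correctly, giving an AUC of 5/6.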
|
cs/0512053
|
Online Learning and Resource-Bounded Dimension: Winnow Yields New Lower
Bounds for Hard Sets
|
cs.CC cs.LG
|
We establish a relationship between the online mistake-bound model of
learning and resource-bounded dimension. This connection is combined with the
Winnow algorithm to obtain new results about the density of hard sets under
adaptive reductions. This improves previous work of Fu (1995) and Lutz and Zhao
(2000), and solves one of Lutz and Mayordomo's "Twelve Problems in
Resource-Bounded Measure" (1999).
|
cs/0512054
|
Irreducible Frequent Patterns in Transactional Databases
|
cs.DS cs.DB
|
Irreducible frequent patterns (IFPs) are introduced for transactional
databases. An IFP is a frequent pattern (FP), (x1,x2,...,xn), whose
probability P(x1,x2,...,xn) cannot be represented as a product of the
probabilities of two (or more) other FPs of smaller length. We have
developed an algorithm for searching for IFPs in transactional databases. We
argue that IFPs are useful tools for characterizing transactional databases
and may have important applications to bio-systems, including immune systems,
and for improving vaccination strategies. The effectiveness of the IFP
approach is illustrated in application to a classification problem.
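The defining factorization check for an IFP can be sketched on a toy transactional database (an illustration of the definition only, not the authors' search algorithm):

```python
# A pattern is irreducible if its empirical probability does not factor
# into the product of the probabilities of any two complementary
# sub-patterns.

from itertools import combinations

def support(db, pattern):
    """Empirical probability of `pattern` (a set of items) in `db`."""
    return sum(pattern <= t for t in db) / len(db)

def is_irreducible(db, pattern, tol=1e-9):
    p = support(db, pattern)
    items = sorted(pattern)
    for k in range(1, len(items)):
        for left in combinations(items, k):
            right = pattern - set(left)
            if abs(p - support(db, set(left)) * support(db, right)) < tol:
                return False           # factors: pattern is reducible
    return True

# Toy databases: in the first, 'a' and 'b' always co-occur (their joint
# probability cannot factor); in the second, they occur independently.
perfectly_correlated = [{"a", "b"}, {"a", "b"}, set(), set()]
independent = [{"a", "b"}, {"a"}, {"b"}, set()]
```

On the correlated database the pattern {a, b} is irreducible; on the independent one it factors as P(a)P(b) and is not.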
|
cs/0512059
|
Competing with wild prediction rules
|
cs.LG
|
We consider the problem of on-line prediction competitive with a benchmark
class of continuous but highly irregular prediction rules. It is known that if
the benchmark class is a reproducing kernel Hilbert space, there exists a
prediction algorithm whose average loss over the first N examples does not
exceed the average loss of any prediction rule in the class plus a "regret
term" of O(N^(-1/2)). The elements of some natural benchmark classes, however,
are so irregular that these classes are not Hilbert spaces. In this paper we
develop Banach-space methods to construct a prediction algorithm with a regret
term of O(N^(-1/p)), where p is in [2,infty) and p-2 reflects the degree to
which the benchmark class fails to be a Hilbert space.
|
cs/0512062
|
Evolino for recurrent support vector machines
|
cs.NE
|
Traditional Support Vector Machines (SVMs) need pre-wired finite time windows
to predict and classify time series. They do not have an internal state
necessary to deal with sequences involving arbitrary long-term dependencies.
Here we introduce a new class of recurrent, truly sequential SVM-like devices
with internal adaptive states, trained by a novel method called EVOlution of
systems with KErnel-based outputs (Evoke), an instance of the recent Evolino
class of methods. Evoke evolves recurrent neural networks to detect and
represent temporal dependencies while using quadratic programming/support
vector regression to produce precise outputs. Evoke is the first SVM-based
mechanism learning to classify a context-sensitive language. It also
outperforms recent state-of-the-art gradient-based recurrent neural networks
(RNNs) on various time series prediction tasks.
|
cs/0512063
|
Complex Random Vectors and ICA Models: Identifiability, Uniqueness and
Separability
|
cs.IT cs.CE cs.IR cs.LG math.IT
|
In this paper the conditions for identifiability, separability and uniqueness
of linear complex valued independent component analysis (ICA) models are
established. These results extend the well-known conditions for solving
real-valued ICA problems to complex-valued models. Relevant properties of
complex random vectors are described in order to extend the Darmois-Skitovich
theorem for complex-valued models. This theorem is used to construct a proof of
a theorem for each of the above ICA model concepts. Both circular and
noncircular complex random vectors are covered. Examples clarifying the above
concepts are presented.
|
cs/0512066
|
On the Asymptotic Weight and Stopping Set Distribution of Regular LDPC
Ensembles
|
cs.IT math.IT
|
We estimate the variance of the weight and stopping set distributions of
regular LDPC ensembles. Using this estimate and the second moment method, we
obtain bounds on the probability that a code chosen at random from a regular
LDPC ensemble has its weight distribution and stopping set distribution close
to the respective ensemble averages. We are able to show that a large fraction
of the total number of codes have their weight and stopping set distributions
close to the average.
|
cs/0512069
|
Reconstructing Websites for the Lazy Webmaster
|
cs.IR cs.CY
|
Backup or preservation of websites is often not considered until after a
catastrophic event has occurred. In the face of complete website loss, "lazy"
webmasters or concerned third parties may be able to recover some of their
website from the Internet Archive. Other pages may also be salvaged from
commercial search engine caches. We introduce the concept of "lazy
preservation": digital preservation performed as a result of the normal
operations of the Web infrastructure (search engines and caches). We present
Warrick, a tool to automate the process of website reconstruction from the
Internet Archive, Google, MSN and Yahoo. Using Warrick, we have reconstructed
24 websites of varying sizes and composition to demonstrate the feasibility and
limitations of website reconstruction from the public Web infrastructure. To
measure Warrick's window of opportunity, we have profiled the time required for
new Web resources to enter and leave search engine caches.
|
cs/0512071
|
"Going back to our roots": second generation biocomputing
|
cs.AI cs.NE
|
Researchers in the field of biocomputing have, for many years, successfully
"harvested and exploited" the natural world for inspiration in developing
systems that are robust, adaptable and capable of generating novel and even
"creative" solutions to human-defined problems. However, in this position paper
we argue that the time has now come for a reassessment of how we exploit
biology to generate new computational systems. Previous solutions (the "first
generation" of biocomputing techniques), whilst reasonably effective, are crude
analogues of actual biological systems. We believe that a new, inherently
inter-disciplinary approach is needed for the development of the emerging
"second generation" of bio-inspired methods. This new modus operandi will
require much closer interaction between the engineering and life sciences
communities, as well as a bidirectional flow of concepts, applications and
expertise. We support our argument by examining, in this new light, three
existing areas of biocomputing (genetic programming, artificial immune systems
and evolvable hardware), as well as an emerging area (natural genetic
engineering) which may provide useful pointers as to the way forward.
|
cs/0512074
|
Analytical Bounds on Maximum-Likelihood Decoded Linear Codes with
Applications to Turbo-Like Codes: An Overview
|
cs.IT math.IT
|
Upper and lower bounds on the error probability of linear codes under
maximum-likelihood (ML) decoding are briefly surveyed and applied to ensembles
of codes on graphs. For upper bounds, the focus is on Gallager bounding
techniques and their relation to a variety of other reported bounds. Within the
class of lower bounds, we address bounds based on de Caen's inequality and
their improvements, sphere-packing bounds, and information-theoretic bounds on
the bit error probability of codes defined on graphs. A comprehensive overview
is provided in a monograph by the authors which is currently in preparation.
|
cs/0512075
|
Performance versus Complexity Per Iteration for Low-Density Parity-Check
Codes: An Information-Theoretic Approach
|
cs.IT math.IT
|
The paper focuses on the tradeoff between performance and decoding
complexity per iteration for LDPC codes, in terms of their gap (in rate) to
capacity. The study of this tradeoff is done via information-theoretic bounds,
which also give an indication of the sub-optimality of message-passing
iterative decoding algorithms (as compared to optimal ML decoding). The bounds
are generalized for parallel channels, and are applied to ensembles of
punctured LDPC codes where both intentional and random puncturing are
addressed. This work suggests an improvement in the tightness of some
information-theoretic bounds which were previously derived by Burshtein et al.
and by Sason and Urbanke.
|
cs/0512076
|
On Achievable Rates and Complexity of LDPC Codes for Parallel Channels:
Information-Theoretic Bounds and Applications
|
cs.IT math.IT
|
The paper presents bounds on the achievable rates and the decoding complexity
of low-density parity-check (LDPC) codes. It is assumed that the communication
of these codes takes place over statistically independent parallel channels
where these channels are memoryless, binary-input and output-symmetric (MBIOS).
The bounds are applied to punctured LDPC codes. A diagram concludes our
discussion by showing interconnections between the theorems in this paper and
some previously reported results.
|
cs/0512078
|
Graph-Cover Decoding and Finite-Length Analysis of Message-Passing
Iterative Decoding of LDPC Codes
|
cs.IT math.IT
|
The goal of the present paper is the derivation of a framework for the
finite-length analysis of message-passing iterative decoding of low-density
parity-check codes. To this end we introduce the concept of graph-cover
decoding. Whereas in maximum-likelihood decoding all codewords in a code are
competing to be the best explanation of the received vector, under graph-cover
decoding all codewords in all finite covers of a Tanner graph representation of
the code are competing to be the best explanation. We are interested in
graph-cover decoding because it is a theoretical tool that can be used to show
connections between linear programming decoding and message-passing iterative
decoding. Namely, on the one hand it turns out that graph-cover decoding is
essentially equivalent to linear programming decoding. On the other hand,
because iterative, locally operating decoding algorithms like message-passing
iterative decoding cannot distinguish the underlying Tanner graph from any
covering graph, graph-cover decoding can serve as a model to explain the
behavior of message-passing iterative decoding. Understanding the behavior of
graph-cover decoding is tantamount to understanding the so-called fundamental
polytope. Therefore, we give some characterizations of this polytope and
explain its relation to earlier concepts that were introduced to understand the
behavior of message-passing iterative decoding for finite-length codes.
|
cs/0512079
|
An invariant bayesian model selection principle for gaussian data in a
sparse representation
|
cs.IT math.IT
|
We develop a code length principle which is invariant to the choice of
parameterization on the model distributions. An invariant approximation formula
for easy computation of the marginal distribution is provided for Gaussian
likelihood models. We provide invariant estimators of the model parameters and
formulate conditions under which these estimators are essentially a posteriori
unbiased for Gaussian models. An upper bound on the coarseness of
discretization on the model parameters is deduced. We introduce a
discrimination measure between probability distributions and use it to
construct probability distributions on model classes. The total code length is
shown to equal the NML code length of Rissanen to within an additive constant
when choosing Jeffreys prior distribution on the model parameters together with
a particular choice of prior distribution on the model classes. Our model
selection principle is applied to a Gaussian estimation problem for data in a
wavelet representation, and its performance is tested and compared to
alternative wavelet-based estimation methods in numerical experiments.
cs/0512084
|
Understanding physics from interconnected data
|
cs.CV
|
Metal melting on release after an explosion is a physical system far from
equilibrium. A complete physical model of this system does not exist, because
many interrelated effects have to be considered. A general methodology needs to
be developed to describe and understand the physical phenomena involved.
The high noise of the data, the motion blur of images, the high degree of
uncertainty due to the different types of sensors, and the information
entangled and hidden inside the noisy images make reasoning about the physical
processes very difficult. Major problems include proper information extraction
and the problem of reconstruction, as well as prediction of the missing data.
In this paper, several techniques addressing the first problem are given,
building the basis for tackling the second problem.
|
cs/0512085
|
Analyzing and Visualizing the Semantic Coverage of Wikipedia and Its
Authors
|
cs.IR
|
This paper presents a novel analysis and visualization of English Wikipedia
data. Our specific interest is the analysis of basic statistics, the
identification of the semantic structure and age of the categories in this free
online encyclopedia, and the content coverage of its highly productive authors.
The paper starts with an introduction of Wikipedia and a review of related
work. We then introduce a suite of measures and approaches to analyze and map
the semantic structure of Wikipedia. The results show that co-occurrences of
categories within individual articles have a power-law distribution, and when
mapped reveal the nicely clustered semantic structure of Wikipedia. The results
also reveal the content coverage of the articles' authors, although the roles
these authors play are as varied as the authors themselves. We conclude with a
discussion of major results and planned future work.
|
cs/0512087
|
Fundamental Limits and Scaling Behavior of Cooperative Multicasting in
Wireless Networks
|
cs.IT cs.NI math.IT
|
A framework is developed for analyzing capacity gains from user cooperation
in slow fading wireless networks when the number of nodes (network size) is
large. The framework is illustrated for the case of a simple multipath-rich
Rayleigh fading channel model. Both unicasting (one source and one destination)
and multicasting (one source and several destinations) scenarios are
considered. We introduce a meaningful notion of Shannon capacity for such
systems, evaluate this capacity as a function of signal-to-noise ratio (SNR),
and develop a simple two-phase cooperative network protocol that achieves it.
We observe that the resulting capacity is the same for both unicasting and
multicasting, but show that the network size required to achieve any target
error probability is smaller for unicasting than for multicasting. Finally, we
introduce the notion of a network ``scaling exponent'' to quantify the rate of
decay of error probability with network size as a function of the targeted
fraction of the capacity. This exponent provides additional insights to system
designers by enabling a finer grain comparison of candidate cooperative
transmission protocols in even moderately sized networks.
|
cs/0512093
|
Construction of Turbo Code Interleavers from 3-Regular Hamiltonian
Graphs
|
cs.IT math.IT
|
In this letter we present a new construction of interleavers for turbo codes
from 3-regular Hamiltonian graphs. The interleavers can be generated using a
few parameters, which can be selected in such a way that the girth of the
interleaver graph (IG) becomes large, inducing a high summary distance. The
size of the search space for these parameters is derived. The proposed
interleavers themselves work as their de-interleavers.
|
cs/0512097
|
Gaussian Channels with Feedback: Optimality, Fundamental Limitations,
and Connections of Communication, Estimation, and Control
|
cs.IT math.IT
|
Gaussian channels with memory and with noiseless feedback have been widely
studied in the information theory literature. However, a coding scheme to
achieve the feedback capacity is not available. In this paper, a coding scheme
is proposed to achieve the feedback capacity for Gaussian channels. The coding
scheme essentially implements the celebrated Kalman filter algorithm, and is
equivalent to an estimation system over the same channel without feedback. It
reveals that the achievable information rate of the feedback communication
system can be alternatively given by the decay rate of the Cramer-Rao bound of
the associated estimation system. Thus, combined with the control theoretic
characterizations of feedback communication (proposed by Elia), this implies
that the fundamental limitations in feedback communication, estimation, and
control coincide. This leads to a unifying perspective that integrates
information, estimation, and control. We also establish the optimality of the
Kalman filtering in the sense of information transmission, a supplement to the
optimality of Kalman filtering in the sense of information processing proposed
by Mitter and Newton. In addition, the proposed coding scheme generalizes the
Schalkwijk-Kailath codes and reduces the coding complexity and coding delay.
The construction of the coding scheme amounts to solving a finite-dimensional
optimization problem. A simplification to the optimal stationary input
distribution developed by Yang, Kavcic, and Tatikonda is also obtained. The
results are verified in a numerical example.
|
cs/0512099
|
Mathematical Models in Schema Theory
|
cs.AI
|
In this paper, a mathematical schema theory is developed. This theory has
three roots: brain theory schemas, grid automata, and block-schemas. In Section
2 of this paper, elements of the theory of grid automata necessary for the
mathematical schema theory are presented. In Section 3, elements of brain
theory necessary for the mathematical schema theory are presented. In Section
4, other types of schemas are considered. In Section 5, the mathematical schema
theory is developed. The achieved level of schema representation allows one to
model by mathematical tools virtually any type of schemas considered before,
including schemas in neurophysiology, psychology, computer science, Internet
technology, databases, logic, and mathematics.
|
cs/0512100
|
The logic of interactive Turing reduction
|
cs.LO cs.AI math.LO
|
The paper gives a soundness and completeness proof for the implicative
fragment of intuitionistic calculus with respect to the semantics of
computability logic, which understands intuitionistic implication as
interactive algorithmic reduction. This concept -- more precisely, the
associated concept of reducibility -- is a generalization of Turing
reducibility from the traditional, input/output sorts of problems to
computational tasks of arbitrary degrees of interactivity. See
http://www.cis.upenn.edu/~giorgi/cl.html for a comprehensive online source on
computability logic.
|
cs/0512101
|
On the Complexity of finding Stopping Distance in Tanner Graphs
|
cs.IT cs.CC math.IT
|
Two decision problems related to the computation of stopping sets in Tanner
graphs are shown to be NP-complete. NP-hardness of the problem of computing the
stopping distance of a Tanner graph follows as a consequence.
|
cs/0512102
|
Statistical Parameters of the Novel "Perekhresni stezhky" ("The
Cross-Paths") by Ivan Franko
|
cs.CL
|
In the paper, complex statistical characteristics of a Ukrainian novel are
given for the first time. The distribution of word-forms with respect to their
size is studied. The linguistic laws by Zipf-Mandelbrot and Altmann-Menzerath
are analyzed.
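The Zipf-Mandelbrot analysis mentioned above can be illustrated with a minimal rank-frequency sketch (not taken from the paper; the word list and the log-log least-squares fit are illustrative assumptions):

```python
from collections import Counter
import math

def zipf_rank_frequency(words):
    """Return (rank, frequency) pairs for word forms, most frequent first."""
    counts = Counter(words)
    freqs = sorted(counts.values(), reverse=True)
    return list(enumerate(freqs, start=1))

def zipf_slope(rank_freq):
    """Least-squares slope of log(frequency) vs log(rank).
    For Zipf-like data the slope is close to -1."""
    xs = [math.log(r) for r, _ in rank_freq]
    ys = [math.log(f) for _, f in rank_freq]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

On a toy corpus whose word frequencies decay roughly as 1/rank, the fitted slope comes out near -1, the classical Zipf exponent.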
|
cs/0601001
|
Truecluster: robust scalable clustering with model selection
|
cs.AI
|
Data-based classification is fundamental to most branches of science. While
recent years have brought enormous progress in various areas of statistical
computing and clustering, some general challenges in clustering remain: model
selection, robustness, and scalability to large datasets. We consider the
important problem of deciding on the optimal number of clusters, given an
arbitrary definition of space and clusteriness. We show how to construct a
cluster information criterion that allows objective model selection. Differing
from other approaches, our truecluster method does not require specific
assumptions about underlying distributions, dissimilarity definitions or
cluster models. Truecluster puts arbitrary clustering algorithms into a generic
unified (sampling-based) statistical framework. It is scalable to big datasets
and provides robust cluster assignments and case-wise diagnostics. Truecluster
will make clustering more objective, allows for automation, and will save time
and costs. Free R software is available.
|
cs/0601004
|
Integration of navigation and action selection functionalities in a
computational model of cortico-basal ganglia-thalamo-cortical loops
|
cs.AI cs.RO
|
This article describes a biomimetic control architecture affording an animat
both action selection and navigation functionalities. It satisfies the survival
constraint of an artificial metabolism and supports several complementary
navigation strategies. It builds upon an action selection model based on the
basal ganglia of the vertebrate brain, using two interconnected cortico-basal
ganglia-thalamo-cortical loops: a ventral one concerned with appetitive actions
and a dorsal one dedicated to consummatory actions. The performances of the
resulting model are evaluated in simulation. The experiments assess the
prolonged survival permitted by the use of high level navigation strategies and
the complementarity of navigation strategies in dynamic environments. The
correctness of the behavioral choices in situations of antagonistic or
synergetic internal states are also tested. Finally, the modelling choices are
discussed with regard to their biomimetic plausibility, while the experimental
results are estimated in terms of animat adaptivity.
|
cs/0601005
|
Analyzing language development from a network approach
|
cs.CL
|
In this paper we propose some new measures of language development based on
network analyses, an approach inspired by the recent surge of interest in
network studies of many real-world systems. Children's and care-takers' speech data
from a longitudinal study are represented as a series of networks, word forms
being taken as nodes and collocation of words as links. Measures on the
properties of the networks, such as size, connectivity, hub and authority
analyses, etc., allow us to make quantitative comparison so as to reveal
different paths of development. For example, the asynchrony of development in
network size and average degree suggests that children cannot be simply
classified as early talkers or late talkers by one or two measures. Children
follow different paths in a multi-dimensional space. They may develop faster in
one dimension but slower in another dimension. The network approach requires
little preprocessing of words and analyses on sentence structures, and the
characteristics of words and their usage emerge from the network and are
independent of any grammatical presumptions. We show that the change of the two
articles "the" and "a" in their roles as important nodes in the network
reflects the progress of children's syntactic development: the two articles
often start in children's networks as hubs and later shift to authorities,
while they are authorities constantly in the adult's networks. The network
analyses provide a new approach to study language development, and at the same
time language development also presents a rich area for network theories to
explore.
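A minimal sketch of the collocation-network construction described above, with word forms as nodes and adjacent words as links (the whitespace tokenization and the size/average-degree measures are simplifying assumptions, not the authors' exact procedure):

```python
from collections import defaultdict

def build_collocation_network(utterances):
    """Word forms are nodes; words adjacent in an utterance are linked."""
    adj = defaultdict(set)
    for utt in utterances:
        words = utt.split()
        for a, b in zip(words, words[1:]):
            if a != b:
                adj[a].add(b)
                adj[b].add(a)
        for w in words:
            adj[w]  # ensure isolated word forms still appear as nodes
    return adj

def network_measures(adj):
    """Network size (node count) and average degree."""
    size = len(adj)
    avg_degree = sum(len(n) for n in adj.values()) / size if size else 0.0
    return size, avg_degree
```

Tracking these two measures over a series of such networks built from longitudinal speech samples is one way to expose the asynchrony of development the abstract describes.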
|
cs/0601006
|
On the Joint Source-Channel Coding Error Exponent for Discrete
Memoryless Systems: Computation and Comparison with Separate Coding
|
cs.IT math.IT
|
We investigate the computation of Csiszar's bounds for the joint
source-channel coding (JSCC) error exponent, E_J, of a communication system
consisting of a discrete memoryless source and a discrete memoryless channel.
We provide equivalent expressions for these bounds and derive explicit formulas
for the rates where the bounds are attained. These equivalent representations
can be readily computed for arbitrary source-channel pairs via Arimoto's
algorithm. When the channel's distribution satisfies a symmetry property, the
bounds admit closed-form parametric expressions. We then use our results to
provide a systematic comparison between the JSCC error exponent E_J and the
tandem coding error exponent E_T, which applies if the source and channel are
separately coded. It is shown that E_T <= E_J <= 2E_T. We establish conditions
for which E_J > E_T and for which E_J = 2E_T. Numerical examples indicate that
E_J is close to 2E_T for many source-channel pairs. This gain translates into a
power saving larger than 2 dB for a binary source transmitted over additive
white Gaussian noise channels and Rayleigh fading channels with finite output
quantization. Finally, we study the computation of the lossy JSCC error
exponent under the Hamming distortion measure.
|
cs/0601007
|
The necessity and sufficiency of anytime capacity for stabilization of a
linear system over a noisy communication link Part I: scalar systems
|
cs.IT math.IT
|
We review how Shannon's classical notion of capacity is not enough to
characterize a noisy communication channel if the channel is intended to be
used as part of a feedback loop to stabilize an unstable scalar linear system.
While classical capacity is not enough, another sense of capacity (parametrized
by reliability) called ``anytime capacity'' is shown to be necessary for the
stabilization of an unstable process. The required rate is given by the log of
the unstable system gain and the required reliability comes from the sense of
stability desired. A consequence of this necessity result is a sequential
generalization of the Schalkwijk/Kailath scheme for communication over the AWGN
channel with feedback.
In cases of sufficiently rich information patterns between the encoder and
decoder, adequate anytime capacity is also shown to be sufficient for there to
exist a stabilizing controller. These sufficiency results are then generalized
to cases with noisy observations, delayed control actions, and without any
explicit feedback between the observer and the controller. Both necessary and
sufficient conditions are extended to continuous time systems as well. We close
with comments discussing a hierarchy of difficulty for communication problems
and how these results establish where stabilization problems sit in that
hierarchy.
|
cs/0601009
|
Gaussian Fading is the Worst Fading
|
cs.IT math.IT
|
The capacity of peak-power limited, single-antenna, non-coherent, flat-fading
channels with memory is considered. The emphasis is on the capacity pre-log,
i.e., on the limiting ratio of channel capacity to the logarithm of the
signal-to-noise ratio (SNR), as the SNR tends to infinity. It is shown that,
among all stationary and ergodic fading processes of a given spectral
distribution function whose law has no mass point at zero, the Gaussian process
gives rise to the smallest pre-log.
|
cs/0601012
|
Product Multicommodity Flow in Wireless Networks
|
cs.IT math.IT
|
We provide a tight approximate characterization of the $n$-dimensional
product multicommodity flow (PMF) region for a wireless network of $n$ nodes.
Separate characterizations in terms of the spectral properties of appropriate
network graphs are obtained in both an information theoretic sense and for a
combinatorial interference model (e.g., Protocol model). These provide an inner
approximation to the $n^2$ dimensional capacity region. These results answer
the following questions which arise naturally from previous work: (a) What is
the significance of $1/\sqrt{n}$ in the scaling laws for the Protocol
interference model obtained by Gupta and Kumar (2000)? (b) Can we obtain a
tight approximation to the "maximum supportable flow" for node distributions
more general than the geometric random distribution, traffic models other than
randomly chosen source-destination pairs, and under very general assumptions on
the channel fading model?
We first establish that the random source-destination model is essentially a
one-dimensional approximation to the capacity region, and a special case of
product multi-commodity flow. Building on previous results, for a combinatorial
interference model given by a network and a conflict graph, we relate the
product multicommodity flow to the spectral properties of the underlying graphs
resulting in computational upper and lower bounds. For the more interesting
random fading model with additive white Gaussian noise (AWGN), we show that the
scaling laws for PMF can again be tightly characterized by the spectral
properties of appropriately defined graphs. As an implication, we obtain
computationally efficient upper and lower bounds on the PMF for any wireless
network with a guaranteed approximation factor.
|
cs/0601017
|
Weighted Norms of Ambiguity Functions and Wigner Distributions
|
cs.IT math.IT quant-ph
|
In this article new bounds on weighted p-norms of ambiguity functions and
Wigner functions are derived. Such norms occur frequently in several areas of
physics and engineering. In pulse optimization for Weyl--Heisenberg signaling
in wide-sense stationary uncorrelated scattering channels for example it is a
key step to find the optimal waveforms for a given scattering statistics which
is a problem also well known in radar and sonar waveform optimizations. The
same situation arises in quantum information processing and optical
communication when optimizing pure quantum states for communicating in bosonic
quantum channels, i.e. find optimal channel input states maximizing the pure
state channel fidelity. Due to the non-convex nature of this problem the
optimum and the maximizers themselves are in general difficult to find,
numerically and analytically. Therefore upper bounds on the achievable
performance, which this contribution provides, are important. Based on a result due to
E. Lieb, the main theorem states a new upper bound which is independent of the
waveforms and becomes tight only for Gaussian weights and waveforms. A
discussion of this particularly important case, which tightens recent results on
Gaussian quantum fidelity and coherent states, will be given. Another bound is
presented for the case where scattering is determined only by some arbitrary
region in phase space.
|
cs/0601022
|
On the Fading Number of Multiple-Input Single-Output Fading Channels
with Memory
|
cs.IT math.IT
|
We derive new upper and lower bounds on the fading number of multiple-input
single-output (MISO) fading channels of general (not necessarily Gaussian)
regular law with spatial and temporal memory. The fading number is the second
term, after the double-logarithmic term, of the high signal-to-noise ratio
(SNR) expansion of channel capacity.
In case of an isotropically distributed fading vector it is proven that the
upper and lower bound coincide, i.e., the general MISO fading number with
memory is known precisely.
The upper and lower bounds show that a type of beam-forming is asymptotically
optimal.
|
cs/0601023
|
Efficient Convergent Maximum Likelihood Decoding on Tail-Biting
Trellises
|
cs.IT math.IT
|
An algorithm for exact maximum likelihood (ML) decoding on tail-biting
trellises is presented, which exhibits very good average case behavior. An
approximate variant is proposed, whose simulated performance is observed to be
virtually indistinguishable from the exact one at all values of signal to noise
ratio, and which effectively performs computations equivalent to at most two
rounds on the tail-biting trellis. The approximate algorithm is analyzed, and
the conditions under which its output is different from the ML output are
deduced. The results of simulations on an AWGN channel for the exact and
approximate algorithms on the 16 state tail-biting trellis for the (24,12)
Extended Golay Code, and tail-biting trellises for two rate 1/2 convolutional
codes with memories of 4 and 6 respectively, are reported. An advantage of our
algorithms is that they do not suffer from the effects of limit cycles or the
presence of pseudocodewords.
|
cs/0601028
|
Superimposed Coded and Uncoded Transmissions of a Gaussian Source over
the Gaussian Channel
|
cs.IT math.IT
|
We propose to send a Gaussian source over an average-power limited additive
white Gaussian noise channel by transmitting a linear combination of the source
sequence and the result of its quantization using a high dimensional Gaussian
vector quantizer. We show that, irrespective of the rate of the vector
quantizer (assumed to be fixed and smaller than the channel's capacity), this
transmission scheme is asymptotically optimal (as the quantizer's dimension
tends to infinity) under the mean squared-error fidelity criterion. This
generalizes the classical result of Goblick about the optimality of scaled
uncoded transmission, which corresponds to choosing the rate of the vector
quantizer as zero, and the classical source-channel separation approach, which
corresponds to choosing the rate of the vector quantizer arbitrarily close to
the capacity of the channel.
|
cs/0601029
|
Sending a Bi-Variate Gaussian Source over a Gaussian MAC
|
cs.IT math.IT
|
We consider a problem where a memoryless bi-variate Gaussian source is to be
transmitted over an additive white Gaussian multiple-access channel with two
transmitting terminals and one receiving terminal. The first transmitter only
sees the first source component and the second transmitter only sees the second
source component. We are interested in the pair of mean squared-error
distortions at which the receiving terminal can reproduce each of the source
components.
It is demonstrated that in the symmetric case, below a certain
signal-to-noise ratio (SNR) threshold, which is determined by the source
correlation, uncoded communication is optimal. For SNRs above this threshold we
present outer and inner bounds on the achievable distortions.
|
cs/0601031
|
Divide-and-Evolve: a New Memetic Scheme for Domain-Independent Temporal
Planning
|
cs.AI
|
An original approach, termed Divide-and-Evolve, is proposed to hybridize
Evolutionary Algorithms (EAs) with Operational Research (OR) methods in the
domain of Temporal Planning Problems (TPPs). Whereas standard Memetic
Algorithms use local search methods to improve the evolutionary solutions, and
thus fail when the local method stops working on the complete problem, the
Divide-and-Evolve approach splits the problem at hand into several, hopefully
easier, sub-problems, and can thus solve globally problems that are intractable
when directly fed into deterministic OR algorithms. But the most prominent
advantage of the Divide-and-Evolve approach is that it immediately opens up an
avenue for multi-objective optimization, even though the OR method that is used
is single-objective. A proof of concept on the standard (single-objective)
Zeno transportation benchmark is given, and a small original
multi-objective benchmark is proposed in the same Zeno framework to assess the
multi-objective capabilities of the proposed methodology, a breakthrough in
Temporal Planning.
|
cs/0601032
|
Efficient Open World Reasoning for Planning
|
cs.AI cs.LO
|
We consider the problem of reasoning and planning with incomplete knowledge
and deterministic actions. We introduce a knowledge representation scheme
called PSIPLAN that can effectively represent incompleteness of an agent's
knowledge while allowing for sound, complete and tractable entailment in
domains where the set of all objects is either unknown or infinite. We present
a procedure for state update resulting from taking an action in PSIPLAN that is
correct, complete and has only polynomial complexity. State update is performed
without considering the set of all possible worlds corresponding to the
knowledge state. As a result, planning with PSIPLAN is done without direct
manipulation of possible worlds. PSIPLAN representation underlies the PSIPOP
planning algorithm that handles quantified goals with or without exceptions
that no other domain independent planner has been shown to achieve. PSIPLAN has
been implemented in Common Lisp and used in an application on planning in a
collaborative interface.
|
cs/0601036
|
On the complexity of computing the capacity of codes that avoid
forbidden difference patterns
|
cs.IT math.IT
|
We consider questions related to the computation of the capacity of codes
that avoid forbidden difference patterns. The maximal number of $n$-bit
sequences whose pairwise differences do not contain some given forbidden
difference patterns increases exponentially with $n$. The exponent is the
capacity of the forbidden patterns, which is given by the logarithm of the
joint spectral radius of a set of matrices constructed from the forbidden
difference patterns. We provide a new family of bounds that allows for the
approximation, in exponential time, of the capacity with arbitrarily high degree
of accuracy. We also provide a polynomial time algorithm for the problem of
determining if the capacity of a set is positive, but we prove that the same
problem becomes NP-hard when the sets of forbidden patterns are defined over an
extended set of symbols. Finally, we prove the existence of extremal norms for
the sets of matrices arising in the capacity computation. This result makes it
possible to apply a specific (even though non-polynomial) approximation
algorithm. We illustrate this fact by computing exactly the capacity of codes
that were only known approximately.
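The capacity described above is the logarithm of a joint spectral radius (JSR); a toy sketch of the standard norm-based upper bound on the JSR, min over product lengths t of the largest norm of a length-t product raised to 1/t (illustrative only; the paper's bound family is more refined):

```python
import itertools
from functools import reduce

def mat_mul(A, B):
    """Plain dense matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def max_norm(A):
    """Infinity (maximum row-sum) matrix norm."""
    return max(sum(abs(v) for v in row) for row in A)

def jsr_upper(mats, depth):
    """Upper bound on the joint spectral radius rho({M_1,...,M_k}):
    min over t <= depth of (max over length-t products P of ||P||)^(1/t).
    The capacity of the forbidden patterns is then log2(rho)."""
    upper = float("inf")
    for t in range(1, depth + 1):
        best = max(max_norm(reduce(mat_mul, p))
                   for p in itertools.product(mats, repeat=t))
        upper = min(upper, best ** (1.0 / t))
    return upper
```

Increasing `depth` tightens the bound at exponential cost, mirroring the exponential-time approximation scheme mentioned in the abstract.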
|
cs/0601037
|
Constraint-based verification of abstract models of multithreaded
programs
|
cs.CL cs.PL
|
We present a technique for the automated verification of abstract models of
multithreaded programs providing fresh name generation, name mobility, and
unbounded control.
As a high-level specification language we adopt here an extension of
communicating finite-state machines with local variables ranging over an
infinite name domain, called TDL programs. Communicating machines have proved
very effective for representing communication protocols as well as
abstractions of multithreaded software.
The verification method that we propose is based on the encoding of TDL
programs into a low level language based on multiset rewriting and constraints
that can be viewed as an extension of Petri Nets. By means of this encoding,
the symbolic verification procedure developed for the low level language in our
previous work can now be applied to TDL programs. Furthermore, the encoding
allows us to isolate a decidable class of verification problems for TDL
programs that still provide fresh name generation, name mobility, and unbounded
control. Our syntactic restrictions are in fact defined on the internal
structure of threads: In order to obtain a complete and terminating method,
threads are only allowed to have at most one local variable (ranging over an
infinite domain of names).
|
cs/0601040
|
New Technologies for Sustainable Urban Transport in Europe
|
cs.RO
|
In the past few years, the European Commission has financed several projects
to examine how new technologies could improve the sustainability of European
cities. These technologies concern new public transportation modes such as
guided buses to form high capacity networks similar to light rail but at a
lower cost and better flexibility, PRT (Personal Rapid Transit) and cybercars
(small urban vehicles with fully automatic driving capabilities to be used in
carsharing mode, mostly as a complement to mass transport). They also concern
private vehicles with technologies which could improve the efficiency of the
vehicles as well as their safety (Intelligent Speed Adaptation, Adaptive Cruise
Control, Stop&Go, Lane Keeping,...) and how these new vehicles can complement
mass transport in the form of car-sharing services.
|
cs/0601041
|
Oblivious channels
|
cs.IT math.IT
|
Let C = {x_1,...,x_N} \subset {0,1}^n be an [n,N] binary error correcting
code (not necessarily linear). Let e \in {0,1}^n be an error vector. A codeword
x in C is said to be "disturbed" by the error e if the closest codeword to x +
e is no longer x. Let A_e be the subset of codewords in C that are disturbed by
e. In this work we study the size of A_e in random codes C (i.e. codes in which
each codeword x_i is chosen uniformly and independently at random from
{0,1}^n). Using recent results of Vu [Random Structures and Algorithms 20(3)]
on the concentration of non-Lipschitz functions, we show that |A_e| is strongly
concentrated for a wide range of values of N and ||e||.
We apply this result in the study of communication channels we refer to as
"oblivious". Roughly speaking, a channel W(y|x) is said to be oblivious if the
error distribution imposed by the channel is independent of the transmitted
codeword x. For example, the well studied Binary Symmetric Channel is an
oblivious channel.
In this work, we define oblivious and partially oblivious channels and
present lower bounds on their capacity. The oblivious channels we define have
connections to Arbitrarily Varying Channels with state constraints.
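The quantity |A_e| studied above can be estimated empirically; below is a toy sketch under nearest-codeword (Hamming) decoding, with ties counted as disturbed (an assumption for the sketch, not necessarily the paper's convention):

```python
import random

def hamming(x, y):
    """Hamming distance between two binary vectors."""
    return sum(a != b for a, b in zip(x, y))

def disturbed_count(code, e):
    """|A_e|: codewords x for which the closest codeword to x + e
    is no longer x. Ties are counted as disturbed here."""
    count = 0
    for x in code:
        y = [a ^ b for a, b in zip(x, e)]  # received word x + e
        d_self = hamming(y, x)
        if any(hamming(y, c) <= d_self for c in code if c is not x):
            count += 1
    return count

def random_code(n, N, rng):
    """An [n, N] random binary code: N codewords drawn uniformly."""
    return [[rng.randint(0, 1) for _ in range(n)] for _ in range(N)]
```

Averaging `disturbed_count` over many draws of `random_code` for fixed ||e|| gives a Monte-Carlo view of the concentration phenomenon the abstract establishes analytically.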
|
cs/0601042
|
LPAR-05 Workshop: Empirically Successful Automated Reasoning in
Higher-Order Logic (ESHOL)
|
cs.AI cs.LO
|
This workshop brings together practitioners and researchers who are involved in
the everyday aspects of logical systems based on higher-order logic. We hope to
create a friendly and highly interactive setting for discussions around the
following four topics. Implementation and development of proof assistants based
on any notion of impredicativity, automated theorem proving tools for
higher-order logic reasoning systems, logical framework technology for the
representation of proofs in higher-order logic, formal digital libraries for
storing, maintaining and querying databases of proofs.
We envision attendees that are interested in fostering the development and
visibility of reasoning systems for higher-order logics. We are particularly
interested in a discussion on the development of a higher-order version of the
TPTP and in comparisons of the practical strengths of automated higher-order
reasoning systems. Additionally, the workshop includes system demonstrations.
ESHOL is the successor of the ESCAR and ESFOR workshops held at CADE 2005 and
IJCAR 2004.
|
cs/0601043
|
Combining Relational Algebra, SQL, Constraint Modelling, and Local
Search
|
cs.AI cs.LO
|
The goal of this paper is to provide a strong integration between constraint
modelling and relational DBMSs. To this end we propose extensions of standard
query languages such as relational algebra and SQL, by adding constraint
modelling capabilities to them. In particular, we propose non-deterministic
extensions of both languages, which are specially suited for combinatorial
problems. Non-determinism is introduced by means of a guessing operator, which
declares a set of relations to have an arbitrary extension. This new operator
results in languages with higher expressive power, able to express all problems
in the complexity class NP. Some syntactical restrictions which make data
complexity polynomial are shown. The effectiveness of both extensions is
demonstrated by means of several examples. The current implementation, written
in Java using local search techniques, is described. To appear in Theory and
Practice of Logic Programming (TPLP).
|
cs/0601044
|
Genetic Programming, Validation Sets, and Parsimony Pressure
|
cs.LG
|
Fitness functions based on test cases are very common in Genetic Programming
(GP). This process can be assimilated to a learning task, with the inference of
models from a limited number of samples. This paper is an investigation on two
methods to improve generalization in GP-based learning: 1) the selection of the
best-of-run individuals using a three data sets methodology, and 2) the
application of parsimony pressure in order to reduce the complexity of the
solutions. Results using GP in a binary classification setup show that while
the accuracy on the test sets is preserved, with less variance compared to
baseline results, the mean tree size obtained with the tested methods is
significantly reduced.
|
cs/0601045
|
PageRank without hyperlinks: Structural re-ranking using links induced
by language models
|
cs.IR cs.CL
|
Inspired by the PageRank and HITS (hubs and authorities) algorithms for Web
search, we propose a structural re-ranking approach to ad hoc information
retrieval: we reorder the documents in an initially retrieved set by exploiting
asymmetric relationships between them. Specifically, we consider generation
links, which indicate that the language model induced from one document assigns
high probability to the text of another; in doing so, we take care to prevent
bias against long documents. We study a number of re-ranking criteria based on
measures of centrality in the graphs formed by generation links, and show that
integrating centrality into standard language-model-based retrieval is quite
effective at improving precision at top ranks.
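A minimal power-iteration PageRank over a directed graph of generation links, as a sketch of the centrality computation underlying the re-ranking (the damping value, iteration count, and graph encoding are illustrative assumptions):

```python
def pagerank(links, nodes, damping=0.85, iters=50):
    """Power-iteration PageRank on a directed graph given as
    {source: [targets]}; dangling nodes spread their mass uniformly."""
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for src in nodes:
            outs = links.get(src, [])
            if outs:
                share = damping * rank[src] / len(outs)
                for dst in outs:
                    new[dst] += share
            else:  # dangling node
                for v in nodes:
                    new[v] += damping * rank[src] / n
        rank = new
    return rank
```

In the re-ranking setting, `nodes` would be the initially retrieved documents and an edge d -> d' would record that the language model of d assigns high probability to the text of d'; documents are then reordered by their scores.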
|
cs/0601046
|
Better than the real thing? Iterative pseudo-query processing using
cluster-based language models
|
cs.IR cs.CL
|
We present a novel approach to pseudo-feedback-based ad hoc retrieval that
uses language models induced from both documents and clusters. First, we treat
the pseudo-feedback documents produced in response to the original query as a
set of pseudo-queries that themselves can serve as input to the retrieval
process. Observing that the documents returned in response to the
pseudo-queries can then act as pseudo-queries for subsequent rounds, we arrive
at a formulation of pseudo-query-based retrieval as an iterative process.
Experiments show that several concrete instantiations of this idea, when
applied in conjunction with techniques designed to heighten precision, yield
performance results rivaling those of a number of previously-proposed
algorithms, including the standard language-modeling approach. The use of
cluster-based language models is a key contributing factor to our algorithms'
success.
|
cs/0601047
|
Automatic Detection of Trends in Dynamical Text: An Evolutionary
Approach
|
cs.IR cs.NE
|
This paper presents an evolutionary algorithm for modeling the arrival dates
of document streams, which is any time-stamped collection of documents, such as
newscasts, e-mails, IRC conversations, scientific journals archives and weblog
postings. This algorithm assigns frequencies (number of document arrivals per
time unit) to time intervals so that it produces an optimal fit to the data.
The optimization is a trade off between accurately fitting the data and
avoiding too many frequency changes; this way the analysis is able to find fits
which ignore the noise. Classical dynamic programming algorithms are limited by
memory and efficiency requirements, which can be a problem when dealing with
long streams. This suggests exploring alternative search methods which allow
for some degree of uncertainty to achieve tractability. Experiments have shown
that the designed evolutionary algorithm is able to reach the same solution
quality as those classical dynamic programming algorithms in a shorter time. We
have also explored different probabilistic models to optimize the fitting of
the date streams, and applied these algorithms to infer whether a new arrival
increases or decreases {\em interest} in the topic the document stream is
about.
|
cs/0601048
|
Permutation Polynomial Interleavers: An Algebraic-Geometric Perspective
|
cs.IT cs.DM math.IT
|
An interleaver is a critical component for the channel coding
performance of turbo codes. Algebraic constructions are
important because they admit analytical designs and
simple, practical hardware implementation. The spread factor of an
interleaver is a common measure for turbo coding
applications. Maximum-spread interleavers are interleavers whose
spread factors achieve the upper bound. An infinite sequence of
quadratic permutation polynomials over integer rings that generate
maximum-spread interleavers is presented. New properties of
permutation polynomial interleavers are investigated from an
algebraic-geometric perspective resulting in a new non-linearity metric
for interleavers. A new interleaver metric that is a function of both
the non-linearity metric and the spread factor is proposed.
It is numerically demonstrated that the spread factor has a
diminishing importance with the block length. A table of good
interleavers for a variety of interleaver lengths according to the
new metric is listed. Extensive computer simulation results with impressive
frame error rates confirm the efficacy of the new metric. Further,
when tail-biting constituent codes are used, the resulting turbo
codes are quasi-cyclic.
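A quadratic permutation polynomial interleaver of the kind discussed above maps index x to pi(x) = (f1*x + f2*x^2) mod N; the spread measure sketched below (minimum sum of circular index distances) is one common definition and may differ in detail from the one used in the paper:

```python
def qpp_interleaver(n, f1, f2):
    """Quadratic permutation polynomial pi(x) = (f1*x + f2*x^2) mod n."""
    return [(f1 * x + f2 * x * x) % n for x in range(n)]

def is_permutation(pi):
    """Check that pi is a permutation of 0..n-1."""
    return sorted(pi) == list(range(len(pi)))

def spread(pi):
    """Spread: min over i != j of the sum of circular (Lee) distances
    between the indices and between their images under pi."""
    n = len(pi)
    circ = lambda a, b: min((a - b) % n, (b - a) % n)
    return min(circ(i, j) + circ(pi[i], pi[j])
               for i in range(n) for j in range(i + 1, n))
```

For example, f1 = 1, f2 = 2 yields a valid permutation of Z_8; coefficients are chosen in practice to maximize spread (and, per the abstract, the non-linearity metric) at the target block length.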
|
cs/0601051
|
A Constructive Semantic Characterization of Aggregates in ASP
|
cs.AI cs.LO cs.PL cs.SC
|
This technical note describes a monotone and continuous fixpoint operator to
compute the answer sets of programs with aggregates. The fixpoint operator
relies on the notion of aggregate solution. Under certain conditions, this
operator behaves identically to the three-valued immediate consequence operator
$\Phi^{aggr}_P$ for aggregate programs, independently proposed by Pelov et al.
This operator allows us to closely tie the computational complexity of the
answer set checking and answer sets existence problems to the cost of checking
a solution of the aggregates in the program. Finally, we relate the semantics
described by the operator to other proposals for logic programming with
aggregates.
To appear in Theory and Practice of Logic Programming (TPLP).
|
cs/0601052
|
Artificial and Biological Intelligence
|
cs.AI
|
This article considers evidence from physical and biological sciences to show
machines are deficient compared to biological systems at incorporating
intelligence. Machines fall short on two counts: firstly, unlike brains,
machines do not self-organize in a recursive manner; secondly, machines are
based on classical logic, whereas Nature's intelligence may depend on quantum
mechanics.
|
cs/0601053
|
Wavefront Propagation and Fuzzy Based Autonomous Navigation
|
cs.RO
|
Path planning and obstacle avoidance are the two major issues in any
navigation system. Wavefront propagation algorithm, as a good path planner, can
be used to determine an optimal path. Obstacle avoidance can be achieved using
possibility theory. Combining these two functions enables a robot to
autonomously navigate to its destination. This paper presents the approach and
results in implementing an autonomous navigation system for an indoor mobile
robot. The system developed is based on a laser sensor used to retrieve data to
update a two-dimensional world model of the robot environment. Waypoints in the
path are incorporated into the obstacle avoidance. Features such as ageing of
objects and smooth motion planning are implemented to enhance efficiency and
also to cater for dynamic environments.
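As a hedged sketch of the path-planning half only: wavefront propagation on a 4-connected grid is a breadth-first flood fill from the goal that assigns each free cell its wave number (distance to the goal), after which a path is read off by always stepping to a neighbour with a smaller number. The grid, start, and goal below are illustrative, not the paper's laser-built world model.

```python
from collections import deque

def wavefront(grid, goal):
    """BFS from the goal, labelling each free cell with its wave number."""
    rows, cols = len(grid), len(grid[0])
    wave = [[None] * cols for _ in range(rows)]
    wave[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and wave[nr][nc] is None):
                wave[nr][nc] = wave[r][c] + 1
                queue.append((nr, nc))
    return wave

def extract_path(wave, start):
    """Descend the wave numbers from start to the goal (wave number 0)."""
    path = [start]
    r, c = start
    while wave[r][c] != 0:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(wave) and 0 <= nc < len(wave[0])
                    and wave[nr][nc] is not None
                    and wave[nr][nc] < wave[r][c]):
                r, c = nr, nc
                path.append((r, c))
                break
    return path

grid = [  # 0 = free cell, 1 = obstacle (toy map)
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
```

The path produced is optimal in the number of grid steps; the paper layers fuzzy/possibilistic obstacle avoidance on top of such a planner.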
|
cs/0601054
|
Control of a Lightweight Flexible Robotic Arm Using Sliding Modes
|
cs.RO
|
This paper presents a robust control scheme for flexible link robotic
manipulators, which is based on considering the flexible mechanical structure
as a system with slow (rigid) and fast (flexible) modes that can be controlled
separately. The rigid dynamics is controlled by means of a robust sliding-mode
approach with well-established stability properties, while an LQR optimal design
is adopted for the flexible dynamics. Experimental results show that this
composite approach achieves good closed loop tracking properties both for the
rigid and the flexible dynamics.
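A hedged toy sketch of the slow-mode loop only: a first-order sliding-mode controller driving a double-integrator "rigid joint" to a setpoint, with a boundary-layer saturation replacing sign(s) to limit chattering. The gains, boundary-layer width, and plant are illustrative stand-ins, not the paper's flexible-link model or its LQR fast-mode design.

```python
def sat(x, width=0.05):
    """Boundary-layer replacement for sign(s) to limit chattering."""
    return max(-1.0, min(1.0, x / width))

def simulate(q_ref=1.0, lam=5.0, k=20.0, dt=1e-3, steps=5000):
    """Euler simulation of q'' = u under u = -k * sat(s), s = e' + lam*e."""
    q, qd = 0.0, 0.0            # joint angle and velocity
    for _ in range(steps):
        e, ed = q - q_ref, qd
        s = ed + lam * e        # sliding surface
        u = -k * sat(s)         # switching control with boundary layer
        qd += u * dt            # double-integrator plant
        q += qd * dt
    return q, qd

q, qd = simulate()
```

Once the state reaches the surface s = 0, the error decays as e(t) ~ e^(-lam*t) regardless of matched disturbances smaller than k, which is the robustness property exploited for the rigid modes.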
|
cs/0601055
|
A Hybrid Three Layer Architecture for Fire Agent Management in Rescue
Simulation Environment
|
cs.RO
|
This paper presents a new architecture called FAIS for implementing
intelligent agents cooperating in a special multi-agent environment, namely
the RoboCup Rescue Simulation System. This is a layered architecture
customized for solving the fire-extinguishing problem. Structural
decision-making algorithms are combined with heuristic ones in this model, so
it is a hybrid architecture.
|
cs/0601056
|
Dynamic Balance Control of Multi-arm Free-Floating Space Robots
|
cs.RO
|
This paper investigates the problem of dynamic balance control of a
multi-arm free-floating space robot while capturing an active object in close
proximity. The position and orientation of the space base are affected during
the operation of the space manipulator because of the dynamic coupling between
the manipulator and the base. This dynamic coupling is a unique characteristic
of space robot systems. Such a disturbance can produce a serious impact between
the manipulator hand and the object. To ensure reliable and precise operation,
we propose a space robot system consisting of two arms: one arm (the mission
arm) accomplishes the capture mission, while the other (the balance arm)
compensates for the disturbance of the base. We present a coordinated control
concept that balances the attitude of the base using the balance arm, so that
the mission arm can move along the given trajectory to approach and capture the
target without considering the disturbance from the coupling with the base. We
establish a relationship between the motions of the two arms that realizes zero
reaction on the base. Simulation studies verify the validity and efficiency of
the proposed control method.
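The zero-reaction idea can be sketched on a planar toy with one degree of freedom per arm: the total angular momentum of a free-floating system is conserved, so choosing the balance-arm joint rate to cancel the mission arm's momentum contribution keeps the base still. The inertia and coupling constants below are illustrative values, not the paper's multi-body model.

```python
# Toy momentum-balance sketch (illustrative constants, not the paper's model).
I_BASE = 10.0    # base inertia
H_MISSION = 1.5  # momentum coupling of the mission-arm joint rate to the base
H_BALANCE = 2.0  # momentum coupling of the balance-arm joint rate to the base

def balance_rate(mission_rate):
    """Balance-arm joint rate that zeroes the reaction on the base.

    Solves H_MISSION*qd_m + H_BALANCE*qd_b = 0 for qd_b.
    """
    return -H_MISSION * mission_rate / H_BALANCE

def base_rate(mission_rate, bal_rate):
    """Base angular rate implied by conservation of (initially zero) momentum:
    I_BASE*w + H_MISSION*qd_m + H_BALANCE*qd_b = 0."""
    return -(H_MISSION * mission_rate + H_BALANCE * bal_rate) / I_BASE
```

With the balance arm commanded by `balance_rate`, the implied base rate is identically zero; without it, the mission arm's motion disturbs the base, which is the coupling the paper's coordinated controller compensates.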
|
cs/0601057
|
Robust Motion Control for Mobile Manipulator Using Resolved Acceleration
and Proportional-Integral Active Force Control
|
cs.RO
|
A resolved acceleration control (RAC) and proportional-integral active force
control (PIAFC) is proposed as an approach for the robust motion control of a
mobile manipulator (MM) comprising a differentially driven wheeled mobile
platform with a two-link planar arm mounted on top of the platform. The study
emphasizes on the integrated kinematic and dynamic control strategy in which
the RAC is used to manipulate the kinematic component while the PIAFC is
implemented to compensate the dynamic effects including the bounded
known/unknown disturbances and uncertainties. The effectiveness and robustness
of the proposed scheme are investigated through a rigorous simulation study and
later complemented with experimental results obtained through a number of
experiments performed on a fully developed working prototype in a laboratory
environment. A number of disturbances in the form of vibratory and impact
forces are deliberately introduced into the system to evaluate the system
performances. The investigation clearly demonstrates the extreme robustness
feature of the proposed control scheme compared to other systems considered in
the study.
|
cs/0601058
|
CAGD - Computer Aided Gripper Design for a Flexible Gripping System
|
cs.RO
|
This paper is a summary of the recently accomplished research work on
flexible gripping systems. The goal is to develop a gripper which can be used
for a great amount of geometrically variant workpieces. The economic aspect is
of particular importance during the whole development. The high flexibility of
the gripper is obtained by three principles used in parallel: human- and
computer-based analysis of the gripping object, together with mechanical
adaptation of the gripper to the object with the help of servo motors. The
focus is on the gripping of free-form surfaces with suction cups.
|
cs/0601059
|
A Descriptive Model of Robot Team and the Dynamic Evolution of Robot
Team Cooperation
|
cs.RO
|
At present, research on robot team cooperation is still in a
qualitative-analysis phase and lacks a descriptive model that can
quantitatively capture the dynamic evolution of team cooperative relationships
under constantly changing task demands in the multi-robot field. This paper
first gives a static, whole-team description of the organization model HWROM
of a robot team; then, drawing on Markov processes and Bayes' theorem, it
dynamically describes the building of team cooperative relationships. Finally,
from the cooperative entity, ability, and relation layers, we study team
formation and the cooperation mechanism, and discuss how to optimize the sets
of relative actions during the evolution. The dynamic evolution model of robot
teams and of the cooperative relationships between robot teams proposed in
this paper can not only characterize the robot team as a whole, but also
depict the dynamic evolving process quantitatively. Users can also predict the
cooperative relationships and actions of a robot team encountering new demands
based on this model.
|