| id | title | categories | abstract |
|---|---|---|---|
cs/0501052
|
Stochastic Differential Games in a Non-Markovian Setting
|
cs.IT cs.CE math.IT
|
Stochastic differential games are considered in a non-Markovian setting.
Typically, in stochastic differential games the modulating process of the
diffusion equation describing the state flow is taken to be Markovian. Then
Nash equilibria or other types of solution such as Pareto equilibria are
constructed using Hamilton-Jacobi-Bellman (HJB) equations. But in a
non-Markovian setting the HJB method is not applicable. To examine the
non-Markovian case, this paper considers the situation in which the modulating
process is a fractional Brownian motion. Fractional noise calculus is used for
such models to find the Nash equilibria explicitly. Although fractional
Brownian motion is taken as the modulating process because of its versatility
in modeling in the fields of finance and networks, the approach in this paper
has the merit of being applicable to more general Gaussian stochastic
differential games with only slight conceptual modifications. This work has
applications in finance to stock price modeling which incorporates the effect
of institutional investors, and to stochastic differential portfolio games in
markets in which the stock prices follow diffusions modulated with fractional
Brownian motion.
|
cs/0501053
|
Relational Algebra as non-Distributive Lattice
|
cs.DB
|
We reduce the set of classic relational algebra operators to two binary
operations: natural join and generalized union. We further demonstrate that
this set of operators is relationally complete and honors lattice axioms.
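As a rough illustration (a sketch under assumed encodings, not the paper's formalism), finite relations can be modeled as (header, tuple-set) pairs on which natural join and an "inner" reading of generalized union satisfy the lattice absorption laws:

```python
# Minimal sketch: a relation is a (header, tuples) pair; each tuple is a
# frozenset of (attribute, value) pairs. The encoding and the "inner union"
# reading of generalized union are illustrative assumptions.

def natural_join(r, s):
    """Header = union of headers; keep merged tuples that agree on the
    shared attributes."""
    (hr, tr), (hs, ts) = r, s
    common = hr & hs
    out = set()
    for a in tr:
        for b in ts:
            da, db = dict(a), dict(b)
            if all(da[c] == db[c] for c in common):
                out.add(frozenset({**da, **db}.items()))
    return (hr | hs, out)

def inner_union(r, s):
    """Header = intersection of headers; union of the two projections."""
    (hr, tr), (hs, ts) = r, s
    h = hr & hs
    proj = lambda t: frozenset((k, v) for k, v in t if k in h)
    return (h, {proj(t) for t in tr} | {proj(t) for t in ts})

R = ({"A", "B"}, {frozenset({("A", 1), ("B", 2)}),
                  frozenset({("A", 3), ("B", 4)})})
S = ({"B", "C"}, {frozenset({("B", 2), ("C", 5)})})

# The lattice absorption laws hold for this pair of operators.
assert inner_union(R, natural_join(R, S)) == R
assert natural_join(R, inner_union(R, S)) == R
```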
|
cs/0501054
|
Arbitrage in Fractal Modulated Markets When the Volatility is Stochastic
|
cs.IT cs.CE math.IT
|
In this paper an arbitrage strategy is constructed for the modified
Black-Scholes model driven by fractional Brownian motion or by a time changed
fractional Brownian motion, when the volatility is stochastic. This latter
property allows the heavy tailedness of the log returns of the stock prices to
be also accounted for in addition to the long range dependence introduced by
the fractional Brownian motion. Work has been done previously on this problem
for the case with constant 'volatility' and without a time change; here these
results are extended to the case of stochastic volatility models when the
modulator is fractional Brownian motion or a time change of it. (Volatility in
fractional Black-Scholes models does not carry the same meaning as in the
classic Black-Scholes framework, which is made clear in the text.)
Since fractional Brownian motion is not a semi-martingale, the Black-Scholes
differential equation is not well-defined for arbitrary predictable
volatility processes. However, it is shown here that any almost surely
continuous and adapted process having zero quadratic variation can act as an
integrator over functions of the integrator and over the family of continuous
adapted semi-martingales. Moreover it is shown that the integral also has zero
quadratic variation, and therefore that the integral itself can be an
integrator. This property of the integral is crucial in developing the
arbitrage strategy. Since fractional Brownian motion and a time change of
fractional Brownian motion have zero quadratic variation, these results are
applicable to these cases in particular. The appropriateness of fractional
Brownian motion as a means of modeling stock price returns is discussed as
well.
|
cs/0501055
|
Consistency Problems for Jump-Diffusion Models
|
cs.IT cs.CE math.IT
|
In this paper, consistency problems for multi-factor jump-diffusion models,
where the jump parts follow multivariate point processes, are examined. First
the gap between jump-diffusion models and generalized Heath-Jarrow-Morton (HJM)
models is bridged. By applying the drift condition for a generalized
arbitrage-free HJM model, the consistency condition for jump-diffusion models
is derived. Then we consider a case in which the forward rate curve has a
separable structure, and obtain a specific version of the general consistency
condition. In particular, a necessary and sufficient condition for a
jump-diffusion model to be affine is provided. Finally the Nelson-Siegel type
of forward curve structures is discussed. It is demonstrated that under
a regularity condition, there exists no jump-diffusion model consistent with the
Nelson-Siegel curves.
|
cs/0501056
|
A Large Deviations Approach to Sensor Scheduling for Detection of
Correlated Random Fields
|
cs.IT math.IT
|
The problem of scheduling sensor transmissions for the detection of
correlated random fields using spatially deployed sensors is considered. Using
the large deviations principle, a closed-form expression for the error exponent
of the miss probability is given as a function of the sensor spacing and
signal-to-noise ratio (SNR). It is shown that the error exponent has a distinct
characteristic: at high SNR, the error exponent is monotonically increasing
with respect to sensor spacing, while at low SNR there is an optimal spacing
for scheduled sensors.
|
cs/0501057
|
Concavity of the auxiliary function appearing in quantum reliability
function for classical-quantum channels
|
cs.IT math.IT
|
Concavity of the auxiliary function which appears in the random coding
exponent as the lower bound of the quantum reliability function for general
quantum states is proven for s between 0 and 1.
|
cs/0501058
|
Estimation of the Number of Sources in Unbalanced Arrays via Information
Theoretic Criteria
|
cs.IT math.IT
|
Estimating the number of sources impinging on an array of sensors is a well
known and well investigated problem. A common approach for solving this problem
is to use an information theoretic criterion, such as Minimum Description
Length (MDL) or the Akaike Information Criterion (AIC). The MDL estimator is
known to be a consistent estimator, robust against deviations from the Gaussian
assumption, and non-robust against deviations from the point source and/or
temporally or spatially white additive noise assumptions. Over the years
several alternative estimation algorithms have been proposed and tested.
Usually, these algorithms are shown, using computer simulations, to have
improved performance over the MDL estimator, and to be robust against
deviations from the assumed spatial model. Nevertheless, these robust
algorithms have high computational complexity, requiring several
multi-dimensional searches.
In this paper, motivated by real life problems, a systematic approach toward
the problem of robust estimation of the number of sources using information
theoretic criteria is taken. An MDL type estimator that is robust against
deviation from assumption of equal noise level across the array is studied. The
consistency of this estimator, even when deviations from the equal noise level
assumption occur, is proven. A novel low-complexity implementation method
avoiding the need for multi-dimensional searches is presented as well, making
this estimator a favorable choice for practical applications.
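For context, the baseline MDL estimator that the robust variant builds on can be sketched in the standard Wax-Kailath form (the formula is from the general literature, not this paper's robust estimator; the function name and example eigenvalues are illustrative):

```python
import math

def mdl_num_sources(eigvals, n_snapshots):
    """Classic (non-robust) MDL source-number estimate from the sorted
    sample-covariance eigenvalues, in the standard Wax-Kailath form."""
    lam = sorted(eigvals, reverse=True)
    p, n = len(lam), n_snapshots
    best_k, best_cost = 0, float("inf")
    for k in range(p):
        tail = lam[k:]                          # presumed noise eigenvalues
        m = len(tail)
        geo = math.exp(sum(math.log(x) for x in tail) / m)
        ari = sum(tail) / m
        loglik = -n * m * math.log(geo / ari)   # 0 iff the tail is flat
        penalty = 0.5 * k * (2 * p - k) * math.log(n)
        if loglik + penalty < best_cost:
            best_k, best_cost = k, loglik + penalty
    return best_k

# Two dominant eigenvalues above a flat noise floor -> two sources.
print(mdl_num_sources([10.0, 5.0, 1.0, 1.0, 1.0, 1.0], 1000))  # 2
```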
|
cs/0501061
|
Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake
Receivers in Impulse Radio Ultra-Wideband Systems
|
cs.IT math.IT
|
Convex relaxations of the optimal finger selection algorithm are proposed for
a minimum mean square error (MMSE) Rake receiver in an impulse radio
ultra-wideband system. First, the optimal finger selection problem is
formulated as an integer programming problem with a non-convex objective
function. Then, the objective function is approximated by a convex function and
the integer programming problem is solved by means of constraint relaxation
techniques. The proposed algorithms are suboptimal due to the approximate
objective function and the constraint relaxation steps. However, they can be
used in conjunction with the conventional finger selection algorithm, which is
suboptimal on its own since it ignores the correlation between multipath
components, to obtain performance reasonably close to that of the optimal
scheme that cannot be implemented in practice due to its complexity. The
proposed algorithms leverage convexity of the optimization problem
formulations, which is the watershed between 'easy' and 'difficult'
optimization problems.
|
cs/0501062
|
On The Tradeoff Between Two Types of Processing Gain
|
cs.IT math.IT
|
One of the features characterizing almost every multiple access (MA)
communication system is the processing gain. Through the use of spreading
sequences, the processing gain of Random CDMA systems (RCDMA), is devoted to
both bandwidth expansion and orthogonalization of the signals transmitted by
different users. Another type of multiple access system is Impulse Radio (IR).
In many aspects, IR systems are similar to time division multiple access (TDMA)
systems, and the processing gain of IR systems represents the ratio between the
actual transmission time and the total time between two consecutive
transmissions (on-plus-off to on ratio). While CDMA systems, which constantly
excite the channel, rely on spreading sequences to orthogonalize the signals
transmitted by different users, IR systems transmit a series of short pulses
and the orthogonalization between the signals transmitted by different users is
achieved by the fact that most of the pulses do not collide with each other at
the receiver.
In this paper, a general class of MA communication systems that use both
types of processing gain is presented, and both IR and RCDMA systems are
demonstrated to be two special cases of this more general class of systems. The
bit error rate (BER) of several receivers as a function of the ratio between
the two types of processing gain is analyzed and compared under the constraint
that the total processing gain of the system is large and fixed. It is
demonstrated that in channels without inter-symbol interference (ISI) there is no
tradeoff between the two types of processing gain. However, in ISI channels a
tradeoff between the two types of processing gain exists. In addition, the
sub-optimality of RCDMA systems in frequency selective channels is established.
|
cs/0501063
|
Bandit Problems with Side Observations
|
cs.IT cs.LG math.IT
|
An extension of the traditional two-armed bandit problem is considered, in
which the decision maker has access to some side information before deciding
which arm to pull. At each time t, before making a selection, the decision
maker is able to observe a random variable X_t that provides some information
on the rewards to be obtained. The focus is on finding uniformly good rules
(that minimize the growth rate of the inferior sampling time) and on
quantifying how much the additional information helps. Various settings are
considered and for each setting, lower bounds on the achievable inferior
sampling time are developed and asymptotically optimal adaptive schemes
achieving these lower bounds are constructed.
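A toy epsilon-greedy sketch (all probabilities, the policy, and the parameters are illustrative, not the paper's asymptotically optimal schemes) shows how conditioning on the side observation X_t improves over ignoring it:

```python
import random

def contextual_eps_greedy(n_rounds=20000, eps=0.05, seed=1):
    """Toy two-armed bandit with a binary side observation X_t that is
    informative about which arm is better; the agent keeps per-context
    empirical means and mostly pulls the best arm for the observed context."""
    rng = random.Random(seed)
    p = {0: (0.9, 0.2), 1: (0.3, 0.8)}   # success prob of arms per context
    counts = {(x, a): 0 for x in (0, 1) for a in (0, 1)}
    sums = {(x, a): 0.0 for x in (0, 1) for a in (0, 1)}
    total = 0.0
    for _ in range(n_rounds):
        x = rng.randrange(2)             # observe side information X_t
        if rng.random() < eps:           # occasional exploration
            a = rng.randrange(2)
        else:                            # exploit the per-context mean
            a = max((0, 1), key=lambda arm: sums[x, arm] / counts[x, arm]
                    if counts[x, arm] else 0.5)
        r = 1.0 if rng.random() < p[x][a] else 0.0
        counts[x, a] += 1
        sums[x, a] += r
        total += r
    return total / n_rounds

# Conditioning on X_t earns close to the per-context optimum (about 0.85),
# well above the best context-blind arm (about 0.6).
```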
|
cs/0501064
|
A Non-Cooperative Power Control Game for Multi-Carrier CDMA Systems
|
cs.IT math.IT
|
In this work, a non-cooperative power control game for multi-carrier CDMA
systems is proposed. In the proposed game, each user needs to decide how much
power to transmit over each carrier to maximize its overall utility. The
utility function considered here measures the number of reliable bits
transmitted per joule of energy consumed. It is shown that the user's utility
is maximized when the user transmits only on the carrier with the best
"effective channel". The existence and uniqueness of Nash equilibrium for the
proposed game are investigated and the properties of equilibrium are studied.
Also, an iterative and distributed algorithm for reaching the equilibrium (if
it exists) is presented. It is shown that the proposed approach results in a
significant improvement in the total utility achieved at equilibrium compared
to the case in which each user maximizes its utility over each carrier
independently.
|
cs/0501066
|
The Noncoherent Rician Fading Channel -- Part I : Structure of the
Capacity-Achieving Input
|
cs.IT math.IT
|
Transmission of information over a discrete-time memoryless Rician fading
channel is considered where neither the receiver nor the transmitter knows the
fading coefficients. First the structure of the capacity-achieving input
signals is investigated when the input is constrained to have limited
peakedness by imposing either a fourth moment or a peak constraint. When the
input is subject to second and fourth moment limitations, it is shown that the
capacity-achieving input amplitude distribution is discrete with a finite
number of mass points in the low-power regime. A similar discrete structure for
the optimal amplitude is proven over the entire SNR range when there is only a
peak power constraint. The Rician fading with phase-noise channel model, where
there is phase uncertainty in the specular component, is analyzed. For this
model it is shown that, with only an average power constraint, the
capacity-achieving input amplitude is discrete with a finite number of levels.
For the classical average power limited Rician fading channel, it is proven
that the optimal input amplitude distribution has bounded support.
|
cs/0501067
|
The Noncoherent Rician Fading Channel -- Part II : Spectral Efficiency
in the Low-Power Regime
|
cs.IT math.IT
|
Transmission of information over a discrete-time memoryless Rician fading
channel is considered where neither the receiver nor the transmitter knows the
fading coefficients. The spectral-efficiency/bit-energy tradeoff in the
low-power regime is examined when the input has limited peakedness. It is shown
that if a fourth moment input constraint is imposed or the input
peak-to-average power ratio is limited, then in contrast to the behavior
observed in average power limited channels, the minimum bit energy is not
always achieved at zero spectral efficiency. The low-power performance is also
characterized when there is a fixed peak limit that does not vary with the
average power. A new signaling scheme that overlays phase-shift keying on
on-off keying is proposed and shown to be optimally efficient in the low-power
regime.
|
cs/0501068
|
Learning to automatically detect features for mobile robots using
second-order Hidden Markov Models
|
cs.AI
|
In this paper, we propose a new method based on Hidden Markov Models to
interpret temporal sequences of sensor data from mobile robots to automatically
detect features. Hidden Markov Models have been used for a long time in pattern
recognition, especially in speech recognition. Their main advantages over other
methods (such as neural networks) are their ability to model noisy temporal
signals of variable length. We show in this paper that this approach is well
suited for interpretation of temporal sequences of mobile-robot sensor data. We
present two distinct experiments and results: the first one in an indoor
environment where a mobile robot learns to detect features like open doors or
T-intersections, the second one in an outdoor environment where a different
mobile robot has to identify situations like climbing a hill or crossing a
rock.
|
cs/0501071
|
Capacity Regions and Optimal Power Allocation for Groupwise Multiuser
Detection
|
cs.IT math.IT
|
In this paper, optimal power allocation and capacity regions are derived for
GSIC (groupwise successive interference cancellation) systems operating in
multipath fading channels, under imperfect channel estimation conditions. It is
shown that the impact of channel estimation errors on the system capacity is
two-fold: it affects the receivers' performance within a group of users, as
well as the cancellation performance (through cancellation errors). An
iterative power allocation algorithm is derived, based on which it can be shown
that the total required received power is minimized when the groups are ordered
according to their cancellation errors, and the first detected group has the
smallest cancellation error.
Performance/complexity tradeoff issues are also discussed by directly
comparing the system capacity for different implementations: GSIC with linear
minimum-mean-square error (LMMSE) receivers within the detection groups, GSIC
with matched filter receivers, multicode LMMSE systems, and simple all matched
filter receivers systems.
|
cs/0501072
|
Inferring knowledge from a large semantic network
|
cs.AI
|
In this paper, we present a rich semantic network based on a differential
analysis. We then detail implemented measures that take into account common and
differential features between words. In a last section, we describe some
industrial applications.
|
cs/0501077
|
Ontology-Based Users & Requests Clustering in Customer Service
Management System
|
cs.IR cs.CL
|
Customer Service Management is one of the major business activities undertaken to
better serve company customers through the introduction of reliable processes and
procedures. Today these activities are implemented through e-services that
directly involve customers in business processes. Traditionally, Customer
Service Management applies data mining techniques to discover usage patterns in
the company knowledge memory, and grouping customers and requests into clusters
is one of the major techniques for improving the level of customization. The
goal of this paper is to present an efficiently implementable approach for
clustering users and their requests. The approach uses an ontology as the
knowledge representation model to improve semantic interoperability between
company units and customers. Fragments of the approach tested in an industrial
company are also presented.
|
cs/0501078
|
Multi-document Biography Summarization
|
cs.CL
|
In this paper we describe a biography summarization system using sentence
classification and ideas from information retrieval. Although the individual
techniques are not new, assembling and applying them to generate multi-document
biographies is new. Our system was evaluated in DUC2004. It is among the top
performers in task 5-short summaries focused by person questions.
|
cs/0501079
|
Data Mining for Actionable Knowledge: A Survey
|
cs.DB cs.AI
|
The data mining process consists of a series of steps ranging from data
cleaning, data selection and transformation, to pattern evaluation and
visualization. One of the central problems in data mining is to make the mined
patterns or knowledge actionable. Here, the term actionable means that the mined
patterns suggest concrete and profitable actions to the decision-maker; that is,
the user can act on them to bring direct benefits (increase in profits,
reduction in cost, improvement in efficiency, etc.) to the organization.
However, no comprehensive survey on this topic has been available. The goal of
this paper is to fill that void.
In this paper, we first present two frameworks for mining actionable
knowledge that are implicitly adopted by existing research methods. Then we
try to situate some of the research on this topic from two different
viewpoints: 1) data mining tasks and 2) adopted framework. Finally, we specify
issues that are either not addressed or insufficiently studied yet and conclude
the paper.
|
cs/0501081
|
A Tree Search Method for Iterative Decoding of Underdetermined Multiuser
Systems
|
cs.IT math.IT
|
Application of the turbo principle to multiuser decoding results in an
exchange of probability distributions between two sets of constraints. Firstly,
constraints imposed by the multiple-access channel, and secondly, individual
constraints imposed by each user's error control code. A-posteriori probability
computation for the first set of constraints is prohibitively complex for all
but a small number of users. Several lower complexity approaches have been
proposed in the literature. One class of methods is based on linear filtering
(e.g. LMMSE). A more recent approach is to compute approximations to the
posterior probabilities by marginalising over a subset of sequences (list
detection). Most of the list detection methods are restricted to non-singular
systems. In this paper, we introduce a transformation that permits application
of standard tree-search methods to underdetermined systems. We find that the
resulting tree-search based receiver outperforms existing methods.
|
cs/0501082
|
A Group-Theoretic Approach to the WSSUS Pulse Design Problem
|
cs.IT math.IT
|
We consider the pulse design problem in multicarrier transmission where the
pulse shapes are adapted to the second order statistics of the WSSUS channel.
Even though the problem has been addressed by many authors analytical insights
are rather limited. First we show that the problem is equivalent to the pure
state channel fidelity in quantum information theory. Next we present a new
approach where the original optimization functional is related to an eigenvalue
problem for a pseudo differential operator by utilizing unitary representations
of the Weyl--Heisenberg group. A local approximation of the operator for
underspread channels is derived which implicitly covers the concepts of pulse
scaling and optimal phase space displacement. The problem is reformulated as a
differential equation and the optimal pulses occur as eigenstates of the
harmonic oscillator Hamiltonian. Furthermore this operator--algebraic approach
is extended to provide exact solutions for different classes of scattering
environments.
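Since the optimal pulses occur as eigenstates of the harmonic oscillator Hamiltonian, i.e. Hermite functions, a quick numerical sketch can generate them and check their orthogonality (grid and normalization choices here are illustrative):

```python
import math

def hermite_gaussian(n, t):
    """n-th Hermite function (harmonic-oscillator eigenstate), built from
    the physicists' Hermite polynomial via the standard three-term
    recurrence H_{k+1} = 2t H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * t
    if n == 0:
        h = 1.0
    for k in range(1, n):
        h_prev, h = h, 2.0 * t * h - 2.0 * k * h_prev
    norm = 1.0 / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return norm * h * math.exp(-t * t / 2.0)

# Numerical check on a symmetric grid: distinct eigenstates are
# orthogonal and each is (approximately) unit-norm.
grid = [i * 0.01 for i in range(-1000, 1001)]
dot01 = sum(hermite_gaussian(0, t) * hermite_gaussian(1, t) * 0.01 for t in grid)
norm0 = sum(hermite_gaussian(0, t) ** 2 * 0.01 for t in grid)
assert abs(dot01) < 1e-8 and abs(norm0 - 1.0) < 1e-3
```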
|
cs/0501084
|
Towards Automated Integration of Guess and Check Programs in Answer Set
Programming: A Meta-Interpreter and Applications
|
cs.AI
|
Answer set programming (ASP) with disjunction offers a powerful tool for
declaratively representing and solving hard problems. Many NP-complete problems
can be encoded in the answer set semantics of logic programs in a very concise
and intuitive way, where the encoding reflects the typical "guess and check"
nature of NP problems: The property is encoded in a way such that polynomial
size certificates for it correspond to stable models of a program. However, the
problem-solving capacity of full disjunctive logic programs (DLPs) is beyond
NP, and captures a class of problems at the second level of the polynomial
hierarchy. While these problems also have a clear "guess and check" structure,
finding an encoding in a DLP reflecting this structure may sometimes be a
non-obvious task, in particular if the "check" itself is a coNP-complete
problem; usually, such problems are solved by interleaving separate guess and
check programs, where the check is expressed by inconsistency of the check
program. In this paper, we present general transformations of head-cycle free
(extended) disjunctive logic programs into stratified and positive (extended)
disjunctive logic programs based on meta-interpretation techniques. The answer
sets of the original and the transformed program are in simple correspondence,
and, moreover, inconsistency of the original program is indicated by a
designated answer set of the transformed program. Our transformations
facilitate the integration of separate "guess" and "check" programs, which are
often easy to obtain, automatically into a single disjunctive logic program.
Our results complement recent results on meta-interpretation in ASP, and extend
methods and techniques for a declarative "guess and check" problem solving
paradigm through ASP.
|
cs/0501085
|
Space Frequency Codes from Spherical Codes
|
cs.IT math.IT
|
A new design method for high rate, fully diverse ('spherical') space
frequency codes for MIMO-OFDM systems is proposed, which works for arbitrary
numbers of antennas and subcarriers. The construction exploits a differential
geometric connection between spherical codes and space time codes. The former
are well studied e.g. in the context of optimal sequence design in CDMA
systems, while the latter serve as basic building blocks for space frequency
codes. In addition a decoding algorithm with moderate complexity is presented.
This is achieved by a lattice based construction of spherical codes, which
permits lattice decoding algorithms and thus offers a substantial reduction of
complexity.
|
cs/0501086
|
Clever Search: A WordNet Based Wrapper for Internet Search Engines
|
cs.AI
|
This paper presents an approach to enhance search engines with information
about word senses available in WordNet. The approach exploits information about
the conceptual relations within the lexical-semantic net. In the wrapper for
search engines presented here, WordNet information is used to refine the user's
request or to classify the results of publicly available web search engines
such as Google, Yahoo, etc.
|
cs/0501088
|
Information estimations and analysis of structures
|
cs.IT math.IT
|
This paper presents the results of an information analysis of structures. The
obtained information estimations (IE) are based on C. Shannon's entropy
measure. The obtained IE is single-valued both for non-isomorphic and for
isomorphic graphs, is algorithmically and asymptotically stable, and has a
vector character. IE can be used to rank structures by preference, to evaluate
the degree of structurization of a subject area, and to solve problems of
structural optimization. The information estimations and the method of
information analysis of structures can be used in many fields of knowledge
(electrical systems and circuits, image recognition, computer technology,
databases and knowledge bases, organic chemistry, biology, and others) and can
serve as a basis for a structure calculus.
|
cs/0501089
|
Issues in Exploiting GermaNet as a Resource in Real Applications
|
cs.AI
|
This paper reports about experiments with GermaNet as a resource within
domain specific document analysis. The main question to be answered is: How is
the coverage of GermaNet in a specific domain? We report about results of a
field test of GermaNet for analyses of autopsy protocols and present a sketch
about the integration of GermaNet inside XDOC. Our remarks will contribute to a
GermaNet user's wish list.
|
cs/0501090
|
Stochastic Iterative Decoders
|
cs.IT math.IT
|
This paper presents a stochastic algorithm for iterative error control
decoding. We show that the stochastic decoding algorithm is an approximation of
the sum-product algorithm. When the code's factor graph is a tree, as with
trellises, the algorithm approaches maximum a-posteriori decoding. We also
demonstrate a stochastic approximation to the alternative update rule known as
successive relaxation. Stochastic decoders have very simple digital
implementations which have almost no RAM requirements. We present example
stochastic decoders for a trellis-based Hamming code, and for a Block Turbo
code constructed from Hamming codes.
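The enabling idea, representing a probability as a Bernoulli bit stream so that a multiplication in the sum-product algorithm becomes a single AND gate, can be illustrated with a toy sketch (stream length, seed, and names are illustrative, not the paper's decoder):

```python
import random

def stochastic_product(p1, p2, n_bits=200_000, seed=0):
    """Approximate p1 * p2 by AND-ing two independent Bernoulli bit
    streams: the representation that lets stochastic decoders replace
    each multiplication in the sum-product algorithm with one logic gate."""
    rng = random.Random(seed)
    hits = sum(int(rng.random() < p1) & int(rng.random() < p2)
               for _ in range(n_bits))
    return hits / n_bits

est = stochastic_product(0.8, 0.6)   # close to 0.8 * 0.6 = 0.48
```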
|
cs/0501091
|
A complexity-regularized quantization approach to nonlinear
dimensionality reduction
|
cs.IT math.IT
|
We consider the problem of nonlinear dimensionality reduction: given a
training set of high-dimensional data whose "intrinsic" low dimension is
assumed known, find a feature extraction map to low-dimensional space, a
reconstruction map back to high-dimensional space, and a geometric description
of the dimension-reduced data as a smooth manifold. We introduce a
complexity-regularized quantization approach for fitting a Gaussian mixture
model to the training set via a Lloyd algorithm. Complexity regularization
controls the trade-off between adaptation to the local shape of the underlying
manifold and global geometric consistency. The resulting mixture model is used
to design the feature extraction and reconstruction maps and to define a
Riemannian metric on the low-dimensional data. We also sketch a proof of
consistency of our scheme for the purposes of estimating the unknown underlying
pdf of high-dimensional data.
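A plain Lloyd iteration, the starting point that the complexity-regularized scheme augments with a codelength penalty (that regularization is omitted in this illustrative sketch; data and names are assumptions), looks as follows:

```python
import random

def lloyd(points, k, n_iter=50, seed=0):
    """Plain Lloyd (k-means) iteration on tuples of floats: alternate
    nearest-centroid assignment and centroid-mean updates."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: move each centroid to its cluster mean.
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    return centers

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(sorted(lloyd(pts, 2)))   # two centroids near (0.05, 0) and (5.05, 5)
```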
|
cs/0501092
|
Multi-Vehicle Cooperative Control Using Mixed Integer Linear Programming
|
cs.RO cs.AI cs.MA
|
We present methods to synthesize cooperative strategies for multi-vehicle
control problems using mixed integer linear programming. Complex multi-vehicle
control problems are expressed as mixed logical dynamical systems. Optimal
strategies for these systems are then solved for using mixed integer linear
programming. We motivate the methods on problems derived from an adversarial
game between two teams of robots called RoboFlag. We assume the strategy for
one team is fixed and governed by state machines. The strategy for the other
team is generated using our methods. Finally, we perform an average case
computational complexity study on our approach.
|
cs/0501093
|
Transforming Business Rules Into Natural Language Text
|
cs.AI
|
The aim of the project presented in this paper is to design a system for an
NLG architecture, which supports the documentation process of eBusiness models.
A major task is to enrich the formal description of an eBusiness model with
additional information needed in an NLG task.
|
cs/0501094
|
Corpus based Enrichment of GermaNet Verb Frames
|
cs.AI
|
Lexical semantic resources, like WordNet, are often used in real applications
of natural language document processing. For example, we integrated GermaNet in
our document suite XDOC of processing of German forensic autopsy protocols. In
addition to the hypernymy and synonymy relation, we want to adapt GermaNet's
verb frames for our analysis. In this paper we outline an approach for the
domain related enrichment of GermaNet verb frames by corpus based syntactic and
co-occurred data analyses of real documents.
|
cs/0501095
|
Context Related Derivation of Word Senses
|
cs.AI
|
Real applications of natural language document processing are very often
confronted with domain specific lexical gaps during the analysis of documents
of a new domain. This paper describes an approach for the derivation of domain
specific concepts for the extension of an existing ontology. As resources we
need an initial ontology and a partially processed corpus of a domain. We
exploit the specific characteristic of the sublanguage in the corpus. Our
approach is based on syntactical structures (noun phrases) and compound
analyses to extract information required for the extension of GermaNet's
lexical resources.
|
cs/0501096
|
Transforming and Enriching Documents for the Semantic Web
|
cs.AI
|
We suggest to employ techniques from Natural Language Processing (NLP) and
Knowledge Representation (KR) to transform existing documents into documents
amenable for the Semantic Web. Semantic Web documents have at least part of
their semantics and pragmatics marked up explicitly in both a machine
processable as well as human readable manner. XML and its related standards
(XSLT, RDF, Topic Maps etc.) are the unifying platform for the tools and
methodologies developed for different application scenarios.
|
cs/0502001
|
Some Extensions of Gallager's Method to General Sources and Channels
|
cs.IT math.IT
|
The Gallager bound is well known in the area of channel coding. However, most
discussions about it mainly focus on its applications to memoryless channels.
We show in this paper that the bounds obtained by Gallager's method are very
tight even for general sources and channels that are defined in the
information-spectrum theory. Our method is mainly based on the estimations of
error exponents in those bounds, and by these estimations we proved the direct
part of the Slepian-Wolf theorem and channel coding theorem for general sources
and channels.
|
cs/0502004
|
Asymptotic Log-loss of Prequential Maximum Likelihood Codes
|
cs.LG cs.IT math.IT
|
We analyze the Dawid-Rissanen prequential maximum likelihood codes relative
to one-parameter exponential family models M. If data are i.i.d. according to
an (essentially) arbitrary P, then the redundancy grows at rate (c/2) ln n. We
show that c=v1/v2, where v1 is the variance of P, and v2 is the variance of the
distribution m* in M that is closest to P in KL divergence. This shows that
prequential codes behave quite differently from other important universal codes
such as the 2-part MDL, Shtarkov and Bayes codes, for which c=1. This behavior
is undesirable in an MDL model selection setting.
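As a hedged numeric sketch of the constant c (not part of the abstract): the KL projection of a source P onto an exponential family matches the expectation of the sufficient statistic, so m* and hence c = v1/v2 can be computed directly. The values below use a hypothetical P on {0, 1, 2} with the Poisson family as M.

```python
# Hypothetical source P on {0, 1, 2}; model class M = {Poisson(theta)}.
# By moment matching, the KL-closest m* is Poisson(E_P[X]), whose variance
# equals its mean, so v2 = E_P[X].
p = {0: 0.5, 1: 0.2, 2: 0.3}
mean = sum(k * q for k, q in p.items())              # E_P[X]
v1 = sum((k - mean) ** 2 * q for k, q in p.items())  # variance of P
v2 = mean                                            # variance of m*
c = v1 / v2
print(c)  # here c differs from 1, so the prequential redundancy (c/2) ln n
          # differs from the (1/2) ln n of 2-part MDL, Shtarkov, Bayes codes
```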
|
cs/0502006
|
Neural network ensembles: Evaluation of aggregation algorithms
|
cs.AI cs.NE
|
Ensembles of artificial neural networks show improved generalization
capabilities that outperform those of single networks. However, for aggregation
to be effective, the individual networks must be as accurate and diverse as
possible. An important problem is, then, how to tune the aggregate members in
order to have an optimal compromise between these two conflicting conditions.
We present here an extensive evaluation of several algorithms for ensemble
construction, including new proposals and comparing them with standard methods
in the literature. We also discuss a potential problem with sequential
aggregation algorithms: the non-frequent but damaging selection through their
heuristics of particularly bad ensemble members. We introduce modified
algorithms that cope with this problem by allowing individual weighting of
aggregate members. Our algorithms and their weighted modifications compare
favorably with other methods in the literature, producing an appreciable
improvement in performance on most of the standard statistical databases used
as benchmarks.
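As a toy illustration of why aggregating accurate-but-diverse members helps (a sketch with synthetic stand-ins for trained networks, not the paper's algorithms; all names are hypothetical):

```python
import random

rng = random.Random(0)

def target(x):
    return 2.0 * x + 1.0

xs = [i / 10.0 for i in range(100)]

# Stand-ins for trained networks: the true function plus an individual
# bias and independent noise (accurate yet diverse members).
biases = [-0.3, 0.1, 0.2, -0.1, 0.05]
preds = [[target(x) + b + rng.gauss(0.0, 0.5) for x in xs] for b in biases]

def mse(p):
    return sum((pi - target(x)) ** 2 for pi, x in zip(p, xs)) / len(xs)

# Unweighted aggregation: average the member outputs pointwise.
avg = [sum(col) / len(col) for col in zip(*preds)]

print(min(mse(p) for p in preds))  # error of the best single member
print(mse(avg))                    # ensemble error is lower
```

Individual weighting of aggregate members, as in the modified algorithms above, would replace the plain average with a weighted one.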
|
cs/0502007
|
Identification of complex systems in the basis of wavelets
|
cs.CE cs.NE
|
In this paper a method for the identification of complex dynamic systems is
proposed. The method can be used for the identification of linear and
nonlinear complex dynamic systems with deterministic or stochastic signals at
the inputs and the outputs. We propose to use a basis of wavelets to obtain
the impulse transient function (ITF) of the system; the ITF is represented as
a surface in 3D space. Results of experiments on the identification of systems
in the basis of wavelets are given.
|
cs/0502008
|
Scientific Data Management in the Coming Decade
|
cs.DB cs.CE
|
This is a thought piece on data-intensive science requirements for databases
and science centers. It argues that peta-scale datasets will be housed by
science centers that provide substantial storage and processing for scientists
who access the data via smart notebooks. Next-generation science instruments
and simulations will generate these peta-scale datasets. The need to publish
and share data and the need for generic analysis and visualization tools will
finally create a convergence on common metadata standards. Database systems
will be judged by their support of these metadata standards and by their
ability to manage and access peta-scale datasets. The procedural
stream-of-bytes-file-centric approach to data analysis is both too cumbersome
and too serial for such large datasets. Non-procedural query and analysis of
schematized self-describing data is both easier to use and allows much more
parallelism.
|
cs/0502009
|
Performance Considerations for Gigabyte per Second Transcontinental
Disk-to-Disk File Transfers
|
cs.DB cs.PF
|
Moving data from CERN to Pasadena at a gigabyte per second using the next
generation Internet requires good networking and good disk IO. Ten Gbps
Ethernet and OC192 links are in place, so now it is simply a matter of
programming. This report describes our preliminary work and measurements in
configuring the disk subsystem for this effort. Using 24 SATA disks at each
endpoint, we are able to locally read and write an NTFS volume striped across
24 disks at 1.2 GBps. A 32-disk stripe delivers 1.7 GBps. Experiments on
higher-performance and higher-capacity systems deliver up to 3.5 GBps.
|
cs/0502010
|
TerraServer SAN-Cluster Architecture and Operations Experience
|
cs.DC cs.DB
|
Microsoft TerraServer displays aerial, satellite, and topographic images of
the earth in a SQL database available via the Internet. It is one of the most
popular online atlases, presenting seventeen terabytes of image data from the
United States Geological Survey (USGS). Initially deployed in 1998, the system
demonstrated the scalability of PC hardware and software - Windows and SQL
Server - on a single, mainframe-class processor. In September 2000, the
back-end database application was migrated to a 4-node active/passive cluster
connected to an 18 terabyte Storage Area Network (SAN). The new configuration
was designed to achieve 99.99% availability for the back-end application. This
paper describes the hardware and software components of the TerraServer Cluster
and SAN, and describes our experience in configuring and operating this system
for three years. Not surprisingly, the hardware and architecture delivered
better than four-9's of availability, but operations mistakes delivered
three-9's.
|
cs/0502011
|
Where the Rubber Meets the Sky: Bridging the Gap between Databases and
Science
|
cs.DB cs.CE
|
Scientists in all domains face a data avalanche - both from better
instruments and from improved simulations. We believe that computer science
tools and computer scientists are in a position to help all the sciences by
building tools and developing techniques to manage, analyze, and visualize
peta-scale scientific information. This article summarizes our experiences
over the last seven years trying to bridge the gap between database technology
and the needs of the astronomy community in building the World-Wide Telescope.
|
cs/0502015
|
Can Computer Algebra be Liberated from its Algebraic Yoke?
|
cs.SC cs.CE
|
So far, the scope of computer algebra has been needlessly restricted to exact
algebraic methods. Its possible extension to approximate analytical methods is
discussed. The entangled roles of functional analysis and symbolic programming,
especially the functional and transformational paradigms, are put forward. In
the future, algebraic algorithms could constitute the core of extended symbolic
manipulation systems including primitives for symbolic approximations.
|
cs/0502016
|
Stability Analysis for Regularized Least Squares Regression
|
cs.LG
|
We discuss stability for a class of learning algorithms with respect to noisy
labels. The algorithms we consider are for regression, and they involve the
minimization of regularized risk functionals, such as L(f) := 1/N sum_i
(f(x_i)-y_i)^2+ lambda ||f||_H^2. We shall call the algorithm `stable' if, when
y_i is a noisy version of f*(x_i) for some function f* in H, the output of the
algorithm converges to f* as the regularization term and noise simultaneously
vanish. We consider two flavors of this problem, one where a data set of N
points remains fixed, and the other where N -> infinity. For the case where N
-> infinity, we give conditions for convergence to f_E (the function which is
the expectation of y(x) for each x), as lambda -> 0. For the fixed N case, we
describe the limiting 'non-noisy', 'non-regularized' function f*, and give
conditions for convergence. In the process, we develop a set of tools for
dealing with functionals such as L(f), which are applicable to many other
problems in learning theory.
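A one-dimensional sketch of the fixed-design setting described above (hypothetical data; H is taken as the linear functions f(x) = w x, so ||f||_H reduces to |w|): the closed-form minimiser of L converges to the target coefficient as lambda and the noise vanish together.

```python
import random

rng = random.Random(1)
xs = [rng.uniform(-1, 1) for _ in range(200)]
w_true = 3.0                     # hypothetical target f*(x) = w_true * x

def ridge_fit(lam, sigma):
    # y_i is a noisy version of f*(x_i); minimise the regularized risk
    # (1/N) sum_i (w*x_i - y_i)^2 + lam * w^2   (1-D analogue of L(f))
    ys = [w_true * x + rng.gauss(0.0, sigma) for x in xs]
    n = len(xs)
    sxy = sum(x * y for x, y in zip(xs, ys)) / n
    sxx = sum(x * x for x in xs) / n
    return sxy / (sxx + lam)     # closed-form minimiser in w

# As regularization and noise vanish simultaneously, the estimate
# approaches the target coefficient w_true.
for lam, sigma in [(1.0, 1.0), (0.1, 0.3), (0.001, 0.01)]:
    print(lam, sigma, ridge_fit(lam, sigma))
print(ridge_fit(1e-6, 1e-4))     # close to w_true
```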
|
cs/0502017
|
Estimating mutual information and multi-information in large networks
|
cs.IT cs.AI cs.CV cs.LG math.IT
|
We address the practical problems of estimating the information relations
that characterize large networks. Building on methods developed for analysis of
the neural code, we show that reliable estimates of mutual information can be
obtained with manageable computational effort. The same methods allow
estimation of higher-order, multi-information terms. These ideas are
illustrated by analyses of gene expression, financial markets, and consumer
preferences. In each case, information theoretic measures correlate with
independent, intuitive measures of the underlying structures in the system.
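To make the estimated quantity concrete, here is the basic plug-in (histogram) estimator of mutual information, the naive form that bias-correction methods from neural coding refine (a sketch, not the paper's estimator):

```python
import math
from collections import Counter

def mutual_information(pairs):
    # Plug-in estimate of I(X;Y) in bits from (x, y) samples: build the
    # empirical joint and marginal histograms and sum p log(p / (px*py)).
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# Perfectly dependent binary variables: I(X;Y) = H(X) = 1 bit.
dep = [(b, b) for b in (0, 1) for _ in range(500)]
print(mutual_information(dep))   # -> 1.0
# Independent binary variables: I(X;Y) = 0.
ind = [(x, y) for x in (0, 1) for y in (0, 1) for _ in range(250)]
print(mutual_information(ind))   # -> 0.0
```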
|
cs/0502020
|
Population Sizing for Genetic Programming Based Upon Decision Making
|
cs.AI cs.NE
|
This paper derives a population sizing relationship for genetic programming
(GP). Following the population-sizing derivation for genetic algorithms in
Goldberg, Deb, and Clark (1992), it considers building block decision making as
a key facet. The analysis yields a GP-unique relationship because it has to
account for bloat and for the fact that GP solutions often use a subsolution
multiple times. The population-sizing relationship depends upon tree size,
solution complexity, problem difficulty and building block expression
probability. The relationship is used to analyze and empirically investigate
population sizing for three model GP problems named ORDER, ON-OFF and LOUD.
These problems exhibit bloat to differing extents and differ in whether their
solutions require the use of a building block multiple times.
|
cs/0502021
|
Oiling the Wheels of Change: The Role of Adaptive Automatic Problem
Decomposition in Non-Stationary Environments
|
cs.NE cs.AI
|
Genetic algorithms (GAs) that solve hard problems quickly, reliably and
accurately are called competent GAs. When the fitness landscape of a problem
changes over time, the problem is called a non-stationary, dynamic or
time-variant problem. This paper investigates the use of competent GAs for
optimizing non-stationary optimization problems. More specifically, we use an
information-theoretic approach based on the minimum description length
principle to adaptively identify regularities and substructures that can be
exploited to respond quickly to changes in the environment. We also develop a
special class of problems with bounded difficulty for testing non-stationary
optimization. The results provide new insights into non-stationary
optimization problems and show that a search algorithm which automatically
identifies and exploits possible decompositions is more robust and responds
more quickly to changes than a simple genetic algorithm.
|
cs/0502022
|
Sub-Structural Niching in Non-Stationary Environments
|
cs.NE cs.AI
|
Niching enables a genetic algorithm (GA) to maintain diversity in a
population. It is particularly useful when the problem has multiple optima
and the aim is to find all, or as many as possible, of these optima. When the
fitness landscape of a problem changes over time, the problem is called a
non-stationary, dynamic or time-variant problem. In such problems, niching
can maintain useful solutions so as to respond quickly, reliably and
accurately to a change in the environment. In this paper, we present a niching
method that works on problem substructures rather than on whole solutions,
and therefore has lower space complexity than previously known niching
mechanisms. We show that the method responds accurately when environmental
changes occur.
|
cs/0502023
|
Sub-structural Niching in Estimation of Distribution Algorithms
|
cs.NE cs.AI
|
We propose a sub-structural niching method that fully exploits the problem
decomposition capability of linkage-learning methods such as the estimation of
distribution algorithms and concentrate on maintaining diversity at the
sub-structural level. The proposed method consists of three key components: (1)
Problem decomposition and sub-structure identification, (2) sub-structure
fitness estimation, and (3) sub-structural niche preservation. The
sub-structural niching method is compared to restricted tournament selection
(RTS)--a niching method used in hierarchical Bayesian optimization
algorithm--with special emphasis on sustained preservation of multiple global
solutions of a class of boundedly-difficult, additively-separable multimodal
problems. The results show that sub-structural niching successfully maintains
multiple global optima over a large number of generations, and does so with a
significantly smaller population than RTS. Additionally, the market share of
each niche is much closer to the expected level in sub-structural niching than
in RTS.
|
cs/0502024
|
Idempotents, Mattson-Solomon Polynomials and Binary LDPC codes
|
cs.IT math.IT
|
We show how to construct an algorithm to search for binary idempotents which
may be used to construct binary LDPC codes. The algorithm, which allows control
of the key properties of sparseness, code rate and minimum distance, is
constructed in the Mattson-Solomon domain. Some of the new codes, found by
using this technique, are displayed.
|
cs/0502029
|
Scalability of Genetic Programming and Probabilistic Incremental Program
Evolution
|
cs.NE cs.AI
|
This paper discusses scalability of standard genetic programming (GP) and the
probabilistic incremental program evolution (PIPE). To investigate the need for
both effective mixing and linkage learning, two test problems are considered:
ORDER problem, which is rather easy for any recombination-based GP, and TRAP or
the deceptive trap problem, which requires the algorithm to learn interactions
among subsets of terminals. The scalability results show that both GP and PIPE
scale up polynomially with problem size on the simple ORDER problem, but they
both scale up exponentially on the deceptive problem. This indicates that while
standard recombination is sufficient when no interactions need to be
considered, for some problems linkage learning is necessary. These results are
in agreement with the lessons learned in the domain of binary-string genetic
algorithms (GAs). Furthermore, the paper investigates the effects of
introducing unnecessary and irrelevant primitives on the performance of GP and
PIPE.
|
cs/0502033
|
Pseudo-Codewords of Cycle Codes via Zeta Functions
|
cs.IT math.IT
|
Cycle codes are a special case of low-density parity-check (LDPC) codes and
as such can be decoded using an iterative message-passing decoding algorithm on
the associated Tanner graph. The existence of pseudo-codewords is known to
cause the decoding algorithm to fail in certain instances. In this paper, we
draw a connection between pseudo-codewords of cycle codes and the so-called
edge zeta function of the associated normal graph and show how the Newton
polyhedron of the zeta function equals the fundamental cone of the code, which
plays a crucial role in characterizing the performance of iterative decoding
algorithms.
|
cs/0502034
|
Multiobjective hBOA, Clustering, and Scalability
|
cs.NE cs.AI
|
This paper describes a scalable algorithm for solving multiobjective
decomposable problems by combining the hierarchical Bayesian optimization
algorithm (hBOA) with the nondominated sorting genetic algorithm (NSGA-II) and
clustering in the objective space. It is first argued that for good
scalability, clustering or some other form of niching in the objective space is
necessary and the size of each niche should be approximately equal.
Multiobjective hBOA (mohBOA) is then described that combines hBOA, NSGA-II and
clustering in the objective space. The algorithm mohBOA differs from the
multiobjective variants of BOA and hBOA proposed in the past by including
clustering in the objective space and allocating an approximately equally sized
portion of the population to each cluster. The algorithm mohBOA is shown to
scale up well on a number of problems on which standard multiobjective
evolutionary algorithms perform poorly.
|
cs/0502035
|
Near Maximum-Likelihood Performance of Some New Cyclic Codes Constructed
in the Finite-Field Transform Domain
|
cs.IT math.IT
|
It is shown that some well-known and some new cyclic codes with orthogonal
parity-check equations can be constructed in the finite-field transform domain.
It is also shown that, for some binary linear cyclic codes, the performance of
the iterative decoder can be improved by substituting some of the dual code
codewords in the parity-check matrix with other dual code codewords formed from
linear combinations. This technique can bring the performance of a code closer
to its maximum-likelihood performance, which can be derived from the
erroneously decoded codeword whose Euclidean distance with respect to the
received block is smaller than that of the correct codeword. For the (63,37), (93,47) and
(105,53) cyclic codes, the maximum-likelihood performance is realised with this
technique.
|
cs/0502036
|
Improved Iterative Decoding for Perpendicular Magnetic Recording
|
cs.IT math.IT
|
An algorithm for improving the performance of iterative decoding on
perpendicular magnetic recording is presented. This algorithm follows on from
the authors' previous work on parallel and serial concatenated turbo codes and
low-density parity-check codes. The application of this algorithm with the
signal-to-noise ratio mismatch technique shows promising results in the
presence of media noise. We also show that, compared to the standard iterative
decoding algorithm, an improvement of up to one order of magnitude can be
achieved.
|
cs/0502037
|
GF(2^m) Low-Density Parity-Check Codes Derived from Cyclotomic Cosets
|
cs.IT math.IT
|
Based on the ideas of cyclotomic cosets, idempotents and Mattson-Solomon
polynomials, we present a new method to construct GF(2^m) (m > 0) cyclic
low-density parity-check codes. The construction method produces the dual code
idempotent which is used to define the parity-check matrix of the low-density
parity-check code. An interesting feature of this construction method is the
ability to increment the code dimension by adding more idempotents and so
steadily decrease the sparseness of the parity-check matrix. We show that the
constructed codes can achieve performance very close to the
sphere-packing-bound constrained for binary transmission.
|
cs/0502042
|
Unified Large System Analysis of MMSE and Adaptive Least Squares
Receivers for a class of Random Matrix Channels
|
cs.IT math.IT
|
We present a unified large system analysis of linear receivers for a class of
random matrix channels. The technique unifies the analysis of both the
minimum-mean-squared-error (MMSE) receiver and the adaptive least-squares (ALS)
receiver, and also uses a common approach for both random i.i.d. and random
orthogonal precoding. We derive expressions for the asymptotic
signal-to-interference-plus-noise ratio (SINR) of the MMSE receiver, and both the
transient and steady-state SINR of the ALS receiver, trained using either
i.i.d. data sequences or orthogonal training sequences. The results are in
terms of key system parameters, and allow for arbitrary distributions of the
power of each of the data streams and the eigenvalues of the channel
correlation matrix. In the case of the ALS receiver, we allow a diagonal
loading constant and an arbitrary data windowing function. For i.i.d. training
sequences and no diagonal loading, we give a fundamental relationship between
the transient/steady-state SINR of the ALS and the MMSE receivers. We
demonstrate that for a particular ratio of receive to transmit dimensions and
window shape, all channels which have the same MMSE SINR have an identical
transient ALS SINR response. We demonstrate several applications of the
results, including an optimization of information throughput with respect to
training sequence length in coded block transmission.
|
cs/0502049
|
Generalised Bent Criteria for Boolean Functions (I)
|
cs.IT math.IT
|
Generalisations of the bent property of a boolean function are presented, by
proposing spectral analysis with respect to a well-chosen set of local unitary
transforms. Quadratic boolean functions are related to simple graphs and it is
shown that the orbit generated by successive Local Complementations on a graph
can be found within the transform spectra under investigation. The flat spectra
of a quadratic boolean function are related to modified versions of its
associated adjacency matrix.
|
cs/0502050
|
Generalised Bent Criteria for Boolean Functions (II)
|
cs.IT math.IT
|
In the first part of this paper [16], some results on how to compute the flat
spectra of Boolean constructions w.r.t. the transforms {I,H}^n, {H,N}^n and
{I,H,N}^n were presented, and the relevance of Local Complementation to the
quadratic case was indicated. In this second part, the results are applied to
develop recursive formulae for the numbers of flat spectra of some structural
quadratics. Observations are made as to the generalised Bent properties of
boolean functions of algebraic degree greater than two, and the number of flat
spectra w.r.t. {I,H,N}^n are computed for some of them.
|
cs/0502052
|
Log Analysis Case Study Using LoGS
|
cs.CR cs.IR
|
A very useful technique a network administrator can use to identify
problematic network behavior is careful analysis of logs of incoming and
outgoing network flows. The challenge one faces when attempting to undertake
this course of action, though, is that large networks tend to generate an
extremely large quantity of network traffic in a very short period of time,
resulting in very large traffic logs which must be analyzed post-generation
with an eye for contextual information which may reveal symptoms of problematic
traffic. A better technique is to perform real-time log analysis using a
real-time context-generating tool such as LoGS.
|
cs/0502053
|
A low-cost time-hopping impulse radio system for high data rate
transmission
|
cs.IT math.IT
|
We present an efficient, low-cost implementation of time-hopping impulse
radio that fulfills the spectral mask mandated by the FCC and is suitable for
high-data-rate, short-range communications. Key features are: (i) all-baseband
implementation that obviates the need for passband components, (ii) symbol-rate
(not chip rate) sampling, A/D conversion, and digital signal processing, (iii)
fast acquisition due to novel search algorithms, (iv) spectral shaping that can
be adapted to accommodate different spectrum regulations and interference
environments. Computer simulations show that this system can provide 110Mbit/s
at 7-10m distance, as well as higher data rates at shorter distances under FCC
emissions limits. Due to the spreading concept of time-hopping impulse radio,
the system can sustain multiple simultaneous users, and can suppress narrowband
interference effectively.
|
cs/0502055
|
On quasi-cyclic interleavers for parallel turbo codes
|
cs.IT math.IT
|
We present an interleaving scheme that yields quasi-cyclic turbo codes. We
prove that randomly chosen members of this family yield with probability almost
1 turbo codes with asymptotically optimum minimum distance, i.e. growing as a
logarithm of the interleaver size. These interleavers are also very practical
in terms of memory requirements and their decoding error probabilities for
small block lengths compare favorably with previous interleaving schemes.
|
cs/0502057
|
Decomposable Problems, Niching, and Scalability of Multiobjective
Estimation of Distribution Algorithms
|
cs.NE cs.AI
|
The paper analyzes the scalability of multiobjective estimation of
distribution algorithms (MOEDAs) on a class of boundedly-difficult
additively-separable multiobjective optimization problems. The paper
illustrates that even if the linkage is correctly identified, massive
multimodality of the search problems can easily overwhelm the nicher and lead
to exponential scale-up. Facetwise models are subsequently used to propose a
growth rate for the number of differing substructures between the two
objectives that prevents the niching method from being overwhelmed and leads
to polynomial scalability of MOEDAs.
|
cs/0502060
|
Perspectives for Strong Artificial Life
|
cs.AI
|
This text introduces the twin deadlocks of strong artificial life.
Conceptualization of life is a deadlock both because of the existence of a
continuum between the inert and the living, and because we only know one
instance of life. Computationalism is a second deadlock since it remains a
matter of faith. Nevertheless, artificial life realizations are progressing
quickly, and recent constructions embed an ever-growing set of the intuitive
properties of life. This growing gap between theory and realizations should
sooner or later crystallize in some kind of paradigm shift and then give clues
to break the twin deadlocks.
|
cs/0502063
|
Nonlinear MMSE Multiuser Detection Based on Multivariate Gaussian
Approximation
|
cs.IT math.IT
|
In this paper, a class of nonlinear MMSE multiuser detectors is derived
based on a multivariate Gaussian approximation of the multiple-access
interference. This approach leads to expressions identical to those describing
the probabilistic data association (PDA) detector, thus providing an
alternative analytical justification for this structure. A simplification to
the PDA detector based on approximating the covariance matrix of the
multivariate Gaussian distribution is suggested, resulting in a soft
interference cancellation scheme. Corresponding multiuser soft-input,
soft-output detectors delivering extrinsic log-likelihood ratios are derived
for application in iterative multiuser decoders. Finally, a large system
performance analysis is conducted for the simplified PDA, showing that the bit
error rate performance of this detector can be accurately predicted and related
to the replica method analysis for the optimal detector. Methods from
statistical neuro-dynamics are shown to provide a closely related alternative
large system prediction. Numerical results demonstrate that for large systems,
the bit error rate is accurately predicted by the analysis and found to be
close to optimal performance.
|
cs/0502067
|
Master Algorithms for Active Experts Problems based on Increasing Loss
Values
|
cs.LG cs.AI
|
We specify an experts algorithm with the following characteristics: (a) it
uses only feedback from the actions actually chosen (bandit setup), (b) it can
be applied with countably infinite expert classes, and (c) it copes with losses
that may grow in time appropriately slowly. We prove loss bounds against an
adaptive adversary. From this, we obtain master algorithms for "active experts
problems", which means that the master's actions may influence the behavior of
the adversary. Our algorithm can significantly outperform standard experts
algorithms on such problems. Finally, we combine it with a universal expert
class. This results in a (computationally infeasible) universal master
algorithm which performs - in a certain sense - almost as well as any
computable strategy, for any online problem.
|
cs/0502071
|
Analysis of Second-order Statistics Based Semi-blind Channel Estimation
in CDMA Channels
|
cs.IT math.IT
|
The performance of second order statistics (SOS) based semi-blind channel
estimation in long-code DS-CDMA systems is analyzed. The covariance matrix of
SOS estimates is obtained in the large system limit, and is used to analyze the
large-sample performance of two SOS based semi-blind channel estimation
algorithms. A notion of blind estimation efficiency is also defined and is
examined via simulation results.
|
cs/0502072
|
Batch is back: CasJobs, serving multi-TB data on the Web
|
cs.DC cs.DB
|
The Sloan Digital Sky Survey (SDSS) science database describes over 140
million objects and is over 1.5 TB in size. The SDSS Catalog Archive Server
(CAS) provides several levels of query interface to the SDSS data via the
SkyServer website. Most queries execute in seconds or minutes. However, some
queries can take hours or days, either because they require non-index scans of
the largest tables, or because they request very large result sets, or because
they represent very complex aggregations of the data. These "monster queries"
not only take a long time, they also affect response times for everyone else -
one or more of them can clog the entire system. To ameliorate this problem, we
developed a multi-server multi-queue batch job submission and tracking system
for the CAS called CasJobs. The transfer of very large result sets from queries
over the network is another serious problem. Statistics suggested that much of
this data transfer is unnecessary; users would prefer to store results locally
in order to allow further joins and filtering. To allow local analysis, a
system was developed that gives users their own personal databases (MyDB) at
the server side. Users may transfer data to their MyDB, and then perform
further analysis before extracting it to their own machine. MyDB tables also
provide a convenient way to share results of queries with collaborators without
downloading them. CasJobs is built using SOAP XML Web services and has been in
operation since May 2004.
|
cs/0502074
|
On sample complexity for computational pattern recognition
|
cs.LG cs.CC
|
In statistical setting of the pattern recognition problem the number of
examples required to approximate an unknown labelling function is linear in the
VC dimension of the target learning class. In this work we consider the
question whether such bounds exist if we restrict our attention to computable
pattern recognition methods, assuming that the unknown labelling function is
also computable. We find that in this case the number of examples required for
a computable method to approximate the labelling function is not only
nonlinear, but grows faster (in the VC dimension of the class) than any
computable function. No time or space constraints are put on the predictors or target
functions; the only resource we consider is the training examples.
The task of pattern recognition is considered in conjunction with another
learning problem -- data compression. An impossibility result for the task of
data compression allows us to estimate the sample complexity for pattern
recognition.
|
cs/0502075
|
How far will you walk to find your shortcut: Space Efficient Synopsis
Construction Algorithms
|
cs.DS cs.DB
|
In this paper we consider the wavelet synopsis construction problem without
the restriction that we only choose a subset of coefficients of the original
data. We provide the first near-optimal algorithm. We arrive at this
algorithm by considering space-efficient algorithms for the restricted version
of the problem. In this context we improve previous algorithms by almost a
linear factor and reduce the required space to almost linear. Our techniques
also extend to histogram construction, and improve the space-running time
tradeoffs for V-Opt and range query histograms. We believe the idea applies to
a broad range of dynamic programs and demonstrate it by showing improvements in
a knapsack-like setting seen in construction of Extended Wavelets.
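For orientation, the restricted version that the paper generalizes can be sketched as: Haar-transform the data, keep the B largest-magnitude coefficients, zero the rest (a hedged toy, not the paper's near-optimal unrestricted algorithm):

```python
def haar_transform(data):
    # Unnormalised Haar transform of a length-2^k sequence:
    # returns [overall average, details from coarsest to finest].
    coeffs = list(data)
    out = []
    while len(coeffs) > 1:
        half = len(coeffs) // 2
        avgs = [(coeffs[2 * i] + coeffs[2 * i + 1]) / 2 for i in range(half)]
        dets = [(coeffs[2 * i] - coeffs[2 * i + 1]) / 2 for i in range(half)]
        out = dets + out
        coeffs = avgs
    return coeffs + out

def inverse_haar(coeffs):
    # Rebuild the data by expanding one detail level at a time.
    vals = [coeffs[0]]
    pos = 1
    while pos < len(coeffs):
        dets = coeffs[pos:pos + len(vals)]
        pos += len(vals)
        nxt = []
        for v, d in zip(vals, dets):
            nxt.extend([v + d, v - d])
        vals = nxt
    return vals

def synopsis(data, b):
    # Restricted synopsis: retain only the b largest-magnitude
    # coefficients of the data's own transform.
    c = haar_transform(data)
    keep = set(sorted(range(len(c)), key=lambda i: abs(c[i]), reverse=True)[:b])
    kept = [ci if i in keep else 0.0 for i, ci in enumerate(c)]
    return inverse_haar(kept)

data = [2.0, 2.0, 2.0, 2.0, 10.0, 10.0, 0.0, 0.0]
print(synopsis(data, 3))  # exact reconstruction from only 3 of 8 coefficients
```

Dropping the subset restriction means the synopsis coefficients need not come from the data's own transform, which is where the error-versus-space tradeoff the paper optimizes arises.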
|
cs/0502076
|
Learning nonsingular phylogenies and hidden Markov models
|
cs.LG cs.CE math.PR math.ST q-bio.PE stat.TH
|
In this paper we study the problem of learning phylogenies and hidden Markov
models. We call a Markov model nonsingular if all transition matrices have
determinants bounded away from 0 (and 1). We highlight the role of the
nonsingularity condition for the learning problem. Learning hidden Markov
models without the nonsingularity condition is at least as hard as learning
parity with noise, a well-known learning problem conjectured to be
computationally hard. On the other hand, we give a polynomial-time algorithm
for learning nonsingular phylogenies and hidden Markov models.
|
cs/0502077
|
On the Achievable Information Rates of Finite-State Input
Two-Dimensional Channels with Memory
|
cs.IT math.IT
|
The achievable information rate of finite-state input two-dimensional (2-D)
channels with memory is an open problem, which is relevant, e.g., for
inter-symbol-interference (ISI) channels and cellular multiple-access channels.
We propose a method for simulation-based computation of such information rates.
We first draw a connection between the Shannon-theoretic information rate and
the statistical mechanics notion of free energy. Since the free energy of such
systems is intractable, we approximate it using the cluster variation method,
implemented via generalized belief propagation. The derived, fully tractable,
algorithm is shown to provide a practically accurate estimate of the
information rate. In our experimental study we calculate the information rates
of 2-D ISI channels and of hexagonal Wyner cellular networks with binary
inputs, for which formerly only bounds were known.
|
cs/0502078
|
Semantical Characterizations and Complexity of Equivalences in Answer
Set Programming
|
cs.AI cs.CC
|
In recent research on non-monotonic logic programming, the strong equivalence of logic programs P and Q has repeatedly been considered; it holds if the programs P union R and Q union R have the same answer sets for any other program R. This property strengthens equivalence of P and Q with respect to answer sets (the particular case where R is the empty set), and has its
programming. In this paper, we consider more liberal notions of strong
equivalence, in which the actual form of R may be syntactically restricted. On
the one hand, we consider uniform equivalence, where R is a set of facts rather
than a set of rules. This notion, which is well known in the area of deductive
databases, is particularly useful for assessing whether programs P and Q are
equivalent as components of a logic program which is modularly structured. On
the other hand, we consider relativized notions of equivalence, where R ranges
over rules over a fixed alphabet, and thus generalize our results to
relativized notions of strong and uniform equivalence. For all these notions,
we consider disjunctive logic programs in the propositional (ground) case, as
well as some restricted classes, provide semantical characterizations and
analyze the computational complexity. Our results, which naturally extend to
answer set semantics for programs with strong negation, complement the results
on strong equivalence of logic programs and pave the way for optimizations in
answer set solvers as a tool for input-based problem solving.
|
cs/0502079
|
Multilevel expander codes
|
cs.IT math.IT
|
We define multilevel codes on bipartite graphs that have properties analogous
to multilevel serial concatenations. A decoding algorithm is described that
corrects a proportion of errors equal to half the Blokh-Zyablov bound on the
minimum distance. The error probability of this algorithm has exponent similar
to that of serially concatenated multilevel codes.
|
cs/0502080
|
Sensor Configuration and Activation for Field Detection in Large Sensor
Arrays
|
cs.IT math.IT
|
The problems of sensor configuration and activation for the detection of
correlated random fields using large sensor arrays are considered. Using
results that characterize the large-array performance of sensor networks in
this application, the detection capabilities of different sensor configurations
are analyzed and compared. The dependence of the optimal choice of
configuration on parameters such as sensor signal-to-noise ratio (SNR), field
correlation, etc., is examined, yielding insights into the most effective
choices for sensor selection and activation in various operating regimes.
|
cs/0502081
|
Tables, Memorized Semirings and Applications
|
cs.MA cs.DM
|
We define and construct a new data structure, the table, which generalizes the (finite) $k$-sets of Eilenberg \cite{Ei} and is versatile (one can vary the letters, the words and the coefficients). From this structure we derive a new semiring (carrying several semiring structures) which can be applied to the needs of automatic processing of multi-agent behaviour problems. The purpose of this paper is also to present the basic elements of these new structures from a combinatorial point of view. These structures enjoy a number of properties and will be endowed with several laws, namely: sum, Hadamard product, Cauchy product, and fuzzy operations (min, max, complemented product). Two groups of applications are presented. The first is linked to the process of "forgetting" information in the tables. The second, linked to multi-agent systems, is introduced by showing a methodology for managing emergent organization from individual behaviour models.
|
cs/0502082
|
Graphs and colorings for answer set programming
|
cs.AI cs.LO
|
We investigate the usage of rule dependency graphs and their colorings for
characterizing and computing answer sets of logic programs. This approach
provides us with insights into the interplay between rules when inducing answer
sets. We start with different characterizations of answer sets in terms of
totally colored dependency graphs that differ in graph-theoretical aspects. We
then develop a series of operational characterizations of answer sets in terms
of operators on partial colorings. In analogy to the notion of a derivation in
proof theory, our operational characterizations are expressed as
(non-deterministically formed) sequences of colorings, turning an uncolored
graph into a totally colored one. In this way, we obtain an operational
framework in which different combinations of operators result in different
formal properties. Among others, we identify the basic strategy employed by the
noMoRe system and justify its algorithmic approach. Furthermore, we distinguish
operations corresponding to Fitting's operator as well as to well-founded
semantics. (To appear in Theory and Practice of Logic Programming (TPLP))
|
cs/0502083
|
Impulse Radio Systems with Multiple Types of Ultra-Wideband Pulses
|
cs.IT math.IT
|
Spectral properties and performance of multi-pulse impulse radio
ultra-wideband systems with pulse-based polarity randomization are analyzed.
Instead of a single type of pulse transmitted in each frame, multiple types of
pulses are considered, which is shown to reduce the effects of multiple-access
interference. First, the spectral properties of a multi-pulse impulse radio system are investigated. It is shown that the power spectral density is the
average of spectral contents of different pulse shapes. Then, approximate
closed-form expressions for bit error probability of a multi-pulse impulse
radio system are derived for RAKE receivers in asynchronous multiuser
environments. The theoretical and simulation results indicate that impulse
radio systems that are more robust against multiple-access interference than a
"classical" impulse radio system can be designed with multiple types of
ultra-wideband pulses.
|
cs/0502084
|
On the Typicality of the Linear Code Among the LDPC Coset Code Ensemble
|
cs.IT math.IT
|
Density evolution (DE) is one of the most powerful analytical tools for
low-density parity-check (LDPC) codes on memoryless
binary-input/symmetric-output channels. The case of non-symmetric channels is
tackled either by the LDPC coset code ensemble (a channel symmetrizing
argument) or by the generalized DE for linear codes on non-symmetric channels.
Existing simulations show that the bit error rate performances of these two
different approaches are nearly identical. This paper explains this phenomenon
by proving that as the minimum check node degree $d_c$ becomes sufficiently
large, the performance discrepancy of the linear and the coset LDPC codes is
theoretically indistinguishable. This typicality of linear codes among the LDPC
coset code ensemble provides insight into the concentration theorem of LDPC
coset codes.
|
cs/0502086
|
The Self-Organization of Speech Sounds
|
cs.LG cs.AI cs.CL cs.NE cs.RO math.DS
|
The speech code is a vehicle of language: it defines a set of forms used by a
community to carry information. Such a code is necessary to support the
linguistic interactions that allow humans to communicate. How then may a speech
code be formed prior to the existence of linguistic interactions? Moreover, the
human speech code is discrete and compositional, shared by all the individuals
of a community but different across communities, and phoneme inventories are
characterized by statistical regularities. How can a speech code with these
properties form? We try to approach these questions in the paper, using the
"methodology of the artificial". We build a society of artificial agents, and
detail a mechanism that shows the formation of a discrete speech code without
pre-supposing the existence of linguistic capacities or of coordinated
interactions. The mechanism is based on a low-level model of sensory-motor
interactions. We show that the integration of certain very simple and non
language-specific neural devices leads to the formation of a speech code that
has properties similar to the human speech code. This result relies on the
self-organizing properties of a generic coupling between perception and
production within agents, and on the interactions between agents. The
artificial system helps us to develop better intuitions on how speech might
have appeared, by showing how self-organization might have helped natural
selection to find speech.
|
cs/0502087
|
Self-Replicating Strands that Self-Assemble into User-Specified Meshes
|
cs.NE cs.CE cs.MA
|
It has been argued that a central objective of nanotechnology is to make
products inexpensively, and that self-replication is an effective approach to
very low-cost manufacturing. The research presented here is intended to be a
step towards this vision. In previous work (JohnnyVon 1.0), we simulated
machines that bonded together to form self-replicating strands. There were two
types of machines (called types 0 and 1), which enabled strands to encode
arbitrary bit strings. However, the information encoded in the strands had no
functional role in the simulation. The information was replicated without being
interpreted, which was a significant limitation for potential manufacturing
applications. In the current work (JohnnyVon 2.0), the information in a strand
is interpreted as instructions for assembling a polygonal mesh. There are now
four types of machines and the information encoded in a strand determines how
it folds. A strand may be in an unfolded state, in which the bonds are straight
(although they flex slightly due to virtual forces acting on the machines), or
in a folded state, in which the bond angles depend on the types of machines. By
choosing the sequence of machine types in a strand, the user can specify a
variety of polygonal shapes. A simulation typically begins with an initial
unfolded seed strand in a soup of unbonded machines. The seed strand replicates
by bonding with free machines in the soup. The child strands fold into the
encoded polygonal shape, and then the polygons drift together and bond to form
a mesh. We demonstrate that a variety of polygonal meshes can be manufactured
in the simulation, by simply changing the sequence of machine types in the
seed.
|
cs/0502088
|
Towards a Systematic Account of Different Semantics for Logic Programs
|
cs.AI cs.LO
|
In [Hitzler and Wendt 2002, 2005], a new methodology has been proposed which allows one to derive uniform characterizations of different declarative semantics
for logic programs with negation. One result from this work is that the
well-founded semantics can formally be understood as a stratified version of
the Fitting (or Kripke-Kleene) semantics. The constructions leading to this
result, however, show a certain asymmetry which is not readily understood. We
will study this situation here with the result that we will obtain a coherent
picture of relations between different semantics for normal logic programs.
|
cs/0502094
|
Coalition Formation: Concessions, Task Relationships and Complexity
Reduction
|
cs.MA
|
Solutions to the coalition formation problem commonly assume agent
rationality and, correspondingly, utility maximization. This in turn may
prevent agents from making compromises. As shown in recent studies, compromise
may facilitate coalition formation and increase agent utilities. In this study we build on those new results. We devise a novel coalition formation
mechanism that enhances compromise. Our mechanism can utilize information on
task dependencies to reduce formation complexity. Further, it works well with
both cardinal and ordinal task values. Via experiments we show that the use of
the suggested compromise-based coalition formation mechanism provides
significant savings in the computation and communication complexity of
coalition formation. Our results also show that when information on task
dependencies is used, the complexity of coalition formation is further reduced.
We demonstrate successful use of the mechanism for collaborative information
filtering, where agents combine linguistic rules to analyze documents'
contents.
|
cs/0502095
|
Gradient Vector Flow Models for Boundary Extraction in 2D Images
|
cs.CV
|
The Gradient Vector Flow (GVF) is a vector diffusion approach based on
Partial Differential Equations (PDEs). This method has been applied together
with snake models for boundary extraction in medical image segmentation. The key idea is to use a diffusion-reaction PDE to generate a new external force field that makes snake models less sensitive to initialization and improves the snake's ability to move into boundary concavities. In this paper, we first review basic results about convergence and numerical analysis of the usual GVF schemes. We point out that GVF presents numerical problems due to discontinuities in image intensity. This point is considered from a practical viewpoint, from which the GVF parameters must follow a relationship in order to improve numerical convergence. In addition, we present an analytical study of the GVF's dependency on the parameter values. We also observe that the method
can be used for multiply connected domains by just imposing the suitable
boundary condition. In the experimental results we verify these theoretical
points and demonstrate the utility of GVF on a segmentation approach that we
have developed based on snakes.
|
cs/0502096
|
Property analysis of symmetric travelling salesman problem instances
acquired through evolution
|
cs.NE cs.AI
|
We show how an evolutionary algorithm can successfully be used to evolve sets of difficult-to-solve symmetric travelling salesman problem instances for two variants of the Lin-Kernighan algorithm. We then analyse the instances in those sets to guide us towards inferring general knowledge about the efficiency of the two variants in relation to structural properties of the symmetric travelling salesman problem.
|
cs/0503001
|
Top-Down Unsupervised Image Segmentation (it sounds like oxymoron, but
actually it is not)
|
cs.CV cs.IR
|
Pattern recognition is generally assumed as an interaction of two inversely
directed image-processing streams: the bottom-up information details gathering
and localization (segmentation) stream, and the top-down information features
aggregation, association and interpretation (recognition) stream. Inspired by
recent evidence from biological vision research and by the insights of
Kolmogorov Complexity theory, we propose a new, purely top-down procedure for initial image segmentation. We claim that traditional top-down cognitive reasoning, which is supposed to guide the segmentation process to its final result, plays no part in evaluating the image's information content, and that initial image segmentation is certainly an unsupervised process. We present some illustrative examples that support our claims.
|
cs/0503006
|
A New Non-Iterative Decoding Algorithm for the Erasure Channel :
Comparisons with Enhanced Iterative Methods
|
cs.IT math.IT
|
This paper investigates decoding of binary linear block codes over the binary
erasure channel (BEC). Of the current iterative decoding algorithms on this
channel, we review the Recovery Algorithm and the Guess Algorithm. We then
present a Multi-Guess Algorithm extended from the Guess Algorithm and a new
algorithm -- the In-place Algorithm. The Multi-Guess Algorithm pushes the limit further by breaking stopping sets. However, the performance of the Guess and Multi-Guess Algorithms depends on the parity-check matrix of the code.
Simulations show that we can decrease the frame error rate by several orders of
magnitude using the Guess and the Multi-Guess Algorithms when the parity-check
matrix of the code is sparse. The In-place Algorithm can obtain better performance even if the parity-check matrix is dense. We consider the
application of these algorithms in the implementation of multicast and
broadcast techniques on the Internet. Using these algorithms, a user does not
have to wait until the entire transmission has been received.
|
cs/0503011
|
Shuffling a Stacked Deck: The Case for Partially Randomized Ranking of
Search Engine Results
|
cs.IR
|
In-degree, PageRank, number of visits and other measures of Web page
popularity significantly influence the ranking of search results by modern
search engines. The assumption is that popularity is closely correlated with
quality, a more elusive concept that is difficult to measure directly.
Unfortunately, the correlation between popularity and quality is very weak for
newly-created pages that have yet to receive many visits and/or in-links.
Worse, since discovery of new content is largely done by querying search
engines, and because users usually focus their attention on the top few
results, newly-created but high-quality pages are effectively ``shut out,'' and
it can take a very long time before they become popular.
We propose a simple and elegant solution to this problem: the introduction of
a controlled amount of randomness into search result ranking methods. Doing so
offers new pages a chance to prove their worth, although clearly using too much
randomness will degrade result quality and annul any benefits achieved. Hence
there is a tradeoff between exploration to estimate the quality of new pages
and exploitation of pages already known to be of high quality. We study this
tradeoff both analytically and via simulation, in the context of an economic
objective function based on aggregate result quality amortized over time. We
show that a modest amount of randomness leads to improved search results.
|
cs/0503012
|
First-order Complete and Computationally Complete Query Languages for
Spatio-Temporal Databases
|
cs.DB
|
We address a fundamental question concerning spatio-temporal database
systems: ``What are exactly spatio-temporal queries?'' We define
spatio-temporal queries to be computable mappings that are also generic,
meaning that the result of a query may only depend to a limited extent on the
actual internal representation of the spatio-temporal data. Genericity is
defined as invariance under groups of geometric transformations that preserve
certain characteristics of spatio-temporal data (e.g., collinearity, distance,
velocity, acceleration, ...). These groups depend on the notions that are
relevant in particular spatio-temporal database applications.
These transformations also have the distinctive property that they respect
the monotone and unidirectional nature of time.
We investigate different genericity classes with respect to the constraint
database model for spatio-temporal databases and we identify sound and complete
languages for the first-order and the computable queries in these genericity
classes. We distinguish between genericity determined by time-invariant
transformations, genericity notions concerning physical quantities and
genericity determined by time-dependent transformations.
|
cs/0503018
|
Probabilistic Algorithmic Knowledge
|
cs.AI cs.LO
|
The framework of algorithmic knowledge assumes that agents use deterministic
knowledge algorithms to compute the facts they explicitly know. We extend the
framework to allow for randomized knowledge algorithms. We then characterize
the information provided by a randomized knowledge algorithm when its answers
have some probability of being incorrect. We formalize this information in
terms of evidence; a randomized knowledge algorithm returning ``Yes'' to a
query about a fact \phi provides evidence for \phi being true. Finally, we
discuss the extent to which this evidence can be used as a basis for decisions.
|
cs/0503019
|
Duality Bounds on the Cut-Off Rate with Applications to Ricean Fading
|
cs.IT math.IT
|
We propose a technique to derive upper bounds on Gallager's cost-constrained
random coding exponent function. Applying this technique to the non-coherent
peak-power or average-power limited discrete time memoryless Ricean fading
channel, we obtain the high signal-to-noise ratio (SNR) expansion of this
channel's cut-off rate. At high SNR the gap between channel capacity and the
cut-off rate approaches a finite limit. This limit is approximately 0.26 nats
per channel-use for zero specular component (Rayleigh) fading and approaches
0.39 nats per channel-use for very large specular components.
We also compute the asymptotic cut-off rate of a Rayleigh fading channel when
the receiver has access to some partial side information concerning the fading.
It is demonstrated that the cut-off rate does not utilize the side information
as efficiently as capacity, and that the high SNR gap between the two increases
to infinity as the imperfect side information becomes more and more precise.
|
cs/0503020
|
Earlier Web Usage Statistics as Predictors of Later Citation Impact
|
cs.IR
|
The use of citation counts to assess the impact of research articles is well
established. However, the citation impact of an article can only be measured
several years after it has been published. As research articles are
increasingly accessed through the Web, the number of times an article is
downloaded can be instantly recorded and counted. One would expect the number
of times an article is read to be related both to the number of times it is
cited and to how old the article is. This paper analyses how short-term Web
usage impact predicts medium-term citation impact. The physics e-print archive
(arXiv.org) is used to test this.
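The kind of analysis involved can be illustrated with a rank correlation between early downloads and later citations; the per-article counts below are invented, and this is a generic Spearman computation, not the paper's actual methodology.

```python
def rank(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-article counts: early downloads vs. later citations.
downloads = [120, 45, 300, 10, 80, 150]
citations = [15, 4, 40, 1, 9, 12]
print(round(spearman(downloads, citations), 3))  # 0.943
```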
|
cs/0503021
|
Fast-Forward on the Green Road to Open Access: The Case Against Mixing
Up Green and Gold
|
cs.IR
|
This article is a critique of: "The 'Green' and 'Gold' Roads to Open Access:
The Case for Mixing and Matching" by Jean-Claude Guedon (in Serials Review
30(4) 2004).
Open Access (OA) means: free online access to all peer-reviewed journal
articles. Jean-Claude Guedon argues against the efficacy of author
self-archiving of peer-reviewed journal articles (the "Green" road to OA). He
suggests instead that we should convert to Open Access Publishing (the "Golden"
road to OA) by "mixing and matching" Green and Gold as follows: o First,
self-archive dissertations (not published, peer-reviewed journal articles). o
Second, identify and tag how those dissertations have been evaluated and
reviewed. o Third, self-archive unrefereed preprints (not published,
peer-reviewed journal articles). o Fourth, develop new mechanisms for
evaluating and reviewing those unrefereed preprints, at multiple levels. The
result will be OA Publishing (Gold). I argue that rather than yet another 10
years of speculation like this, what is actually needed (and imminent) is for
OA self-archiving to be mandated by research funders and institutions so that
the self-archiving of published, peer-reviewed journal articles (Green) can be
fast-forwarded to 100% OA.
|
cs/0503022
|
Theory and Practice of Transactional Method Caching
|
cs.DB
|
Nowadays, tiered architectures are widely accepted for constructing large
scale information systems. In this context application servers often form the
bottleneck for a system's efficiency. An application server exposes an object-oriented interface consisting of a set of methods which are accessed by potentially remote clients. The idea of method caching is to store results of
read-only method invocations with respect to the application server's interface
on the client side. If the client invokes the same method with the same
arguments again, the corresponding result can be taken from the cache without
contacting the server. It has been shown that this approach can considerably
improve a real world system's efficiency.
This paper extends the concept of method caching by addressing the case where
clients wrap related method invocations in ACID transactions. Demarcating
sequences of method calls in this way is supported by many important
application server standards. In this context the paper presents an
architecture, a theory and an efficient protocol for maintaining full
transactional consistency and in particular serializability when using a method
cache on the client side. In order to create a protocol for scheduling cached
method results, the paper extends a classical transaction formalism. Based on
this extension, a recovery protocol and an optimistic serializability protocol
are derived. The latter differs from traditional transactional cache
protocols in many essential ways. An efficiency experiment validates the
approach: Using the cache a system's performance and scalability are
considerably improved.
|
cs/0503024
|
Fine-Grained Word Sense Disambiguation Based on Parallel Corpora, Word
Alignment, Word Clustering and Aligned Wordnets
|
cs.AI cs.CL
|
The paper presents a method for word sense disambiguation based on parallel
corpora. The method exploits recent advances in word alignment and word
clustering based on automatic extraction of translation equivalents and being
supported by available aligned wordnets for the languages in the corpus. The
wordnets are aligned to the Princeton Wordnet, according to the principles
established by EuroWordNet. The evaluation of the WSD system, implementing the
method described herein showed very encouraging results. The same system used
in a validation mode, can be used to check and spot alignment errors in
multilingually aligned wordnets as BalkaNet and EuroWordNet.
|
cs/0503026
|
On Generalized Computable Universal Priors and their Convergence
|
cs.LG cs.CC math.PR
|
Solomonoff unified Occam's razor and Epicurus' principle of multiple
explanations to one elegant, formal, universal theory of inductive inference,
which initiated the field of algorithmic information theory. His central result
is that the posterior of the universal semimeasure M converges rapidly to the
true sequence generating posterior mu, if the latter is computable. Hence, M is
eligible as a universal predictor in case of unknown mu. The first part of the
paper investigates the existence and convergence of computable universal
(semi)measures for a hierarchy of computability classes: recursive, estimable,
enumerable, and approximable. For instance, M is known to be enumerable, but
not estimable, and to dominate all enumerable semimeasures. We present proofs
for discrete and continuous semimeasures. The second part investigates more
closely the types of convergence, possibly implied by universality: in
difference and in ratio, with probability 1, in mean sum, and for Martin-Loef
random sequences. We introduce a generalized concept of randomness for
individual sequences and use it to exhibit difficulties regarding these issues.
In particular, we show that convergence fails (holds) on generalized-random
sequences in gappy (dense) Bernoulli classes.
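For orientation, the dominance argument behind Solomonoff's convergence result mentioned above can be restated in one line (standard background, not a contribution of this abstract): since M multiplicatively dominates every computable measure mu, i.e. M(x_{1:n}) >= 2^{-K(mu)} mu(x_{1:n}), the cumulative relative entropy telescopes and stays bounded:

```latex
% Dominance M(x_{1:n}) \ge 2^{-K(\mu)}\,\mu(x_{1:n}) bounds the total KL divergence:
\sum_{t=1}^{n} \mathbf{E}_{\mu}\!\left[\ln \frac{\mu(x_t \mid x_{<t})}{M(x_t \mid x_{<t})}\right]
  \;=\; \mathbf{E}_{\mu}\!\left[\ln \frac{\mu(x_{1:n})}{M(x_{1:n})}\right]
  \;\le\; K(\mu) \ln 2 ,
```

which forces M(x_t | x_{<t}) -> mu(x_t | x_{<t}) with mu-probability 1: the rapid convergence to the true posterior that the abstract builds on.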
|
cs/0503027
|
Authentication with Distortion Criteria
|
cs.IT cs.CR cs.MM math.IT
|
In a variety of applications, there is a need to authenticate content that
has experienced legitimate editing in addition to potential tampering attacks.
We develop one formulation of this problem based on a strict notion of
security, and characterize and interpret the associated information-theoretic
performance limits. The results can be viewed as a natural generalization of
classical approaches to traditional authentication. Additional insights into
the structure of such systems and their behavior are obtained by further
specializing the results to Bernoulli and Gaussian cases. The associated
systems are shown to be substantially better in terms of performance and/or
security than commonly advocated approaches based on data hiding and digital
watermarking. Finally, the formulation is extended to obtain efficient layered
authentication system constructions.
|
cs/0503028
|
Stabilization of Cooperative Information Agents in Unpredictable
Environment: A Logic Programming Approach
|
cs.LO cs.MA cs.PL
|
An information agent is viewed as a deductive database consisting of 3 parts:
an observation database containing the facts the agent has observed or sensed
from its surrounding environment, an input database containing the information
the agent has obtained from other agents, and an intensional database which is
a set of rules for computing derived information from the information stored in
the observation and input databases. Stabilization of a system of information
agents represents a capability of the agents to eventually get correct
information about their surroundings despite unpredictable environment changes and the inability of many agents to sense such changes, which causes them to hold temporarily incorrect information. We argue that the stabilization of a system of
cooperative information agents could be understood as the convergence of the
behavior of the whole system toward the behavior of a "superagent", who has the
sensing and computing capabilities of all agents combined. We show that
unfortunately, stabilization is not guaranteed in general, even if the agents
are fully cooperative and do not hide any information from each other. We give
sufficient conditions for stabilization and discuss the consequences of our
results.
|
cs/0503030
|
A Suffix Tree Approach to Email Filtering
|
cs.AI cs.CL
|
We present an approach to email filtering based on the suffix tree data
structure. A method for the scoring of emails using the suffix tree is
developed and a number of scoring and score normalisation functions are tested.
Our results show that the character level representation of emails and classes
facilitated by the suffix tree can significantly improve classification
accuracy when compared with the currently popular methods, such as naive Bayes.
We believe the method can be extended to the classification of documents in
other domains.
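A much-simplified stand-in for the character-level idea (this uses fixed-length character n-grams, not the suffix tree, which handles all substring lengths efficiently; training strings, message, and scoring rule are invented):

```python
# Simplified character-level classifier: score a message by the overlap of its
# character trigrams with each class's trigram profile. A suffix tree
# generalizes this to substrings of every length; this sketch is illustrative.

from collections import Counter

def ngrams(text, n=3):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def score(message, profile):
    """Sum of profile counts over the message's n-grams (higher = closer)."""
    grams = ngrams(message)
    return sum(profile[g] * c for g, c in grams.items())

spam_train = "win cash now!!! claim your free prize, win win win"
ham_train = "meeting moved to friday, please review the draft report"
spam_profile, ham_profile = ngrams(spam_train), ngrams(ham_train)

msg = "claim your free cash prize now"
label = "spam" if score(msg, spam_profile) > score(msg, ham_profile) else "ham"
print(label)  # spam
```

Working at the character level rather than on whole tokens is what lets such classifiers catch obfuscations like "fr-ee pr1ze" that defeat word-based features such as those used by naive Bayes.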
|
cs/0503031
|
On the Scalability of Cooperative Time Synchronization in
Pulse-Connected Networks
|
cs.IT math.IT nlin.AO
|
The problem of time synchronization in dense wireless networks is considered.
Well established synchronization techniques suffer from an inherent scalability
problem in that synchronization errors grow with an increasing number of hops
across the network. In this work, a model for communication in wireless
networks is first developed, and then the model is used to define a new time
synchronization mechanism. A salient feature of the proposed method is that, in
the regime of asymptotically dense networks, it can average out all random
errors and maintain global synchronization in the sense that all nodes in the
multi-hop network can see identical timing signals. This is irrespective of the
distance separating any two nodes.
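The averaging-out effect can be sketched numerically (a toy model, not the paper's communication model: each node sees the true timing offset plus independent noise, and the cooperative estimate averages many observations):

```python
import random

def observed_offset(true_offset, rng, noise=1.0):
    """One node's noisy estimate of the common timing offset (toy model)."""
    return true_offset + rng.uniform(-noise, noise)

def averaged_offset(true_offset, n_nodes, rng):
    """Cooperative estimate: average n independent noisy observations."""
    return sum(observed_offset(true_offset, rng) for _ in range(n_nodes)) / n_nodes

rng = random.Random(42)
true_offset = 5.0
single_err = abs(observed_offset(true_offset, rng) - true_offset)
dense_err = abs(averaged_offset(true_offset, 10_000, rng) - true_offset)
print(single_err, dense_err)  # the dense-network error is far smaller
```

This is just the law of large numbers: the averaged error shrinks like 1/sqrt(n), which is the intuition behind maintaining global synchronization in the asymptotically dense regime.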
|
cs/0503032
|
Complexity and Approximation of Fixing Numerical Attributes in Databases
Under Integrity Constraints
|
cs.DB cs.CC
|
Consistent query answering is the problem of computing the answers from a
database that are consistent with respect to certain integrity constraints that
the database as a whole may fail to satisfy. Those answers are characterized as
those that are invariant under minimal forms of restoring the consistency of
the database. In this context, we study the problem of repairing databases by
fixing integer numerical values at the attribute level with respect to denial
and aggregation constraints. We introduce a quantitative definition of database
fix, and investigate the complexity of several decision and optimization
problems, including DFP, i.e. the existence of fixes within a given distance
from the original instance, and CQA, i.e. deciding consistency of answers to
aggregate conjunctive queries under different semantics. We provide sharp
complexity bounds, identify relevant tractable cases; and introduce
approximation algorithms for some of those that are intractable. More
specifically, we obtain results like undecidability of existence of fixes for
aggregation constraints; MAXSNP-hardness of DFP, but a good approximation
algorithm for a relevant special case; and intractability but good
approximation for CQA for aggregate queries for one database atom denials (plus
built-ins).
|