| id | title | categories | abstract |
|---|---|---|---|
cs/0302036
|
Constraint-based analysis of composite solvers
|
cs.AI
|
Cooperative constraint solving is an area of constraint programming that
studies the interaction between constraint solvers with the aim of discovering
interaction patterns that amplify the positive qualities of the individual
solvers. Automating and formalising such studies is an important issue in
cooperative constraint solving.
In this paper we present a constraint-based analysis of composite solvers
that integrates reasoning about the individual solvers and the processed data.
The idea is to approximate this reasoning by resolving set constraints on the
finite sets that represent the predicates expressing all the necessary
properties. We illustrate the application of our analysis to two important
cooperation patterns: deterministic choice and loop.
|
cs/0302038
|
Tight Logic Programs
|
cs.AI cs.LO
|
This note is about the relationship between two theories of negation as
failure -- one based on program completion, the other based on stable models,
or answer sets. Francois Fages showed that if a logic program satisfies a
certain syntactic condition, which is now called ``tightness,'' then its stable
models can be characterized as the models of its completion. We extend the
definition of tightness and Fages' theorem to programs with nested expressions
in the bodies of rules, and study tight logic programs containing the
definition of the transitive closure of a predicate.
|
cs/0302039
|
Kalman-filtering using local interactions
|
cs.AI
|
There is a growing interest in using Kalman-filter models for brain
modelling. In turn, it is of considerable importance to represent the Kalman
filter in connectionist form with local Hebbian learning rules. To the best of
our knowledge, the Kalman filter has not been given such a local
representation. It seems that the main obstacle is the dynamic adaptation of
the Kalman gain. Here, a connectionist representation is presented, which is
derived by means of the recursive prediction error method. We show that this
method gives rise to attractive local learning rules and can adapt the Kalman
gain.
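For reference, a minimal sketch of the standard (non-connectionist) Kalman
predict/update cycle whose gain adaptation the abstract refers to; the matrix
names follow the usual textbook convention and are not taken from the paper.

```python
import numpy as np

def kalman_update(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x: state estimate, P: state covariance, z: new measurement,
    F: state transition, H: observation model, Q/R: process/measurement noise.
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Kalman gain -- the quantity whose adaptation the paper addresses
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update with the innovation (prediction error)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```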
|
cs/0303002
|
About compression of vocabulary in computer oriented languages
|
cs.CL
|
The author uses the entropy of the ideal Bose-Einstein gas to minimize losses
in computer-oriented languages.
|
cs/0303006
|
On the Notion of Cognition
|
cs.AI
|
We discuss philosophical issues concerning the notion of cognition, basing
ourselves on experimental results in the cognitive sciences, especially in computer
simulations of cognitive systems. There have been debates on the "proper"
approach for studying cognition, but we have realized that all approaches can
be in theory equivalent. Different approaches model different properties of
cognitive systems from different perspectives, so we can only learn from all of
them. We also integrate ideas from several perspectives for enhancing the
notion of cognition, such that it can contain other definitions of cognition as
special cases. This allows us to propose a simple classification of different
types of cognition.
|
cs/0303007
|
Glottochronology and problems of protolanguage reconstruction
|
cs.CL
|
A method for constructing genealogical trees of languages is proposed.
|
cs/0303009
|
Unfolding Partiality and Disjunctions in Stable Model Semantics
|
cs.AI
|
The paper studies an implementation methodology for partial and disjunctive
stable models where partiality and disjunctions are unfolded from a logic
program so that an implementation of stable models for normal
(disjunction-free) programs can be used as the core inference engine. The
unfolding is done in two separate steps. Firstly, it is shown that partial
stable models can be captured by total stable models using a simple linear and
modular program transformation. Hence, reasoning tasks concerning partial
stable models can be solved using an implementation of total stable models.
Disjunctive partial stable models have been lacking implementations which now
become available as the translation handles also the disjunctive case.
Secondly, it is shown how total stable models of disjunctive programs can be
determined by computing stable models for normal programs. Hence, an
implementation of stable models of normal programs can be used as a core engine
for implementing disjunctive programs. The feasibility of the approach is
demonstrated by constructing a system for computing stable models of
disjunctive programs using the smodels system as the core engine. The
performance of the resulting system is compared to that of dlv which is a
state-of-the-art special purpose system for disjunctive programs.
|
cs/0303015
|
Statistical efficiency of curve fitting algorithms
|
cs.CV
|
We study the problem of fitting parametrized curves to noisy data. Under
certain assumptions (known as Cartesian and radial functional models), we
derive asymptotic expressions for the bias and the covariance matrix of the
parameter estimates. We also extend Kanatani's version of the Cramer-Rao lower
bound, which he proved for unbiased estimates only, to more general estimates
that include many popular algorithms (most notably, the orthogonal least
squares and algebraic fits). We then show that the gradient-weighted algebraic
fit is statistically efficient and describe all other statistically efficient
algebraic fits.
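As a concrete instance of the "algebraic fits" mentioned above, here is a
minimal least-squares algebraic circle fit (the classical Kasa-style fit, not
the gradient-weighted fit analysed in the paper); the data are synthetic.

```python
import numpy as np

def algebraic_circle_fit(x, y):
    """Fit x^2 + y^2 + D*x + E*y + F = 0 to points by linear least squares."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = (-D / 2.0, -E / 2.0)
    radius = np.sqrt(D**2 / 4.0 + E**2 / 4.0 - F)
    return center, radius

# Noisy points on a circle of radius 2 centred at (1, -1).
t = np.linspace(0, 2 * np.pi, 100)
x = 1 + 2 * np.cos(t) + np.random.normal(0, 0.05, t.size)
y = -1 + 2 * np.sin(t) + np.random.normal(0, 0.05, t.size)
print(algebraic_circle_fit(x, y))
```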
|
cs/0303017
|
A Neural Network Assembly Memory Model with Maximum-Likelihood Recall
and Recognition Properties
|
cs.AI cs.IR cs.NE q-bio.NC q-bio.QM
|
It has been shown that a neural network model recently proposed to describe
basic memory performance is based on a ternary/binary coding/decoding algorithm
which leads to a new neural network assembly memory model (NNAMM) providing
maximum-likelihood recall/recognition properties and implying a new memory unit
architecture with a two-layer Hopfield network, an N-channel time gate, an
auxiliary reference memory, and two nested feedback loops. For the data coding
used, conditions are found under which a version of the Hopfield network
implements a maximum-likelihood convolutional decoding algorithm and,
simultaneously, a linear statistical classifier of arbitrary binary vectors
with respect to the Hamming distance between the analyzed vector and a given
reference vector. In addition to basic memory performance, the model explicitly
describes the time dependence of memory trace retrieval, and offers the
possibility of one-trial learning, metamemory simulation, generalized knowledge
representation, and a distinct description of conscious and unconscious mental
processes. It has been shown that an assembly memory unit may be viewed as a
model of a smallest inseparable part or an 'atom' of consciousness. Some
nontraditional neurobiological backgrounds (dynamic spatiotemporal synchrony,
properties of time-dependent and error-detector neurons, early precise spike
firing, etc.) and the model's application to interdisciplinary problems from
different scientific fields are discussed.
|
cs/0303018
|
Multi-target particle filtering for the probability hypothesis density
|
cs.AI
|
When tracking a large number of targets, it is often computationally
expensive to represent the full joint distribution over target states. In cases
where the targets move independently, each target can instead be tracked with a
separate filter. However, this leads to a model-data association problem.
Another way to address the computational complexity is to track only the first
moment of the joint distribution, the probability hypothesis density (PHD). The
integral of this distribution over any area S is the expected number of targets
within S. Since no record of object identity is kept, the model-data
association problem is avoided.
The contribution of this paper is a particle filter implementation of the PHD
filter mentioned above. This PHD particle filter is applied to tracking of
multiple vehicles in terrain, a non-linear tracking problem. Experiments show
that the filter can track a changing number of vehicles robustly, achieving
near-real-time performance.
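A toy sketch (not the authors' implementation) of the particle representation
behind a PHD filter: the PHD is approximated by a weighted particle set, and
its integral over a region S, here the sum of weights of particles inside S,
estimates the expected number of targets in S. The full PHD recursion
(prediction, birth, clutter and detection terms) is omitted and all numbers are
illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

particles = rng.uniform(0.0, 100.0, size=(5000, 2))   # positions in a 100x100 area
weights = np.full(5000, 3.0 / 5000)                    # total mass 3 => ~3 expected targets

def expected_targets(particles, weights, x_range, y_range):
    """Estimate the expected number of targets inside a rectangular region S."""
    inside = ((particles[:, 0] >= x_range[0]) & (particles[:, 0] <= x_range[1]) &
              (particles[:, 1] >= y_range[0]) & (particles[:, 1] <= y_range[1]))
    return weights[inside].sum()

print(expected_targets(particles, weights, (0, 50), (0, 100)))  # roughly 1.5
```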
|
cs/0303022
|
Probabilistic behavior of hash tables
|
cs.DS cs.DB
|
We extend a result of Goldreich and Ron about estimating the collision
probability of a hash function. Their estimate has a polynomial tail. We prove
that when the load factor is greater than a certain constant, the estimator has
a Gaussian tail. As an application, we find an estimate of an upper bound for
the average search time in hashing with chaining, for a particular user (we
allow the overall key distribution to be different from the key distribution of
a particular user). This estimator also has a Gaussian tail.
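A minimal sketch of the kind of pairwise collision-probability estimator
discussed here; the hash function and key sample are placeholders, not the
paper's construction.

```python
from itertools import combinations

def collision_probability(keys, hash_fn, num_buckets):
    """Estimate P(h(x) == h(y)) for a random pair of distinct sampled keys."""
    hashes = [hash_fn(k) % num_buckets for k in keys]
    pairs = list(combinations(range(len(keys)), 2))
    collisions = sum(1 for i, j in pairs if hashes[i] == hashes[j])
    return collisions / len(pairs)

# Example with Python's built-in hash and a small sample of keys.
sample = [f"user-{i}" for i in range(200)]
print(collision_probability(sample, hash, num_buckets=64))
```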
|
cs/0303023
|
Conferences with Internet Web-Casting as Binding Events in a Global
Brain: Example Data From Complexity Digest
|
cs.NI cs.AI
|
The likeness of the Internet to the human brain has led to the metaphor of the
world-wide computer network as a `Global Brain'. We consider conferences as
'binding events' in the Global Brain that can lead to metacognitive structures
on a global scale. One of the critical factors for that phenomenon to happen
(as in the biological brain) is the time-scale characteristic of the
information exchange. In an electronic newsletter, the Complexity Digest
(ComDig), we include webcasting of audio (mp3) and video (asf) files from
international conferences in the weekly ComDig issues. Here we present the time
variation of the weekly rate of accesses to the conference files. From those
empirical data it appears that the characteristic time-scale related to access
of web-casting files is of the order of a few weeks. This is at least an order
of magnitude shorter than the characteristic time-scales of peer-reviewed
publications and conference proceedings. We predict that this observation will
have profound implications for the nature of future conference proceedings,
presumably in electronic form.
|
cs/0303024
|
Differential Methods in Catadioptric Sensor Design with Applications to
Panoramic Imaging
|
cs.CV cs.RO
|
We discuss design techniques for catadioptric sensors that realize given
projections. In general, these problems do not have solutions, but approximate
solutions may often be found that are visually acceptable. There are several
methods to approach this problem, but here we focus on what we call the
``vector field approach''. An application is given where a true panoramic
mirror is derived, i.e. a mirror that yields a cylindrical projection to the
viewer without any digital unwarping.
|
cs/0303025
|
Algorithmic Clustering of Music
|
cs.SD cs.LG physics.data-an
|
We present a fully automatic method for music classification, based only on
compression of strings that represent the music pieces. The method uses no
background knowledge about music whatsoever: it is completely general and can,
without change, be used in different areas like linguistic classification and
genomics. It is based on an ideal theory of the information content in
individual objects (Kolmogorov complexity), information distance, and a
universal similarity metric. Experiments show that the method distinguishes
reasonably well between various musical genres and can even cluster pieces by
composer.
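A minimal sketch of the compression-based similarity idea: the normalized
compression distance computed with a generic compressor such as zlib. This is
a common practical approximation of the universal similarity metric, not the
exact setup of the paper, and the byte strings below are placeholders for
encoded music pieces.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Pieces with shared structure compress well together and get a small distance.
a = b"do re mi fa sol " * 50
b = b"do re mi fa sol la " * 50
c = b"completely different material 1234567890 " * 40
print(ncd(a, b), ncd(a, c))
```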
|
cs/0303031
|
A Bird's eye view of Matrix Distributed Processing
|
cs.DC cs.CE cs.DM cs.MS hep-lat physics.comp-ph
|
We present Matrix Distributed Processing, a C++ library for fast development
of efficient parallel algorithms. MDP is based on MPI and consists of a
collection of C++ classes and functions such as lattice, site and field. Once
an algorithm is written using these components, it is automatically
parallel and no explicit call to communication functions is required. MDP is
particularly suitable for implementing parallel solvers for multi-dimensional
differential equations and mesh-like problems.
|
cs/0303032
|
Recent Results on No-Free-Lunch Theorems for Optimization
|
cs.NE math.OC nlin.AO
|
The sharpened No-Free-Lunch-theorem (NFL-theorem) states that the performance
of all optimization algorithms averaged over any finite set F of functions is
equal if and only if F is closed under permutation (c.u.p.) and each target
function in F is equally likely. In this paper, we first summarize some
consequences of this theorem, which have been proven recently: The average
number of evaluations needed to find a desirable (e.g., optimal) solution can
be calculated; the number of subsets c.u.p. can be neglected compared to the
overall number of possible subsets; and problem classes relevant in practice
are not likely to be c.u.p. Second, as the main result, the NFL-theorem is
extended. Necessary and sufficient conditions for NFL-results to hold are given
for arbitrary, non-uniform distributions of target functions. This yields the
most general NFL-theorem for optimization presented so far.
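For orientation, one common way to write the NFL statement being sharpened
here, in the notation of Wolpert and Macready (the sharpened form restricts the
sum to a set F that is closed under permutation); d_m^y denotes the sample of m
cost values produced by running an algorithm on f, and a, b are any two
optimization algorithms.

```latex
% Performance averaged over all f in F is identical for any two algorithms
% a and b when F is closed under permutation (c.u.p.):
\sum_{f \in F} P\!\left(d_m^{y} \mid f, m, a\right)
  \;=\;
\sum_{f \in F} P\!\left(d_m^{y} \mid f, m, b\right)
```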
|
cs/0304006
|
Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence
Alignment
|
cs.CL
|
We address the text-to-text generation problem of sentence-level paraphrasing
-- a phenomenon distinct from and more difficult than word- or phrase-level
paraphrasing. Our approach applies multiple-sequence alignment to sentences
gathered from unannotated comparable corpora: it learns a set of paraphrasing
patterns represented by word lattice pairs and automatically determines how to
apply these patterns to rewrite new sentences. The results of our evaluation
experiments show that the system derives accurate paraphrases, outperforming
baseline systems.
|
cs/0304007
|
A Method for Clustering Web Attacks Using Edit Distance
|
cs.IR cs.AI cs.CR
|
Cluster analysis often serves as the initial step in the process of data
classification. In this paper, the problem of clustering input data of
different lengths is considered. The edit distance, defined as the minimum
number of elementary edit operations needed to transform one vector into
another, is used. A heuristic for clustering unequal-length vectors, analogous
to the well-known k-means algorithm, is described and analyzed. This heuristic
determines cluster centroids
expanding shorter vectors to the lengths of the longest ones in each cluster in
a specific way. It is shown that the time and space complexities of the
heuristic are linear in the number of input vectors. Experimental results on
real data originating from a system for classification of Web attacks are
given.
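A minimal sketch of the edit (Levenshtein) distance computation the clustering
heuristic is built on; the unit insertion/deletion/substitution costs used here
are the usual textbook choice, which the paper may weight differently.

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions and substitutions turning a into b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

print(edit_distance("GET /index.html", "GET /index.php"))
```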
|
cs/0304009
|
Stochastic Volatility in a Quantitative Model of Stock Market Returns
|
cs.CE
|
Standard quantitative models of the stock market predict a log-normal
distribution for stock returns (Bachelier 1900, Osborne 1959), but it is
recognised (Fama 1965) that empirical data, in comparison with a Gaussian,
exhibit leptokurtosis (more probability mass in the tails and the centre) and
fat tails (probabilities of extreme events are underestimated). Different
attempts to explain this departure from normality have coexisted. In
particular, since one of the strong assumptions of the Gaussian model concerns
the volatility, considered finite and constant, the new models were built on a
non-finite (Mandelbrot 1963) or non-constant (Cox, Ingersoll and Ross 1985)
volatility. We investigate in this thesis a very recent model (Dragulescu et
al. 2002) based on a Brownian motion process for the returns and a stochastic
mean-reverting process for the volatility. In this model, the forward
Kolmogorov equation that governs the time evolution of returns is solved
analytically. We test this new theory against different stock indexes (Dow
Jones Industrial Average, Standard and Poor's, and Footsie), over different
periods (from 20 to 105 years). Our aim is to compare this model with the
classical Gaussian and with a simple neural network, used as a benchmark. We
perform the usual statistical tests on the kurtosis and tails of the expected
distributions, paying particular attention to the outliers. As claimed by the
authors, the new model outperforms the Gaussian for any time lag, but it is
needlessly complex for medium and low frequencies, where the Gaussian is
preferable. Moreover, the model is still rejected for high frequencies, at a
0.05 level of significance, because the kurtosis is handled incorrectly.
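For reference, the mean-reverting stochastic-volatility dynamics usually
written for the Dragulescu et al. (Heston-type) model referred to above, in
standard notation: S_t is the price, v_t the variance, gamma the relaxation
rate, theta the long-run variance, and kappa the volatility of variance.

```latex
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)}, \qquad
dv_t = -\gamma\,(v_t - \theta)\,dt + \kappa\sqrt{v_t}\,dW_t^{(2)}
```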
|
cs/0304019
|
Blind Normalization of Speech From Different Channels
|
cs.CL
|
We show how to construct a channel-independent representation of speech that
has propagated through a noisy reverberant channel. This is done by blindly
rescaling the cepstral time series by a non-linear function, with the form of
this scale function being determined by previously encountered cepstra from
that channel. The rescaled form of the time series is an invariant property of
it in the following sense: it is unaffected if the time series is transformed
by any time-independent invertible distortion. Because a linear channel with
stationary noise and impulse response transforms cepstra in this way, the new
technique can be used to remove the channel dependence of a cepstral time
series. In experiments, the method achieved greater channel-independence than
cepstral mean normalization, and it was comparable to the combination of
cepstral mean normalization and spectral subtraction, despite the fact that no
measurements of channel noise or reverberations were required (unlike spectral
subtraction).
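A minimal sketch of cepstral mean normalization, the baseline technique this
method is compared against; the cepstral matrix below is a random placeholder.

```python
import numpy as np

def cepstral_mean_normalization(cepstra):
    """Subtract the per-coefficient mean over time from a (frames x coeffs) matrix.

    A time-invariant linear channel adds a roughly constant offset to each
    cepstral coefficient, so removing the mean removes much of the channel.
    """
    return cepstra - cepstra.mean(axis=0, keepdims=True)

frames = np.random.randn(500, 13) + 2.5   # fake cepstra with a constant channel offset
print(cepstral_mean_normalization(frames).mean(axis=0))  # ~0 per coefficient
```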
|
cs/0304022
|
Self-Replicating Machines in Continuous Space with Virtual Physics
|
cs.NE cs.CE q-bio.PE
|
JohnnyVon is an implementation of self-replicating machines in continuous
two-dimensional space. Two types of particles drift about in a virtual liquid.
The particles are automata with discrete internal states but continuous
external relationships. Their internal states are governed by finite state
machines but their external relationships are governed by a simulated physics
that includes Brownian motion, viscosity, and spring-like attractive and
repulsive forces. The particles can be assembled into patterns that can encode
arbitrary strings of bits. We demonstrate that, if an arbitrary "seed" pattern
is put in a "soup" of separate individual particles, the pattern will replicate
by assembling the individual particles into copies of itself. We also show
that, given sufficient time, a soup of separate individual particles will
eventually spontaneously form self-replicating patterns. We discuss the
implications of JohnnyVon for research in nanotechnology, theoretical biology,
and artificial life.
|
cs/0304024
|
Glottochronologic Retrognostic of Language System
|
cs.CL
|
A glottochronological retrognostic of a language system is proposed.
|
cs/0304027
|
"I'm sorry Dave, I'm afraid I can't do that": Linguistics, Statistics,
and Natural Language Processing circa 2001
|
cs.CL
|
A brief, general-audience overview of the history of natural language
processing, focusing on data-driven approaches. Topics include "Ambiguity and
language analysis", "Firth things first", "A 'C' change", and "The empiricists
strike back".
|
cs/0304028
|
Grid-Enabling Natural Language Engineering By Stealth
|
cs.DC cs.CL
|
We describe a proposal for an extensible, component-based software
architecture for natural language engineering applications. Our model leverages
existing linguistic resource description and discovery mechanisms based on
extended Dublin Core metadata. In addition, the application design is flexible,
allowing disparate components to be combined to suit the overall application
functionality. An application specification language provides abstraction from
the programming environment and allows ease of interface with computational
grids via a broker.
|
cs/0304029
|
An XML based Document Suite
|
cs.CL
|
We report on the current state of development of a document suite and its
applications. This collection of tools for the flexible and robust processing
of documents in German is based on the use of XML as unifying formalism for
encoding input and output data as well as process information. It is organized
in modules with limited responsibilities that can easily be combined into
pipelines to solve complex tasks. Strong emphasis is laid on a number of
techniques to deal with lexical and conceptual gaps that are typical when
starting a new application.
|
cs/0304035
|
Exploiting Sublanguage and Domain Characteristics in a Bootstrapping
Approach to Lexicon and Ontology Creation
|
cs.CL
|
It is very costly to build up lexical resources and domain ontologies.
Especially when confronted with a new application domain, lexical gaps and poor
coverage of domain concepts are a problem for the successful exploitation of
natural language document analysis systems that need and exploit such knowledge
sources. In this paper we report on ongoing experiments with `bootstrapping
techniques' for lexicon and ontology creation.
|
cs/0304036
|
An Approach for Resource Sharing in Multilingual NLP
|
cs.CL
|
In this paper we describe an approach for the analysis of documents in German
and English with a shared pool of resources. For the analysis of German
documents we use a document suite, which supports the user in tasks like
information retrieval and information extraction. The core of the document
suite is based on our tool XDOC. We now want to exploit these methods for the
analysis of English documents as well. For this aim we need a multilingual
presentation format for the resources. These resources must be transformed into
a unified format, in which we can store additional information about linguistic
characteristics of the language, depending on the analyzed documents. In this
paper we describe our approach to such an exchange model for multilingual
resources based on XML.
|
cs/0305001
|
A Framework for Searching AND/OR Graphs with Cycles
|
cs.AI
|
Search in cyclic AND/OR graphs was traditionally known to be an unsolved
problem. In the recent past several important studies have been reported in
this domain. In this paper, we have taken a fresh look at the problem. First, a
new and comprehensive theoretical framework for cyclic AND/OR graphs has been
presented, which was found missing in the recent literature. Based on this
framework, two best-first search algorithms, S1 and S2, have been developed. S1
does uninformed search and is a simple modification of the Bottom-up algorithm
by Martelli and Montanari. S2 performs a heuristically guided search and
replicates the modification in Bottom-up's successors, namely HS and AO*. Both
S1 and S2 solve the problem of searching AND/OR graphs in presence of cycles.
We then present a detailed analysis for the correctness and complexity results
of S1 and S2, using the proposed framework. We have observed through
experiments that S1 and S2 output correct results in all cases.
|
cs/0305004
|
Approximate Grammar for Information Extraction
|
cs.CL cs.AI
|
In this paper, we present the concept of approximate grammar and how it can
be used to extract information from a document. As the structure of
informational strings cannot be defined well in a document, we cannot use
conventional grammar rules to represent the information. Hence, the need arises
to design an approximate grammar that can be used effectively to accomplish the
task of information extraction. Approximate grammars are a novel step in this
direction. The rules of an approximate grammar can be given by a user, or the
machine can learn the rules from an annotated document. We have performed
experiments in both of the above areas and the results have been impressive.
|
cs/0305012
|
Time-scales, Meaning, and Availability of Information in a Global Brain
|
cs.AI cs.CY cs.NI
|
We note the importance of time-scales, meaning, and availability of
information for the emergence of novel information meta-structures at a global
scale. We discuss previous work in this area and develop future perspectives.
We focus on the transmission of scientific articles and the integration of
traditional conferences with their virtual extensions on the Internet, their
time-scales, and availability. We mention the Semantic Web as an effort for
integrating meaningful information.
|
cs/0305013
|
On Nonspecific Evidence
|
cs.AI cs.NE
|
When simultaneously reasoning with evidences about several different events,
it is necessary to separate the evidence according to event. These events
should then be handled independently. However, when propositions of evidences
are weakly specified in the sense that it may not be certain to which event
they are referring, this may not be directly possible. In this paper a
criterion for partitioning evidences into subsets representing events is
established. This criterion, derived from the conflict within each subset,
involves minimising a criterion function for the overall conflict of the
partition. An algorithm based on characteristics of the criterion function and
an iterative optimisation among partitionings of evidences is proposed.
|
cs/0305014
|
Dempster's Rule for Evidence Ordered in a Complete Directed Acyclic
Graph
|
cs.AI cs.DM
|
For the case of evidence ordered in a complete directed acyclic graph this
paper presents a new algorithm with lower computational complexity for
Dempster's rule than that of step-by-step application of Dempster's rule. In
this problem, every original pair of evidences has a corresponding evidence
against the simultaneous belief in both propositions. In this case, it is
uncertain whether the propositions of any two evidences are in logical
conflict. The original evidences are associated with the vertices and the
additional evidences are associated with the edges. The original evidences are
ordered, i.e., for every pair of evidences it is determinable which of the two
evidences is the earlier one. We are interested in finding the most probable
completely specified path through the graph, where transitions are possible
only from lower- to higher-ranked vertices. The path is here a representation
for a sequence of states, for instance a sequence of snapshots of a physical
object's track. A completely specified path means that the path includes no
other vertices than those stated in the path representation, as opposed to an
incompletely specified path that may also include other vertices than those
stated. In a hierarchical network of all subsets of the frame, i.e., of all
incompletely specified paths, the original and additional evidences support
subsets that are not disjoint, thus it is not possible to prune the network to
a tree. Instead of propagating belief, the new algorithm reasons about the
logical conditions of a completely specified path through the graph. The new
algorithm is O(|THETA| log |THETA|), compared to O(|THETA| ** log |THETA|) of
the classic brute force algorithm.
|
cs/0305015
|
Finding a Posterior Domain Probability Distribution by Specifying
Nonspecific Evidence
|
cs.AI cs.NE
|
This article is an extension of the results of two earlier articles. In [J.
Schubert, On nonspecific evidence, Int. J. Intell. Syst. 8 (1993) 711-725] we
established within Dempster-Shafer theory a criterion function called the
metaconflict function. With this criterion we can partition into subsets a set
of several pieces of evidence with propositions that are weakly specified in
the sense that it may be uncertain to which event a proposition is referring.
In a second article [J. Schubert, Specifying nonspecific evidence, in
Cluster-based specification techniques in Dempster-Shafer theory for an
evidential intelligence analysis of multiple target tracks, Ph.D. Thesis,
TRITA-NA-9410, Royal Institute of Technology, Stockholm, 1994, ISBN
91-7170-801-4] we not only found the most plausible subset for each piece of
evidence, we also found the plausibility for every subset that this piece of
evidence belongs to the subset. In this article we aim to find a posterior
probability distribution regarding the number of subsets. We use the idea that
each piece of evidence in a subset supports the existence of that subset to the
degree that this piece of evidence supports anything at all. From this we can
derive a bpa that is concerned with the question of how many subsets we have.
That bpa can then be combined with a given prior domain probability
distribution in order to obtain the sought-after posterior domain distribution.
|
cs/0305017
|
Cluster-based Specification Techniques in Dempster-Shafer Theory
|
cs.AI cs.NE
|
When reasoning with uncertainty there are many situations where evidences are
not only uncertain but their propositions may also be weakly specified in the
sense that it may not be certain to which event a proposition is referring. It
is then crucial not to combine such evidences in the mistaken belief that they
are referring to the same event. This situation would become manageable if the
evidences could be clustered into subsets representing events that should be
handled separately. In an earlier article we established within Dempster-Shafer
theory a criterion function called the metaconflict function. With this
criterion we can partition a set of evidences into subsets, each subset
representing a separate event. In this article we will not only find the most
plausible subset for each piece of evidence, we will also find the
plausibility, for every subset, that the evidence belongs to that subset. Also,
when the number of subsets is uncertain, we aim to find a posterior probability
distribution regarding the number of subsets.
|
cs/0305018
|
Cluster-based Specification Techniques in Dempster-Shafer Theory for an
Evidential Intelligence Analysis of Multiple Target Tracks (Thesis Abstract)
|
cs.AI cs.NE
|
In Intelligence Analysis it is of vital importance to manage uncertainty.
Intelligence data is almost always uncertain and incomplete, making it
necessary to reason and take decisions under uncertainty. One way to manage
the uncertainty in Intelligence Analysis is Dempster-Shafer Theory. This thesis
contains five results regarding multiple target tracks and intelligence
specification.
|
cs/0305019
|
On rho in a Decision-Theoretic Apparatus of Dempster-Shafer Theory
|
cs.AI
|
Thomas M. Strat has developed a decision-theoretic apparatus for
Dempster-Shafer theory (Decision analysis using belief functions, Intern. J.
Approx. Reason. 4(5/6), 391-417, 1990). In this apparatus, expected utility
intervals are constructed for different choices. The choice with the highest
expected utility is preferable to others. However, to find the preferred choice
when the expected utility interval of one choice is included in that of
another, it is necessary to interpolate a discerning point in the intervals.
This is done by the parameter rho, defined as the probability that the
ambiguity about the utility of every nonsingleton focal element will turn out
as favorable as possible. If there are several different decision makers, we
might sometimes be more interested in having the highest expected utility among
the decision makers rather than only trying to maximize our own expected
utility regardless of choices made by other decision makers. The preference of
each choice is then determined by the probability of yielding the highest
expected utility. This probability is equal to the maximal interval length of
rho under which an alternative is preferred. We must here take into account not
only the choices already made by other decision makers but also the rational
choices we can assume to be made by later decision makers. In Strat's apparatus,
an assumption, unwarranted by the evidence at hand, has to be made about the
value of rho. We demonstrate that no such assumption is necessary. It is
sufficient to assume a uniform probability distribution for rho to be able to
discern the most preferable choice. We discuss when this approach is
justifiable.
|
cs/0305020
|
Specifying nonspecific evidence
|
cs.AI cs.NE
|
In an earlier article [J. Schubert, On nonspecific evidence, Int. J. Intell.
Syst. 8(6), 711-725 (1993)] we established within Dempster-Shafer theory a
criterion function called the metaconflict function. With this criterion we can
partition into subsets a set of several pieces of evidence with propositions
that are weakly specified in the sense that it may be uncertain to which event
a proposition is referring. Each subset in the partitioning is representing a
separate event. The metaconflict function was derived as the plausibility that
the partitioning is correct when viewing the conflict in Dempster's rule within
each subset as a newly constructed piece of metalevel evidence with a
proposition giving support against the entire partitioning. In this article we
extend the results of the previous article. We will not only find the most
plausible subset for each piece of evidence as was done in the earlier article.
In addition we will specify each piece of nonspecific evidence, in the sense
that we find to which events the proposition might be referring, by finding the
plausibility, for every subset, that this piece of evidence belongs to the
subset. In doing this we will automatically receive an indication that some evidence might
be false. We will then develop a new methodology to exploit these newly
specified pieces of evidence in a subsequent reasoning process. This will
include methods to discount evidence based on their degree of falsity and on
their degree of credibility due to a partial specification of affiliation, as
well as a refined method to infer the event of each subset.
|
cs/0305021
|
Creating Prototypes for Fast Classification in Dempster-Shafer
Clustering
|
cs.AI cs.NE
|
We develop a classification method for incoming pieces of evidence in
Dempster-Shafer theory. This methodology is based on previous work with
clustering and specification of originally nonspecific evidence. The
methodology is here adapted for fast classification of future incoming
pieces of evidence by comparing them with prototypes representing the clusters,
instead of performing a full clustering of all evidence. This method has a
computational complexity of O(M * N) for each new piece of evidence, where M is
the maximum number of subsets and N is the number of prototypes chosen for each
subset. That is, a computational complexity independent of the total number of
previously arrived pieces of evidence. The parameters M and N are typically
fixed and domain dependent in any application.
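A schematic sketch of the O(M * N) classification step described above: each
new piece of evidence is compared against the N prototypes of each of the M
subsets and assigned to the subset with the least average conflict. The
conflict function here is a placeholder, not the Dempster-Shafer conflict used
in the paper.

```python
def classify(new_evidence, prototypes, conflict):
    """Assign new_evidence to the subset whose prototypes it conflicts with least.

    prototypes: list of M lists, each holding N prototype pieces of evidence.
    conflict:   function (evidence, prototype) -> non-negative conflict value
                (placeholder for the Dempster-Shafer conflict used in the paper).
    Runs in O(M * N), independent of how many pieces of evidence arrived before.
    """
    scores = []
    for subset_prototypes in prototypes:
        total = sum(conflict(new_evidence, p) for p in subset_prototypes)
        scores.append(total / len(subset_prototypes))
    return min(range(len(scores)), key=lambda i: scores[i])
```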
|
cs/0305022
|
Applying Data Mining and Machine Learning Techniques to Submarine
Intelligence Analysis
|
cs.AI cs.DB cs.NE
|
We describe how specialized database technology and data analysis methods
were applied by the Swedish defense to help deal with the violation of Swedish
marine territory by foreign submarine intruders during the Eighties and early
Nineties. Among several approaches tried some yielded interesting information,
although most of the key questions remain unanswered. We conclude with a survey
of belief-function- and genetic-algorithm-based methods which were proposed to
support interpretation of intelligence reports and prediction of future
submarine positions, respectively.
|
cs/0305023
|
Fast Dempster-Shafer clustering using a neural network structure
|
cs.AI cs.NE
|
In this paper we study a problem within Dempster-Shafer theory where 2**n - 1
pieces of evidence are clustered by a neural structure into n clusters. The
clustering is done by minimizing a metaconflict function. Previously we
developed a method based on iterative optimization. However, for large scale
problems we need a method with lower computational complexity. The neural
structure was found to be effective and much faster than iterative optimization
for larger problems. While the growth in metaconflict was faster for the neural
structure compared with iterative optimization in medium sized problems, the
metaconflict per cluster and evidence was moderate. The neural structure was
able to find a global minimum over ten runs for problem sizes up to six
clusters.
|
cs/0305024
|
A neural network and iterative optimization hybrid for Dempster-Shafer
clustering
|
cs.AI cs.NE
|
In this paper we extend an earlier result within Dempster-Shafer theory
["Fast Dempster-Shafer Clustering Using a Neural Network Structure," in Proc.
Seventh Int. Conf. Information Processing and Management of Uncertainty in
Knowledge-Based Systems (IPMU 98)] where a large number of pieces of evidence
are clustered into subsets by a neural network structure. The clustering is
done by minimizing a metaconflict function. Previously we developed a method
based on iterative optimization. While the neural method had a much lower
computation time than iterative optimization its average clustering performance
was not as good. Here, we develop a hybrid of the two methods. We let the
neural structure do the initial clustering in order to achieve a high
computational performance. Its solution is fed as the initial state to the
iterative optimization in order to improve the clustering performance.
|
cs/0305025
|
Simultaneous Dempster-Shafer clustering and gradual determination of
number of clusters using a neural network structure
|
cs.AI cs.NE
|
In this paper we extend an earlier result within Dempster-Shafer theory
["Fast Dempster-Shafer Clustering Using a Neural Network Structure," in Proc.
Seventh Int. Conf. Information Processing and Management of Uncertainty in
Knowledge-Based Systems (IPMU'98)] where several pieces of evidence were
clustered into a fixed number of clusters using a neural structure. This was
done by minimizing a metaconflict function. We now develop a method for
simultaneous clustering and determination of number of clusters during
iteration in the neural structure. We let the output signals of neurons
represent the degree to which a piece of evidence belongs to a corresponding
cluster. From these we derive a probability distribution regarding the number
of clusters, which is gradually transformed during the iteration into a
determination of the number of clusters. This gradual determination is fed back
into the neural structure at each iteration to influence the clustering
process.
|
cs/0305026
|
Fast Dempster-Shafer clustering using a neural network structure
|
cs.AI cs.NE
|
In this article we study a problem within Dempster-Shafer theory where 2**n -
1 pieces of evidence are clustered by a neural structure into n clusters. The
clustering is done by minimizing a metaconflict function. Previously we
developed a method based on iterative optimization. However, for large scale
problems we need a method with lower computational complexity. The neural
structure was found to be effective and much faster than iterative optimization
for larger problems. While the growth in metaconflict was faster for the neural
structure compared with iterative optimization in medium sized problems, the
metaconflict per cluster and evidence was moderate. The neural structure was
able to find a global minimum over ten runs for problem sizes up to six
clusters.
|
cs/0305027
|
Managing Inconsistent Intelligence
|
cs.AI cs.NE
|
In this paper we demonstrate that it is possible to manage intelligence in
constant time as a pre-process to information fusion through a series of
processes dealing with issues such as clustering reports, ranking reports with
respect to importance, extraction of prototypes from clusters and immediate
classification of newly arriving intelligence reports. These methods are used
when intelligence reports arrive that concern different events which should
be handled independently, and it is not known a priori to which event each
intelligence report is related. We use clustering that runs as a back-end
process to partition the intelligence into subsets representing the events, and
in parallel, a fast classification that runs as a front-end process in order to
put the newly arriving intelligence into its correct information fusion
process.
|
cs/0305028
|
Dempster-Shafer clustering using Potts spin mean field theory
|
cs.AI cs.NE
|
In this article we investigate a problem within Dempster-Shafer theory where
2**q - 1 pieces of evidence are clustered into q clusters by minimizing a
metaconflict function, or equivalently, by minimizing the sum of weight of
conflict over all clusters. Previously one of us developed a method based on a
Hopfield and Tank model. However, for very large problems we need a method with
lower computational complexity. We demonstrate that the weight of conflict of
evidence can, as an approximation, be linearized and mapped to an
antiferromagnetic Potts Spin model. This facilitates efficient numerical
solution, even for large problem sizes. Optimal or nearly optimal solutions are
found for Dempster-Shafer clustering benchmark tests with a time complexity of
approximately O(N**2 log**2 N). Furthermore, an isomorphism between the
antiferromagnetic Potts spin model and a graph optimization problem is shown.
The graph model has dynamic variables living on the links, which have a priori
probabilities that are directly related to the pairwise conflict between pieces
of evidence. Hence, the relations between three different models are shown.
|
cs/0305029
|
Conflict-based Force Aggregation
|
cs.AI cs.NE
|
In this paper we present an application where we put together two methods for
clustering and classification into a force aggregation method. Both methods are
based on conflicts between elements. These methods work with different types of
elements (intelligence reports, vehicles, military units) on different
hierarchical levels using specific conflict assessment methods on each level.
We use Dempster-Shafer theory for conflict calculation between elements,
Dempster-Shafer clustering for clustering these elements, and templates for
classification. The result of these processes is a complete force aggregation
on all levels handled.
|
cs/0305030
|
Reliable Force Aggregation Using a Refined Evidence Specification from
Dempster-Shafer Clustering
|
cs.AI cs.NE
|
In this paper we develop methods for selection of templates and use these
templates to recluster an already performed Dempster-Shafer clustering taking
into account intelligence to template fit during the reclustering phase. By
this process the risk of erroneous force aggregation based on some misplace
pieces of evidence from the first clustering process is greatly reduced.
Finally, a more reliable force aggregation is performed using the result of the
second clustering. These steps are taken in order to maintain most of the
excellent computational performance of Dempster-Shafer clustering, while at the
same time improve on the clustering result by including some higher relations
among intelligence reports described by the templates. The new improved
algorithm has a computational complexity of O(n**3 log**2 n) compared to O(n**2
log**2 n) of standard Dempster-Shafer clustering using Potts spin mean field
theory.
|
cs/0305031
|
Clustering belief functions based on attracting and conflicting
metalevel evidence
|
cs.AI cs.NE
|
In this paper we develop a method for clustering belief functions based on
attracting and conflicting metalevel evidence. Such clustering is done when the
belief functions concern multiple events, and all belief functions are mixed
up. The clustering process is used as the means for separating the belief
functions into subsets that should be handled independently. While the
conflicting metalevel evidence is generated internally from pairwise conflicts
of all belief functions, the attracting metalevel evidence is assumed given by
some external source.
|
cs/0305032
|
Robust Report Level Cluster-to-Track Fusion
|
cs.AI cs.NE
|
In this paper we develop a method for report level tracking based on
Dempster-Shafer clustering using Potts spin neural networks where clusters of
incoming reports are gradually fused into existing tracks, one cluster for each
track. Incoming reports are put into a cluster and continuous reclustering of
older reports is made in order to obtain maximum association fit within the
cluster and towards the track. Over time, the oldest reports of the cluster
leave the cluster for the fixed track at the same rate as new incoming reports
are put into it. Fusing reports to existing tracks in this fashion allows us to
take account of both existing tracks and the probable future of each track, as
represented by younger reports within the corresponding cluster. This gives us
a robust report-to-track association. Compared to clustering of all available
reports this approach is computationally faster and has a better
report-to-track association than simple step-by-step association.
|
cs/0305033
|
Beslutst\"odssystemet Dezzy - en \"oversikt
|
cs.AI cs.DB
|
Within the scope of the three-year ANTI-SUBMARINE WARFARE project of the
National Defence Research Establishment, the INFORMATION SYSTEMS subproject has
developed the demonstration prototype Dezzy for handling and analysis of
intelligence reports concerning foreign underwater activities.
-----
Within the framework of FOA's three-year main project UB{\AA}TSSKYDD
(anti-submarine defence), the subproject INFORMATIONSSYSTEM has developed the
demonstration prototype Dezzy into a decision-support system for the handling
and analysis of intelligence about foreign underwater activities.
|
cs/0305036
|
Using Dynamic Simulation in the Development of Construction Machinery
|
cs.CE
|
As in the car industry for quite some time, dynamic simulation of complete
vehicles is being practiced more and more in the development of off-road
machinery. However, specific questions arise due not only to company structure
and size, but especially to the type of product. Tightly coupled, non-linear
subsystems of different domains make prediction and optimisation of the
complete system's dynamic behaviour a challenge. Furthermore, the demand for
versatile machines leads to sometimes contradictory target requirements and can
turn the design process into a hunt for the least painful compromise. This can
be avoided by profound system knowledge, assisted by simulation-driven product
development. This paper gives an overview of joint research on this issue by
Volvo Wheel Loaders and Linkoping University, lists the results
of a related literature review and introduces the term "operateability". Rather
than giving detailed answers, the problem space for ongoing and future research
is examined and possible solutions are sketched.
|
cs/0305038
|
The Evolution of the Computerized Database
|
cs.DB
|
Databases, collections of related data, are as old as the written word. A
database can be anything from a homemaker's metal recipe file to a
sophisticated data warehouse. Yet today, when we think of a database we
invariably think of computerized data and their DBMSs (database management
systems). How did we go from organizing our data in a simple metal filing box
or cabinet to storing our data in a sophisticated computerized database? How
did the computerized database evolve?
This paper defines what we mean by a database. It traces the evolution of the
database, from its start as a non-computerized set of related data, to the, now
standard, computerized RDBMS (relational database management system). Early
computerized storage methods are reviewed including both the ISAM (Indexed
Sequential Access Method) and VSAM (Virtual Storage Access Method) storage
methods. Early database models are explored including the network and
hierarchical database models. Finally, the relational, object-relational and
object-oriented database models are discussed. An appendix of diagrams,
including hierarchical occurrence tree, network schema, ER (entity
relationship) and UML (unified modeling language) diagrams, is included to
support the text.
This paper concludes with an exploration of current and future trends in DBMS
development. It discusses the factors affecting these trends. It delves into
the relationship between DBMSs and the increasingly popular object-oriented
development methodologies. Finally, it speculates on the future of the DBMS.
|
cs/0305040
|
Bounded LTL Model Checking with Stable Models
|
cs.LO cs.AI
|
In this paper bounded model checking of asynchronous concurrent systems is
introduced as a promising application area for answer set programming. As the
model of asynchronous systems, a generalisation of communicating automata,
1-safe Petri nets, is used. It is shown how a 1-safe Petri net and a
requirement on the behaviour of the net can be translated into a logic program
such that the bounded model checking problem for the net can be solved by
computing stable models of the corresponding program. The use of the stable
model semantics leads to compact encodings of bounded reachability and deadlock
detection tasks as well as the more general problem of bounded model checking
of linear temporal logic. Correctness proofs of the devised translations are
given, and some experimental results using the translation and the Smodels
system are presented.
|
cs/0305041
|
Factorization of Language Models through Backing-Off Lattices
|
cs.CL
|
Factorization of a statistical language model is the task of resolving the
most discriminative model into factored models and combining them into a new
model that provides a better estimate. Most previous work focuses on
factorizing models of sequential events, each of which allows only one
factorization manner. To enable parallel factorization, which allows a model
event to be resolved in more than one way at the same time, we propose a
general framework: a backing-off lattice reflects the parallel factorizations
and defines the paths along which a model is resolved into factored models; a
mixture model combines parallel paths in the lattice; and Katz's backing-off
method is generalized to integrate all the mixture models obtained by
traversing the entire lattice. Based on this framework, we formulate two types
of model factorization that are used in natural language modeling.
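A simplified stand-in for the mixture step mentioned above: two factored
estimates of P(word | history) combined by linear interpolation. This is not
the paper's backing-off lattice; the toy models, weights and example below are
illustrative only.

```python
def mixture(prob_a, prob_b, lam=0.5):
    """Combine two factored estimates of P(word | history) by linear interpolation.

    lam would normally be tuned on held-out data.
    """
    return lambda word, history: (lam * prob_a(word, history)
                                  + (1.0 - lam) * prob_b(word, history))

# Two toy factored models: one conditions on the previous word, the other on a
# coarser class of the previous word (both are made-up probability tables).
p_word = lambda w, h: 0.2 if (h and h[-1] == "the") and w == "cat" else 0.01
p_class = lambda w, h: 0.1 if w == "cat" else 0.02
p = mixture(p_word, p_class, lam=0.7)
print(p("cat", ["the"]))
```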
|
cs/0305044
|
Updating beliefs with incomplete observations
|
cs.AI
|
Currently, there is renewed interest in the problem, raised by Shafer in
1985, of updating probabilities when observations are incomplete. This is a
fundamental problem in general, and of particular interest for Bayesian
networks. Recently, Grunwald and Halpern have shown that commonly used updating
strategies fail in this case, except under very special assumptions. In this
paper we propose a new method for updating probabilities with incomplete
observations. Our approach is deliberately conservative: we make no assumptions
about the so-called incompleteness mechanism that associates complete with
incomplete observations. We model our ignorance about this mechanism by a
vacuous lower prevision, a tool from the theory of imprecise probabilities, and
we use only coherence arguments to turn prior into posterior probabilities. In
general, this new approach to updating produces lower and upper posterior
probabilities and expectations, as well as partially determinate decisions.
This is a logical consequence of the existing ignorance about the
incompleteness mechanism. We apply the new approach to the problem of
classification of new evidence in probabilistic expert systems, where it leads
to a new, so-called conservative updating rule. In the special case of Bayesian
networks constructed using expert knowledge, we provide an exact algorithm for
classification based on our updating rule, which has linear-time complexity for
a class of networks wider than polytrees. This result is then extended to the
more general framework of credal networks, where computations are often much
harder than with Bayesian nets. Using an example, we show that our rule appears
to provide a solid basis for reliable updating with incomplete observations,
when no strong assumptions about the incompleteness mechanism are justified.
|
cs/0305048
|
2D Electrophoresis Gel Image and Diagnosis of a Disease
|
cs.CC cs.CV q-bio.QM
|
The process of diagnosing a disease from the 2D gel electrophoresis image is
a challenging problem. This is due to technical difficulties of generating
reproducible images with a normalized form and the effect of negative stain. In
this paper, we will discuss a new concept of interpreting the 2D images and
overcoming the aforementioned technical difficulties using mathematical
transformation. The method makes use of 2D gel images of proteins in serums,
and we explain a way of representing the images as vectors in order to apply
machine-learning methods, such as the support vector machine.
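A minimal sketch of the image-to-vector step followed by a support vector
machine classifier, using scikit-learn; the image size, labels and random data
below are placeholders, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC

# Fake 2D gel images (64x64 intensity maps) for two diagnostic classes.
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
labels = np.array([0] * 20 + [1] * 20)

# Flatten each image into a feature vector, then fit a support vector machine.
X = images.reshape(len(images), -1)
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```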
|
cs/0305052
|
On the Existence and Convergence of Computable Universal Priors
|
cs.LG cs.AI cs.CC math.ST stat.TH
|
Solomonoff unified Occam's razor and Epicurus' principle of multiple
explanations to one elegant, formal, universal theory of inductive inference,
which initiated the field of algorithmic information theory. His central result
is that the posterior of his universal semimeasure M converges rapidly to the
true sequence generating posterior mu, if the latter is computable. Hence, M is
eligible as a universal predictor in case of unknown mu. We investigate the
existence and convergence of computable universal (semi)measures for a
hierarchy of computability classes: finitely computable, estimable, enumerable,
and approximable. For instance, M is known to be enumerable, but not finitely
computable, and to dominate all enumerable semimeasures. We define seven
classes of (semi)measures based on these four computability concepts. Each
class may or may not contain a (semi)measure which dominates all elements of
another class. The analysis of these 49 cases can be reduced to four basic
cases, two of them being new. The results hold for discrete and continuous
semimeasures. We also investigate more closely the types of convergence,
possibly implied by universality: in difference and in ratio, with probability
1, in mean sum, and for Martin-Loef random sequences. We introduce a
generalized concept of randomness for individual sequences and use it to
exhibit difficulties regarding these issues.
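For reference, the universal semimeasure M mentioned above is commonly written
as a mixture over programs of a monotone universal Turing machine U (a standard
definition from algorithmic information theory, not specific to this paper):
the sum ranges over all minimal programs p whose output starts with the string
x, and l(p) is the program length.

```latex
M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}
```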
|
cs/0305053
|
Developing Open Data Models for Linguistic Field Data
|
cs.DL cs.CL
|
The UQ Flint Archive houses the field notes and elicitation recordings made
by Elwyn Flint in the 1950s and 1960s during extensive linguistic survey work
across Queensland, Australia.
The process of digitizing the contents of the UQ Flint Archive provides a
number of interesting challenges in the context of EMELD. Firstly, all of the
linguistic data is for languages which are either endangered or extinct, and as
such forms a valuable ethnographic repository. Secondly, the physical format of
the data is itself in danger of decline, and as such digitization is an
important preservation task in the short to medium term. Thirdly, the adoption
of open standards for the encoding and presentation of text and audio data for
linguistic field data, whilst enabling preservation, represents a new field of
research in itself where best practice has yet to be formalised. Fourthly, the
provision of this linguistic data online as a new data source for future
research introduces concerns of data portability and longevity.
This paper will outline the origins of the data model, the content creation
components, presentation forms based on the data model, data capture tools and
media conversion components. It will also address some of the larger questions
regarding the digitization and annotation of linguistic field work based on
experience gained through work with the Flint Archive contents.
|
cs/0305055
|
Goodness-of-fit of the Heston model
|
cs.CE
|
An analytical formula for the probability distribution of stock-market
returns, derived from the Heston model assuming a mean-reverting stochastic
volatility, was recently proposed by Dragulescu and Yakovenko in Quantitative
Finance 2002. While replicating their results, we found two significant
weaknesses in their method to pre-process the data, which cast a shadow over
the effective goodness-of-fit of the model. We propose a new method that more
faithfully captures the market, and perform a Kolmogorov-Smirnov test and a
chi-square test on the resulting probability distribution. The results raise some
significant questions for large time lags -- 40 to 250 days -- where the
smoothness of the data does not require such a complex model; nevertheless, we
also provide some statistical evidence in favour of the Heston model for small
time lags -- 1 and 5 days -- compared with the traditional Gaussian model
assuming constant volatility.
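A minimal sketch of the kind of Kolmogorov-Smirnov check performed here,
comparing log-returns against a fitted Gaussian with scipy; the synthetic,
fat-tailed sample below stands in for real index returns.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for daily log-returns of a stock index.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=2000) * 0.01   # fat-tailed, unlike a Gaussian

# Kolmogorov-Smirnov test against a normal distribution fitted to the data.
mu, sigma = returns.mean(), returns.std(ddof=1)
statistic, p_value = stats.kstest(returns, "norm", args=(mu, sigma))
print(statistic, p_value)   # a small p-value rejects the constant-volatility Gaussian
```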
|
cs/0305056
|
Configuration Database for BaBar On-line
|
cs.DB cs.IR
|
The configuration database is one of the vital systems in the BaBar on-line
system. It provides services for the different parts of the data acquisition
system and control system, which require run-time parameters. The original
design and implementation of the configuration database played a significant
role in the successful BaBar operations since the beginning of the experiment.
Recent additions to the design of the configuration database provide better
means for the management of data and add new tools to simplify main
configuration tasks. We describe the design of the configuration database, its
implementation with the Objectivity/DB object-oriented database, and our
experience collected during the years of operation.
|
cs/0306006
|
Experience with the Open Source based implementation for ATLAS
Conditions Data Management System
|
cs.DB
|
Conditions Data in high energy physics experiments is frequently understood as
all data needed for reconstruction besides the event data itself. This
includes all sorts of slowly evolving data such as detector alignment, calibration
and robustness, and data from the detector control system. Also, every Conditions
Data Object is associated with a time interval of validity and a version.
Besides that, it is often useful to tag collections of Conditions Data
Objects altogether. These issues have already been investigated and a data
model has been proposed and used for different implementations based on
commercial DBMSs, both at CERN and for the BaBar experiment. The special case
of the ATLAS complex trigger, which requires online access to calibration and
alignment data, poses new challenges that have to be met using a flexible and
customizable solution more in line with Open Source components. Motivated by
the ATLAS challenges we have developed an alternative implementation, based on
an Open Source RDBMS. Several issues were investigated and will be described
in this paper:
-The best way to map the conditions data model onto the relational database
concept, considering what are foreseen as the most frequent queries.
-The clustering model best suited to address the scalability problem.
-Extensive tests were performed and will be described.
The very promising results from these tests are attracting attention from
the HEP community and driving further developments.
|
cs/0306013
|
Transparent Persistence with Java Data Objects
|
cs.DB
|
A flexible and performant persistency service is a necessary component of any
HEP software framework. Building a modular, non-intrusive and performant
persistency component has been shown to be a very difficult task. In the past,
it was very often necessary to sacrifice modularity to achieve acceptable
performance. This resulted in a strong dependency of the overall frameworks
on their persistency subsystems.
Recent developments in software technology have made it possible to build a
persistency service which can be used transparently from other frameworks. Such
a service does not force strong architectural constraints on the overall
framework architecture, while satisfying high performance requirements. The Java
Data Objects standard (JDO) has already been implemented for almost all major
databases. It provides truly transparent persistency for any Java object (both
internal and external). Objects in other languages can be handled via
transparent proxies. Being only a thin layer on top of the underlying database, JDO
does not introduce any significant performance degradation. Also, Aspect-Oriented
Programming (AOP) makes it possible to treat persistency as an orthogonal aspect
of the application framework, without polluting it with persistence-specific
concepts.
All these techniques have been developed primarily (or only) for the Java
environment. It is, however, possible to interface them transparently to
frameworks built in other languages, such as C++.
Fully functional prototypes of flexible and non-intrusive persistency modules
have been built for several other packages, for example FreeHEP AIDA and LCG
Pool AttributeSet (package Indicium).
|
cs/0306016
|
Modelling Biochemical Operations on RNA Secondary Structures
|
cs.CE q-bio
|
In this paper we model several simple biochemical operations on RNA molecules
that modify their secondary structure by means of a suitable variation of
Große-Rhode's Algebra Transformation Systems.
|
cs/0306017
|
Minimum Model Semantics for Logic Programs with Negation-as-Failure
|
cs.LO cs.AI cs.PL
|
We give a purely model-theoretic characterization of the semantics of logic
programs with negation-as-failure allowed in clause bodies. In our semantics
the meaning of a program is, as in the classical case, the unique minimum model
in a program-independent ordering. We use an expanded truth domain that has an
uncountable linearly ordered set of truth values between False (the minimum
element) and True (the maximum), with a Zero element in the middle. The truth
values below Zero are ordered like the countable ordinals. The values above
Zero have exactly the reverse order. Negation is interpreted as reflection
about Zero followed by a step towards Zero; the only truth value that remains
unaffected by negation is Zero. We show that every program has a unique minimum
model M_P, and that this model can be constructed with a T_P iteration which
proceeds through the countable ordinals. Furthermore, we demonstrate that M_P
can also be obtained through a model intersection construction which
generalizes the well-known model intersection theorem for classical logic
programming. Finally, we show that by collapsing the true and false values of
the infinite-valued model M_P to (the classical) True and False, we obtain a
three-valued model identical to the well-founded one.
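A schematic rendering of the truth domain and the negation described above (the
notation is ours; $F_\alpha$ and $T_\alpha$ denote the false-side and true-side
values at ordinal depth $\alpha$, with $F_0$ playing the role of False and $T_0$
of True):
\[
F_0 < F_1 < \cdots < F_\alpha < \cdots \;<\; 0 \;<\; \cdots < T_\alpha < \cdots < T_1 < T_0,
\qquad
\neg F_\alpha = T_{\alpha+1}, \quad \neg T_\alpha = F_{\alpha+1}, \quad \neg 0 = 0.
\]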
|
cs/0306019
|
Relational databases for data management in PHENIX
|
cs.DB
|
PHENIX is one of the two large experiments at the Relativistic Heavy Ion
Collider (RHIC) at Brookhaven National Laboratory (BNL) and archives roughly
100TB of experimental data per year. In addition, large volumes of simulated
data are produced at multiple off-site computing centers. For any file catalog
to play a central role in data management it has to face problems associated
with the need for distributed access and updates. To be used effectively by the
hundreds of PHENIX collaborators in 12 countries the catalog must satisfy the
following requirements: 1) contain up-to-date data, 2) provide fast and
reliable access to the data, 3) allow write access for the sites that store
portions of the data. We present an analysis of several available Relational
Database Management Systems (RDBMS) to support a catalog meeting the above
requirements and discuss the PHENIX experience with building and using the
distributed file catalog.
|
cs/0306020
|
On the Verge of One Petabyte - the Story Behind the BaBar Database
System
|
cs.DB
|
The BaBar database has pioneered the use of a commercial ODBMS within the HEP
community. The unique object-oriented architecture of Objectivity/DB has made
it possible to manage over 700 terabytes of production data generated since
May'99, making the BaBar database the world's largest known database. The
ongoing development includes new features, addressing the ever-increasing
luminosity of the detector as well as other changing physics requirements.
Significant efforts are focused on reducing space requirements and operational
costs. The paper discusses our experience with developing a large scale
database system, emphasizing universal aspects which may be applied to any
large scale system, independently of the underlying technology used.
|
cs/0306021
|
Visualization for Periodic Population Movement between Distinct
Localities
|
cs.IR
|
We present a new visualization method to summarize and present periodic
population movement between distinct locations, such as floors, buildings,
cities, or the like. In the specific case of this paper, we have chosen to
focus on student movement between college dormitories on the Columbia
University campus. The visual information is presented to the information
analyst in the form of an interactive geographical map, in which specific
temporal periods as well as individual buildings can be singled out for
detailed data exploration. The navigational interface has been designed
specifically to suit a geographical setting.
|
cs/0306022
|
Techniques for effective vocabulary selection
|
cs.CL cs.AI
|
The vocabulary of a continuous speech recognition (CSR) system is a
significant factor in determining its performance. In this paper, we present
three principled approaches to select the target vocabulary for a particular
domain by trading off between the target out-of-vocabulary (OOV) rate and
vocabulary size. We evaluate these approaches against an ad-hoc baseline
strategy. Results are presented in the form of OOV rate graphs plotted against
increasing vocabulary size for each technique.
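The quantity being traded off above can be made concrete with a small sketch: rank
words by training-corpus frequency, cut the vocabulary at a given size, and measure
the out-of-vocabulary rate on target-domain text. This is only a baseline-style
frequency cutoff, not one of the three principled approaches of the paper; the toy
corpora below are hypothetical.

from collections import Counter

def oov_rate(train_tokens, test_tokens, vocab_size):
    """OOV rate on test tokens when keeping only the top-`vocab_size` training words."""
    vocab = {word for word, _ in Counter(train_tokens).most_common(vocab_size)}
    misses = sum(1 for word in test_tokens if word not in vocab)
    return misses / len(test_tokens)

# Hypothetical toy corpora; a real CSR system would use large domain corpora.
train = "the cat sat on the mat while the dog sat on the rug".split()
test = "the dog chased the cat across the mat".split()
for size in (2, 4, 8):
    print(f"vocab size {size}: OOV rate {oov_rate(train, test, size):.2f}")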
|
cs/0306023
|
The Redesigned BaBar Event Store: Believe the Hype
|
cs.DB cs.DS
|
As the BaBar experiment progresses, it produces new and unforeseen
requirements and increasing demands on capacity and feature base. The current
system is being utilized well beyond its original design specifications, and
has scaled appropriately, maintaining data consistency and durability. The
persistent event storage system has remained largely unchanged since the
initial implementation, and thus includes many design features which have
become performance bottlenecks. Programming interfaces were designed before
sufficient usage information became available. Performance and efficiency were
traded off for added flexibility to cope with future demands. With significant
experience in managing actual production data under our belt, we are now in a
position to recraft the system to better suit current needs. The Event Store
redesign is intended to eliminate redundant features while adding new ones,
increase overall performance, and contain the physical storage cost of the
world's largest database.
|
cs/0306026
|
BdbServer++: A User Driven Data Location and Retrieval Tool
|
cs.IR
|
The adoption of Grid technology has the potential to greatly aid the BaBar
experiment. BdbServer was originally designed to extract copies of data from
the Objectivity/DB database at SLAC and IN2P3. With data now stored in multiple
locations in a variety of data formats, we are enhancing this tool. This will
enable users to extract selected deep copies of event collections and ship them
to the requested site using the facilities offered by the existing Grid
infrastructure. By building on the work done by various groups in BaBar, and
the European DataGrid, we have successfully expanded the capabilities of the
BdbServer software. This should provide a framework for future work in data
distribution.
|
cs/0306034
|
A ROOT/IO Based Software Framework for CMS
|
cs.DB
|
The implementation of persistency in the Compact Muon Solenoid (CMS) Software
Framework uses the core I/O functionality of ROOT. We will discuss the current
ROOT/IO implementation, its evolution from the prior Objectivity/DB
implementation, and the plans and ongoing work for the conversion to "POOL",
provided by the LHC Computing Grid (LCG) persistency project.
|
cs/0306036
|
Sequence Prediction based on Monotone Complexity
|
cs.AI cs.IT cs.LG math.IT math.ST stat.TH
|
This paper studies sequence prediction based on the monotone Kolmogorov
complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is
extremely close to Solomonoff's prior M, the latter being an excellent
predictor in deterministic as well as probabilistic environments, where
performance is measured in terms of convergence of posteriors or losses.
Despite this closeness to M, it is difficult to assess the prediction quality
of m, since little is known about the closeness of their posteriors, which are
the important quantities for prediction. We show that for deterministic
computable environments, the "posterior" and losses of m converge, but rapid
convergence could only be shown on-sequence; the off-sequence behavior is
unclear. In probabilistic environments, neither the posterior nor the losses
converge, in general.
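For orientation, the standard definitions behind the quantities named above, with
$U$ a universal monotone machine and $U(p) = x{*}$ meaning that on program $p$ the
output starts with $x$ (these are textbook definitions, not quoted from the paper):
\[
Km(x) \;=\; \min\{\, \ell(p) : U(p) = x{*} \,\}, \qquad
m(x) \;=\; 2^{-Km(x)}, \qquad
M(x) \;=\; \sum_{p \,:\, U(p) = x{*}} 2^{-\ell(p)} .
\]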
|
cs/0306039
|
Bayesian Information Extraction Network
|
cs.CL cs.AI cs.IR
|
Dynamic Bayesian networks (DBNs) offer an elegant way to integrate various
aspects of language in one model. Many existing algorithms developed for
learning and inference in DBNs are applicable to probabilistic language
modeling. To demonstrate the potential of DBNs for natural language processing,
we employ a DBN in an information extraction task. We show how to assemble a
wealth of emerging linguistic instruments for shallow parsing, syntactic and
semantic tagging, morphological decomposition, named entity recognition etc. in
order to incrementally build a robust information extraction system. Our method
outperforms previously published results on an established benchmark domain.
|
cs/0306040
|
The Open Language Archives Community: An infrastructure for distributed
archiving of language resources
|
cs.CL cs.DL
|
New ways of documenting and describing language via electronic media coupled
with new ways of distributing the results via the World-Wide Web offer a degree
of access to language resources that is unparalleled in history. At the same
time, the proliferation of approaches to using these new technologies is
causing serious problems relating to resource discovery and resource creation.
This article describes the infrastructure that the Open Language Archives
Community (OLAC) has built in order to address these problems. Its technical
and usage infrastructures address problems of resource discovery by
constructing a single virtual library of distributed resources. Its governance
infrastructure addresses problems of resource creation by providing a mechanism
through which the language-resource community can express its consensus on
recommended best practices.
|
cs/0306049
|
Hyperdense Coding Modulo 6 with Filter-Machines
|
cs.CC cs.DB
|
We show how one can encode $n$ bits with $n^{o(1)}$ ``wave-bits'' using still
hypothetical filter-machines (here $o(1)$ denotes a positive quantity which
goes to 0 as $n$ goes to infinity). Our present result - in a completely
different computational model - significantly improves on the quantum
superdense-coding breakthrough of Bennett and Wiesner (1992) which encoded $n$
bits by $\lceil{n/2}\rceil$ quantum-bits. We also show that our earlier
algorithm (Tech. Rep. TR03-001, ECCC, See
ftp://ftp.eccc.uni-trier.de/pub/eccc/reports/2003/TR03-001/index.html) which
used $n^{o(1)}$ multiplications for computing a representation of the dot-product
of two $n$-bit sequences modulo 6, and, similarly, an algorithm for computing a
representation of the multiplication of two $n\times n$ matrices with
$n^{2+o(1)}$ multiplications can be turned into algorithms computing the exact
dot-product or the exact matrix-product with the same number of multiplications
with filter-machines. With classical computation, computing the dot-product
needs $\Omega(n)$ multiplications and the best known algorithm for matrix
multiplication (D. Coppersmith and S. Winograd, Matrix multiplication via
arithmetic progressions, J. Symbolic Comput., 9(3):251--280, 1990) uses
$n^{2.376}$ multiplications.
|
cs/0306050
|
Introduction to the CoNLL-2003 Shared Task: Language-Independent Named
Entity Recognition
|
cs.CL
|
We describe the CoNLL-2003 shared task: language-independent named entity
recognition. We give background information on the data sets (English and
German) and the evaluation method, present a general overview of the systems
that have taken part in the task and discuss their performance.
|
cs/0306056
|
Twelve Ways to Build CMS Crossings from ROOT Files
|
cs.DB
|
The simulation of CMS raw data requires the random selection of one hundred
and fifty pileup events from a very large set of files, to be superimposed in
memory onto the signal event. The use of ROOT I/O for that purpose is quite
unusual: the events are not read sequentially but pseudo-randomly, they are not
processed one by one in memory but by bunches, and they do not contain orthodox
ROOT objects but many foreign objects and templates. In this context, we have
compared the performance of ROOT containers versus the STL vectors, and the use
of trees versus a direct storage of containers. The strategy with best
performance is by far the one using clones within trees, but it remains hard to
tune and very dependent on the exact use case. The use of STL vectors could
more easily deliver similar performance in a future ROOT release.
|
cs/0306061
|
Operational Aspects of Dealing with the Large BaBar Data Set
|
cs.DB cs.DC
|
To date, the BaBar experiment has stored over 0.7PB of data in an
Objectivity/DB database. Approximately half this data-set comprises simulated
data of which more than 70% has been produced at more than 20 collaborating
institutes outside of SLAC. The operational aspects of managing such a large
data set and providing access to the physicists in a timely manner is a
challenging and complex problem. We describe the operational aspects of
managing such a large distributed data-set as well as importing and exporting
data from geographically spread BaBar collaborators. We also describe problems
common to dealing with such large datasets.
|
cs/0306062
|
Learning to Order Facts for Discourse Planning in Natural Language
Generation
|
cs.CL
|
This paper presents a machine learning approach to discourse planning in
natural language generation. More specifically, we address the problem of
learning the most natural ordering of facts in discourse plans for a specific
domain. We discuss our methodology and how it was instantiated using two
different machine learning algorithms. A quantitative evaluation performed in
the domain of museum exhibit descriptions indicates that our approach performs
significantly better than manually constructed ordering rules. Being
retrainable, the resulting planners can be ported easily to other similar
domains, without requiring language technology expertise.
|
cs/0306065
|
POOL File Catalog, Collection and Metadata Components
|
cs.DB
|
The POOL project is the common persistency framework for the LHC experiments
to store petabytes of experiment data and metadata in a distributed and grid
enabled way. POOL is a hybrid event store consisting of a data streaming layer
and a relational layer. This paper describes the design of file catalog,
collection and metadata components which are not part of the data streaming
layer of POOL and outlines how POOL aims to provide transparent and efficient
data access for a wide range of environments and use cases - ranging from a
large production site down to a single disconnected laptop. The file catalog
is the central POOL component translating logical data references to physical
data files in a grid environment. POOL collections with their associated
metadata provide an abstract way of accessing experiment data via their logical
grouping into sets of related data objects.
|
cs/0306066
|
The COMPASS Event Store in 2002
|
cs.DB
|
COMPASS, the fixed-target experiment at CERN studying the structure of the
nucleon and spectroscopy, collected over 260 TB during the summer 2002 run. All
these data, together with reconstructed events information, were put from the
beginning in a database infrastructure based on Objectivity/DB and on the
hierarchical storage manager CASTOR. The experience in the usage of the
database is reviewed and the evolution of the system outlined.
|
cs/0306077
|
The TESLA Requirements Database
|
cs.DB
|
In preparation for the planned linear collider TESLA, DESY is designing the
required buildings and facilities. The accelerator and infrastructure
components have to be allocated to buildings, and their required areas for
installation, operation and maintenance have to be determined.
Interdisciplinary working groups specify the project from different viewpoints
and need to develop a common vision as a precondition for an optimal solution.
A commercial requirements database is used as a collaborative tool, enabling
concurrent requirements specification by independent working groups. The
requirements database ensures long term storage and availability of the
emerging knowledge, and it offers a central platform for communication which is
available for all project members. It has been operating successfully since summer
2002 and has since become an important tool for the design team.
|
cs/0306079
|
Integrated Information Management for TESLA
|
cs.DB
|
Next-generation projects in High Energy Physics will reach again a new
dimension of complexity. Information management has to ensure an efficient and
economic information flow within the collaborations, offering world-wide
up-to-date information access to the collaborators as one condition for
successful projects. DESY introduces several information systems in preparation
for the planned linear collider TESLA: a Requirements Management System (RMS)
is in production for the TESLA planning group, a Product Data Management System
(PDMS) has been in production since the beginning of 2002 and supports the
cavity preparation and the general engineering of accelerator components. A
pilot Asset Management System (AMS) is in production for supporting the
management and maintenance of the technical infrastructure, and a Facility
Management System (FMS) with a Geographic Information System (GIS) is currently
being introduced to support civil engineering. Efforts have been started to
integrate the systems with the goal that users can retrieve information through
a single point of access. The paper gives an introduction to information
management and the activities at DESY.
|
cs/0306081
|
An on-line Integrated Bookkeeping: electronic run log book and Meta-Data
Repository for ATLAS
|
cs.DB
|
In the context of the ATLAS experiment there is growing evidence of the
importance of different kinds of Meta-data including all the important details
of the detector and data acquisition that are vital for the analysis of the
acquired data. The Online BookKeeper (OBK) is a component of ATLAS online
software that stores all information collected while running the experiment,
including the Meta-data associated with the event acquisition, triggering and
storage. The facilities for acquisition of control data within the on-line
software framework, together with a full functional Web interface, make the OBK
a powerful tool containing all information needed for event analysis, including
an electronic log book.
In this paper we explain how OBK plays a role as one of the main collectors
and managers of Meta-data produced on-line, and we also focus on the Web
facilities already available. The usage of the web interface as an electronic
run logbook is also explained, together with the future extensions.
We describe the technology used in OBK development and how we arrived at the
present design, drawing on previous experience with various DBMS
technologies. The extensive performance evaluations that have been performed
and the usage in the production environment of the ATLAS test beams are also
analysed.
|
cs/0306086
|
GMA Instrumentation of the Athena Framework using NetLogger
|
cs.DC cs.IR
|
Grid applications are, by their nature, wide-area distributed applications.
This WAN aspect of Grid applications makes the use of conventional monitoring
and instrumentation tools (such as top, gprof, LSF Monitor, etc) impractical
for verification that the application is running correctly and efficiently. To
be effective, monitoring data must be "end-to-end", meaning that all components
between the Grid application endpoints must be monitored. Instrumented
applications can generate a large amount of monitoring data, so typically the
instrumentation is off by default. For jobs running on a Grid, there needs to
be a general mechanism to remotely activate the instrumentation in running
jobs. The NetLogger Toolkit Activation Service provides this mechanism.
To demonstrate this, we have instrumented the ATLAS Athena Framework with
NetLogger to generate monitoring events. We then use a GMA-based activation
service to control NetLogger's trigger mechanism. The NetLogger trigger
mechanism allows one to easily start, stop, or change the logging level of a
running program by modifying a trigger file. We present here details of the
design of the NetLogger implementation of the GMA-based activation service and
the instrumentation service for Athena. We also describe how this activation
service allows us to non-intrusively collect and visualize the ATLAS Athena
Framework monitoring data.
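The trigger-file idea is easy to picture with a generic sketch (this is not the
NetLogger API; the file name, level names and polling strategy are all
illustrative): a running job periodically re-reads a small file and adjusts its
logging level accordingly.

import logging
import os

TRIGGER_FILE = "instrumentation.trigger"   # hypothetical path, not a NetLogger convention

def apply_trigger(logger, default=logging.WARNING):
    """Re-read the trigger file and set the logging level of a running job."""
    level = default
    if os.path.exists(TRIGGER_FILE):
        name = open(TRIGGER_FILE).read().strip().upper()   # e.g. "DEBUG" or "INFO"
        level = getattr(logging, name, default)
    logger.setLevel(level)

logger = logging.getLogger("athena.instrumentation")
apply_trigger(logger)          # call periodically, e.g. once per event-loop iteration
logger.debug("fine-grained monitoring event")   # emitted only if the trigger enables DEBUG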
|
cs/0306091
|
Universal Sequential Decisions in Unknown Environments
|
cs.AI cs.CC cs.LG
|
We give a brief introduction to the AIXI model, which unifies and overcomes
the limitations of sequential decision theory and universal Solomonoff
induction. While the former theory is suited for active agents in known
environments, the latter is suited for passive prediction of unknown
environments.
|
cs/0306094
|
BaBar - A Community Web Site in an Organizational Setting
|
cs.IR
|
The BABAR Web site was established in 1993 at the Stanford Linear Accelerator
Center (SLAC) to support the BABAR experiment, to report its results, and to
facilitate communication among its scientific and engineering collaborators,
currently numbering about 600 individuals from 75 collaborating institutions in
10 countries. The BABAR Web site is, therefore, a community Web site. At the
same time it is hosted at SLAC and funded by agencies that demand adherence to
policies decided under different priorities. Additionally, the BABAR Web
administrators deal with the problems that arise during the course of managing
users, content, policies, standards, and changing technologies. Desired
solutions to some of these problems may be incompatible with the overall
administration of the SLAC Web sites and/or the SLAC policies and concerns.
There are thus different perspectives of the same Web site and differing
expectations in segments of the SLAC population which act as constraints and
challenges in any review or re-engineering activities. Web Engineering, which
post-dates the BABAR Web, has aimed to provide a comprehensive understanding of
all aspects of Web development. This paper reports on the first part of a
recent review of application of Web Engineering methods to the BABAR Web site,
which has led to explicit user and information models of the BABAR community
and how SLAC and the BABAR community relate and react to each other. The paper
identifies the issues of a community Web site in a hierarchical,
semi-governmental sector and formulates a strategy for periodic reviews of
BABAR and similar sites.
|
cs/0306095
|
The MammoGrid Project Grids Architecture
|
cs.DC cs.DB
|
The aim of the recently EU-funded MammoGrid project is, in the light of
emerging Grid technology, to develop a European-wide database of mammograms
that will be used to develop a set of important healthcare applications and
investigate the potential of this Grid to support effective co-working between
healthcare professionals throughout the EU. The MammoGrid consortium intends to
use a Grid model to enable distributed computing that spans national borders.
This Grid infrastructure will be used for deploying novel algorithms as
software directly developed or enhanced within the project. Using MammoGrid,
clinicians will be able to harness massive amounts of medical image
data to perform epidemiological studies, advanced image processing,
radiographic education and ultimately, tele-diagnosis over communities of
medical "virtual organisations". This is achieved through the use of
Grid-compliant services [1] for managing (versions of) massively distributed
files of mammograms, for handling the distributed execution of mammograms
analysis software, for the development of Grid-aware algorithms and for the
sharing of resources between multiple collaborating medical centres. All this
is delivered via a novel software and hardware information infrastructure that,
in addition, guarantees the integrity and security of the medical data. The
MammoGrid implementation is based on AliEn, a Grid framework developed by the
ALICE Collaboration. AliEn provides a virtual file catalogue that allows
transparent access to distributed data-sets and provides top to bottom
implementation of a lightweight Grid applicable to cases when handling of a
large number of files is required. This paper details the architecture that
will be implemented by the MammoGrid project.
|
cs/0306097
|
A family of metrics on contact structures based on edge ideals
|
cs.DM cs.CE q-bio
|
The measurement of the similarity of RNA secondary structures, and in general
of contact structures, of a fixed length has several specific applications. For
instance, it is used in the analysis of the ensemble of suboptimal secondary
structures generated by a given algorithm on a given RNA sequence, and in the
comparison of the secondary structures predicted by different algorithms on a
given RNA molecule. It is also a useful tool in the quantitative study of
sequence-structure maps. A way to measure this similarity is by means of
metrics. In this paper we introduce a new class of metrics $d_{m}$, $m\geq 3$,
on the set of all contact structures of a fixed length, based on their
representation by means of edge ideals in a polynomial ring. These metrics can
be expressed in terms of Hilbert functions of monomial ideals, which allows the
use of several public domain computer algebra systems to compute them. We study
some abstract properties of these metrics, and we obtain explicit descriptions
of them for $m=3,4$ on arbitrary contact structures and for $m=5,6$ on RNA
secondary structures.
|
cs/0306099
|
An Improved k-Nearest Neighbor Algorithm for Text Categorization
|
cs.CL
|
k is the most important parameter in a text categorization system based on the
k-Nearest Neighbor algorithm (kNN). In the classification process, the k nearest
documents to the test one in the training set are determined first. Then, the
prediction can be made according to the category distribution among these k
nearest neighbors. Generally speaking, the class distribution in the training
set is uneven. Some classes may have more samples than others. Therefore, the
system performance is very sensitive to the choice of the parameter k. And it
is very likely that a fixed k value will result in a bias on large categories.
To deal with these problems, we propose an improved kNN algorithm, which uses
different numbers of nearest neighbors for different categories, rather than a
fixed number across all categories. More samples (nearest neighbors) will be
used for deciding whether a test document should be classified to a category,
which has more samples in the training set. Preliminary experiments on Chinese
text categorization show that our method is less sensitive to the parameter k
than the traditional one, and it can properly classify documents belonging to
smaller classes with a large k. The method is promising for some cases, where
estimating the parameter k via cross-validation is not allowed.
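One plausible reading of the category-dependent k idea is sketched below: each
category is assigned its own neighbourhood size (larger for larger classes), and a
test document is scored for a category by the share of that category among its own
k nearest training documents. The exact decision rule of the paper may differ; the
data and the scoring here are illustrative only.

import numpy as np

def predict(train_X, train_y, x, k_per_class):
    """Classify x using a different number of nearest neighbours per category."""
    train_y = np.asarray(train_y)
    dists = np.linalg.norm(train_X - x, axis=1)
    order = np.argsort(dists)
    scores = {}
    for label, k in k_per_class.items():
        nearest_labels = train_y[order[:k]]
        scores[label] = np.mean(nearest_labels == label)   # share of `label` among its k nearest
    return max(scores, key=scores.get)

# Toy data: class "big" has many samples, class "small" has few.
train_X = np.array([[0.0], [0.1], [0.2], [0.3], [0.4], [2.0], [2.1]])
train_y = ["big", "big", "big", "big", "big", "small", "small"]
k_per_class = {"big": 5, "small": 2}                        # larger class, larger k
print(predict(train_X, train_y, np.array([1.2]), k_per_class))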
|
cs/0306102
|
Prototyping Virtual Data Technologies in ATLAS Data Challenge 1
Production
|
cs.DC cs.DB
|
For efficiency of the large production tasks distributed worldwide, it is
essential to provide shared production management tools comprised of
integratable and interoperable services. To enhance the ATLAS DC1 production
toolkit, we introduced and tested a Virtual Data services component. For each
major data transformation step identified in the ATLAS data processing pipeline
(event generation, detector simulation, background pile-up and digitization,
etc) the Virtual Data Cookbook (VDC) catalogue encapsulates the specific data
transformation knowledge and the validated parameter settings that must be
provided before the data transformation invocation. To provide for local-remote
transparency during DC1 production, the VDC database server delivered in a
controlled way both the validated production parameters and the templated
production recipes for thousands of the event generation and detector
simulation jobs around the world, simplifying the production management
solutions.
|
cs/0306103
|
Primary Numbers Database for ATLAS Detector Description Parameters
|
cs.DB cs.HC
|
We present the design and the status of the database for detector description
parameters in the ATLAS experiment. The ATLAS Primary Numbers are the parameters
defining the detector geometry and digitization in simulations, as well as
certain reconstruction parameters. Since the detailed ATLAS detector
description needs more than 10,000 such parameters, a preferred solution is to
have a single verified source for all these data. The database stores the data
dictionary for each parameter collection object, providing schema evolution
support for object-based retrieval of parameters. The same Primary Numbers are
served to many different clients accessing the database: the ATLAS software
framework Athena, the Geant3 heritage framework Atlsim, the Geant4 developers
framework FADS/Goofy, the generator of XML output for detector description, and
several end-user clients for interactive data navigation, including web-based
browsers and ROOT. The choice of the MySQL database product for the
implementation provides additional benefits: the Primary Numbers database can
be used on the developer's laptop when disconnected (using the MySQL embedded
server technology), with data being updated when the laptop is connected (using
the MySQL database replication).
|
cs/0306105
|
Design, implementation and deployment of the Saclay muon reconstruction
algorithms (Muonbox/y) in the Athena software framework of the ATLAS
experiment
|
cs.CE
|
This paper gives an overview of a reconstruction algorithm for muon events in
the ATLAS experiment at CERN. After a short introduction to the ATLAS Muon
Spectrometer, we describe the procedure performed by the algorithms
Muonbox and Muonboy (latest version) in order to carry out the reconstruction
task correctly. These algorithms have been developed in Fortran
and work in the official C++ framework Athena, as well as in stand-alone
mode. A description of the interaction between Muonboy and Athena is
given, together with the reconstruction performance (efficiency and momentum
resolution) obtained with Monte Carlo data.
|
cs/0306106
|
Lexicographic probability, conditional probability, and nonstandard
probability
|
cs.GT cs.AI
|
The relationship between Popper spaces (conditional probability spaces that
satisfy some regularity conditions), lexicographic probability systems (LPS's),
and nonstandard probability spaces (NPS's) is considered. If countable
additivity is assumed, Popper spaces and a subclass of LPS's are equivalent;
without the assumption of countable additivity, the equivalence no longer
holds. If the state space is finite, LPS's are equivalent to NPS's. However, if
the state space is infinite, NPS's are shown to be more general than LPS's.
|
cs/0306109
|
Distributed Heterogeneous Relational Data Warehouse In A Grid
Environment
|
cs.DC cs.DB
|
This paper examines how a "Distributed Heterogeneous Relational Data
Warehouse" can be integrated in a Grid environment that will provide physicists
with efficient access to large and small object collections drawn from
databases at multiple sites. This paper investigates the requirements of
Grid-enabling such a warehouse, and explores how these requirements may be met
by extensions to existing Grid middleware. We present initial results obtained
with a working prototype warehouse of this kind using both SQLServer and
Oracle9i, where a Grid-enabled web-services interface makes it easier for
web-applications to access the distributed contents of the databases securely.
Based on the success of the prototype, we propose a framework for using the
heterogeneous relational data warehouse through the web-services interface and
for creating a single "Virtual Database System" for users. The ability to
transparently access data in this way, as shown in the prototype, is likely to be a
very powerful facility for HENP and other grid users wishing to collate and
analyze information distributed over the Grid.
|
cs/0306114
|
D0 Data Handling Operational Experience
|
cs.DC cs.AI
|
We report on the production experience of the D0 experiment at the Fermilab
Tevatron, using the SAM data handling system with a variety of computing
hardware configurations, batch systems, and mass storage strategies. We have
stored more than 300 TB of data in the Fermilab Enstore mass storage system. We
deliver data through this system at an average rate of more than 2 TB/day to
analysis programs, with a substantial multiplication factor in the consumed
data through intelligent cache management. We handle more than 1.7 million
files in this system and provide data delivery to user jobs at Fermilab on four
types of systems: a reconstruction farm, a large SMP system, a Linux batch
cluster, and a Linux desktop cluster. In addition, we import simulation data
generated at 6 sites worldwide, and deliver data to jobs at many more sites. We
describe the scope of the data handling deployment worldwide, the operational
experience with this system, and the feedback of that experience.
|
cs/0306119
|
A Method for Solving Distributed Service Allocation Problems
|
cs.MA
|
We present a method for solving service allocation problems in which a set of
services must be allocated to a set of agents so as to maximize a global
utility. The method is completely distributed so it can scale to any number of
services without degradation. We first formalize the service allocation problem
and then present a simple hill-climbing, a global hill-climbing, and a
bidding-protocol algorithm for solving it. We analyze the expected performance
of these algorithms as a function of various problem parameters such as the
branching factor and the number of agents. Finally, we use the sensor
allocation problem, an instance of a service allocation problem, to show the
bidding protocol at work. The simulations also show that a phase transition in
the expected quality of the solution exists as the amount of communication
between agents increases.
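A generic local hill-climbing sketch for this kind of allocation problem is shown
below (it is not the paper's simple/global hill-climbing or bidding-protocol
algorithms; the utility function and all names are hypothetical): repeatedly
reassign a random service and keep the move only if the global utility does not
drop.

import random

def hill_climb(services, agents, utility, iters=1000, seed=0):
    """Local search over allocations of services to agents, maximizing `utility`."""
    rng = random.Random(seed)
    alloc = {s: rng.choice(agents) for s in services}
    best = utility(alloc)
    for _ in range(iters):
        s = rng.choice(services)
        previous = alloc[s]
        alloc[s] = rng.choice(agents)
        score = utility(alloc)
        if score >= best:
            best = score
        else:
            alloc[s] = previous          # undo a move that hurts the global utility
    return alloc, best

# Hypothetical global utility: prefer a balanced load across agents.
services, agents = list(range(10)), ["a1", "a2", "a3"]
def balance(alloc):
    loads = [sum(1 for a in alloc.values() if a == agent) for agent in agents]
    return -max(loads)

print(hill_climb(services, agents, balance))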
|
cs/0306120
|
Reinforcement Learning with Linear Function Approximation and LQ control
Converges
|
cs.LG cs.AI
|
Reinforcement learning is commonly used with function approximation. However,
very few positive results are known about the convergence of function
approximation based RL control algorithms. In this paper we show that TD(0) and
Sarsa(0) with linear function approximation is convergent for a simple class of
problems, where the system is linear and the costs are quadratic (the LQ
control problem). Furthermore, we show that for systems with Gaussian noise and
non-completely observable states (the LQG problem), the mentioned RL algorithms
are still convergent, if they are combined with Kalman filtering.
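A minimal sketch of the setting is given below: TD(0) policy evaluation with a
linear value approximation V(x) = w * x^2 on a scalar linear system with quadratic
cost and a fixed stabilising policy. This omits the control and LQG/Kalman-filtering
parts of the paper, and all constants are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
a, b, L = 0.9, 0.5, 0.6            # system x' = a*x + b*u + noise, fixed policy u = -L*x
q, r = 1.0, 0.1                    # quadratic cost q*x^2 + r*u^2
gamma, alpha = 0.95, 0.01          # discount factor and TD step size
w, x = 0.0, 1.0                    # value approximation V(x) = w * x^2

for _ in range(100_000):
    u = -L * x
    cost = q * x**2 + r * u**2
    x_next = a * x + b * u + 0.5 * rng.standard_normal()
    phi, phi_next = x**2, x_next**2          # the single quadratic feature
    td_error = cost + gamma * w * phi_next - w * phi
    w += alpha * td_error * phi              # TD(0) update of the linear weight
    x = x_next

print("learned quadratic value coefficient w:", round(w, 2))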
|
cs/0306122
|
The Best Trail Algorithm for Assisted Navigation of Web Sites
|
cs.DS cs.IR
|
We present an algorithm called the Best Trail Algorithm, which helps solve
the hypertext navigation problem by automating the construction of memex-like
trails through the corpus. The algorithm performs a probabilistic best-first
expansion of a set of navigation trees to find relevant and compact trails. We
describe the implementation of the algorithm, scoring methods for trails,
filtering algorithms and a new metric called \emph{potential gain} which
measures the potential of a page for future navigation opportunities.
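A generic best-first trail search over a small link graph is sketched below; the
scoring (summed page relevance with a mild length penalty) and the data are
illustrative only and do not reproduce the paper's probabilistic expansion or its
potential-gain metric.

import heapq

def best_trail(graph, relevance, start, max_len=4, beam=50):
    """Best-first expansion of navigation trails from `start`, keeping the best one seen."""
    def score(trail):
        return sum(relevance.get(page, 0.0) for page in trail) - 0.1 * len(trail)
    best = ([start], score([start]))
    frontier = [(-best[1], [start])]
    while frontier:
        neg, trail = heapq.heappop(frontier)
        if -neg > best[1]:
            best = (trail, -neg)
        if len(trail) >= max_len:
            continue
        for nxt in graph.get(trail[-1], []):
            if nxt not in trail:                       # keep trails acyclic
                extended = trail + [nxt]
                heapq.heappush(frontier, (-score(extended), extended))
        if len(frontier) > beam:                       # prune to the most promising trails
            frontier = heapq.nsmallest(beam, frontier)
            heapq.heapify(frontier)
    return best

# Hypothetical web-site link graph and page relevance scores.
graph = {"home": ["docs", "blog"], "docs": ["api", "faq"], "blog": ["api"]}
relevance = {"home": 0.1, "docs": 0.6, "api": 0.9, "faq": 0.3, "blog": 0.2}
print(best_trail(graph, relevance, "home"))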
|
cs/0306124
|
Updating Probabilities
|
cs.AI
|
As examples such as the Monty Hall puzzle show, applying conditioning to
update a probability distribution on a ``naive space'', which does not take
into account the protocol used, can often lead to counterintuitive results.
Here we examine why. A criterion known as CAR (``coarsening at random'') in the
statistical literature characterizes when ``naive'' conditioning in a naive
space works. We show that the CAR condition holds rather infrequently, and we
provide a procedural characterization of it, by giving a randomized algorithm
that generates all and only distributions for which CAR holds. This
substantially extends previous characterizations of CAR. We also consider more
generalized notions of update such as Jeffrey conditioning and minimizing
relative entropy (MRE). We give a generalization of the CAR condition that
characterizes when Jeffrey conditioning leads to appropriate answers, and show
that there exist some very simple settings in which MRE essentially never gives
the right results. This generalizes and interconnects previous results obtained
in the literature on CAR and MRE.
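The Monty Hall puzzle mentioned above can be simulated directly; the sketch below
estimates the probability obtained by conditioning on the event actually generated
by the protocol (about 1/3), whereas naive conditioning in the naive space on "the
prize is not behind door 2" would suggest 1/2. It assumes the usual protocol in
which the host always opens a non-chosen goat door.

import random

def monty_hall(trials=100_000, seed=1):
    """Estimate P(prize behind the chosen door | host opened door 2) under the protocol."""
    rng = random.Random(seed)
    opened_2 = stay_wins_given_2 = 0
    for _ in range(trials):
        prize = rng.randrange(3)                     # contestant always picks door 0
        goat_doors = [d for d in (1, 2) if d != prize]
        opened = rng.choice(goat_doors)              # host opens a non-chosen goat door
        if opened == 2:
            opened_2 += 1
            stay_wins_given_2 += (prize == 0)
    return stay_wins_given_2 / opened_2

print("P(prize behind chosen door | host opened door 2):",
      round(monty_hall(), 3))                        # about 1/3, so switching wins ~2/3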
|