| id | title | categories | abstract |
|---|---|---|---|
cs/0209021
|
Activities, Context and Ubiquitous Computing
|
cs.IR
|
Context and context-awareness provide computing environments with the
ability to usefully adapt the services or information they provide. It is the
ability to implicitly sense and automatically derive a user's needs that
separates context-aware applications from traditionally designed applications,
and this makes them more attentive, responsive, and aware of their user's
identity and environment. This paper argues that context-aware
applications capable of supporting complex, cognitive activities can be built
from a model of context called Activity-Centric Context. A conceptual model of
Activity-Centric Context is presented. The model is illustrated via a detailed
example.
|
cs/0209022
|
A Comparison of Different Cognitive Paradigms Using Simple Animats in a
Virtual Laboratory, with Implications to the Notion of Cognition
|
cs.AI
|
In this thesis I present a virtual laboratory which implements five different
models for controlling animats: a rule-based system, a behaviour-based system,
a concept-based system, a neural network, and a Braitenberg architecture.
Through different experiments, I compare the performance of the models and
conclude that there is no "best" model, since different models are better for
different things in different contexts.
The models I chose, although quite simple, represent different approaches for
studying cognition. Using the results as an empirical philosophical aid,
I note that there is no "best" approach for studying cognition, since
all approaches have advantages and disadvantages, because they study
different aspects of cognition from different contexts. This has implications
for current debates on "proper" approaches for cognition: all approaches are a
bit proper, but none will be "proper enough". I offer remarks on the notion of
cognition, abstracting from all the approaches used to study it, and propose a
simple classification for different types of cognition.
|
cs/0209030
|
Extremal Optimization: an Evolutionary Local-Search Algorithm
|
cs.NE cs.AI
|
A recently introduced general-purpose heuristic for finding high-quality
solutions for many hard optimization problems is reviewed. The method is
inspired by recent progress in understanding far-from-equilibrium phenomena in
terms of {\em self-organized criticality,} a concept introduced to describe
emergent complexity in physical systems. This method, called {\em extremal
optimization,} successively replaces the value of extremely undesirable
variables in a sub-optimal solution with new, random ones. Large,
avalanche-like fluctuations in the cost function self-organize from this
dynamics, effectively scaling barriers to explore local optima in distant
neighborhoods of the configuration space while eliminating the need to tune
parameters. Drawing upon models used to simulate the dynamics of granular
media, evolution, or geology, extremal optimization complements approximation
methods inspired by equilibrium statistical physics, such as {\em simulated
annealing}. It may be but one example of applying new insights into {\em
non-equilibrium phenomena} systematically to hard optimization problems. This
method is widely applicable and so far has proved competitive with -- and even
superior to -- more elaborate general-purpose heuristics on testbeds of
constrained optimization problems with up to $10^5$ variables, such as
bipartitioning, coloring, and satisfiability. Analysis of a suitable model
predicts the only free parameter of the method in accordance with all
experimental results.
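The replace-the-worst-variable step described above can be sketched in a few lines. The graph two-colouring instance, the fitness definition, and all parameter values below are illustrative assumptions for a toy demonstration, not the paper's benchmark setup:

```python
import random

def eo_two_coloring(edges, n, steps=5000, seed=1):
    # Basic extremal optimization sketch: at each step, find the variable
    # with the worst local fitness (most monochromatic incident edges)
    # and replace its value with a new, random one.
    rng = random.Random(seed)
    colour = [rng.randint(0, 1) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def conflicts(i):
        return sum(colour[i] == colour[j] for j in adj[i])

    def cost():
        # Each monochromatic edge is counted from both endpoints.
        return sum(conflicts(i) for i in range(n)) // 2

    best, best_colour = cost(), colour[:]
    for _ in range(steps):
        worst = max(range(n), key=conflicts)   # extremely undesirable variable
        colour[worst] = rng.randint(0, 1)      # replace with a random value
        c = cost()
        if c < best:
            best, best_colour = c, colour[:]
    return best, best_colour

# An even cycle is two-colourable, so the best cost found should reach 0.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
best, _ = eo_two_coloring(cycle, 6)
```

The full tau-EO variant ranks all variables by fitness and picks the k-th worst with probability proportional to k^(-tau); the greedy form above is the simplest instance of the idea.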
|
cs/0210004
|
Revising Partially Ordered Beliefs
|
cs.AI
|
This paper deals with the revision of partially ordered beliefs. It proposes
a semantic representation of epistemic states by partial pre-orders on
interpretations and a syntactic representation by partially ordered belief
bases. Two revision operations, the revision stemming from the history of
observations and the possibilistic revision, both originally defined when the
epistemic state is represented by a total pre-order, are generalized at the
semantic level to the case of a partial pre-order on interpretations, and at
the syntactic level to the case of a partially ordered belief base. The two
representations are shown to be equivalent under both revision operations.
|
cs/0210005
|
Positive time fractional derivative
|
cs.CE
|
In mathematical modeling of the non-squared frequency-dependent diffusions,
also known as the anomalous diffusions, it is desirable to have a positive real
Fourier transform for the time derivative of arbitrary fractional or odd
integer order. The Fourier transform of the fractional time derivative in the
Riemann-Liouville and Caputo senses, however, involves a complex power function
of the fractional order. In this study, a positive time derivative of
fractional or odd integer order is introduced to respect the positivity in
modeling the anomalous diffusions.
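Restating the standard Fourier symbol that the abstract refers to (a textbook result for a derivative of order $\alpha$ with lower terminal $-\infty$, not anything specific to this paper):

```latex
\mathcal{F}\!\left\{ D_t^{\alpha} f \right\}(\omega)
  = (\mathrm{i}\omega)^{\alpha}\,\hat{f}(\omega),
\qquad
(\mathrm{i}\omega)^{\alpha}
  = |\omega|^{\alpha}\left[\cos\tfrac{\alpha\pi}{2}
    + \mathrm{i}\,\operatorname{sgn}(\omega)\sin\tfrac{\alpha\pi}{2}\right].
```

The real part $|\omega|^{\alpha}\cos(\alpha\pi/2)$ is negative for $1 < \alpha < 3$, which is the lack of positivity that motivates the derivative introduced here.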
|
cs/0210007
|
Compilability of Abduction
|
cs.AI cs.CC
|
Abduction is one of the most important forms of reasoning; it has been
successfully applied to several practical problems such as diagnosis. In this
paper we investigate whether the computational complexity of abduction can be
reduced by an appropriate use of preprocessing. This is motivated by the fact
that part of the data of the problem (namely, the set of all possible
assumptions and the theory relating assumptions and manifestations) is often
known before the rest of the problem. We show some complexity
results about abduction when compilation is allowed.
|
cs/0210009
|
On the Cell-based Complexity of Recognition of Bounded Configurations by
Finite Dynamic Cellular Automata
|
cs.CC cs.CV
|
This paper studies the complexity of recognition of classes of bounded
configurations by a generalization of conventional cellular automata (CA) --
finite dynamic cellular automata (FDCA). Inspired by the CA-based models of
biological and computer vision, this study attempts to derive the properties of
a complexity measure and of the classes of input configurations that make it
beneficial to realize the recognition via a two-layered automaton as compared
to a one-layered automaton. A formalized model of an image pattern recognition
task is utilized to demonstrate that the derived conditions can be satisfied
for a non-empty set of practical problems.
|
cs/0210012
|
Selection of future events from a time series in relation to estimations
of forecasting uncertainty
|
cs.NE
|
A new general procedure for a priori selection of more predictable events
from a time series of an observed variable is proposed. The procedure is
applicable to time series that contain different types of events featuring
significantly different predictability, in other words, to heteroskedastic
time series. A priori selection of future events according to the expected
uncertainty of their forecasts may be helpful for making practical decisions.
The procedure involves first creating two neural-network-based forecasting
models, one aimed at predicting the conditional mean and the other the
conditional dispersion, and then deriving a rule for sorting future events
into groups of more and less predictable ones. The method is demonstrated
and tested on a computer-generated time series, and then applied to a
real-world time series, the Dow Jones Industrial Average index.
|
cs/0210018
|
User software for the next generation
|
cs.GR cs.CE
|
New generations of neutron scattering sources and instrumentation are
providing challenges in data handling for user software. Time-of-Flight
instruments used at pulsed sources typically produce hundreds or thousands of
channels of data for each detector segment. New instruments are being designed
with thousands to hundreds of thousands of detector segments. High intensity
neutron sources make possible parametric studies and texture studies which
further increase data handling requirements. The Integrated Spectral Analysis
Workbench (ISAW) software developed at Argonne handles large numbers of spectra
simultaneously while providing operations to reduce, sort, combine and export
the data. It includes viewers to inspect the data in detail in real time. ISAW
uses existing software components and packages where feasible and takes
advantage of the excellent support for user interface design and network
communication in Java. The included scripting language simplifies repetitive
operations for analyzing many files related to a given experiment. Recent
additions to ISAW include a contour view, a time-slice table view, routines for
finding and fitting peaks in data, and support for data from other facilities
using the NeXus format. In this paper, I give an overview of features and
planned improvements of ISAW. Details of some of the improvements are covered
in other presentations at this conference.
|
cs/0210023
|
Geometric Aspects of Multiagent Systems
|
cs.MA cs.AI
|
Recent advances in Multiagent Systems (MAS) and Epistemic Logic within
Distributed Systems Theory have used various combinatorial structures that
model both the geometry of the systems and the Kripke model structure of models
for the logic. Examining one of the simpler versions of these models,
interpreted systems, and the related Kripke semantics of the logic $S5_n$ (an
epistemic logic with $n$ agents), the similarities with the geometric /
homotopy-theoretic structure of groupoid atlases are striking. These latter
objects arise in problems within algebraic K-theory, an area of algebra linked
to the study of decomposition and normal form theorems in linear algebra. They
have a natural well structured notion of path and constructions of path
objects, etc., that yield a rich homotopy theory.
|
cs/0210025
|
An Algorithm for Pattern Discovery in Time Series
|
cs.LG cs.CL
|
We present a new algorithm for discovering patterns in time series and other
sequential data. We exhibit a reliable procedure for building the minimal set
of hidden, Markovian states that is statistically capable of producing the
behavior exhibited in the data -- the underlying process's causal states.
Unlike conventional methods for fitting hidden Markov models (HMMs) to data,
our algorithm makes no assumptions about the process's causal architecture (the
number of hidden states and their transition structure), but rather infers it
from the data. It starts with assumptions of minimal structure and introduces
complexity only when the data demand it. Moreover, the causal states it infers
have important predictive optimality properties that conventional HMM states
lack. We introduce the algorithm, review the theory behind it, prove its
asymptotic reliability, use large deviation theory to estimate its rate of
convergence, and compare it to other algorithms which also construct HMMs from
data. We also illustrate its behavior on an example process, and report
selected numerical results from an implementation.
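The core idea of grouping histories by their conditional next-symbol distributions can be illustrated with a much-simplified sketch (fixed history length, naive tolerance comparison). The function name and parameters are illustrative, and this is not the authors' full state-splitting reconstruction algorithm:

```python
from collections import defaultdict, Counter

def causal_state_partition(seq, hist_len=2, tol=0.1):
    # Count which symbol follows each length-hist_len history.
    nxt = defaultdict(Counter)
    for i in range(len(seq) - hist_len):
        nxt[seq[i:i + hist_len]][seq[i + hist_len]] += 1
    # Empirical next-symbol distribution for each observed history.
    dists = {h: {s: c / sum(cnt.values()) for s, c in cnt.items()}
             for h, cnt in nxt.items()}
    # Group histories whose distributions agree within tol:
    # each group approximates one causal state.
    states = []
    for h, d in sorted(dists.items()):
        for state in states:
            rep = dists[state[0]]
            keys = set(d) | set(rep)
            if all(abs(d.get(k, 0.0) - rep.get(k, 0.0)) <= tol for k in keys):
                state.append(h)
                break
        else:
            states.append([h])
    return states
```

For the alternating sequence "0101..." the histories "01" and "10" predict different next symbols, so two states emerge; a constant sequence yields a single state.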
|
cs/0210026
|
Encoding a Taxonomy of Web Attacks with Different-Length Vectors
|
cs.CR cs.AI
|
Web attacks, i.e. attacks exclusively using the HTTP protocol, are rapidly
becoming one of the fundamental threats for information systems connected to
the Internet. When the attacks suffered by web servers through the years are
analyzed, it is observed that most of them are very similar, using a reduced
number of attacking techniques. It is generally agreed that classification can
help designers and programmers to better understand attacks and build more
secure applications. As an effort in this direction, a new taxonomy of web
attacks is proposed in this paper, with the objective of obtaining a
practically useful reference framework for security applications. The use of
the taxonomy is illustrated by means of real-world, multi-platform web attack
examples. Along with this taxonomy, important features of each attack category
are discussed. A suitable semantic-dependent web attack encoding scheme is
defined that uses different-length vectors. Possible applications are
described, which might benefit from this taxonomy and encoding scheme, such as
intrusion detection systems and application firewalls.
|
cs/0210027
|
A uniform approach to logic programming semantics
|
cs.AI cs.LO
|
Part of the theory of logic programming and nonmonotonic reasoning concerns
the study of fixed-point semantics for these paradigms. Several different
semantics have been proposed during the last two decades, and some have been
more successful and acknowledged than others. The rationales behind those
various semantics have been manifold, depending on one's point of view, which
may be that of a programmer or inspired by commonsense reasoning, and
consequently the constructions which lead to these semantics are technically
very diverse, and the exact relationships between them have not yet been fully
understood. In this paper, we present a conceptually new method, based on level
mappings, which allows us to provide uniform characterizations of different
semantics for logic programs. We display our approach by giving new and
uniform characterizations of some of the major semantics, in particular of
the least model semantics for definite programs, of the Fitting semantics, and
of the well-founded semantics. A novel characterization of the weakly perfect
model semantics will also be provided.
|
cs/0210028
|
Equivalences Among Aggregate Queries with Negation
|
cs.DB cs.LO
|
Query equivalence is investigated for disjunctive aggregate queries with
negated subgoals, constants and comparisons. A full characterization of
equivalence is given for the aggregation functions count, max, sum, prod,
toptwo and parity. A related problem is that of determining, for a given
natural number N, whether two given queries are equivalent over all databases
with at most N constants. We call this problem bounded equivalence. A complete
characterization of decidability of bounded equivalence is given. In
particular, it is shown that this problem is decidable for all the above
aggregation functions as well as for count distinct and average. For
quasilinear queries (i.e., queries where predicates that occur positively are
not repeated) it is shown that equivalence can be decided in polynomial time
for the aggregation functions count, max, sum, parity, prod, toptwo and
average. A similar result holds for count distinct provided that a few
additional conditions hold. The results are couched in terms of abstract
characteristics of aggregation functions, and new proof techniques are used.
Finally, the results above also imply that equivalence, under bag-set
semantics, is decidable for non-aggregate queries with negation.
|
cs/0210030
|
Intelligence and Cooperative Search by Coupled Local Minimizers
|
cs.AI cs.MA cs.NE
|
We show how coupling of local optimization processes can lead to better
solutions than multi-start local optimization consisting of independent runs.
This is achieved by minimizing the average energy cost of the ensemble, subject
to synchronization constraints between the state vectors of the individual
local minimizers. From an augmented Lagrangian which incorporates the
synchronization constraints both as soft and hard constraints, a network is
derived wherein the local minimizers interact and exchange information through
the synchronization constraints. From the viewpoint of neural networks, the
array can be considered as a Lagrange programming network for continuous
optimization and as a cellular neural network (CNN). The penalty weights
associated with the soft state synchronization constraints follow from the
solution to a linear program. This expresses that the energy cost of the
ensemble should maximally decrease. In this way successful local minimizers can
implicitly impose their state on the others through a mechanism of master-slave
dynamics, resulting in a cooperative search mechanism. Improved information
spreading within the ensemble is obtained by applying the concept of
small-world networks. This work suggests, in an interdisciplinary context, the
importance of information exchange and state synchronization within ensembles,
with regard to issues such as evolution, collective behaviour, optimality and intelligence.
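The coupling idea can be sketched as independent gradient descents with a soft synchronization pull towards the ensemble mean. The double-well cost, starting points, and all coefficients below are toy assumptions; this omits the paper's augmented Lagrangian, hard constraints, and Lagrange programming network:

```python
def coupled_descent(grad, x0s, eta=0.05, mu=2.0, iters=1000):
    # Each local minimizer follows its own gradient plus a soft pull
    # towards the ensemble mean (a crude stand-in for the
    # synchronization constraints between state vectors).
    xs = list(x0s)
    for _ in range(iters):
        m = sum(xs) / len(xs)
        xs = [x - eta * (grad(x) + mu * (x - m)) for x in xs]
    return xs

# Double-well cost f(x) = (x^2 - 1)^2 with minima at x = -1 and x = +1.
grad = lambda x: 4 * x * (x * x - 1)
xs = coupled_descent(grad, [0.3, 0.9, 1.2, -0.5])
```

With these starting points the coupled ensemble synchronizes on the x = +1 well, whereas an uncoupled run started at -0.5 would descend into the x = -1 well: the successful minimizers implicitly impose their state on the straggler.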
|
cs/0211003
|
Evaluation of the Performance of the Markov Blanket Bayesian Classifier
Algorithm
|
cs.LG
|
The Markov Blanket Bayesian Classifier is a recently proposed algorithm for
construction of probabilistic classifiers. This paper presents an empirical
comparison of the MBBC algorithm with three other Bayesian classifiers: Naive
Bayes, Tree-Augmented Naive Bayes and a general Bayesian network. All of these
are implemented using the K2 framework of Cooper and Herskovits. The
classifiers are compared in terms of their performance (using simple accuracy
measures and ROC curves) and speed, on a range of standard benchmark data sets.
It is concluded that MBBC is competitive in terms of speed and accuracy with
the other algorithms considered.
|
cs/0211004
|
The DLV System for Knowledge Representation and Reasoning
|
cs.AI cs.LO cs.PL
|
This paper presents the DLV system, which is widely considered the
state-of-the-art implementation of disjunctive logic programming, and addresses
several aspects. As for problem solving, we provide a formal definition of its
kernel language, function-free disjunctive logic programs (also known as
disjunctive datalog), extended by weak constraints, which are a powerful tool
to express optimization problems. We then illustrate the usage of DLV as a tool
for knowledge representation and reasoning, describing a new declarative
programming methodology which allows one to encode complex problems (up to
$\Delta^P_3$-complete problems) in a declarative fashion. On the foundational
side, we provide a detailed analysis of the computational complexity of the
language of DLV, and by deriving new complexity results we chart a complete
picture of the complexity of this language and important fragments thereof.
Furthermore, we illustrate the general architecture of the DLV system which
has been influenced by these results. As for applications, we overview
application front-ends which have been developed on top of DLV to solve
specific knowledge representation tasks, and we briefly describe the main
international projects investigating the potential of the system for industrial
exploitation. Finally, we report on thorough experimentation and
benchmarking, which has been carried out to assess the efficiency of the
system. The experimental results confirm the solidity of DLV and highlight its
potential for emerging application areas like knowledge management and
information integration.
|
cs/0211005
|
Prosody Based Co-analysis for Continuous Recognition of Coverbal
Gestures
|
cs.CV cs.HC
|
Although speech and gesture recognition have been studied extensively, all the
successful attempts at combining them in a unified framework have been
semantically motivated, e.g., by keyword-gesture co-occurrence. Such formulations
inherited the complexity of natural language processing. This paper presents a
Bayesian formulation that uses a phenomenon of gesture and speech articulation
for improving accuracy of automatic recognition of continuous coverbal
gestures. The prosodic features from the speech signal were coanalyzed with the
visual signal to learn the prior probability of co-occurrence of the prominent
spoken segments with the particular kinematical phases of gestures. It was
found that the above co-analysis helps in detecting and disambiguating visually
small gestures, which subsequently improves the rate of continuous gesture
recognition. The efficacy of the proposed approach was demonstrated on a large
database collected from the weather channel broadcast. This formulation opens
new avenues for bottom-up frameworks of multimodal integration.
|
cs/0211006
|
Maximizing the Margin in the Input Space
|
cs.AI cs.LG
|
We propose a novel criterion for support vector machine learning: maximizing
the margin in the input space, not in the feature (Hilbert) space. This
criterion is a discriminative version of the principal curve proposed by Hastie
et al. The criterion is appropriate in particular when the input space is
already a well-designed feature space with rather small dimensionality. The
definition of the margin is generalized in order to represent prior knowledge.
The derived algorithm consists of two alternating steps to estimate the dual
parameters. Firstly, the parameters are initialized by the original SVM. Then
one set of parameters is updated by a Newton-like procedure, and the other set is
updated by solving a quadratic programming problem. The algorithm converges in
a few steps to a local optimum under mild conditions, and it preserves the
sparsity of support vectors. Although the complexity of calculating temporal
variables increases, the complexity of solving the quadratic programming problem
at each step does not change. It is also shown that the original SVM can be
seen as a special case. We further derive a simplified algorithm which enables
us to use the existing code for the original SVM.
|
cs/0211007
|
Approximating Incomplete Kernel Matrices by the em Algorithm
|
cs.LG
|
In biological data, it is often the case that observed data are available
only for a subset of samples. When a kernel matrix is derived from such data,
we have to leave the entries for unavailable samples as missing. In this paper,
we make use of a parametric model of kernel matrices, and estimate missing
entries by fitting the model to existing entries. The parametric model is
created as a set of spectral variants of a complete kernel matrix derived from
another information source. For model fitting, we adopt the em algorithm based
on the information geometry of positive definite matrices. We will report
promising results on bacteria clustering experiments using two marker
sequences: 16S and gyrB.
|
cs/0211008
|
Can the whole brain be simpler than its "parts"?
|
cs.AI
|
This is the first in a series of connected papers discussing the problem of a
dynamically reconfigurable universal learning neurocomputer that could serve as
a computational model for the whole human brain. The whole series is entitled
"The Brain Zero Project. My Brain as a Dynamically Reconfigurable Universal
Learning Neurocomputer." (For more information visit the website
www.brain0.com.) This introductory paper is concerned with general methodology.
Its main goal is to explain why it is critically important for both neural
modeling and cognitive modeling to pay much attention to the basic requirements
of the whole brain as a complex computing system. The author argues that it can
be easier to develop an adequate computational model for the whole
"unprogrammed" (untrained) human brain than to find adequate formal
representations of some nontrivial parts of the brain's performance. (In the same
way as, for example, it is easier to describe the behavior of a complex
analytical function than the behavior of its real and/or imaginary part.) The
"curse of dimensionality" that plagues purely phenomenological ("brainless")
cognitive theories is a natural penalty for an attempt to represent
insufficiently large parts of the brain's performance in a state space of
insufficiently high dimensionality. A "partial" modeler encounters "Catch 22."
An attempt to simplify a cognitive problem by artificially reducing its
dimensionality makes the problem more difficult.
|
cs/0211014
|
Vanquishing the XCB Question: The Methodological Discovery of the Last
Shortest Single Axiom for the Equivalential Calculus
|
cs.LO cs.AI
|
With the inclusion of an effective methodology, this article answers in
detail a question that, for a quarter of a century, remained open despite
intense study by various researchers. Is the formula XCB =
e(x,e(e(e(x,y),e(z,y)),z)) a single axiom for the classical equivalential
calculus when the rules of inference consist of detachment (modus ponens) and
substitution? Where the function e represents equivalence, this calculus can be
axiomatized quite naturally with the formulas e(x,x), e(e(x,y),e(y,x)), and
e(e(x,y),e(e(y,z),e(x,z))), which correspond to reflexivity, symmetry, and
transitivity, respectively. (We note that e(x,x) is dependent on the other two
axioms.) Heretofore, thirteen shortest single axioms for classical equivalence
of length eleven had been discovered, and XCB was the only remaining formula of
that length whose status was undetermined. To show that XCB is indeed such a
single axiom, we focus on the rule of condensed detachment, a rule that
captures detachment together with an appropriately general, but restricted,
form of substitution. The proof we present in this paper consists of
twenty-five applications of condensed detachment, concluding with the deduction
of transitivity followed by a deduction of symmetry. We also discuss some
factors that may explain in part why XCB resisted relinquishing its treasure
for so long. Our approach relied on diverse strategies applied by the automated
reasoning program OTTER. Thus ends the search for shortest single axioms for
the equivalential calculus.
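Condensed detachment as described (detachment combined with a most general unifier) can be sketched directly. The term encoding and example formulas are illustrative; this omits the occurs check and is in no way OTTER's implementation:

```python
def walk(t, s):
    # Follow variable bindings in substitution s.
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Return a most general unifier extending s, or None on clash.
    # (Occurs check omitted for brevity.)
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str):
        return {**s, a: b}
    if isinstance(b, str):
        return {**s, b: a}
    if len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def subst(t, s):
    t = walk(t, s)
    return tuple(subst(x, s) for x in t) if isinstance(t, tuple) else t

def condensed_detachment(major, minor):
    # major is e(A, B): unify the minor premise with A and return B under
    # the resulting unifier.  Premises are assumed renamed apart.
    _, a, b = major
    s = unify(a, minor, {})
    return None if s is None else subst(b, s)

# Example: detach symmetry e(e(u,v),e(v,u)) against transitivity
# e(e(x,y),e(e(y,z),e(x,z))).
transitivity = ('e', ('e', 'x', 'y'), ('e', ('e', 'y', 'z'), ('e', 'x', 'z')))
symmetry = ('e', ('e', 'u', 'v'), ('e', 'v', 'u'))
result = condensed_detachment(transitivity, symmetry)
```

Terms are tuples headed by the function symbol 'e', with variables as plain strings; the unifier instantiates x and y from the minor premise and the conclusion is the instantiated B.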
|
cs/0211015
|
XCB, the Last of the Shortest Single Axioms for the Classical
Equivalential Calculus
|
cs.LO cs.AI
|
It has long been an open question whether the formula XCB = EpEEEpqErqr is,
with the rules of substitution and detachment, a single axiom for the classical
equivalential calculus. This paper answers that question affirmatively, thus
completing a search for all such eleven-symbol single axioms that began seventy
years ago.
|
cs/0211017
|
Probabilistic Parsing Strategies
|
cs.CL
|
We present new results on the relation between purely symbolic context-free
parsing strategies and their probabilistic counterparts. Such parsing
strategies are seen as constructions of push-down devices from grammars. We
show that preservation of probability distribution is possible under two
conditions, viz. the correct-prefix property and the property of strong
predictiveness. These results generalize existing results in the literature
that were obtained by considering parsing strategies in isolation. From our
general results we also derive negative results on so-called generalized LR
parsing.
|
cs/0211020
|
Monadic Datalog and the Expressive Power of Languages for Web
Information Extraction
|
cs.DB
|
Research on information extraction from Web pages (wrapping) has seen much
activity recently (particularly systems implementations), but little work has
been done on formally studying the expressiveness of the formalisms proposed or
on the theoretical foundations of wrapping. In this paper, we first study
monadic datalog over trees as a wrapping language. We show that this simple
language is equivalent to monadic second order logic (MSO) in its ability to
specify wrappers. We believe that MSO has the right expressiveness required for
Web information extraction and propose MSO as a yardstick for evaluating and
comparing wrappers. Along the way, several other results on the complexity of
query evaluation and query containment for monadic datalog over trees are
established, and a simple normal form for this language is presented. Using the
above results, we subsequently study the kernel fragment Elog$^-$ of the Elog
wrapping language used in the Lixto system (a visual wrapper generator).
Curiously, Elog$^-$ exactly captures MSO, yet is easier to use. Indeed,
programs in this language can be entirely visually specified.
|
cs/0211023
|
SkyQuery: A WebService Approach to Federate Databases
|
cs.DB cs.CE
|
Traditional science searched for new objects and phenomena that led to
discoveries. Tomorrow's science will combine the large pool of information
in scientific archives to make discoveries. Scientists are currently keen
to federate the existing scientific databases. The major challenge in
building a federation of these autonomous and heterogeneous databases is
system integration. Ineffective integration will result in defunct
federations and underutilized scientific data.
Astronomy, in particular, has many autonomous archives spread over the
Internet. It is now seeking to federate these, with minimal effort, into a
Virtual Observatory that will solve complex distributed computing tasks such as
answering federated spatial join queries.
In this paper, we present SkyQuery, a successful prototype of an evolving
federation of astronomy archives. It interoperates using the emerging Web
services standard. We describe the SkyQuery architecture and show how it
efficiently evaluates a probabilistic federated spatial join query.
|
cs/0211027
|
Adaptive Development of Koncepts in Virtual Animats: Insights into the
Development of Knowledge
|
cs.AI
|
As part of our effort to study the evolution and development of
cognition, we present results derived from synthetic experiments in a
virtual laboratory where animats develop koncepts adaptively and ground their
meaning through action. We introduce the term "koncept" to avoid confusions and
ambiguity derived from the wide use of the word "concept". We present the
models which our animats use for abstracting koncepts from perceptions,
plastically adapt koncepts, and associate koncepts with actions. On a more
philosophical vein, we suggest that knowledge is a property of a cognitive
system, not an element, and therefore observer-dependent.
|
cs/0211028
|
Thinking Adaptive: Towards a Behaviours Virtual Laboratory
|
cs.AI cs.MA
|
In this paper we name some of the advantages of virtual laboratories; and
propose that a Behaviours Virtual Laboratory should be useful for both
biologists and AI researchers, offering a new perspective for understanding
adaptive behaviour. We present our development of a Behaviours Virtual
Laboratory, which at this stage is focused on action selection, and show some
experiments illustrating the properties of our proposal, which can be accessed
via the Internet.
|
cs/0211029
|
Modelling intracellular signalling networks using behaviour-based
systems and the blackboard architecture
|
cs.MA q-bio.CB
|
This paper proposes to model the intracellular signalling networks using a
fusion of behaviour-based systems and the blackboard architecture. By virtue of
this fusion, our model, named Cellulat, allows us to take into account two
essential aspects of intracellular signalling networks: (1) the cognitive
capabilities of certain types of network components and (2) the high level of
spatial organization of these networks. A simple example of
modelling Ca2+ signalling pathways with Cellulat is presented here. An
intracellular signalling virtual laboratory is being developed from Cellulat.
|
cs/0211030
|
Integration of Computational Techniques for the Modelling of Signal
Transduction
|
cs.MA q-bio.CB
|
A cell can be seen as an adaptive autonomous agent or as a society of
adaptive autonomous agents, where each can exhibit a particular behaviour
depending on its cognitive capabilities. We present an intracellular signalling
model obtained by integrating several computational techniques into an
agent-based paradigm. Cellulat, the model, takes into account two essential
aspects of the intracellular signalling networks: cognitive capacities and a
spatial organization. Exemplifying the functionality of the system by modelling
the EGFR signalling pathway, we discuss the methodology as well as the purposes
of an intracellular signalling virtual laboratory, presently under development.
|
cs/0211031
|
Redundancy in Logic I: CNF Propositional Formulae
|
cs.AI cs.CC
|
A knowledge base is redundant if it contains parts that can be inferred from
the rest of it. We study the problem of checking whether a CNF formula (a set
of clauses) is redundant, that is, it contains clauses that can be derived from
the other ones. Any CNF formula can be made irredundant by deleting some of its
clauses: what results is an irredundant equivalent subset (I.E.S.) We study the
complexity of some related problems: verification, checking existence of a
I.E.S. with a given size, checking necessary and possible presence of clauses
in I.E.S.'s, and uniqueness. We also consider the problem of redundancy with
different definitions of equivalence.
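The redundancy check described in the abstract reduces to entailment: a clause C is redundant in a formula F exactly when F without C, together with the unit negation of C, is unsatisfiable. A minimal sketch of this reduction (not the authors' procedure), using a toy DPLL solver with clauses as collections of signed integer literals:

```python
def satisfiable(clauses):
    """Tiny DPLL satisfiability check. Clauses are collections of nonzero
    integer literals; -n denotes the negation of variable n."""
    clauses = [set(c) for c in clauses]
    if any(len(c) == 0 for c in clauses):
        return False                     # empty clause: conflict
    if not clauses:
        return True                      # no clauses left: satisfied
    units = [next(iter(c)) for c in clauses if len(c) == 1]
    if units:                            # unit propagation
        lit = units[0]
        return satisfiable([c - {-lit} for c in clauses if lit not in c])
    lit = next(iter(clauses[0]))         # branch on an arbitrary literal
    return satisfiable(clauses + [{lit}]) or satisfiable(clauses + [{-lit}])

def is_redundant_clause(formula, clause):
    """A clause is redundant iff the rest of the formula entails it, i.e.
    the formula minus the clause, plus the negation of the clause, is UNSAT."""
    rest = [c for c in formula if c != clause]
    return not satisfiable(rest + [[-lit] for lit in clause])

# (x1 v x2) is derivable from the unit clauses (x1) and (x2)
F = [[1, 2], [1], [2]]
```

Deleting every redundant clause one at a time, re-checking after each deletion, yields an irredundant equivalent subset.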
|
cs/0211033
|
Propositional satisfiability in declarative programming
|
cs.LO cs.AI
|
The answer-set programming (ASP) paradigm is a way of using logic to solve search
problems. Given a search problem, to solve it one designs a theory in the logic
so that models of this theory represent problem solutions. To compute a
solution to a problem one needs to compute a model of the corresponding theory.
Several answer-set programming formalisms have been developed on the basis of
logic programming with the semantics of stable models. In this paper we show
that the logic of predicate calculus also gives rise to effective
implementations of the ASP paradigm, similar in spirit to logic programming
with stable model semantics and with a similar scope of applicability.
Specifically, we propose two logics based on predicate calculus as formalisms
for encoding search problems. We show that the expressive power of these logics
is given by the class NP-search. We demonstrate how to use them in programming
and develop computational tools for model finding. In the case of one of the
logics our techniques reduce the problem to that of propositional
satisfiability and allow one to use off-the-shelf satisfiability solvers. The
language of the other logic has more complex syntax and provides explicit means
to model some high-level constraints. For theories in this logic, we designed
our own solver that takes advantage of the expanded syntax. We present
experimental results demonstrating the computational effectiveness of the
overall approach.
|
cs/0211035
|
Monadic Style Control Constructs for Inference Systems
|
cs.AI cs.PL
|
Recent advances in the study and design of programming languages have
established a standard way of grounding the representation of computational
systems in category theory. These formal results led to a better understanding
of issues of control
and side-effects in functional and imperative languages. Another benefit is a
better way of modelling computational effects in logical frameworks. With this
analogy in mind, we embark on an investigation of inference systems based on
considering inference behaviour as a form of computation. We delineate a
categorical formalisation of control constructs in inference systems. This
representation emphasises the parallel between the modular articulation of the
categorical building blocks (triples) used to account for the inference
architecture and the modular composition of cognitive processes.
|
cs/0211038
|
Dynamic Adjustment of the Motivation Degree in an Action Selection
Mechanism
|
cs.AI
|
This paper presents a model for dynamic adjustment of the motivation degree,
using a reinforcement learning approach, in an action selection mechanism
previously developed by the authors. The learning takes place through the
modification of a parameter of the model for combining internal and external
stimuli. Experiments that show the claimed properties are presented,
using a VR simulation developed for such purposes. The importance of adaptation
by learning in action selection is also discussed.
|
cs/0211039
|
Action Selection Properties in a Software Simulated Agent
|
cs.AI
|
This article analyses the properties of the Internal Behaviour network, an
action selection mechanism previously proposed by the authors, with the aid of
a simulation developed for this purpose. A brief review of the Internal Behaviour
network is followed by the explanation of the implementation of the simulation.
Then, experiments are presented and discussed analysing the properties of the
action selection in the proposed model.
|
cs/0211040
|
A Model for Combination of External and Internal Stimuli in the Action
Selection of an Autonomous Agent
|
cs.AI
|
This paper proposes a model for the combination of external and internal
stimuli in the action selection of an autonomous agent, based on an action
selection mechanism previously proposed by the authors. This combination model
includes additive and multiplicative elements, which makes it possible to
incorporate new properties that enhance the action selection. A parameter a,
which is part of the proposed model, regulates the degree to which the observed
external behaviour of the entity depends on its internal states.
|
cs/0211041
|
An Approach to Automatic Indexing of Scientific Publications in High
Energy Physics for Database SPIRES HEP
|
cs.IR cs.DL
|
We introduce an approach to automatic indexing of e-prints based on a
pattern-matching technique making extensive use of an Associative Patterns
Dictionary (APD), developed by us. Entries in the APD consist of natural
language phrases with the same semantic interpretation as a set of keywords
from a controlled vocabulary. The method also makes it possible to recognize,
within e-prints, formulae written in TeX notation that might also appear as
keywords. We present an automatic indexing system, AUTEX, which we have applied
to keyword-index e-prints in selected areas of high energy physics (HEP), using
the DESY-HEPI thesaurus as a controlled vocabulary.
|
cs/0211042
|
Database Repairs and Analytic Tableaux
|
cs.DB cs.LO
|
In this article, we characterize in terms of analytic tableaux the repairs of
inconsistent relational databases, that is, databases that do not satisfy a
given set of integrity constraints. For this purpose we provide closing and
opening criteria for branches in tableaux that are built for database instances
and their integrity constraints. We use the tableaux-based characterization as
a basis for consistent query answering, that is, for retrieving from the
database answers to queries that are consistent with respect to the integrity
constraints.
|
cs/0212004
|
Minimal-Change Integrity Maintenance Using Tuple Deletions
|
cs.DB
|
We address the problem of minimal-change integrity maintenance in the context
of integrity constraints in relational databases. We assume that
integrity-restoration actions are limited to tuple deletions. We identify two
basic computational issues: repair checking (is a database instance a repair of
a given database?) and consistent query answers (is a tuple an answer to a
given query in every repair of a given database?). We study the computational
complexity of both problems, delineating the boundary between the tractable and
the intractable. We consider denial constraints, general functional and
inclusion dependencies, as well as key and foreign key constraints. Our results
shed light on the computational feasibility of minimal-change integrity
maintenance. The tractable cases should lead to practical implementations. The
intractability results highlight the inherent limitations of any integrity
enforcement mechanism, e.g., triggers or referential constraint actions, as a
way of performing minimal-change integrity maintenance.
|
cs/0212006
|
Use of openMosix for parallel I/O balancing on storage in Linux cluster
|
cs.DC cs.DB
|
In this paper I present some experiences with I/O for Linux clustering. In
particular, I illustrate the use of the openMosix package, a workload balancer
for processes running in a cluster of nodes. I describe some tests for
balancing the load of massive I/O storage processes in a cluster with four
nodes. This work was written for the proceedings of the workshop "Linux
cluster: the openMosix approach", held at CINECA, Bologna, Italy, on 28
November 2002.
|
cs/0212008
|
Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent
Space Alignment
|
cs.LG cs.AI
|
Nonlinear manifold learning from unorganized data points is a very
challenging unsupervised learning and data visualization problem with a great
variety of applications. In this paper we present a new algorithm for manifold
learning and nonlinear dimension reduction. Based on a set of unorganized data
points sampled with noise from the manifold, we represent the local geometry of
the manifold using tangent spaces learned by fitting an affine subspace in a
neighborhood of each data point. Those tangent spaces are aligned to give the
internal global coordinates of the data points with respect to the underlying
manifold by way of a partial eigendecomposition of the neighborhood connection
matrix. We present a careful error analysis of our algorithm and show that the
reconstruction errors are of second-order accuracy. We illustrate our algorithm
using curves and surfaces in both 2D/3D and higher-dimensional Euclidean
spaces, and 64-by-64 pixel face images
with various pose and lighting conditions. We also address several theoretical
and algorithmic issues for further research and improvements.
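The local fitting step described above can be sketched as follows: for each data point, an affine subspace is fitted to its neighbourhood via an SVD of the centred neighbour coordinates, whose top right singular vectors approximate the tangent space of the manifold at that point. This illustrates only the local step (the function name and brute-force neighbour search are ours), not the paper's full alignment algorithm:

```python
import numpy as np

def local_tangent_basis(X, i, k, d):
    """Approximate the manifold's tangent space at X[i]: fit an affine
    d-dimensional subspace to the k nearest neighbours of X[i] via SVD of
    the centred neighbourhood (rows of Vt = principal local directions)."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dists)[:k]              # k nearest points, including i
    centred = X[nbrs] - X[nbrs].mean(axis=0)
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return nbrs, Vt[:d]                       # orthonormal tangent basis

# points sampled with tiny noise from the curve (t, t^2, 0) embedded in 3-D
t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
X = np.c_[t, t**2, np.zeros_like(t)] + 1e-4 * rng.normal(size=(200, 3))
_, basis = local_tangent_basis(X, i=100, k=10, d=1)
```

The full algorithm then aligns these per-point tangent coordinates into global coordinates via a partial eigendecomposition, which is beyond this sketch.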
|
cs/0212010
|
JohnnyVon: Self-Replicating Automata in Continuous Two-Dimensional Space
|
cs.NE cs.CE
|
JohnnyVon is an implementation of self-replicating automata in continuous
two-dimensional space. Two types of particles drift about in a virtual liquid.
The particles are automata with discrete internal states but continuous
external relationships. Their internal states are governed by finite state
machines but their external relationships are governed by a simulated physics
that includes Brownian motion, viscosity, and spring-like attractive and
repulsive forces. The particles can be assembled into patterns that can encode
arbitrary strings of bits. We demonstrate that, if an arbitrary "seed" pattern
is put in a "soup" of separate individual particles, the pattern will replicate
by assembling the individual particles into copies of itself. We also show
that, given sufficient time, a soup of separate individual particles will
eventually spontaneously form self-replicating patterns. We discuss the
implications of JohnnyVon for research in nanotechnology, theoretical biology,
and artificial life.
|
cs/0212011
|
Mining the Web for Lexical Knowledge to Improve Keyphrase Extraction:
Learning from Labeled and Unlabeled Data
|
cs.LG cs.IR
|
Keyphrases are useful for a variety of purposes, including summarizing,
indexing, labeling, categorizing, clustering, highlighting, browsing, and
searching. The task of automatic keyphrase extraction is to select keyphrases
from within the text of a given document. Automatic keyphrase extraction makes
it feasible to generate keyphrases for the huge number of documents that do not
have manually assigned keyphrases. Good performance on this task has been
obtained by approaching it as a supervised learning problem. An input document
is treated as a set of candidate phrases that must be classified as either
keyphrases or non-keyphrases. To classify a candidate phrase as a keyphrase,
the most important features (attributes) appear to be the frequency and
location of the candidate phrase in the document. Recent work has demonstrated
that it is also useful to know the frequency of the candidate phrase as a
manually assigned keyphrase for other documents in the same domain as the given
document (e.g., the domain of computer science). Unfortunately, this
keyphrase-frequency feature is domain-specific (the learning process must be
repeated for each new domain) and training-intensive (good performance requires
a relatively large number of training documents in the given domain, with
manually assigned keyphrases). The aim of the work described here is to remove
these limitations. In this paper, I introduce new features that are derived by
mining lexical knowledge from a very large collection of unlabeled data,
consisting of approximately 350 million Web pages without manually assigned
keyphrases. I present experiments that show that the new features result in
improved keyphrase extraction, although they are neither domain-specific nor
training-intensive.
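The two document-internal features named above, the frequency and the location of a candidate phrase, can be sketched as follows (the tokenizer and function are illustrative stand-ins, not the paper's implementation):

```python
import re

def candidate_features(text, max_len=3):
    """For each candidate phrase (word n-gram, n <= max_len) compute the two
    classic features: frequency in the document and relative position of the
    first occurrence (0.0 = document start, 1.0 = document end)."""
    words = re.findall(r"[a-z]+", text.lower())
    feats = {}
    denom = max(len(words) - 1, 1)
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            if phrase not in feats:
                feats[phrase] = {"freq": 0, "first_pos": i / denom}
            feats[phrase]["freq"] += 1
    return feats

doc = "keyphrase extraction selects keyphrases from the text of a document"
feats = candidate_features(doc)
```

A supervised learner would then classify each candidate phrase as keyphrase or non-keyphrase from such feature vectors; the paper's contribution is to add corpus-derived features on top of these.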
|
cs/0212012
|
Unsupervised Learning of Semantic Orientation from a
Hundred-Billion-Word Corpus
|
cs.LG cs.IR
|
The evaluative character of a word is called its semantic orientation. A
positive semantic orientation implies desirability (e.g., "honest", "intrepid")
and a negative semantic orientation implies undesirability (e.g., "disturbing",
"superfluous"). This paper introduces a simple algorithm for unsupervised
learning of semantic orientation from extremely large corpora. The method
involves issuing queries to a Web search engine and using pointwise mutual
information to analyse the results. The algorithm is empirically evaluated
using a training corpus of approximately one hundred billion words -- the
subset of the Web that is indexed by the chosen search engine. Tested with
3,596 words (1,614 positive and 1,982 negative), the algorithm attains an
accuracy of 80%. The 3,596 test words include adjectives, adverbs, nouns, and
verbs. The accuracy is comparable with the results achieved by Hatzivassiloglou
and McKeown (1997), using a complex four-stage supervised learning algorithm
that is restricted to determining the semantic orientation of adjectives.
|
cs/0212013
|
Learning to Extract Keyphrases from Text
|
cs.LG cs.IR
|
Many academic journals ask their authors to provide a list of about five to
fifteen key words, to appear on the first page of each article. Since these key
words are often phrases of two or more words, we prefer to call them
keyphrases. There is a surprisingly wide variety of tasks for which keyphrases
are useful, as we discuss in this paper. Recent commercial software, such as
Microsoft's Word 97 and Verity's Search 97, includes algorithms that
automatically extract keyphrases from documents. In this paper, we approach the
problem of automatically extracting keyphrases from text as a supervised
learning task. We treat a document as a set of phrases, which the learning
algorithm must learn to classify as positive or negative examples of
keyphrases. Our first set of experiments applies the C4.5 decision tree
induction algorithm to this learning task. The second set of experiments
applies the GenEx algorithm to the task. We developed the GenEx algorithm
specifically for this task. The third set of experiments examines the
performance of GenEx on the task of metadata generation, relative to the
performance of Microsoft's Word 97. The fourth and final set of experiments
investigates the performance of GenEx on the task of highlighting, relative to
Verity's Search 97. The experimental results support the claim that a
specialized learning algorithm (GenEx) can generate better keyphrases than a
general-purpose learning algorithm (C4.5) and the non-learning algorithms that
are used in commercial software (Word 97 and Search 97).
|
cs/0212014
|
Extraction of Keyphrases from Text: Evaluation of Four Algorithms
|
cs.LG cs.IR
|
This report presents an empirical evaluation of four algorithms for
automatically extracting keywords and keyphrases from documents. The four
algorithms are compared using five different collections of documents. For each
document, we have a target set of keyphrases, which were generated by hand. The
target keyphrases were generated for human readers; they were not tailored for
any of the four keyphrase extraction algorithms. Each of the algorithms was
evaluated by the degree to which the algorithm's keyphrases matched the
manually generated keyphrases. The four algorithms were (1) the AutoSummarize
feature in Microsoft's Word 97, (2) an algorithm based on Eric Brill's
part-of-speech tagger, (3) the Summarize feature in Verity's Search 97, and (4)
NRC's Extractor algorithm. For all five document collections, NRC's Extractor
yields the best match with the manually generated keyphrases.
|
cs/0212015
|
Answering Subcognitive Turing Test Questions: A Reply to French
|
cs.CL
|
Robert French has argued that a disembodied computer is incapable of passing
a Turing Test that includes subcognitive questions. Subcognitive questions are
designed to probe the network of cultural and perceptual associations that
humans naturally develop as we live, embodied and embedded in the world. In
this paper, I show how it is possible for a disembodied computer to answer
subcognitive questions appropriately, contrary to French's claim. My approach
to answering subcognitive questions is to use statistical information extracted
from a very large collection of text. In particular, I show how it is possible
to answer a sample of subcognitive questions taken from French, by issuing
queries to a search engine that indexes about 350 million Web pages. This
simple algorithm may shed light on the nature of human (sub-) cognition, but
the scope of this paper is limited to demonstrating that French is mistaken: a
disembodied computer can answer subcognitive questions.
|
cs/0212017
|
Classes of Spatiotemporal Objects and Their Closure Properties
|
cs.DB
|
We present a data model for spatio-temporal databases. In this model
spatio-temporal data is represented as a finite union of objects described by
means of a spatial reference object, a temporal object and a geometric
transformation function that determines the change or movement of the reference
object in time.
We define a number of practically relevant classes of spatio-temporal
objects, and give complete results concerning closure under Boolean set
operators for these classes. Since only few classes are closed under all set
operators, we suggest an extension of the model, which leads to better closure
properties, and therefore increased practical applicability. We also discuss a
normal form for this extended data model.
|
cs/0212018
|
Real numbers having ultimately periodic representations in abstract
numeration systems
|
cs.CC cs.CL
|
Using a genealogically ordered infinite regular language, we know how to
represent an interval of R. Numbers having an ultimately periodic
representation play a special role in classical numeration systems. The aim of
this paper is to characterize the numbers having an ultimately periodic
representation in generalized systems built on a regular language. The
syntactical properties of these words are also investigated. Finally, we show
the equivalence of the classical "theta"-expansions with our generalized
representations in some special case related to a Pisot number "theta".
|
cs/0212019
|
Thinking, Learning, and Autonomous Problem Solving
|
cs.NE
|
Ever increasing computational power will require methods for automatic
programming. We present an alternative to genetic programming, based on a
general model of thinking and learning. The advantage is that evolution takes
place in the space of constructs and can thus exploit the mathematical
structures of this space. The model is formalized, and a macro language is
presented which allows for a formal yet intuitive description of the problem
under consideration. A prototype has been developed to implement the scheme in
Perl. This method will lead to a concentration on the analysis of problems, to
a more rapid prototyping, to the treatment of new problem classes, and to the
investigation of philosophical problems. We see fields of application in
nonlinear differential equations, pattern recognition, robotics, model
building, and animated pictures.
|
cs/0212020
|
Learning Algorithms for Keyphrase Extraction
|
cs.LG cs.CL cs.IR
|
Many academic journals ask their authors to provide a list of about five to
fifteen keywords, to appear on the first page of each article. Since these key
words are often phrases of two or more words, we prefer to call them
keyphrases. There is a wide variety of tasks for which keyphrases are useful,
as we discuss in this paper. We approach the problem of automatically
extracting keyphrases from text as a supervised learning task. We treat a
document as a set of phrases, which the learning algorithm must learn to
classify as positive or negative examples of keyphrases. Our first set of
experiments applies the C4.5 decision tree induction algorithm to this learning
task. We evaluate the performance of nine different configurations of C4.5. The
second set of experiments applies the GenEx algorithm to the task. We developed
the GenEx algorithm specifically for automatically extracting keyphrases from
text. The experimental results support the claim that a custom-designed
algorithm (GenEx), incorporating specialized procedural domain knowledge, can
generate better keyphrases than a general-purpose algorithm (C4.5). Subjective
human evaluation of the keyphrases generated by Extractor suggests that about
80% of the keyphrases are acceptable to human readers. This level of
performance should be satisfactory for a wide variety of applications.
|
cs/0212021
|
A Simple Model of Unbounded Evolutionary Versatility as a Largest-Scale
Trend in Organismal Evolution
|
cs.NE cs.CE q-bio.PE
|
The idea that there are any large-scale trends in the evolution of biological
organisms is highly controversial. It is commonly believed, for example, that
there is a large-scale trend in evolution towards increasing complexity, but
empirical and theoretical arguments undermine this belief. Natural selection
results in organisms that are well adapted to their local environments, but it
is not clear how local adaptation can produce a global trend. In this paper, I
present a simple computational model, in which local adaptation to a randomly
changing environment results in a global trend towards increasing evolutionary
versatility. In this model, for evolutionary versatility to increase without
bound, the environment must be highly dynamic. The model also shows that
unbounded evolutionary versatility implies an accelerating evolutionary pace. I
believe that unbounded increase in evolutionary versatility is a large-scale
trend in evolution. I discuss some of the testable predictions about organismal
evolution that are suggested by the model.
|
cs/0212022
|
Algorithms for Rapidly Dispersing Robot Swarms in Unknown Environments
|
cs.RO
|
We develop and analyze algorithms for dispersing a swarm of primitive robots
in an unknown environment, R. The primary objective is to minimize the
makespan, that is, the time to fill the entire region. An environment is
composed of pixels that form a connected subset of the integer grid.
There is at most one robot per pixel and robots move horizontally or
vertically at unit speed. Robots enter R by means of k>=1 door pixels.
Robots are primitive finite automata, only having local communication, local
sensors, and a constant-sized memory.
We first give algorithms for the single-door case (i.e., k=1), analyzing the
algorithms both theoretically and experimentally. We prove that our algorithms
have optimal makespan 2A-1, where A is the area of R.
We next give an algorithm for the multi-door case (k>1), based on a
wall-following version of the leader-follower strategy. We prove that our
strategy is O(log(k+1))-competitive, and that this bound is tight for our
strategy and other related strategies.
|
cs/0212023
|
How to Shift Bias: Lessons from the Baldwin Effect
|
cs.LG cs.NE
|
An inductive learning algorithm takes a set of data as input and generates a
hypothesis as output. A set of data is typically consistent with an infinite
number of hypotheses; therefore, there must be factors other than the data that
determine the output of the learning algorithm. In machine learning, these
other factors are called the bias of the learner. Classical learning algorithms
have a fixed bias, implicit in their design. Recently developed learning
algorithms dynamically adjust their bias as they search for a hypothesis.
Algorithms that shift bias in this manner are not as well understood as
classical algorithms. In this paper, we show that the Baldwin effect has
implications for the design and analysis of bias shifting algorithms. The
Baldwin effect was proposed in 1896, to explain how phenomena that might appear
to require Lamarckian evolution (inheritance of acquired characteristics) can
arise from purely Darwinian evolution. Hinton and Nowlan presented a
computational model of the Baldwin effect in 1987. We explore a variation on
their model, which we constructed explicitly to illustrate the lessons that the
Baldwin effect has for research in bias shifting algorithms. The main lesson is
that it appears that a good strategy for shift of bias in a learning algorithm
is to begin with a weak bias and gradually shift to a strong bias.
|
cs/0212024
|
Unsupervised Language Acquisition: Theory and Practice
|
cs.CL cs.LG
|
In this thesis I present various algorithms for the unsupervised machine
learning of aspects of natural languages using a variety of statistical models.
The scientific objective of the work is to examine the validity of the so-called
Argument from the Poverty of the Stimulus advanced in favour of the proposition
that humans have language-specific innate knowledge. I start by examining an a
priori argument based on Gold's theorem, that purports to prove that natural
languages cannot be learned, and some formal issues related to the choice of
statistical grammars rather than symbolic grammars. I present three novel
algorithms for learning various parts of natural languages: first, an algorithm
for the induction of syntactic categories from unlabelled text using
distributional information, that can deal with ambiguous and rare words;
secondly, a set of algorithms for learning morphological processes in a variety
of languages, including languages such as Arabic with non-concatenative
morphology; thirdly, an algorithm for the unsupervised induction of a
context-free grammar from tagged text. I carefully examine the interaction
between the various components, and show how these algorithms can form the
basis for an empiricist model of language acquisition. I therefore conclude that
the Argument from the Poverty of the Stimulus is unsupported by the evidence.
|
cs/0212025
|
Searching for Plannable Domains can Speed up Reinforcement Learning
|
cs.AI
|
Reinforcement learning (RL) involves sequential decision making in uncertain
environments. The aim of the decision-making agent is to maximize the benefit
of acting in its environment over an extended period of time. Finding an
optimal policy in RL may be very slow. To speed up learning, one often used
solution is the integration of planning, for example, Sutton's Dyna algorithm,
or various other methods using macro-actions.
Here we suggest separating the plannable, i.e., close-to-deterministic, parts
of the world and focusing planning efforts on this domain. A novel reinforcement
learning method called plannable RL (pRL) is proposed here. pRL builds a simple
model, which is used to search for macro actions. The simplicity of the model
makes planning computationally inexpensive. It is shown that pRL finds an
optimal policy, and that plannable macro actions found by pRL are near-optimal.
In turn, it is unnecessary to try large numbers of macro actions, which enables
fast learning. The utility of pRL is demonstrated by computer simulations.
|
cs/0212027
|
Qualitative Study of a Robot Arm as a Hamiltonian System
|
cs.RO
|
A double pendulum subject to external torques is used as a model to study the
stability of a planar manipulator with two links and two rotational driven
joints. The Hamiltonian equations of motion and the fixed points (stationary
solutions) in phase space are determined. Under suitable conditions, the
presence of constant torques does not change the number of fixed points, and
preserves the topology of orbits in their linear neighborhoods; two equivalent
invariant manifolds are observed, each corresponding to a saddle-center fixed
point.
|
cs/0212028
|
Technical Note: Bias and the Quantification of Stability
|
cs.LG cs.CV
|
Research on bias in machine learning algorithms has generally been concerned
with the impact of bias on predictive accuracy. We believe that there are other
factors that should also play a role in the evaluation of bias. One such factor
is the stability of the algorithm; in other words, the repeatability of the
results. If we obtain two sets of data from the same phenomenon, with the same
underlying probability distribution, then we would like our learning algorithm
to induce approximately the same concepts from both sets of data. This paper
introduces a method for quantifying stability, based on a measure of the
agreement between concepts. We also discuss the relationships among stability,
predictive accuracy, and bias.
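The proposed notion of stability can be illustrated by training the same learner on two independent samples of the same phenomenon and measuring the fraction of test points on which the two induced concepts agree. The toy threshold learner below is ours, used only to make the sketch runnable; the paper's own agreement measure between concepts may differ:

```python
import numpy as np

def train_threshold(X, y):
    """Toy learner: 1-D threshold classifier h(x) = [x > t], with t chosen
    to minimise training error over candidate split points."""
    order = np.argsort(X)
    Xs, ys = X[order], y[order]
    candidates = np.r_[Xs[0] - 1.0, (Xs[:-1] + Xs[1:]) / 2.0, Xs[-1] + 1.0]
    errs = [np.sum((Xs > t).astype(int) != ys) for t in candidates]
    t = candidates[int(np.argmin(errs))]
    return lambda X, t=t: (X > t).astype(int)

def stability(train_fn, sample1, sample2, test_X):
    """Stability = fraction of test points on which the concepts induced
    from two independent samples agree."""
    h1 = train_fn(*sample1)
    h2 = train_fn(*sample2)
    return float(np.mean(h1(test_X) == h2(test_X)))

# two samples of the same underlying concept y = [x > 0.5]
rng = np.random.default_rng(0)
X1, X2 = rng.uniform(size=50), rng.uniform(size=50)
s = stability(train_threshold,
              (X1, (X1 > 0.5).astype(int)),
              (X2, (X2 > 0.5).astype(int)),
              np.linspace(0.0, 1.0, 101))
```

Here the two induced thresholds land close together, so stability is near 1; a learner with a weaker bias, or noisier data, would score lower.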
|
cs/0212029
|
A Theory of Cross-Validation Error
|
cs.LG cs.CV
|
This paper presents a theory of error in cross-validation testing of
algorithms for predicting real-valued attributes. The theory justifies the
claim that predicting real-valued attributes requires balancing the conflicting
demands of simplicity and accuracy. Furthermore, the theory indicates precisely
how these conflicting demands must be balanced, in order to minimize
cross-validation error. A general theory is presented, then it is developed in
detail for linear regression and instance-based learning.
|
cs/0212030
|
Theoretical Analyses of Cross-Validation Error and Voting in
Instance-Based Learning
|
cs.LG cs.CV
|
This paper begins with a general theory of error in cross-validation testing
of algorithms for supervised learning from examples. It is assumed that the
examples are described by attribute-value pairs, where the values are symbolic.
Cross-validation requires a set of training examples and a set of testing
examples. The value of the attribute that is to be predicted is known to the
learner in the training set, but unknown in the testing set. The theory
demonstrates that cross-validation error has two components: error on the
training set (inaccuracy) and sensitivity to noise (instability). This general
theory is then applied to voting in instance-based learning. Given an example
in the testing set, a typical instance-based learning algorithm predicts the
designated attribute by voting among the k nearest neighbors (the k most
similar examples) to the testing example in the training set. Voting is
intended to increase the stability (resistance to noise) of instance-based
learning, but a theoretical analysis shows that there are circumstances in
which voting can be destabilizing. The theory suggests ways to minimize
cross-validation error, by ensuring that voting is stable and does not
adversely affect accuracy.
|
cs/0212031
|
Contextual Normalization Applied to Aircraft Gas Turbine Engine
Diagnosis
|
cs.LG cs.CE cs.CV
|
Diagnosing faults in aircraft gas turbine engines is a complex problem. It
involves several tasks, including rapid and accurate interpretation of patterns
in engine sensor data. We have investigated contextual normalization for the
development of a software tool to help engine repair technicians with
interpretation of sensor data. Contextual normalization is a new strategy for
employing machine learning. It handles variation in data that is due to
contextual factors, rather than the health of the engine. It does this by
normalizing the data in a context-sensitive manner. This learning strategy was
developed and tested using 242 observations of an aircraft gas turbine engine
in a test cell, where each observation consists of roughly 12,000 numbers,
gathered over a 12 second interval. There were eight classes of observations:
seven deliberately implanted classes of faults and a healthy class. We compared
two approaches to implementing our learning strategy: linear regression and
instance-based learning. We have three main results. (1) For the given problem,
instance-based learning works better than linear regression. (2) For this
problem, contextual normalization works better than other common forms of
normalization. (3) The algorithms described here can be the basis for a useful
software tool for assisting technicians with the interpretation of sensor data.
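The core idea, normalizing data in a context-sensitive manner so that variation due to contextual factors is removed before learning, can be sketched as a per-context z-score. The function below is an illustrative stand-in, not the paper's implementation:

```python
import numpy as np

def contextual_normalize(values, contexts):
    """Normalize each observation by the mean and standard deviation of the
    observations sharing its context, removing variation that is due to the
    context rather than to the quantity of interest (e.g. engine health)."""
    values = np.asarray(values, dtype=float)
    contexts = np.asarray(contexts)
    out = np.empty_like(values)
    for c in np.unique(contexts):
        mask = contexts == c
        mu, sigma = values[mask].mean(), values[mask].std()
        out[mask] = (values[mask] - mu) / (sigma if sigma > 0 else 1.0)
    return out

# toy sensor readings whose raw level depends on an ambient context
readings = [10.0, 12.0, 30.0, 34.0]
ambient = ["cold", "cold", "hot", "hot"]
normalized = contextual_normalize(readings, ambient)
```

After normalization the two contexts are directly comparable, so a downstream classifier sees only the within-context deviations.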
|
cs/0212032
|
Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised
Classification of Reviews
|
cs.LG cs.CL cs.IR
|
This paper presents a simple unsupervised learning algorithm for classifying
reviews as recommended (thumbs up) or not recommended (thumbs down). The
classification of a review is predicted by the average semantic orientation of
the phrases in the review that contain adjectives or adverbs. A phrase has a
positive semantic orientation when it has good associations (e.g., "subtle
nuances") and a negative semantic orientation when it has bad associations
(e.g., "very cavalier"). In this paper, the semantic orientation of a phrase is
calculated as the mutual information between the given phrase and the word
"excellent" minus the mutual information between the given phrase and the word
"poor". A review is classified as recommended if the average semantic
orientation of its phrases is positive. The algorithm achieves an average
accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four
different domains (reviews of automobiles, banks, movies, and travel
destinations). The accuracy ranges from 84% for automobile reviews to 66% for
movie reviews.
|
cs/0212033
|
Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL
|
cs.LG cs.CL cs.IR
|
This paper presents a simple unsupervised learning algorithm for recognizing
synonyms, based on statistical data acquired by querying a Web search engine.
The algorithm, called PMI-IR, uses Pointwise Mutual Information (PMI) and
Information Retrieval (IR) to measure the similarity of pairs of words. PMI-IR
is empirically evaluated using 80 synonym test questions from the Test of
English as a Foreign Language (TOEFL) and 50 synonym test questions from a
collection of tests for students of English as a Second Language (ESL). On both
tests, the algorithm obtains a score of 74%. PMI-IR is contrasted with Latent
Semantic Analysis (LSA), which achieves a score of 64% on the same 80 TOEFL
questions. The paper discusses potential applications of the new unsupervised
learning algorithm and some implications of the results for LSA and LSI (Latent
Semantic Indexing).
|
cs/0212034
|
Types of Cost in Inductive Concept Learning
|
cs.LG cs.CV
|
Inductive concept learning is the task of learning to assign cases to a
discrete set of classes. In real-world applications of concept learning, there
are many different types of cost involved. The majority of the machine learning
literature ignores all types of cost (unless accuracy is interpreted as a type
of cost measure). A few papers have investigated the cost of misclassification
errors. Very few papers have examined the many other types of cost. In this
paper, we attempt to create a taxonomy of the different types of cost that are
involved in inductive concept learning. This taxonomy may help to organize the
literature on cost-sensitive learning. We hope that it will inspire researchers
to investigate all types of cost in inductive concept learning in more depth.
|
cs/0212035
|
Exploiting Context When Learning to Classify
|
cs.LG cs.CV
|
This paper addresses the problem of classifying observations when features
are context-sensitive, specifically when the testing set involves a context
that is different from the training set. The paper begins with a precise
definition of the problem, then general strategies are presented for enhancing
the performance of classification algorithms on this type of problem. These
strategies are tested on two domains. The first domain is the diagnosis of gas
turbine engines. The problem is to diagnose a faulty engine in one context,
such as warm weather, when the fault has previously been seen only in another
context, such as cold weather. The second domain is speech recognition. The
problem is to recognize words spoken by a new speaker, not represented in the
training set. For both domains, exploiting context results in substantially
more accurate classification.
|
cs/0212036
|
Myths and Legends of the Baldwin Effect
|
cs.LG cs.NE
|
This position paper argues that the Baldwin effect is widely misunderstood by
the evolutionary computation community. The misunderstandings appear to fall
into two general categories. Firstly, it is commonly believed that the Baldwin
effect is concerned with the synergy that results when there is an evolving
population of learning individuals. This is only half of the story. The full
story is more complicated and more interesting. The Baldwin effect is concerned
with the costs and benefits of lifetime learning by individuals in an evolving
population. Several researchers have focussed exclusively on the benefits, but
there is much to be gained from attention to the costs. This paper explains the
two sides of the story and enumerates ten of the costs and benefits of lifetime
learning by individuals in an evolving population. Secondly, there is a cluster
of misunderstandings about the relationship between the Baldwin effect and
Lamarckian inheritance of acquired characteristics. The Baldwin effect is not
Lamarckian. A Lamarckian algorithm is not better for most evolutionary
computing problems than a Baldwinian algorithm. Finally, Lamarckian inheritance
is not a better model of memetic (cultural) evolution than the Baldwin effect.
|
cs/0212037
|
The Management of Context-Sensitive Features: A Review of Strategies
|
cs.LG cs.CV
|
In this paper, we review five heuristic strategies for handling
context-sensitive features in supervised machine learning from examples. We
discuss two methods for recovering lost (implicit) contextual information. We
mention some evidence that hybrid strategies can have a synergetic effect. We
then show how the work of several machine learning researchers fits into this
framework. While we do not claim that these strategies exhaust the
possibilities, it appears that the framework includes all of the techniques
that can be found in the published literature on context-sensitive learning.
|
cs/0212038
|
The Identification of Context-Sensitive Features: A Formal Definition of
Context for Concept Learning
|
cs.LG cs.CV
|
A large body of research in machine learning is concerned with supervised
learning from examples. The examples are typically represented as vectors in a
multi-dimensional feature space (also known as attribute-value descriptions). A
teacher partitions a set of training examples into a finite number of classes.
The task of the learning algorithm is to induce a concept from the training
examples. In this paper, we formally distinguish three types of features:
primary, contextual, and irrelevant features. We also formally define what it
means for one feature to be context-sensitive to another feature.
Context-sensitive features complicate the task of the learner and potentially
impair the learner's performance. Our formal definitions make it possible for a
learner to automatically identify context-sensitive features. After
context-sensitive features have been identified, there are several strategies
that the learner can employ for managing the features; however, a discussion of
these strategies is outside of the scope of this paper. The formal definitions
presented here correct a flaw in previously proposed definitions. We discuss
the relationship between our work and a formal definition of relevance.
|
cs/0212039
|
Low Size-Complexity Inductive Logic Programming: The East-West Challenge
Considered as a Problem in Cost-Sensitive Classification
|
cs.LG cs.NE
|
The Inductive Logic Programming community has considered proof-complexity and
model-complexity, but, until recently, size-complexity has received little
attention. Recently a challenge was issued "to the international computing
community" to discover low size-complexity Prolog programs for classifying
trains. The challenge was based on a problem first proposed by Ryszard
Michalski, 20 years ago. We interpreted the challenge as a problem in
cost-sensitive classification and we applied a recently developed
cost-sensitive classifier to the competition. Our algorithm was relatively
successful (we won a prize). This paper presents our algorithm and analyzes the
results of the competition.
|
cs/0212040
|
Data Engineering for the Analysis of Semiconductor Manufacturing Data
|
cs.LG cs.CE cs.CV
|
We have analyzed manufacturing data from several different semiconductor
manufacturing plants, using decision tree induction software called Q-YIELD.
The software generates rules for predicting when a given product should be
rejected. The rules are intended to help the process engineers improve the
yield of the product, by helping them to discover the causes of rejection.
Experience with Q-YIELD has taught us the importance of data engineering --
preprocessing the data to enable or facilitate decision tree induction. This
paper discusses some of the data engineering problems we have encountered with
semiconductor manufacturing data. The paper deals with two broad classes of
problems: engineering the features in a feature vector representation and
engineering the definition of the target concept (the classes). Manufacturing
process data present special problems for feature engineering, since the data
have multiple levels of granularity (detail, resolution). Engineering the
target concept is important, due to our focus on understanding the past, as
opposed to the more common focus in machine learning on predicting the future.
|
cs/0212041
|
Robust Classification with Context-Sensitive Features
|
cs.LG cs.CV
|
This paper addresses the problem of classifying observations when features
are context-sensitive, especially when the testing set involves a context that
is different from the training set. The paper begins with a precise definition
of the problem, then general strategies are presented for enhancing the
performance of classification algorithms on this type of problem. These
strategies are tested on three domains. The first domain is the diagnosis of
gas turbine engines. The problem is to diagnose a faulty engine in one context,
such as warm weather, when the fault has previously been seen only in another
context, such as cold weather. The second domain is speech recognition. The
context is given by the identity of the speaker. The problem is to recognize
words spoken by a new speaker, not represented in the training set. The third
domain is medical prognosis. The problem is to predict whether a patient with
hepatitis will live or die. The context is the age of the patient. For all
three domains, exploiting context results in substantially more accurate
classification.
|
cs/0212042
|
Increasing Evolvability Considered as a Large-Scale Trend in Evolution
|
cs.NE cs.CE q-bio.PE
|
Evolvability is the capacity to evolve. This paper introduces a simple
computational model of evolvability and demonstrates that, under certain
conditions, evolvability can increase indefinitely, even when there is no
direct selection for evolvability. The model shows that increasing evolvability
implies an accelerating evolutionary pace. It is suggested that the conditions
for indefinitely increasing evolvability are satisfied in biological and
cultural evolution. We claim that increasing evolvability is a large-scale
trend in evolution. This hypothesis leads to testable predictions about
biological and cultural evolution.
|
cs/0212045
|
Local Community Identification through User Access Patterns
|
cs.IR cs.HC
|
Community identification algorithms have been used to enhance the quality of
the services perceived by their users. Although community identification
algorithms are in widespread use on the Web, their application to portals or
specific subsets of the Web has not been much studied. In this paper, we
propose a technique for local community identification that takes into account
user access behavior derived from access logs of servers on the Web. The
technique departs from existing community algorithms since it changes the
focus of interest, moving from authors to users. Our approach does not use
relations imposed by
authors (e.g. hyperlinks in the case of Web pages). It uses information derived
from user accesses to a service in order to infer relationships. The
communities identified are of great interest to content providers since they
can be used to improve quality of their services. We also propose an evaluation
methodology for analyzing the results obtained by the algorithm. We present two
case studies based on actual data from two services: an online bookstore and an
online radio. The case of the online radio is particularly relevant because it
emphasizes the contribution of the proposed algorithm to finding communities in
an environment (i.e., a streaming media service) without links that represent
the relations imposed by authors (e.g. hyperlinks in the case of Web pages).
|
cs/0212049
|
An Ehrenfeucht-Fraisse Game Approach to Collapse Results in Database
Theory
|
cs.LO cs.DB
|
We present a new Ehrenfeucht-Fraisse game approach to collapse results in
database theory and we show that, in principle, this approach suffices to prove
every natural generic collapse result. Following this approach we can deal with
certain infinite databases where previous, highly involved methods fail. We
prove the natural generic collapse for Z-embeddable databases over any linearly
ordered context structure with arbitrary monadic predicates, and for
N-embeddable databases over the context structure (R,<,+,Mon_Q,Groups). Here,
N, Z, R, denote the sets of natural numbers, integers, and real numbers,
respectively. Groups is the collection of all subgroups of (R,+) that contain
Z, and Mon_Q is the collection of all subsets of a particular infinite subset Q
of N. Restricting the complexity of the formulas that may be used to formulate
queries to Boolean combinations of purely existential first-order formulas, we
even obtain the collapse for N-embeddable databases over any linearly ordered
context structure with arbitrary predicates. Finally, we develop the notion of
N-representable databases, which is a natural generalization of the classical
notion of finitely representable databases. We show that natural generic
collapse results for N-embeddable databases can be lifted to the larger class
of N-representable databases. To obtain, in particular, the collapse result for
(N,<,+,Mon_Q), we explicitly construct a winning strategy for the duplicator in
the presence of the built-in addition relation +. This, as a side product, also
leads to an Ehrenfeucht-Fraisse game proof of the theorem of Ginsburg and
Spanier, stating that the spectra of FO(<,+)-sentences are semi-linear.
|
cs/0212051
|
Exploiting Web Service Semantics: Taxonomies vs. Ontologies
|
cs.DB
|
Comprehensive semantic descriptions of Web services are essential to exploit
them in their full potential, that is, discovering them dynamically, and
enabling automated service negotiation, composition and monitoring. The
semantic mechanisms currently available in service registries, which are based
on taxonomies, fail to provide the means to achieve this. Although the terms
taxonomy and ontology are sometimes used interchangeably, there is a critical
difference: a taxonomy indicates only class/subclass relationships, whereas an
ontology describes a domain completely. The essential mechanisms that ontology
languages provide include their formal specification (which allows them to be
queried) and their ability to define properties of classes. Through properties
very accurate descriptions of services can be defined and services can be
related to other services or resources. In this paper, we discuss the
advantages of describing service semantics through ontology languages and
describe how to relate the semantics defined with the services advertised in
service registries like UDDI and ebXML.
|
cs/0212052
|
Improving the Functionality of UDDI Registries through Web Service
Semantics
|
cs.DB
|
In this paper we describe a framework for exploiting the semantics of Web
services through UDDI registries. As a part of this framework, we extend the
DAML-S upper ontology to describe the functionality we find essential for
e-businesses. This functionality includes relating the services with electronic
catalogs, describing the complementary services and finding services according
to the properties of products or services. Once the semantics is defined, there
is a need for a mechanism in the service registry to relate it with the service
advertised. The ontology model developed is general enough to be used with any
service registry. However, when it comes to relating the semantics with
services advertised, the capabilities provided by the registry affect how this
is achieved. We demonstrate how to integrate the described service semantics
into
UDDI registries.
|
cs/0212053
|
Merging Locally Correct Knowledge Bases: A Preliminary Report
|
cs.AI cs.LO
|
Belief integration methods are often aimed at deriving a single and
consistent knowledge base that retains as much as possible of the knowledge
bases to integrate. The rationale behind this approach is the minimal change
principle: the result of the integration process should differ as little as
possible from the knowledge bases to integrate. We show that this principle can
be reformulated in terms of a more general model of belief revision, based on
the assumption that inconsistency is due to the mistakes the knowledge bases
contain. Current belief revision strategies are based on a specific kind of
mistake which, however, does not include all possible ones. Some alternative
possibilities are discussed.
|
cs/0301001
|
Least squares fitting of circles and lines
|
cs.CV
|
We study theoretical and computational aspects of the least squares fit (LSF)
of circles and circular arcs. First we discuss the existence and uniqueness of
LSF and various parametrization schemes. Then we evaluate several popular
circle fitting algorithms and propose a new one that surpasses the existing
methods in reliability. We also discuss and compare direct (algebraic) circle
fits.
|
cs/0301006
|
Temporal plannability by variance of the episode length
|
cs.AI
|
Optimization of decision problems in stochastic environments is usually
concerned with maximizing the probability of achieving the goal and minimizing
the expected episode length. For interacting agents in time-critical
applications, learning whether subtasks (events) or the full task can be
scheduled is an additional relevant issue. Besides, there exist highly
stochastic problems where the actual trajectories show great variety from
episode to episode, but completing the task takes almost the same amount of
time. The identification of sub-problems of this nature may promote, e.g.,
planning, scheduling and segmenting Markov decision processes. In this work,
formulae for the average duration as well as the standard deviation of the
duration of events are derived. The emerging Bellman-type equation is a simple
extension of Sobel's work (1982). Methods of dynamic programming as well as
methods of reinforcement learning can be applied to our extension. A computer
demonstration on a toy problem serves to highlight the principle.
|
cs/0301007
|
Kalman filter control in the reinforcement learning framework
|
cs.LG cs.AI
|
There is a growing interest in using Kalman-filter models in brain modelling.
In turn, it is of considerable importance to make Kalman-filters amenable for
reinforcement learning. In the usual formulation of optimal control it is
computed off-line by solving a backward recursion. In this technical note we
show that slight modification of the linear-quadratic-Gaussian Kalman-filter
model allows the on-line estimation of optimal control and makes the bridge to
reinforcement learning. Moreover, the learning rule for value estimation
assumes a Hebbian form weighted by the error of the value estimation.
|
cs/0301008
|
Formal Concept Analysis and Resolution in Algebraic Domains
|
cs.LO cs.AI
|
We relate two formerly independent areas: formal concept analysis and the
logic of domains. We establish a correspondence between contextual attribute
logic on formal contexts (resp. concept lattices) and a clausal logic on
coherent
algebraic cpos. We show how to identify the notion of formal concept in the
domain theoretic setting. In particular, we show that a special instance of the
resolution rule from the domain logic coincides with the concept closure
operator from formal concept analysis. The results shed light on the use of
contexts and domains for knowledge representation and reasoning purposes.
|
cs/0301009
|
A Script Language for Data Integration in Database
|
cs.DB
|
The Script Language described in this paper is designed to transform original
data into target data according to computing formulas. The Script Language can
be translated into the corresponding SQL, and the computation is finally
implemented by the first type of dynamic SQL. The Script Language supports
insert, update, delete, union, intersect, and minus operations on tables in
the database. The Script Language is edited in a text file, so when a
computing formula changes you can easily modify it in the text file. Thus you
only need to modify the text of the script language, without changing programs
that have already been compiled.
|
cs/0301010
|
Comparisons and Computation of Well-founded Semantics for Disjunctive
Logic Programs
|
cs.AI
|
Much work has been done on extending the well-founded semantics to general
disjunctive logic programs and various approaches have been proposed. However,
these semantics are different from each other and no consensus is reached about
which semantics is the most intended. In this paper we look at disjunctive
well-founded reasoning from different angles. We show that there is an
intuitive form of the well-founded reasoning in disjunctive logic programming
which can be characterized by slightly modifying some existing approaches to
defining disjunctive well-founded semantics, including program transformations,
argumentation, unfounded sets (and a resolution-like procedure). We also
provide a bottom-up procedure for this semantics. The significance of our work
lies not only in clarifying the relationship among different approaches, but
also in shedding some light on what an intended well-founded semantics for
disjunctive logic programs is.
|
cs/0301014
|
Convergence and Loss Bounds for Bayesian Sequence Prediction
|
cs.LG cs.AI math.PR
|
The probability of observing $x_t$ at time $t$, given past observations
$x_1...x_{t-1}$ can be computed with Bayes' rule if the true generating
distribution $\mu$ of the sequences $x_1x_2x_3...$ is known. If $\mu$ is
unknown, but known to belong to a class $M$, one can base one's prediction on
the Bayes mix $\xi$, defined as a weighted sum of distributions $\nu\in M$.
Various
convergence results of the mixture posterior $\xi_t$ to the true posterior
$\mu_t$ are presented. In particular, a new (elementary) derivation of the
convergence $\xi_t/\mu_t\to 1$ is provided, which additionally gives the rate
of convergence. A general sequence predictor is allowed to choose an action
$y_t$ based on $x_1...x_{t-1}$ and receives loss $\ell_{x_t y_t}$ if $x_t$ is
the next symbol of the sequence. No assumptions are made on the structure of
$\ell$ (apart from being bounded) and $M$. The Bayes-optimal prediction scheme
$\Lambda_\xi$ based on mixture $\xi$ and the Bayes-optimal informed prediction
scheme $\Lambda_\mu$ are defined and the total loss $L_\xi$ of $\Lambda_\xi$ is
bounded in terms of the total loss $L_\mu$ of $\Lambda_\mu$. It is shown that
$L_\xi$ is bounded for bounded $L_\mu$ and $L_\xi/L_\mu\to 1$ for $L_\mu\to
\infty$. Convergence of the instantaneous losses is also proven.
|
cs/0301017
|
Completeness and Decidability Properties for Functional Dependencies in
XML
|
cs.DB
|
XML is of great importance in information storage and retrieval because of
its recent emergence as a standard for data representation and interchange on
the Internet. However, XML provides little semantic content and, as a result,
several papers have addressed the topic of how to improve the semantic
expressiveness of XML. Among the most important of these approaches has been
that of defining integrity constraints in XML. In a companion paper we defined
strong functional dependencies in XML (XFDs). We also presented a set of axioms
for reasoning about the implication of XFDs and showed that the axiom system is
sound for arbitrary XFDs. In this paper we prove that the axioms are also
complete for unary XFDs (XFDs with a single path on the l.h.s.). The second
contribution of the paper is to prove that the implication problem for unary
XFDs is decidable and to provide a linear time algorithm for it.
|
cs/0301018
|
Novel Runtime Systems Support for Adaptive Compositional Modeling on the
Grid
|
cs.CE cs.DC
|
Grid infrastructures and computing environments have progressed significantly
in the past few years. The vision of truly seamless Grid usage relies on
runtime systems support that is cognizant of the operational issues underlying
grid computations and, at the same time, is flexible enough to accommodate
diverse application scenarios. This paper addresses the twin aspects of Grid
infrastructure and application support through a novel combination of two
computational technologies: Weaves - a source-language independent parallel
runtime compositional framework that operates through reverse-analysis of
compiled object files, and runtime recommender systems that aid in dynamic
knowledge-based application composition. Domain-specific adaptivity is
exploited through a novel compositional system that supports runtime
recommendation of code modules and a sophisticated checkpointing and runtime
migration solution that can be transparently deployed over Grid
infrastructures. A core set of "adaptivity schemas" are provided as templates
for adaptive composition of large-scale scientific computations. Implementation
issues, motivating application contexts, and preliminary results are described.
|
cs/0301023
|
A semantic framework for preference handling in answer set programming
|
cs.AI
|
We provide a semantic framework for preference handling in answer set
programming. To this end, we introduce preference preserving consequence
operators. The resulting fixpoint characterizations provide us with a uniform
semantic framework for characterizing preference handling in existing
approaches. Although our approach is extensible to other semantics by means of
an alternating fixpoint theory, we focus here on the elaboration of preferences
under answer set semantics. Alternatively, we show how these approaches can be
characterized by the concept of order preservation. These uniform semantic
characterizations provide us with new insights about interrelationships and
moreover about ways of implementation.
|
cs/0302001
|
Many Hard Examples in Exact Phase Transitions with Application to
Generating Hard Satisfiable Instances
|
cs.CC cond-mat.stat-mech cs.AI cs.DM
|
This paper first analyzes the resolution complexity of two random CSP models
(i.e. Model RB/RD) for which we can establish the existence of phase
transitions and identify the threshold points exactly. By encoding CSPs into
CNF formulas, it is proved that almost all instances of Model RB/RD have no
tree-like resolution proofs of less than exponential size. Thus, we not only
introduce new families of CNF formulas hard for resolution, which is a central
task of Proof-Complexity theory, but also propose models with both many hard
instances and exact phase transitions. Then, the implications of such models
are addressed. It is shown both theoretically and experimentally that an
application of Model RB/RD might be in the generation of hard satisfiable
instances, which is not only of practical importance but also related to some
open problems in cryptography such as generating one-way functions.
Subsequently, a further theoretical support for the generation method is shown
by establishing exponential lower bounds on the complexity of solving random
satisfiable and forced satisfiable instances of RB/RD near the threshold.
Finally, conclusions are presented, as well as a detailed comparison of Model
RB/RD with the Hamiltonian cycle problem and random 3-SAT, which, respectively,
exhibit three different kinds of phase transition behavior in NP-complete
problems.
|
cs/0302002
|
Optimizing GoTools' Search Heuristics using Genetic Algorithms
|
cs.NE
|
GoTools is a program which solves life & death problems in the game of Go.
This paper describes experiments using a Genetic Algorithm to optimize
heuristic weights used by GoTools' tree-search. The complete set of heuristic
weights is composed of different subgroups, each of which can be optimized with
a suitable fitness function. As a useful side product, an MPI interface for
FreePascal was implemented to allow the use of a parallelized fitness function
running on a Beowulf cluster. The aim of this exercise is to optimize the
current version of GoTools, and to make tools available in preparation for an
extension of GoTools for solving open boundary life & death problems, which
will introduce more heuristic parameters to be fine-tuned.
|
cs/0302004
|
Unique Pattern Matching in Strings
|
cs.PL cs.DB
|
Regular expression patterns are a key feature of document processing
languages like Perl and XDuce. It is in this context that the first and longest
match policies have been proposed to disambiguate the pattern matching process.
We formally define a matching semantics with these policies and show that the
generally accepted method of simulating longest match by first match and
recursion is incorrect. We continue by solving the associated type inference
problem, which consists in calculating for every subexpression the set of words
the subexpression can still match when these policies are in effect, and show
how this algorithm can be used to efficiently implement the matching process.
|
cs/0302012
|
The New AI: General & Sound & Relevant for Physics
|
cs.AI cs.LG quant-ph
|
Most traditional artificial intelligence (AI) systems of the past 50 years
are either very limited, or based on heuristics, or both. The new millennium,
however, has brought substantial progress in the field of theoretically optimal
and practically feasible algorithms for prediction, search, inductive inference
based on Occam's razor, problem solving, decision making, and reinforcement
learning in environments of a very general type. Since inductive inference is
at the heart of all inductive sciences, some of the results are relevant not
only for AI and computer science but also for physics, provoking nontraditional
predictions based on Zuse's thesis of the computer-generated universe.
|
cs/0302014
|
An Algorithm for Aligning Sentences in Bilingual Corpora Using Lexical
Information
|
cs.CL
|
In this paper we describe an algorithm for aligning sentences with their
translations in a bilingual corpus using lexical information of the languages.
Existing efficient algorithms ignore word identities and consider only the
sentence lengths (Brown, 1991; Gale and Church, 1993). For a sentence in the
source language text, the proposed algorithm picks the most likely translation
from the target language text using lexical information and certain heuristics.
It does not do statistical analysis using sentence lengths. The algorithm is
language independent. It also aids in detecting addition and deletion of text
in translations. The algorithm gives comparable results with the existing
algorithms in most of the cases while it does better in cases where statistical
algorithms do not give good results.
|
cs/0302015
|
Unsupervised Learning in a Framework of Information Compression by
Multiple Alignment, Unification and Search
|
cs.AI cs.LG
|
This paper describes a novel approach to unsupervised learning that has been
developed within a framework of "information compression by multiple alignment,
unification and search" (ICMAUS), designed to integrate learning with other AI
functions such as parsing and production of language, fuzzy pattern
recognition, probabilistic and exact forms of reasoning, and others.
|
cs/0302021
|
Building an Open Language Archives Community on the OAI Foundation
|
cs.CL cs.DL
|
The Open Language Archives Community (OLAC) is an international partnership
of institutions and individuals who are creating a worldwide virtual library of
language resources. The Dublin Core (DC) Element Set and the OAI Protocol have
provided a solid foundation for the OLAC framework. However, we need more
precision in community-specific aspects of resource description than is offered
by DC. Furthermore, many of the institutions and individuals who might
participate in OLAC do not have the technical resources to support the OAI
protocol. This paper presents our solutions to these two problems. To address
the first, we have developed an extensible application profile for language
resource metadata. To address the second, we have implemented Vida (the virtual
data provider) and Viser (the virtual service provider), which permit community
members to provide data and services without having to implement the OAI
protocol. These solutions are generic and could be adopted by other specialized
subcommunities.
|
cs/0302023
|
Segmentation, Indexing, and Visualization of Extended Instructional
Videos
|
cs.IR cs.CV
|
We present a new method for segmenting, and a new user interface for indexing
and visualizing, the semantic content of extended instructional videos. Given a
series of key frames from the video, we generate a condensed view of the data
by clustering frames according to media type and visual similarities. Using
various visual filters, key frames are first assigned a media type (board,
class, computer, illustration, podium, and sheet). Key frames of media type
board and sheet are then clustered based on contents via an algorithm with
near-linear cost. A novel user interface, the result of two user studies,
displays related topics using icons linked topologically, allowing users to
quickly locate semantically related portions of the video. We analyze the
accuracy of the segmentation tool on 17 instructional videos, each of which is
from 75 to 150 minutes in duration (a total of 40 hours); the classification
accuracy exceeds 96%.
|
cs/0302024
|
Analysis and Interface for Instructional Video
|
cs.IR cs.CV
|
We present a new method for segmenting, and a new user interface for indexing
and visualizing, the semantic content of extended instructional videos. Using
various visual filters, key frames are first assigned a media type (board,
class, computer, illustration, podium, and sheet). Key frames of media type
board and sheet are then clustered based on contents via an algorithm with
near-linear cost. A novel user interface, the result of two user studies,
displays related topics using icons linked topologically, allowing users to
quickly locate semantically related portions of the video. We analyze the
accuracy of the segmentation tool on 17 instructional videos, each of which is
from 75 to 150 minutes in duration (a total of 40 hours); the classification
accuracy exceeds 96%.
|
cs/0302029
|
Defeasible Logic Programming: An Argumentative Approach
|
cs.AI
|
The work reported here introduces Defeasible Logic Programming (DeLP), a
formalism that combines results of Logic Programming and Defeasible
Argumentation. DeLP provides the possibility of representing information in the
form of weak rules in a declarative manner, and a defeasible argumentation
inference mechanism for warranting the entailed conclusions.
In DeLP, an argumentation formalism is used for deciding between
contradictory goals. Queries are supported by arguments that can be defeated
by other arguments. A query q succeeds when there is an argument A for q that
is warranted, i.e., the argument A that supports q is found undefeated by a
warrant procedure that implements a dialectical analysis.
The defeasible argumentation basis of DeLP makes it possible to build
applications that deal with incomplete and contradictory information in
dynamic domains. Thus, the resulting approach is suitable for representing an
agent's knowledge and for providing an argumentation-based reasoning
mechanism to agents.
|
cs/0302032
|
Empirical Methods for Compound Splitting
|
cs.CL
|
Compounded words are a challenge for NLP applications such as machine
translation (MT). We introduce methods to learn splitting rules from
monolingual and parallel corpora. We evaluate them against a gold standard and
measure their impact on performance of statistical MT systems. Results show
accuracy of 99.1% and performance gains for MT of 0.039 BLEU on a
German-English noun phrase translation task.
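A common way to learn splits from a monolingual corpus, in the spirit of this work, is to prefer decompositions whose parts are frequent corpus words. The sketch below is a hedged illustration of that idea (frequency table, part limits, and scoring are assumptions, not the paper's exact method).

```python
# Sketch of frequency-based compound splitting: choose the split whose parts
# all occur in a monolingual corpus, maximizing the geometric mean of part
# frequencies. Vocabulary and frequencies below are illustrative.

from itertools import combinations

def splits(word, vocab, min_len=3):
    """Yield candidate splits (including the unsplit word) whose parts are known."""
    yield (word,)
    n = len(word)
    for k in range(1, 3):  # allow up to three parts
        for cuts in combinations(range(min_len, n - min_len + 1), k):
            parts = [word[i:j] for i, j in zip((0,) + cuts, cuts + (n,))]
            if all(p in vocab for p in parts):
                yield tuple(parts)

def best_split(word, freq):
    """Pick the split maximizing the geometric mean of corpus frequencies."""
    def score(parts):
        prod = 1.0
        for p in parts:
            prod *= freq.get(p, 0)
        return prod ** (1.0 / len(parts))
    return max(splits(word, freq), key=score)

freq = {"house": 1000, "boat": 500, "houseboat": 2}
print(best_split("houseboat", freq))  # ('house', 'boat')
```

A real system would also handle filler letters between parts (such as the German linking "-s") and could be tuned against a parallel corpus, as the abstract describes.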
|
cs/0302034
|
Interest Rate Model Calibration Using Semidefinite Programming
|
cs.CE
|
We show that, for the purpose of pricing Swaptions, the Swap rate and the
corresponding Forward rates can be considered lognormal under a single
martingale measure. Swaptions can then be priced as options on a basket of
lognormal assets and an approximation formula is derived for such options. This
formula is centered around a Black-Scholes price with an appropriate
volatility, plus a correction term that can be interpreted as the expected
tracking error. The calibration problem can then be solved very efficiently
using semidefinite programming.
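Pricing an option on a basket of lognormal assets with a Black-Scholes formula and an "appropriate volatility" can be illustrated by standard moment matching. This is a generic approximation sketched under made-up numbers, not the paper's formula or its tracking-error correction term.

```python
# Illustrative sketch: price a call on a basket of lognormal forwards by
# matching the basket's first two moments to a single lognormal, then
# applying Black's formula. All inputs below are hypothetical.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_call(F, K, sigma, T):
    """Black's formula for a call on a forward F with volatility sigma."""
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return F * norm_cdf(d1) - K * norm_cdf(d2)

def basket_call(weights, forwards, vols, corr, K, T):
    """Approximate a basket call via two-moment lognormal matching."""
    F = sum(w * f for w, f in zip(weights, forwards))
    # Second moment of the basket at expiry under joint lognormality.
    m2 = sum(wi * fi * wj * fj * exp(corr[i][j] * vols[i] * vols[j] * T)
             for i, (wi, fi) in enumerate(zip(weights, forwards))
             for j, (wj, fj) in enumerate(zip(weights, forwards)))
    sigma = sqrt(log(m2 / F**2) / T)  # implied lognormal vol of the basket
    return black_call(F, K, sigma, T)

price = basket_call([0.5, 0.5], [0.04, 0.05], [0.20, 0.25],
                    [[1.0, 0.8], [0.8, 1.0]], K=0.045, T=1.0)
print(round(price, 6))
```

The paper's contribution is then to treat calibration over the covariance matrix (correlations and volatilities) as a semidefinite program, which this sketch does not attempt.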
|
cs/0302035
|
Risk-Management Methods for the Libor Market Model Using Semidefinite
Programming
|
cs.CE
|
When interest rate dynamics are described by the Libor Market Model as in
BGM97, we show how some essential risk-management results can be obtained from
the dual of the calibration program. In particular, if the objective is to
maximize another swaption's price, we show that the optimal dual variables
describe a hedging portfolio in the sense of \cite{Avel96}. In the general
case, the local sensitivity of the covariance matrix to all market movement
scenarios can be directly computed from the optimal dual solution. We also show
how semidefinite programming can be used to manage the Gamma exposure of a
portfolio.
|