| id | title | categories | abstract |
|---|---|---|---|
cs/0008033
|
Temporal Expressions in Japanese-to-English Machine Translation
|
cs.CL
|
This paper describes in outline a method for translating Japanese temporal
expressions into English. We argue that temporal expressions form a special
subset of language that is best handled as a special module in machine
translation. The paper deals with problems of lexical idiosyncrasy as well as
the choice of articles and prepositions within temporal expressions. In
addition, temporal expressions are considered as parts of larger structures, and
the question of whether to translate them as noun phrases or adverbials is
addressed.
|
cs/0008034
|
Lexicalized Stochastic Modeling of Constraint-Based Grammars using
Log-Linear Measures and EM Training
|
cs.CL
|
We present a new approach to stochastic modeling of constraint-based grammars
that is based on log-linear models and uses EM for estimation from unannotated
data. The techniques are applied to an LFG grammar for German. Evaluation on an
exact match task yields 86% precision for an ambiguity rate of 5.4, and 90%
precision on a subcat frame match for an ambiguity rate of 25. Experimental
comparison to training from a parsebank shows a 10% gain from EM training.
Also, a new class-based grammar lexicalization is presented, showing a 10% gain
over unlexicalized models.
|
cs/0008035
|
Using a Probabilistic Class-Based Lexicon for Lexical Ambiguity
Resolution
|
cs.CL
|
This paper presents the use of probabilistic class-based lexica for
disambiguation in target-word selection. Our method employs minimal but precise
contextual information for disambiguation. That is, only information provided
by the target-verb, enriched by the condensed information of a probabilistic
class-based lexicon, is used. Induction of classes and fine-tuning to verbal
arguments is done in an unsupervised manner by EM-based clustering techniques.
The method shows promising results in an evaluation on real-world translations.
|
cs/0008036
|
Probabilistic Constraint Logic Programming. Formal Foundations of
Quantitative and Statistical Inference in Constraint-Based Natural Language
Processing
|
cs.CL
|
In this thesis, we present two approaches to a rigorous mathematical and
algorithmic foundation of quantitative and statistical inference in
constraint-based natural language processing. The first approach, called
quantitative constraint logic programming, is conceptualized in a clear logical
framework, and presents a sound and complete system of quantitative inference
for definite clauses annotated with subjective weights. This approach combines
a rigorous formal semantics for quantitative inference based on subjective
weights with efficient weight-based pruning for constraint-based systems. The
second approach, called probabilistic constraint logic programming, introduces
a log-linear probability distribution on the proof trees of a constraint logic
program and an algorithm for statistical inference of the parameters and
properties of such probability models from incomplete, i.e., unparsed data. The
possibility of defining arbitrary properties of proof trees as properties of
the log-linear probability model and efficiently estimating appropriate
parameter values for them permits the probabilistic modeling of arbitrary
context-dependencies in constraint logic programs. The usefulness of these
ideas is evaluated empirically in a small-scale experiment on finding the
correct parses of a constraint-based grammar. In addition, we address the
problem of computational intractability of the calculation of expectations in
the inference task and present various techniques to approximately solve this
task. Moreover, we present an approximate heuristic technique for searching for
the most probable analysis in probabilistic constraint logic programs.
|
cs/0009001
|
Complexity analysis for algorithmically simple strings
|
cs.LG
|
Given a reference computer, Kolmogorov complexity is a well-defined function
on all binary strings. In the standard approach, however, only the asymptotic
properties of such functions are considered because they do not depend on the
reference computer. We argue that this approach can be more useful if it is
refined to include an important practical case of simple binary strings.
Kolmogorov complexity calculus may be developed for this case if we restrict
the class of available reference computers. The interesting problem is to
define a class of computers which is restricted in a {\it natural} way modeling
the real-life situation where only a limited class of computers is physically
available to us. We give an example of what such a natural restriction might
look like mathematically, and show that under such restrictions some error
terms, even logarithmic in complexity, can disappear from the standard
complexity calculus.
Keywords: Kolmogorov complexity; Algorithmic information theory.
|
cs/0009003
|
Automatic Extraction of Subcategorization Frames for Czech
|
cs.CL
|
We present some novel machine learning techniques for the identification of
subcategorization information for verbs in Czech. We compare three different
statistical techniques applied to this problem. We show how the learning
algorithm can be used to discover previously unknown subcategorization frames
from the Czech Prague Dependency Treebank. The algorithm can then be used to
label dependents of a verb in the Czech treebank as either arguments or
adjuncts. Using our techniques, we are able to achieve 88% precision on unseen
parsed text.
|
cs/0009005
|
Fast Approximation of Centrality
|
cs.DS cond-mat.dis-nn cs.SI
|
Social studies researchers use graphs to model group activities in social
networks. An important property in this context is the centrality of a vertex:
the inverse of the average distance to each other vertex. We describe a
randomized approximation algorithm for centrality in weighted graphs. For
graphs exhibiting the small world phenomenon, our method estimates the
centrality of all vertices with high probability within a (1+epsilon) factor in
near-linear time.
|
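The randomized scheme this abstract describes can be sketched as follows: sample k source vertices, run single-source shortest paths from each, and scale the sampled distance sums to estimate each vertex's average distance. This is an illustrative reconstruction under simplifying assumptions (unweighted, connected graph; the function names and the exact scaling are mine, not the paper's):

```python
import random
from collections import deque

def bfs_distances(adj, src):
    """Unweighted single-source shortest-path distances via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def approx_centrality(adj, k, seed=0):
    """Estimate closeness centrality (inverse average distance) for every
    vertex from k sampled SSSP runs.  Assumes a connected graph."""
    rng = random.Random(seed)
    nodes = list(adj)
    n = len(nodes)
    total = {v: 0 for v in nodes}
    for src in rng.sample(nodes, k):
        d = bfs_distances(adj, src)
        for v in nodes:
            total[v] += d[v]
    # scale the k sampled distances up to the full n-1 terms, then invert
    return {v: (k * (n - 1)) / (n * total[v]) if total[v] else 0.0
            for v in nodes}

# tiny example: a path graph 0-1-2-3; the middle vertices are most central
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
c = approx_centrality(adj, k=4)   # k = n makes the estimate exact here
```

With k = n the estimate coincides with exact closeness; the paper's point is that for small-world graphs a much smaller k already gives a (1+epsilon) approximation with high probability.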
cs/0009007
|
Robust Classification for Imprecise Environments
|
cs.LG
|
In real-world environments it usually is difficult to specify target
operating conditions precisely, for example, target misclassification costs.
This uncertainty makes building robust classification systems problematic. We
show that it is possible to build a hybrid classifier that will perform at
least as well as the best available classifier for any target conditions. In
some cases, the performance of the hybrid actually can surpass that of the best
known classifier. This robust performance extends across a wide variety of
comparison frameworks, including the optimization of metrics such as accuracy,
expected cost, lift, precision, recall, and workforce utilization. The hybrid
also is efficient to build, to store, and to update. The hybrid is based on a
method for the comparison of classifier performance that is robust to imprecise
class distributions and misclassification costs. The ROC convex hull (ROCCH)
method combines techniques from ROC analysis, decision analysis and
computational geometry, and adapts them to the particulars of analyzing learned
classifiers. The method is efficient and incremental, minimizes the management
of classifier performance data, and allows for clear visual comparisons and
sensitivity analyses. Finally, we point to empirical evidence that a robust
hybrid classifier indeed is needed for many real-world problems.
|
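The geometric core of the ROCCH method is the convex hull of classifier points in ROC space: any classifier whose (false-positive rate, true-positive rate) point falls below the hull is dominated for every possible class distribution and cost matrix. A minimal sketch of that hull computation (my own illustration, not the paper's code):

```python
def roc_convex_hull(points):
    """Upper convex hull of ROC points (FP rate, TP rate), always anchored
    at (0,0) and (1,1).  Classifiers below the hull can never be optimal
    for any target conditions -- the core observation behind ROCCH."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # keep only right turns so the chain stays concave from above
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            if cross >= 0:   # hull[-1] lies on or below the new chord: drop it
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# the classifier at (0.4, 0.5) sits under the chord to (1,1) and is dominated
hull = roc_convex_hull([(0.1, 0.6), (0.4, 0.5), (0.3, 0.8)])
```

The hybrid classifier described in the abstract then interpolates between the hull's vertices (or randomizes over adjacent ones) to match whatever operating point the deployment conditions demand.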
cs/0009008
|
Introduction to the CoNLL-2000 Shared Task: Chunking
|
cs.CL
|
We describe the CoNLL-2000 shared task: dividing text into syntactically
related non-overlapping groups of words, so-called text chunking. We give
background information on the data sets, present a general overview of the
systems that have taken part in the shared task and briefly discuss their
performance.
|
cs/0009009
|
Learning to Filter Spam E-Mail: A Comparison of a Naive Bayesian and a
Memory-Based Approach
|
cs.CL cs.IR cs.LG
|
We investigate the performance of two machine learning algorithms in the
context of anti-spam filtering. The increasing volume of unsolicited bulk
e-mail (spam) has generated a need for reliable anti-spam filters. Filters of
this type have so far been based mostly on keyword patterns that are
constructed by hand and perform poorly. The Naive Bayesian classifier has
recently been suggested as an effective method to automatically construct
anti-spam filters with superior performance. We investigate thoroughly the
performance of the Naive Bayesian filter on a publicly available corpus,
contributing towards standard benchmarks. At the same time, we compare the
performance of the Naive Bayesian filter to an alternative memory-based
learning approach, after introducing suitable cost-sensitive evaluation
measures. Both methods achieve very accurate spam filtering, clearly
outperforming the keyword-based filter of a widely used e-mail reader.
|
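The Naive Bayesian filter family evaluated in this abstract can be sketched in a few lines: multinomial Naive Bayes with Laplace smoothing over word tokens. This is a minimal illustration only; the paper's corpus, feature selection, and cost-sensitive thresholds are all simplified away, and the toy training data is invented:

```python
import math
from collections import Counter

def train_nb(docs):
    """Fit multinomial Naive Bayes on (tokens, label) pairs,
    label in {"spam", "ham"}."""
    counts = {"spam": Counter(), "ham": Counter()}
    ndocs = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        ndocs[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, ndocs, vocab

def classify(tokens, counts, ndocs, vocab):
    total = sum(ndocs.values())
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(ndocs[label] / total)          # class prior
        denom = sum(counts[label].values()) + len(vocab)
        for t in tokens:
            if t in vocab:                              # ignore unseen words
                score += math.log((counts[label][t] + 1) / denom)  # Laplace
        scores[label] = score
    return max(scores, key=scores.get)

train = [(["free", "money", "now"], "spam"),
         (["cheap", "money", "offer"], "spam"),
         (["meeting", "tomorrow", "agenda"], "ham"),
         (["project", "meeting", "notes"], "ham")]
model = train_nb(train)
label = classify(["free", "offer"], *model)   # → "spam"
```

A cost-sensitive variant, as the abstract suggests, would compare the posterior ratio against a threshold derived from the misclassification costs rather than taking a plain argmax.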
cs/0009011
|
Anaphora Resolution in Japanese Sentences Using Surface Expressions and
Examples
|
cs.CL
|
Anaphora resolution is one of the major problems in natural language
processing. It is also one of the important tasks in machine translation and
man/machine dialogue. We solve the problem by using surface expressions and
examples. Surface expressions are the words in sentences which provide clues
for anaphora resolution. Examples are linguistic data which are actually used
in conversations and texts. The method using surface expressions and examples
is a practical method. This thesis handles almost all kinds of anaphora: (i)
the referential property and number of a noun phrase; (ii) noun phrase direct
anaphora; (iii) noun phrase indirect anaphora; (iv) pronoun anaphora; and (v)
verb phrase ellipsis.
|
cs/0009012
|
Modeling Ambiguity in a Multi-Agent System
|
cs.CL cs.AI cs.MA
|
This paper investigates the formal pragmatics of ambiguous expressions by
modeling ambiguity in a multi-agent system. Such a framework allows us to give
a more refined notion of the kind of information that is conveyed by ambiguous
expressions. We analyze how ambiguity affects the knowledge of the dialog
participants and, especially, what they know about each other after an
ambiguous sentence has been uttered. The agents communicate with each other by
means of a TELL-function, whose application is constrained by an implementation
of some of Grice's maxims. The information states of the multi-agent system
itself are represented as Kripke structures, and TELL is an update function on
those structures. This framework enables us to distinguish between the
information conveyed by ambiguous sentences vs. the information conveyed by
disjunctions, and between semantic ambiguity vs. perceived ambiguity.
|
cs/0009014
|
Combining Linguistic and Spatial Information for Document Analysis
|
cs.CL cs.DL
|
We present a framework to analyze color documents of complex layout, making
no assumptions about the layout. Our framework combines in a
content-driven bottom-up approach two different sources of information: textual
and spatial. To analyze the text, shallow natural language processing tools,
such as taggers and partial parsers, are used. To infer relations of the
logical layout we resort to a qualitative spatial calculus closely related to
Allen's calculus. We evaluate the system against documents from a color journal
and present the results of extracting the reading order from the journal's
pages. In this case, our analysis is successful as it extracts the intended
reading order from the document.
|
cs/0009015
|
A Tableaux Calculus for Ambiguous Quantification
|
cs.CL
|
Coping with ambiguity has recently received a lot of attention in natural
language processing. Most work focuses on the semantic representation of
ambiguous expressions. In this paper we complement this work in two ways.
First, we provide an entailment relation for a language with ambiguous
expressions. Second, we give a sound and complete tableaux calculus for
reasoning with statements involving ambiguous quantification. The calculus
interleaves partial disambiguation steps with steps in a traditional deductive
process, so as to minimize and postpone branching in the proof process, and
thereby increasing its efficiency.
|
cs/0009016
|
Contextual Inference in Computational Semantics
|
cs.CL cs.AI
|
In this paper, an application of automated theorem proving techniques to
computational semantics is considered. In order to compute the presuppositions
of a natural language discourse, several inference tasks arise. Instead of
treating these inferences independently of each other, we show how integrating
techniques from formal approaches to context into deduction can help to compute
presuppositions more efficiently. Contexts are represented as Discourse
Representation Structures and the way they are nested is made explicit. In
addition, a tableau calculus is presented which keeps track of contextual
information, thereby avoiding the redundant inference steps carried out in
approaches that neglect explicit nesting of contexts.
|
cs/0009017
|
A Tableau Calculus for Pronoun Resolution
|
cs.CL cs.AI
|
We present a tableau calculus for reasoning in fragments of natural language.
We focus on the problem of pronoun resolution and the way in which it
complicates automated theorem proving for natural language processing. A method
for explicitly manipulating contextual information during deduction is
proposed, where pronouns are resolved against this context during deduction. As
a result, pronoun resolution and deduction can be interleaved in such a way
that pronouns are only resolved if this is licensed by a deduction rule; this
helps us to avoid the combinatorial complexity of total pronoun disambiguation.
|
cs/0009018
|
A Resolution Calculus for Dynamic Semantics
|
cs.CL cs.AI
|
This paper applies resolution theorem proving to natural language semantics.
The aim is to circumvent the computational complexity triggered by natural
language ambiguities like pronoun binding, by interleaving pronoun binding with
resolution deduction. Therefore, disambiguation is only applied to expressions
that actually occur during derivations.
|
cs/0009019
|
Computing Presuppositions by Contextual Reasoning
|
cs.AI cs.CL
|
This paper describes how automated deduction methods for natural language
processing can be applied more efficiently by encoding context in a more
elaborate way. Our work is based on formal approaches to context, and we
provide a tableau calculus for contextual reasoning. This is explained by
considering an example from the problem area of presupposition projection.
|
cs/0009022
|
A Comparison between Supervised Learning Algorithms for Word Sense
Disambiguation
|
cs.CL cs.AI
|
This paper describes a set of comparative experiments, including cross-corpus
evaluation, between five alternative algorithms for supervised Word Sense
Disambiguation (WSD), namely Naive Bayes, Exemplar-based learning, SNoW,
Decision Lists, and Boosting. Two main conclusions can be drawn: 1) The
LazyBoosting algorithm outperforms the other four state-of-the-art algorithms
in terms of accuracy and ability to tune to new domains; 2) The domain
dependence of WSD systems seems very strong and suggests that some kind of
adaptation or tuning is required for cross-corpus application.
|
cs/0009025
|
Parsing with the Shortest Derivation
|
cs.CL
|
Common wisdom has it that the bias of stochastic grammars in favor of shorter
derivations of a sentence is harmful and should be redressed. We show that the
common wisdom is wrong for stochastic grammars that use elementary trees
instead of context-free rules, such as Stochastic Tree-Substitution Grammars
used by Data-Oriented Parsing models. For such grammars a non-probabilistic
metric based on the shortest derivation outperforms a probabilistic metric on
the ATIS and OVIS corpora, while it obtains very competitive results on the
Wall Street Journal corpus. This paper also contains the first published
experiments with DOP on the Wall Street Journal.
|
cs/0009026
|
An improved parser for data-oriented lexical-functional analysis
|
cs.CL
|
We present an LFG-DOP parser which uses fragments from LFG-annotated
sentences to parse new sentences. Experiments with the Verbmobil and Homecentre
corpora show that (1) Viterbi n-best search performs about 100 times faster
than Monte Carlo search while both achieve the same accuracy; (2) the DOP
hypothesis which states that parse accuracy increases with increasing fragment
size is confirmed for LFG-DOP; (3) LFG-DOP's relative frequency estimator
performs worse than a discounted frequency estimator; and (4) LFG-DOP
significantly outperforms Tree-DOP when evaluated on tree structures only.
|
cs/0009027
|
A Classification Approach to Word Prediction
|
cs.CL cs.AI cs.LG
|
The eventual goal of a language model is to accurately predict the value of a
missing word given its context. We present an approach to word prediction that
is based on learning a representation for each word as a function of words and
linguistic predicates in its context. This approach raises a few new questions
that we address. First, in order to learn good word representations it is
necessary to use an expressive representation of the context. We present a way
that uses external knowledge to generate expressive context representations,
along with a learning method capable of handling the large number of features
generated this way that can, potentially, contribute to each prediction.
Second, since the number of words ``competing'' for each prediction is large,
there is a need to ``focus the attention'' on a smaller subset of these. We
exhibit the contribution of a ``focus of attention'' mechanism to the
performance of the word predictor. Finally, we describe a large scale
experimental study in which the approach presented is shown to yield
significant improvements in word prediction tasks.
|
cs/0010001
|
Design of an Electro-Hydraulic System Using Neuro-Fuzzy Techniques
|
cs.RO cs.LG
|
Increasing demands in performance and quality make drive systems fundamental
parts in the progressive automation of industrial processes. Their conventional
models become inappropriate and have limited scope if one requires a precise
and fast performance. So, it is important to incorporate learning capabilities
into drive systems in such a way that they improve their accuracy in realtime,
becoming more autonomous agents with some degree of intelligence. To
investigate this challenge, this chapter presents the development of a learning
control system that uses neuro-fuzzy techniques in the design of a tracking
controller for an experimental electro-hydraulic actuator. We begin the chapter
by presenting the neuro-fuzzy modeling process of the actuator. This part
surveys the learning algorithm, describes the laboratorial system, and presents
the modeling steps, such as the choice of representative actuator variables, the
acquisition of training and testing data sets, and the acquisition of the
neuro-fuzzy inverse-model of the actuator. In the second part of the chapter,
we use the extracted neuro-fuzzy model and its learning capabilities to design
the actuator position controller based on the feedback-error-learning
technique. Through a set of experimental results, we show the generalization
properties of the controller, its learning capability in actualizing in
realtime the initial neuro-fuzzy inverse-model, and its compensation action
improving the electro-hydraulic tracking performance.
|
cs/0010002
|
Noise Effects in Fuzzy Modelling Systems
|
cs.NE cs.LG
|
Noise is a source of ambiguity for fuzzy systems. Although an important
aspect, the effects of noise in fuzzy modeling have been little investigated.
This paper presents a set of tests using three well-known fuzzy modeling
algorithms. These evaluate perturbations in the extracted rule-bases caused by
noise polluting the learning data, and the corresponding deformations in each
learned functional relation. We present results to show: 1) how these fuzzy
modeling systems deal with noise; 2) how the established fuzzy model structure
influences the noise sensitivity of each algorithm; and 3) which
characteristics of the learning algorithms are relevant to noise attenuation.
|
cs/0010003
|
Torque Ripple Minimization in a Switched Reluctance Drive by Neuro-Fuzzy
Compensation
|
cs.RO cs.LG
|
A simple power electronic drive circuit and converter fault tolerance are
specific advantages of SRM drives, but excessive torque ripple has limited
their use to special applications. It is well known that controlling the current
shape adequately can minimize the torque ripple. This paper presents a new
method for shaping the motor currents to minimize the torque ripple, using a
neuro-fuzzy compensator. In the proposed method, a compensating signal is added
to the output of a PI controller, in a current-regulated speed control loop.
Numerical results are presented in this paper, with an analysis of the effects
of changing the form of the membership function of the neuro-fuzzy compensator.
|
cs/0010004
|
A Fuzzy Relational Identification Algorithm and Its Application to
Predict The Behaviour of a Motor Drive System
|
cs.RO cs.LG
|
Fuzzy relational identification builds a relational model describing a
system's behaviour by a nonlinear mapping between its variables. In this paper, we
propose a new fuzzy relational algorithm based on simplified max-min relational
equation. The algorithm presents an adaptation method applied to gravity-center
of each fuzzy set based on error integral value between measured and predicted
system output, and uses the concept of time-variant universe of discourses. The
identification algorithm also includes a method to attenuate noise influence in
extracted system relational model using a fuzzy filtering mechanism. The
algorithm is applied to one-step forward prediction of a simulated and
experimental motor drive system. The identified model has its input-output
variables (stator-reference current and motor speed signal) treated as fuzzy
sets, whereas the relations existing between them are described by means of a
matrix R defining the relational model extracted by the algorithm. The results
show the algorithm's good potential for predicting the behaviour of the system
and for attenuating, through the fuzzy filtering method, possible noise
distortions in the relational model.
|
cs/0010006
|
Applications of Data Mining to Electronic Commerce
|
cs.LG cs.DB
|
Electronic commerce is emerging as the killer domain for data mining
technology.
The following are five desiderata for success. Seldom are they all present
in one data mining application.
1. Data with rich descriptions. For example, wide customer records with many
potentially useful fields allow data mining algorithms to search beyond obvious
correlations.
2. A large volume of data. The large model spaces corresponding to rich data
demand many training instances to build reliable models.
3. Controlled and reliable data collection. Manual data entry and integration
from legacy systems both are notoriously problematic; fully automated
collection is considerably better.
4. The ability to evaluate results. Substantial, demonstrable return on
investment can be very convincing.
5. Ease of integration with existing processes. Even if pilot studies show
potential benefit, deploying automated solutions to previously manual processes
is rife with pitfalls. Building a system to take advantage of the mined
knowledge can be a substantial undertaking. Furthermore, one often must deal
with social and political issues involved in the automation of a previously
manual business process.
|
cs/0010010
|
Fault Detection using Immune-Based Systems and Formal Language
Algorithms
|
cs.CE cs.LG
|
This paper describes two approaches for fault detection: an immune-based
mechanism and a formal language algorithm. The first is based on the ability
of immune systems to distinguish any foreign cell from the body's own cells.
The formal language approach treats the system as a linguistic source
capable of generating a certain language, characterised by a grammar. Each
algorithm has particular characteristics, which are analysed in the paper,
namely the cases in which each can be used to advantage. To test their
practicality, both approaches were applied on the problem of fault detection in
an induction motor.
|
cs/0010012
|
Finding consensus in speech recognition: word error minimization and
other applications of confusion networks
|
cs.CL
|
We describe a new framework for distilling information from word lattices to
improve the accuracy of speech recognition and obtain a more perspicuous
representation of a set of alternative hypotheses. In the standard MAP decoding
approach the recognizer outputs the string of words corresponding to the path
with the highest posterior probability given the acoustics and a language
model. However, even given optimal models, the MAP decoder does not necessarily
minimize the commonly used performance metric, word error rate (WER). We
describe a method for explicitly minimizing WER by extracting word hypotheses
with the highest posterior probabilities from word lattices. We change the
standard problem formulation by replacing global search over a large set of
sentence hypotheses with local search over a small set of word candidates. In
addition to improving the accuracy of the recognizer, our method produces a new
representation of the set of candidate hypotheses that specifies the sequence
of word-level confusions in a compact lattice format. We study the properties
of confusion networks and examine their use for other tasks, such as lattice
compression, word spotting, confidence annotation, and reevaluation of
recognition hypotheses using higher-level knowledge sources.
|
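Once a word lattice has been aligned into a confusion network, the word-error-minimizing decoding step the abstract describes reduces to picking the highest-posterior word in each confusion bin. A minimal sketch of that final step (the lattice-to-network alignment, which is the hard part, is assumed done; the data and the `-` deletion marker are illustrative):

```python
def consensus_decode(confusion_net):
    """Consensus decoding over a confusion network: in each bin, pick the
    word with the highest posterior; '-' marks a deletion hypothesis.
    Local per-bin decisions replace global search over sentence hypotheses."""
    words = []
    for bin_ in confusion_net:
        best = max(bin_, key=bin_.get)
        if best != "-":            # skip deletion hypotheses
            words.append(best)
    return words

# each dict is one confusion bin: competing words with posterior probabilities
net = [{"the": 0.7, "a": 0.3},
       {"cat": 0.4, "hat": 0.35, "-": 0.25},
       {"sat": 0.9, "-": 0.1}]
print(consensus_decode(net))   # → ['the', 'cat', 'sat']
```

Note that the consensus string ("the cat sat") need not correspond to any single complete path scored by the recognizer, which is exactly how it can beat MAP decoding on word error rate.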
cs/0010013
|
A Public-key based Information Management Model for Mobile Agents
|
cs.CR cs.DC cs.IR cs.NI
|
Mobile code based computing requires development of protection schemes that
allow digital signature and encryption of data collected by the agents in
untrusted hosts. These algorithms cannot rely on carrying encryption keys,
since these keys could be stolen or used to counterfeit data by hostile hosts
and agents. As a consequence, both information and keys must be protected in a way
that only authorized hosts, that is the host that provides information and the
server that has sent the mobile agent, could modify (by changing or removing)
retrieved data. The data management model proposed in this work allows the
information collected by the agents to be protected against handling by other
hosts in the information network. It has been done by using standard public-key
cryptography modified to support protection of data in distributed environments
without requiring an interactive protocol with the host that has dropped the
agent. Its significance rests on the fact that it is the first model that
supports full-featured protection of mobile agents, allowing remote hosts to
change their own information if required before the agent returns to its
originating server.
|
cs/0010014
|
On a cepstrum-based speech detector robust to white noise
|
cs.CL cs.CV cs.HC
|
We study effects of additive white noise on the cepstral representation of
speech signals. Distribution of each individual cepstrum coefficient of speech
is shown to depend strongly on noise and to overlap significantly with the
cepstrum distribution of noise. Based on these studies, we suggest a scalar
quantity, V, equal to the sum of weighted cepstral coefficients, which is able
to classify frames containing speech against noise-like frames. The
distributions of V for speech and noise frames are reasonably well separated
above SNR = 5 dB, demonstrating the feasibility of a robust speech detector
based on V.
|
cs/0010020
|
Using existing systems to supplement small amounts of annotated
grammatical relations training data
|
cs.CL
|
Grammatical relationships (GRs) form an important level of natural language
processing, but different sets of GRs are useful for different purposes.
Therefore, one may often only have time to obtain a small training corpus with
the desired GR annotations. To boost the performance from using such a small
training corpus on a transformation rule learner, we use existing systems that
find related types of annotations.
|
cs/0010021
|
Towards Understanding the Predictability of Stock Markets from the
Perspective of Computational Complexity
|
cs.CE cs.CC
|
This paper initiates a study into the century-old issue of market
predictability from the perspective of computational complexity. We develop a
simple agent-based model for a stock market where the agents are traders
equipped with simple trading strategies, and their trades together determine
the stock prices. Computer simulations show that a basic case of this model is
already capable of generating price graphs which are visually similar to the
recent price movements of high tech stocks. In the general model, we prove that
if there are a large number of traders but they employ a relatively small
number of strategies, then there is a polynomial-time algorithm for predicting
future price movements with high accuracy. On the other hand, if the number of
strategies is large, market prediction becomes complete in two new
computational complexity classes CPP and BCPP, which are between P^NP[O(log n)]
and PP. These computational completeness results open up a novel possibility
that the price graph of an actual stock could be sufficiently deterministic for
various prediction goals but appear random to all polynomial-time prediction
algorithms.
|
cs/0010022
|
Noise-Tolerant Learning, the Parity Problem, and the Statistical Query
Model
|
cs.LG cs.AI cs.DS
|
We describe a slightly sub-exponential time algorithm for learning parity
functions in the presence of random classification noise. This results in a
polynomial-time algorithm for the case of parity functions that depend on only
the first O(log n log log n) bits of input. This is the first known instance of
an efficient noise-tolerant algorithm for a concept class that is provably not
learnable in the Statistical Query model of Kearns. Thus, we demonstrate that
the set of problems learnable in the statistical query model is a strict subset
of those problems learnable in the presence of noise in the PAC model.
In coding-theory terms, what we give is a poly(n)-time algorithm for decoding
linear k by n codes in the presence of random noise for the case of k = c log n
loglog n for some c > 0. (The case of k = O(log n) is trivial since one can
just individually check each of the 2^k possible messages and choose the one
that yields the closest codeword.)
A natural extension of the statistical query model is to allow queries about
statistical properties that involve t-tuples of examples (as opposed to single
examples). The second result of this paper is to show that any class of
functions learnable (strongly or weakly) with t-wise queries for t = O(log n)
is also weakly learnable with standard unary queries. Hence this natural
extension to the statistical query model does not increase the set of weakly
learnable functions.
|
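The "trivial case" the abstract mentions in passing, k = O(log n), can be made concrete: with only 2^k possible messages, one can enumerate them all and return the one whose codeword is nearest in Hamming distance to the noisy received word. A sketch of exactly that brute-force decoder (the toy generator matrix is mine; the paper's actual contribution is the harder k = c log n loglog n case):

```python
from itertools import product

def encode(msg, G):
    """Codeword of a k-bit message under a k x n binary generator matrix G."""
    n = len(G[0])
    return tuple(sum(msg[i] * G[i][j] for i in range(len(G))) % 2
                 for j in range(n))

def brute_force_decode(received, G):
    """Try all 2^k messages and return the one whose codeword is nearest
    (in Hamming distance) to the noisy received word.  Polynomial time
    whenever k = O(log n)."""
    k = len(G)
    return min(product([0, 1], repeat=k),
               key=lambda m: sum(a != b
                                 for a, b in zip(encode(m, G), received)))

G = [(1, 0, 1, 1, 0),   # toy 2 x 5 generator matrix, minimum distance 3
     (0, 1, 0, 1, 1)]
sent = encode((1, 1), G)            # (1, 1, 1, 0, 1)
noisy = (1, 1, 1, 1, 1)             # one bit flipped by the channel
msg = brute_force_decode(noisy, G)  # → (1, 1)
```

Since the code above has minimum distance 3, any single-bit error is corrected; the sub-exponential algorithm of the paper trades this exhaustive search for clever combination of noisy parity samples.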
cs/0010023
|
Oracle Complexity and Nontransitivity in Pattern Recognition
|
cs.CC cs.AI cs.CV cs.DS
|
Different mathematical models of recognition processes are known. In the
present paper we consider a pattern recognition algorithm as an oracle
computation on a Turing machine. Such a point of view seems to be useful in
pattern recognition as well as in recursion theory. Use of recursion theory in
pattern recognition shows connection between a recognition algorithm comparison
problem and complexity problems of oracle computation. That is because in many
cases we can take into account only the number of sign computations or, in
other words, the volume of oracle information needed. Therefore, the problem of
recognition algorithm preference can be formulated as a complexity optimization
problem of oracle computation. Furthermore, introducing a certain "natural"
preference relation on a set of recognizing algorithms, we discover it to be
nontransitive. This relates to the well-known nontransitivity paradox in
probability theory.
Keywords: Pattern Recognition, Recursion Theory, Nontransitivity, Preference
Relation
|
cs/0010024
|
Exploring automatic word sense disambiguation with decision lists and
the Web
|
cs.CL
|
The most effective paradigm for word sense disambiguation, supervised
learning, seems to be stuck because of the knowledge acquisition bottleneck. In
this paper we present an in-depth study of the performance of decision lists on
two publicly available corpora and an additional corpus automatically acquired
from the Web, using the fine-grained, highly polysemous senses in WordNet.
Decision lists are shown to be a versatile state-of-the-art technique. The
experiments reveal, among other facts, that SemCor can be an acceptable (0.7
precision for polysemous words) starting point for an all-words system. The
results on the DSO corpus show that for some highly polysemous words 0.7
precision seems to be the current state-of-the-art limit. On the other hand,
independently constructed hand-tagged corpora are not mutually useful, and a
corpus automatically acquired from the Web is shown to fail.
|
cs/0010025
|
Extraction of semantic relations from a Basque monolingual dictionary
using Constraint Grammar
|
cs.CL
|
This paper deals with the exploitation of dictionaries for the semi-automatic
construction of lexicons and lexical knowledge bases. The final goal of our
research is to enrich the Basque Lexical Database with semantic information
such as senses, definitions, semantic relations, etc., extracted from a Basque
monolingual dictionary. The work presented here focuses on the extraction of
the semantic relations that best characterise the headword, that is, those of
synonymy, antonymy, hypernymy, and other relations marked by specific relators
and derivation. All nominal, verbal and adjectival entries were treated. Basque
uses morphological inflection to mark case, and therefore semantic relations
have to be inferred from suffixes rather than from prepositions. Our approach
combines a morphological analyser and surface syntax parsing (based on
Constraint Grammar), and has proven very successful for highly inflected
languages such as Basque. Both the effort to write the rules and the actual
processing time of the dictionary have been very low. At present we have
extracted 42,533 relations, leaving only 2,943 (9%) definitions without any
extracted relation. The error rate is extremely low, as only 2.2% of the
extracted relations are wrong.
|
cs/0010026
|
Enriching very large ontologies using the WWW
|
cs.CL
|
This paper explores the possibility of exploiting text on the World Wide Web
in order to enrich the concepts in existing ontologies. First, a method to
retrieve documents from the WWW related to a concept is described. These
document collections are used 1) to construct topic signatures (lists of
topically related words) for each concept in WordNet, and 2) to build
hierarchical clusters of the concepts (the word senses) that lexicalize a given
word. The overall goal is to overcome two shortcomings of WordNet: the lack of
topical links among concepts, and the proliferation of senses. Topic signatures
are validated on a word sense disambiguation task with good results, which are
improved when the hierarchical clusters are used.
|
cs/0010027
|
One Sense per Collocation and Genre/Topic Variations
|
cs.CL
|
This paper revisits the one sense per collocation hypothesis using
fine-grained sense distinctions and two different corpora. We show that the
hypothesis is weaker for fine-grained sense distinctions (70% vs. 99% reported
earlier on 2-way ambiguities). We also show that one sense per collocation does
hold across corpora, but that collocations vary from one corpus to the other,
following genre and topic variations. This explains the low results when
performing word sense disambiguation across corpora. In fact, we demonstrate
that when two independent corpora share a related genre/topic, the word sense
disambiguation results are better. Future work on word sense disambiguation
will have to take genre and topic into account as important parameters in its
models.
|
cs/0010030
|
Reduction of Intermediate Alphabets in Finite-State Transducer Cascades
|
cs.CL
|
This article describes an algorithm for reducing the intermediate alphabets
in cascades of finite-state transducers (FSTs). Although the method modifies
the component FSTs, there is no change in the overall relation described by the
whole cascade. No additional information or special algorithm that could slow
down the processing of input is required at runtime. Two examples from
Natural Language Processing are used to illustrate the effect of the algorithm
on the sizes of the FSTs and their alphabets. With some FSTs the number of arcs
and symbols shrank considerably.
|
cs/0010031
|
Opportunity Cost Algorithms for Combinatorial Auctions
|
cs.CE cs.DS
|
Two general algorithms based on opportunity costs are given for approximating
a revenue-maximizing set of bids an auctioneer should accept, in a
combinatorial auction in which each bidder offers a price for some subset of
the available goods and the auctioneer can only accept non-intersecting bids.
Since this problem is difficult even to approximate in general, the algorithms
are most useful when the bids are restricted to be connected node subsets of an
underlying object graph that represents which objects are relevant to each
other. The approximation ratios of the algorithms depend on structural
properties of this graph and are small constants for many interesting families
of object graphs. The running times of the algorithms are linear in the size of
the bid graph, which describes the conflicts between bids. Extensions of the
algorithms allow for efficient processing of additional constraints, such as
budget constraints that associate bids with particular bidders and limit how
many bids from a particular bidder can be accepted.
|
cs/0010032
|
Super Logic Programs
|
cs.AI cs.LO
|
The Autoepistemic Logic of Knowledge and Belief (AELB) is a powerful
nonmonotonic formalism introduced by Teodor Przymusinski in 1994. In this
paper, we specialize it to a class of theories called `super logic programs'.
We argue that these programs form a natural generalization of standard logic
programs. In particular, they allow disjunctions and default negation of
arbitrary positive objective formulas.
Our main results are two new and powerful characterizations of the static
semantics of these programs, one syntactic, and one model-theoretic. The
syntactic fixed point characterization is much simpler than the fixed point
construction of the static semantics for arbitrary AELB theories. The
model-theoretic characterization via Kripke models allows one to construct
finite representations of the inherently infinite static expansions.
Both characterizations can be used as the basis of algorithms for query
answering under the static semantics. We describe a query-answering interpreter
for super programs which we developed based on the model-theoretic
characterization and which is available on the web.
|
cs/0010033
|
A Formal Framework for Linguistic Annotation (revised version)
|
cs.CL cs.DB cs.DS
|
`Linguistic annotation' covers any descriptive or analytic notations applied
to raw language data. The basic data may be in the form of time functions -
audio, video and/or physiological recordings - or it may be textual. The added
notations may include transcriptions of all sorts (from phonetic features to
discourse structures), part-of-speech and sense tagging, syntactic analysis,
`named entity' identification, co-reference annotation, and so on. While there
are several ongoing efforts to provide formats and tools for such annotations
and to publish annotated linguistic databases, the lack of widely accepted
standards is becoming a critical problem. Proposed standards, to the extent
they exist, have focused on file formats. This paper focuses instead on the
logical structure of linguistic annotations. We survey a wide variety of
existing annotation formats and demonstrate a common conceptual core, the
annotation graph. This provides a formal framework for constructing,
maintaining and searching linguistic annotations, while remaining consistent
with many alternative data structures and file formats.
|
cs/0010037
|
On the relationship between fuzzy logic and four-valued relevance logic
|
cs.AI
|
In fuzzy propositional logic, a partial truth value in [0,1] is assigned to
each proposition. It is well known that under certain circumstances, fuzzy
logic
collapses to classical logic. In this paper, we will show that under dual
conditions, fuzzy logic collapses to four-valued (relevance) logic, where
propositions have truth-value true, false, unknown, or contradiction. As a
consequence, fuzzy entailment may be considered as ``in between'' four-valued
(relevance) entailment and classical entailment.
|
cs/0011001
|
Utilizing the World Wide Web as an Encyclopedia: Extracting Term
Descriptions from Semi-Structured Texts
|
cs.CL
|
In this paper, we propose a method to extract descriptions of technical terms
from Web pages in order to utilize the World Wide Web as an encyclopedia. We
use linguistic patterns and HTML text structures to extract text fragments
containing term descriptions. We also use a language model to discard
extraneous descriptions, and a clustering method to summarize resultant
descriptions. We show the effectiveness of our method by way of experiments.
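As a rough illustration of the lexical-pattern step, a single hypothetical "X is a/an Y" pattern might be applied as follows; the pattern and sample text are invented here, and the actual method additionally uses HTML structure, a language model, and clustering:

```python
import re

# One toy "TERM is a/an DESCRIPTION." pattern; a real system would
# combine many linguistic patterns with HTML structure cues.
PATTERN = re.compile(r"\b([A-Z][\w-]*) is (?:a|an) ([^.]+)\.")

def extract_descriptions(text):
    """Return {term: description} pairs matched by the pattern."""
    return {term: desc.strip() for term, desc in PATTERN.findall(text)}

page = ("Unicode is a character encoding standard. "
        "It was designed for interchange. "
        "XML is a markup language derived from SGML.")
print(extract_descriptions(page))
# {'Unicode': 'character encoding standard',
#  'XML': 'markup language derived from SGML'}
```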
|
cs/0011002
|
A Novelty-based Evaluation Method for Information Retrieval
|
cs.CL
|
In information retrieval research, precision and recall have long been used
to evaluate IR systems. However, given that a number of retrieval systems
resembling one another are already available to the public, it is valuable to
retrieve novel relevant documents, i.e., documents that cannot be retrieved by
those existing systems. In view of this problem, we propose an evaluation
method that favors systems retrieving as many novel documents as possible. We
also use our method to evaluate systems that participated in the IREX
workshop.
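One minimal way to realize such a novelty-favoring measure is to count only relevant documents that no existing system retrieves; this is a sketch of the idea, and the paper's exact weighting may differ:

```python
def novelty_precision(retrieved, relevant, baseline_pools):
    """Fraction of retrieved documents that are relevant AND not
    retrievable by any of the existing baseline systems."""
    seen = set().union(*baseline_pools) if baseline_pools else set()
    novel_hits = [d for d in retrieved if d in relevant and d not in seen]
    return len(novel_hits) / len(retrieved) if retrieved else 0.0

relevant = {"d1", "d2", "d3", "d4"}
baselines = [{"d1", "d5"}, {"d2"}]   # what existing systems find
run = ["d3", "d1", "d4", "d6"]       # our system's output
print(novelty_precision(run, relevant, baselines))  # 0.5 (d3 and d4)
```

Under ordinary precision the run above scores 0.75; the novelty-favoring score drops to 0.5 because d1, although relevant, is already retrievable by a baseline system.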
|
cs/0011003
|
Applying Machine Translation to Two-Stage Cross-Language Information
Retrieval
|
cs.CL
|
Cross-language information retrieval (CLIR), where queries and documents are
in different languages, requires translation of queries and/or documents so as
to standardize both into a common representation. For this purpose, the
use of machine translation is an effective approach. However, computational
cost is prohibitive in translating large-scale document collections. To resolve
this problem, we propose a two-stage CLIR method. First, we translate a given
query into the document language, and retrieve a limited number of foreign
documents. Second, we machine translate only those documents into the user
language, and re-rank them based on the translation result. We also show the
effectiveness of our method by way of experiments using Japanese queries and
English technical documents.
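The two-stage flow can be sketched as a pipeline; every component below (translators, retrieval, re-ranking) is a stand-in stub, not the paper's implementation:

```python
def two_stage_clir(query, docs, translate_query, translate_doc,
                   retrieve, rerank, k=100):
    """Stage 1: translate only the query and retrieve k candidates.
    Stage 2: machine-translate just those k documents and re-rank,
    so the costly document translation runs on k docs, not all."""
    foreign_query = translate_query(query)           # cheap
    candidates = retrieve(foreign_query, docs)[:k]   # limited set
    translated = [(d, translate_doc(d)) for d in candidates]
    return rerank(query, translated)

# Trivial stubs so the skeleton runs; real MT/IR components go here.
docs = ["gas turbine design", "protein folding", "turbine blade wear"]
result = two_stage_clir(
    "turbine", docs,
    translate_query=lambda q: q,
    translate_doc=lambda d: d,
    retrieve=lambda q, ds: [d for d in ds if q in d],
    rerank=lambda q, td: sorted(d for d, _ in td),
    k=2)
print(result)  # ['gas turbine design', 'turbine blade wear']
```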
|
cs/0011007
|
Tree-gram Parsing: Lexical Dependencies and Structural Relations
|
cs.CL cs.AI cs.HC
|
This paper explores the kinds of probabilistic relations that are important
in syntactic disambiguation. It proposes that two widely used kinds of
relations, lexical dependencies and structural relations, have complementary
disambiguation capabilities. It presents a new model based on structural
relations, the Tree-gram model, and reports experiments showing that structural
relations should benefit from enrichment by lexical dependencies.
|
cs/0011008
|
A Lambda-Calculus with letrec, case, constructors and non-determinism
|
cs.PL cs.AI cs.SC
|
A non-deterministic call-by-need lambda-calculus with case, constructors,
letrec and a (non-deterministic) erratic choice, based on rewriting rules, is
investigated. A standard reduction is defined as a variant
of left-most outermost reduction. The semantics is defined by contextual
equivalence of expressions instead of using $\alpha\beta(\eta)$-equivalence. It
is shown that several program transformations are correct, for example all
(deterministic) rules of the calculus, and in addition the rules for garbage
collection, removing indirections and unique copy.
This shows that the combination of a context lemma and a meta-rewriting on
reductions using complete sets of commuting (forking, resp.) diagrams is a
useful and successful method for providing a semantics of a functional
programming language and proving correctness of program transformations.
|
cs/0011011
|
Formal Properties of XML Grammars and Languages
|
cs.DM cs.CL
|
XML documents are described by a document type definition (DTD). An
XML-grammar is a formal grammar that captures the syntactic features of a DTD.
We investigate properties of this family of grammars. We show that every
XML-language basically has a unique XML-grammar. We give two characterizations
of languages generated by XML-grammars, one is set-theoretic, the other is by a
kind of saturation property. We investigate decidability problems and prove
that some properties that are undecidable for general context-free languages
become decidable for XML-languages. We also characterize those XML-grammars
that generate regular XML-languages.
|
cs/0011012
|
Causes and Explanations: A Structural-Model Approach, Part I: Causes
|
cs.AI
|
We propose a new definition of actual cause, using structural equations to
model counterfactuals. We show that the definition yields a plausible and
elegant account of causation that handles well examples which have caused
problems for other definitions and resolves major difficulties in the
traditional account.
|
cs/0011014
|
Chip-level CMP Modeling and Smart Dummy for HDP and Conformal CVD Films
|
cs.CE
|
Chip-level CMP modeling is investigated to obtain the post-CMP film profile
thickness across a die from its design layout file and a few film deposition
and CMP parameters. The work covers both HDP and conformal CVD films. The
experimental CMP results agree well with the modeled results. Different
algorithms for filling of dummy structure are compared. A smart algorithm for
dummy filling is presented, which achieves maximal pattern-density uniformity
and CMP planarity.
|
cs/0011016
|
Designing Proxies for Stock Market Indices is Computationally Hard
|
cs.CE cs.CC
|
In this paper, we study the problem of designing proxies (or portfolios) for
various stock market indices based on historical data. We use four different
methods for computing market indices, all of which are formulas used in actual
stock market analysis. For each index, we consider three criteria for designing
the proxy: the proxy must either track the market index, outperform the market
index, or perform within a margin of error of the index while maintaining a low
volatility. In eleven of the twelve cases (all combinations of four indices
with three criteria except the problem of sacrificing return for less
volatility using the price-relative index) we show that the problem is NP-hard,
and hence most likely intractable.
|
cs/0011018
|
Optimal Buy-and-Hold Strategies for Financial Markets with Bounded Daily
Returns
|
cs.CE cs.DS
|
In the context of investment analysis, we formulate an abstract online
computing problem called a planning game and develop general tools for solving
such a game. We then use the tools to investigate a practical buy-and-hold
trading problem faced by long-term investors in stocks. We obtain the unique
optimal static online algorithm for the problem and determine its exact
competitive ratio. We also compare this algorithm with the popular dollar
averaging strategy using actual market data.
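The dollar averaging benchmark mentioned above is easy to state in code; the prices and budget below are toy values, and the paper's optimal online algorithm itself is not reproduced here:

```python
def dollar_averaging(prices, budget):
    """Split the budget evenly over the trading days and buy at each
    day's price; return the total number of shares acquired."""
    per_day = budget / len(prices)
    return sum(per_day / p for p in prices)

prices = [10.0, 8.0, 12.5, 10.0]   # toy daily prices
shares = dollar_averaging(prices, 400.0)
print(round(shares, 2))  # 40.5
```

Because the same cash buys more shares when the price is low, dollar averaging guarantees an average purchase price equal to the harmonic mean of the daily prices, which is the usual argument for the strategy.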
|
cs/0011020
|
The Use of Instrumentation in Grammar Engineering
|
cs.CL
|
This paper explores the usefulness of a technique from software engineering,
code instrumentation, for the development of large-scale natural language
grammars. Information about the usage of grammar rules in test and corpus
sentences is used to improve the grammar and the testsuite, as well as to adapt
the grammar to a specific genre. Results show that less than half of a
large-coverage grammar for German is actually tested by two large testsuites,
and that 10--30% of testing time is redundant. This methodology can be seen as
a re-use of grammar-writing knowledge for testsuite compilation.
|
cs/0011023
|
Optimal Bidding Algorithms Against Cheating in Multiple-Object Auctions
|
cs.CE cs.DS
|
This paper studies some basic problems in a multiple-object auction model
using methodologies from theoretical computer science. We are especially
concerned with situations where an adversary bidder knows the bidding
algorithms of all the other bidders. In the two-bidder case, we derive an
optimal randomized bidding algorithm, by which the disadvantaged bidder can
procure at least half of the auction objects despite the adversary's a priori
knowledge of his algorithm. In the general $k$-bidder case, if the number of
objects is a multiple of $k$, an optimal randomized bidding algorithm is found.
If the $k-1$ disadvantaged bidders employ that same algorithm, each of them can
obtain at least $1/k$ of the objects regardless of the bidding algorithm the
adversary uses. These two algorithms are based on closed-form solutions to
certain multivariate probability distributions. In situations where a
closed-form solution cannot be obtained, we study a restricted class of bidding
algorithms as an approximation to desired optimal algorithms.
|
cs/0011024
|
Algorithms for Rewriting Aggregate Queries Using Views
|
cs.DB
|
Queries involving aggregation are typical in database applications. One of
the main ideas to optimize the execution of an aggregate query is to reuse
results of previously answered queries. This leads to the problem of rewriting
aggregate queries using views. Due to a lack of theory, algorithms for this
problem were rather ad-hoc. They were sound, but were not proven to be
complete.
Recently we have given syntactic characterizations for the equivalence of
aggregate queries and applied them to decide when there exist rewritings.
However, these decision procedures do not lend themselves immediately to an
implementation. In this paper, we present practical algorithms for rewriting
queries with COUNT and SUM. Our algorithms are sound. They are also
complete for important cases. Our techniques can be used to improve well-known
procedures for rewriting non-aggregate queries. These procedures can then be
adapted to obtain algorithms for rewriting queries with MIN and MAX. The
algorithms presented are a basis for realizing optimizers that rewrite queries
using views.
|
cs/0011028
|
Retrieval from Captioned Image Databases Using Natural Language
Processing
|
cs.CL cs.IR
|
It might appear that natural language processing should improve the accuracy
of information retrieval systems, by making available a more detailed analysis
of queries and documents. Although past results appear to show that this is not
so, if the focus is shifted to short phrases rather than full documents, the
situation becomes somewhat different. The ANVIL system uses a natural language
technique to obtain high accuracy retrieval of images which have been annotated
with a descriptive textual caption. The natural language techniques also allow
additional contextual information to be derived from the relation between the
query and the caption, which can help users to understand the overall
collection of retrieval results. The techniques have been successfully used in
an information retrieval system which forms both a testbed for research and the
basis of a commercial system.
|
cs/0011030
|
Logic Programming Approaches for Representing and Solving Constraint
Satisfaction Problems: A Comparison
|
cs.AI
|
Many logic programming based approaches can be used to describe and solve
combinatorial search problems. On the one hand there is constraint logic
programming which computes a solution as an answer substitution to a query
containing the variables of the constraint satisfaction problem. On the other
hand there are systems based on stable model semantics, abductive systems, and
first order logic model generators which compute solutions as models of some
theory. This paper compares these different approaches from the point of view
of knowledge representation (how declarative are the programs) and from the
point of view of performance (how good are they at solving typical problems).
|
cs/0011032
|
Top-down induction of clustering trees
|
cs.LG
|
An approach to clustering is presented that adapts the basic top-down
induction of decision trees method towards clustering. To this end, it employs
the principles of instance-based learning. The resulting methodology is
implemented in the TIC (Top down Induction of Clustering trees) system for
first order clustering. The TIC system employs the first order logical decision
tree representation of the inductive logic programming system Tilde. Various
experiments with TIC are presented, in both propositional and relational
domains.
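A propositional toy of the top-down idea is to choose, at each node, the split that minimizes within-cluster variance; this is only an illustration of the split criterion, as the actual TIC system works on first-order logical decision trees:

```python
def variance(xs):
    """Sum of squared deviations from the mean (0 for an empty list)."""
    if not xs:
        return 0.0
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def best_split(xs):
    """Choose the threshold minimizing summed within-cluster variance,
    the clustering analogue of a decision-tree split criterion."""
    best = None
    for t in sorted(set(xs))[1:]:
        left = [x for x in xs if x < t]
        right = [x for x in xs if x >= t]
        score = variance(left) + variance(right)
        if best is None or score < best[0]:
            best = (score, t)
    return best[1]

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
print(best_split(data))  # 7.9: separates the two natural clusters
```

Applying `best_split` recursively to each resulting cluster yields a (propositional) clustering tree.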
|
cs/0011033
|
Web Mining Research: A Survey
|
cs.LG cs.DB
|
With the huge amount of information available online, the World Wide Web is a
fertile area for data mining research. Web mining research is at the
crossroads of several research communities, such as database, information
retrieval, and, within AI, especially the sub-areas of machine learning and
natural language processing. However, there is a lot of confusion when
comparing research efforts from different points of view. In this paper, we
survey the research in the area of Web mining, point out some confusion
regarding the usage of the term Web mining, and suggest three Web mining
categories. Then we situate some of the research with respect to these three
categories. We also explore the connection between the Web mining categories
and the related agent paradigm. For the survey, we use representation issues,
the process, the learning algorithm, and the application of recent works as
the criteria. We conclude the paper with some research issues.
|
cs/0011034
|
Semantic interpretation of temporal information by abductive inference
|
cs.CL
|
Besides temporal information explicitly available in verbs and adjuncts, the
temporal interpretation of a text also depends on general world knowledge and
default assumptions. We will present a theory for describing the relation
between, on the one hand, verbs, their tenses and adjuncts and, on the other,
the eventualities and periods of time they represent and their relative
temporal locations.
The theory is formulated in logic and is a practical implementation of the
concepts described in Ness Schelkens et al. We will show how an abductive
resolution procedure can be used on this representation to extract temporal
information from texts.
|
cs/0011035
|
Abductive reasoning with temporal information
|
cs.CL
|
Texts in natural language contain a lot of temporal information, both
explicit and implicit. Verbs and temporal adjuncts carry most of the explicit
information, but for a full understanding general world knowledge and default
assumptions have to be taken into account. We will present a theory for
describing the relation between, on the one hand, verbs, their tenses and
adjuncts and, on the other, the eventualities and periods of time they
represent and their relative temporal locations, while allowing interaction
with general world knowledge.
The theory is formulated in an extension of first order logic and is a
practical implementation of the concepts described in Van Eynde 2001 and
Schelkens et al. 2000. We will show how an abductive resolution procedure can
be used on this representation to extract temporal information from texts. The
theory presented here is an extension of that in Verdoolaege et al. 2000,
adapted to Van Eynde 2001, with a simplified and extended analysis of adjuncts
and with more emphasis on how a model can be constructed.
|
cs/0011038
|
Provably Fast and Accurate Recovery of Evolutionary Trees through
Harmonic Greedy Triplets
|
cs.DS cs.LG
|
We give a greedy learning algorithm for reconstructing an evolutionary tree
based on a certain harmonic average on triplets of terminal taxa. After the
pairwise distances between terminal taxa are estimated from sequence data, the
algorithm runs in O(n^2) time using O(n) work space, where n is the number of
terminal taxa. These time and space complexities are optimal in the sense that
the size of an input distance matrix is n^2 and the size of an output tree is
n. Moreover, in the Jukes-Cantor model of evolution, the algorithm recovers the
correct tree topology with high probability using sample sequences of length
polynomial in (1) n, (2) the logarithm of the error probability, and (3) the
inverses of two small parameters.
|
cs/0011040
|
Do All Fragments Count?
|
cs.CL
|
We aim at finding the minimal set of fragments which achieves maximal parse
accuracy in Data Oriented Parsing. Experiments with the Penn Wall Street
Journal treebank show that counts of almost arbitrary fragments within parse
trees are important, leading to improved parse accuracy over previous models
tested on this treebank. We isolate a number of dependency relations which
previous models neglect but which contribute to higher parse accuracy.
|
cs/0011041
|
EquiX---A Search and Query Language for XML
|
cs.DB
|
EquiX is a search language for XML that combines the power of querying with
the simplicity of searching. Requirements for such languages are discussed and
it is shown that EquiX meets the necessary criteria. Both a graphical abstract
syntax and a formal concrete syntax are presented for EquiX queries. In
addition, the semantics is defined and an evaluation algorithm is presented.
The evaluation algorithm is polynomial under combined complexity.
EquiX combines pattern matching, quantification and logical expressions to
query both the data and meta-data of XML documents. The result of a query in
EquiX is a set of XML documents. A DTD describing the result documents is
derived automatically from the query.
|
cs/0011042
|
Order-consistent programs are cautiously monotonic
|
cs.LO cs.AI
|
Some normal logic programs under the answer set (stable model) semantics lack
the appealing property of "cautious monotonicity." That is, augmenting a
program with one of its consequences may cause it to lose another of its
consequences. The syntactic condition of "order-consistency" was shown by Fages
to guarantee existence of an answer set. This note establishes that
order-consistent programs are not only consistent, but cautiously monotonic.
From this it follows that they are also "cumulative." That is, augmenting an
order-consistent program with some of its consequences does not alter its
consequences.
In fact, as we show, its answer sets remain unchanged.
|
cs/0011044
|
Scaling Up Inductive Logic Programming by Learning from Interpretations
|
cs.LG
|
When comparing inductive logic programming (ILP) and attribute-value learning
techniques, there is a trade-off between expressive power and efficiency.
Inductive logic programming techniques are typically more expressive but also
less efficient. Therefore, the data sets handled by current inductive logic
programming systems are small according to general standards within the data
mining community. The main source of inefficiency lies in the assumption that
several examples may be related to each other, so they cannot be handled
independently.
Within the learning from interpretations framework for inductive logic
programming this assumption is unnecessary, which makes it possible to scale
up existing ILP algorithms. In this paper we explain this learning setting in
the context
of relational databases. We relate the setting to propositional data mining and
to the classical ILP setting, and show that learning from interpretations
corresponds to learning from multiple relations and thus extends the
expressiveness of propositional learning, while maintaining its efficiency to a
large extent (which is not the case in the classical ILP setting).
As a case study, we present two alternative implementations of the ILP system
Tilde (Top-down Induction of Logical DEcision trees): Tilde-classic, which
loads all data in main memory, and Tilde-LDS, which loads the examples one by
one. We experimentally compare the implementations, showing that Tilde-LDS can
handle large data sets (in the order of 100,000 examples or 100 MB) and indeed
scales up linearly in the number of examples.
|
cs/0012004
|
Improving Performance of heavily loaded agents
|
cs.MA cs.AI
|
With the increase in agent-based applications, there are now agent systems
that support \emph{concurrent} client accesses. The ability to process large
volumes of simultaneous requests is critical in many such applications. In such
a setting, the traditional approach of serving these requests one at a time via
queues (e.g. \textsf{FIFO} queues, priority queues) is insufficient.
Alternative models are essential to improve the performance of such
\emph{heavily loaded} agents. In this paper, we propose a set of
\emph{cost-based algorithms} to \emph{optimize} and \emph{merge} multiple
requests submitted to an agent. In order to merge a set of requests, one first
needs to identify commonalities among such requests. First, we provide an
\emph{application independent framework} within which an agent developer may
specify relationships (called \emph{invariants}) between requests. Second, we
provide two algorithms (and various accompanying heuristics) which allow an
agent to automatically rewrite requests so as to avoid redundant work---these
algorithms take invariants associated with the agent into account. Our
algorithms are independent of any specific agent framework. We implemented
both algorithms on top of the IMPACT agent development platform, and on top of
a (non-IMPACT) geographic database agent. Based on these implementations, we
conducted experiments and show that our algorithms are considerably more
efficient than methods that use the $A^*$ algorithm.
|
cs/0012010
|
The Role of Commutativity in Constraint Propagation Algorithms
|
cs.PF cs.AI
|
Constraint propagation algorithms form an important part of most of the
constraint programming systems. We provide here a simple, yet very general
framework that allows us to explain several constraint propagation algorithms
in a systematic way. In this framework we proceed in two steps. First, we
introduce a generic iteration algorithm on partial orderings and prove its
correctness in an abstract setting. Then we instantiate this algorithm with
specific partial orderings and functions to obtain specific constraint
propagation algorithms.
In particular, using the notions of commutativity and semi-commutativity, we
show that the AC-3, PC-2, DAC and DPC algorithms for achieving (directional)
arc consistency and (directional) path consistency are instances of a single
generic algorithm. The work reported here extends and simplifies that of Apt
(1999).
|
cs/0012011
|
Towards a Universal Theory of Artificial Intelligence based on
Algorithmic Probability and Sequential Decision Theory
|
cs.AI cs.CC cs.IT cs.LG math.IT
|
Decision theory formally solves the problem of rational agents in uncertain
worlds if the true environmental probability distribution is known.
Solomonoff's theory of universal induction formally solves the problem of
sequence prediction for an unknown distribution. We unify both theories and
give strong arguments that the resulting universal AIXI model behaves optimally
in any computable environment. The major drawback of the AIXI model is that it
is
uncomputable. To overcome this problem, we construct a modified algorithm
AIXI^tl, which is still superior to any other time t and space l bounded agent.
The computation time of AIXI^tl is of the order t x 2^l.
|
cs/0012020
|
Creativity and Delusions: A Neurocomputational Approach
|
cs.NE cs.AI
|
Thinking is one of the most interesting mental processes. Its complexity is
sometimes simplified and its different manifestations are classified as normal
or abnormal, such as delusional and disorganized thought on the one hand or
creative thought on the other. The boundaries between these facets of thinking
are fuzzy, causing difficulties in medical, academic, and philosophical
discussions. Considering
the dopaminergic signal-to-noise neuronal modulation in the central nervous
system, and the existence of semantic maps in human brain, a self-organizing
neural network model was developed to unify the different thought processes
into a single neurocomputational substrate. Simulations were performed varying
the dopaminergic modulation and observing the different patterns that emerged
at the semantic map. Assuming that the thought process is the total pattern
elicited at the output layer of the neural network, the model shows how
normal and abnormal thinking are generated and that there are no borders
between their different manifestations. Rather, a continuum of qualitatively
different reasoning, ranging from delusion to disorganized thought and passing
through normal and creative thinking, seems more plausible. The model is far
from explaining the complexities of human thinking but, at least, it offers a
good metaphorical and unifying view of the many facets of this phenomenon,
which are usually studied in separate settings.
|
cs/0012021
|
A Benchmark for Image Retrieval using Distributed Systems over the
Internet: BIRDS-I
|
cs.IR cs.MM
|
The performance of CBIR algorithms is usually measured on an isolated
workstation. In a real-world environment the algorithms would only constitute a
minor component among the many interacting components. The Internet
dramatically changes many of the usual assumptions about measuring CBIR
performance. Any CBIR benchmark should be designed from a networked systems
standpoint. These benchmarks typically introduce communication overhead because
the real systems they model are distributed applications. We present our
implementation of a client/server benchmark called BIRDS-I to measure image
retrieval performance over the Internet. It has been designed with the trend
toward the use of small personalized wireless systems in mind. Web-based CBIR
implies the use of heterogeneous image sets, imposing certain constraints on
how the images are organized and the type of performance metrics applicable.
BIRDS-I only requires controlled human intervention for the compilation of the
image collection and none for the generation of ground truth in the measurement
of retrieval accuracy. Benchmark image collections need to be evolved
incrementally toward the storage of millions of images, and such scaleup can
only be achieved through computer-aided compilation. Finally, our
scoring metric introduces a tightly optimized image-ranking window.
|
cs/0101010
|
An Even Faster and More Unifying Algorithm for Comparing Trees via
Unbalanced Bipartite Matchings
|
cs.CV cs.DS
|
A widely used method for determining the similarity of two labeled trees is
to compute a maximum agreement subtree of the two trees. Previous work on this
similarity measure is only concerned with the comparison of labeled trees of
two special kinds, namely, uniformly labeled trees (i.e., trees with all their
nodes labeled by the same symbol) and evolutionary trees (i.e., leaf-labeled
trees with distinct symbols for distinct leaves). This paper presents an
algorithm for comparing trees that are labeled in an arbitrary manner. In
addition to this generality, this algorithm is faster than the previous
algorithms.
Another contribution of this paper is on maximum weight bipartite matchings.
We show how to speed up the best known matching algorithms when the input
graphs are node-unbalanced or weight-unbalanced. Based on these enhancements,
we obtain an efficient algorithm for a new matching problem called the
hierarchical bipartite matching problem, which is at the core of our maximum
agreement subtree algorithm.
|
cs/0101012
|
Communities of Practice in the Distributed International Environment
|
cs.HC cs.IR
|
Modern commercial organisations are facing pressures which have caused them
to lose personnel. When they lose people, they also lose their knowledge.
Organisations also have to cope with the internationalisation of business
forcing collaboration and knowledge sharing across time and distance. Knowledge
Management (KM) claims to tackle these issues. This paper looks at an area
where KM does not offer sufficient support, that is, the sharing of knowledge
that is not easy to articulate.
This paper focuses on Communities of Practice in commercial organisations. We
approach this by exploring knowledge sharing in Lave and Wenger's [1] theory of
Communities of Practice and investigating how Communities of Practice may
translate to a distributed international environment. The paper
reports on two case studies that explore the functioning of Communities of
Practice across international boundaries.
|
cs/0101014
|
On the problem of computing the well-founded semantics
|
cs.LO cs.AI cs.DS
|
The well-founded semantics is one of the most widely studied and used
semantics of logic programs with negation. In the case of finite propositional
programs, it can be computed in polynomial time, more specifically, in
O(|At(P)|size(P)) steps, where size(P) denotes the total number of occurrences
of atoms in a logic program P. This bound is achieved by an algorithm
introduced by Van Gelder and known as the alternating-fixpoint algorithm.
Improving on the alternating-fixpoint algorithm turned out to be difficult. In
this paper we study extensions and modifications of the alternating-fixpoint
approach. We then restrict our attention to the class of programs whose rules
have no more than one positive occurrence of an atom in their bodies. For
programs in that class we propose a new implementation of the
alternating-fixpoint method in which false atoms are computed in a top-down
fashion. We show that our algorithm is faster than other known algorithms and
that for a wide class of programs it is linear and so, asymptotically optimal.
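The alternating-fixpoint computation can be sketched compactly: alternate between an underestimate and an overestimate of the true atoms until they stabilize. The toy propositional program below and the naive least-model loop are illustrative; the paper's contribution concerns faster variants of this scheme.

```python
# Sketch of Van Gelder's alternating-fixpoint computation of the
# well-founded semantics for a tiny propositional program. Rules are
# (head, positive_body, negative_body); the program is invented.

def reduct_lfp(rules, assumed_true):
    """Least model of the Gelfond-Lifschitz reduct w.r.t. assumed_true."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in derived and pos <= derived and not (neg & assumed_true):
                derived.add(head)
                changed = True
    return derived

def well_founded(rules, atoms):
    true, over = set(), set(atoms)
    while True:
        new_true = reduct_lfp(rules, over)       # underestimate via overestimate
        new_over = reduct_lfp(rules, new_true)   # overestimate via underestimate
        if (new_true, new_over) == (true, over):
            break
        true, over = new_true, new_over
    return true, atoms - over    # true atoms, false atoms; the rest is undefined

# p :- not q.   q :- not p.   r :- not s.   (s has no rule)
rules = [('p', set(), {'q'}), ('q', set(), {'p'}), ('r', set(), {'s'})]
atoms = {'p', 'q', 'r', 's'}
wf_true, wf_false = well_founded(rules, atoms)
```

Here p and q remain undefined (they block each other), s is false (no rule), and hence r is true, as the well-founded semantics prescribes.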
|
cs/0101015
|
Combinatorial Toolbox for Protein Sequence Design and Landscape Analysis
in the Grand Canonical Model
|
cs.CE cs.CC q-bio.BM
|
In modern biology, one of the most important research problems is to
understand how protein sequences fold into their native 3D structures. To
investigate this problem at a high level, one wishes to analyze the protein
landscapes, i.e., the structures of the space of all protein sequences and
their native 3D structures. Perhaps the most basic computational problem at
this level is to take a target 3D structure as input and design a fittest
protein sequence with respect to one or more fitness functions of the target 3D
structure. We develop a toolbox of combinatorial techniques for protein
landscape analysis in the Grand Canonical model of Sun, Brem, Chan, and Dill.
The toolbox is based on linear programming, network flow, and a linear-size
representation of all minimum cuts of a network. It not only substantially
expands the network flow technique for protein sequence design in Kleinberg's
seminal work but also is applicable to a considerably broader collection of
computational problems than those considered by Kleinberg. We have used this
toolbox to obtain a number of efficient algorithms and hardness results. We
have further used the algorithms to analyze 3D structures drawn from the
Protein Data Bank and have discovered some novel relationships between such
native 3D structures and the Grand Canonical model.
|
cs/0101016
|
A Dynamic Programming Approach to De Novo Peptide Sequencing via Tandem
Mass Spectrometry
|
cs.CE cs.DS
|
Tandem mass spectrometry fragments a large number of molecules of the
same peptide sequence into charged prefix and suffix subsequences, and then
measures mass/charge ratios of these ions. The de novo peptide sequencing
problem is to reconstruct the peptide sequence from a given tandem mass
spectral data of k ions. By implicitly transforming the spectral data into an
NC-spectrum graph G=(V,E) where |V|=2k+2, we can solve this problem in
O(|V|+|E|) time and O(|V|) space using dynamic programming. Our approach can be
further used to discover a modified amino acid in O(|V||E|) time and to analyze
data with other types of noise in O(|V||E|) time. Our algorithms have been
implemented and tested on actual experimental data.
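A stripped-down version of the spectrum-graph dynamic program might look as follows: nodes are putative prefix masses, edges connect masses differing by a residue mass, and a path from 0 to the full peptide mass spells a sequence. The two-letter amino-acid alphabet with integer masses is invented, and the real NC-spectrum construction handles prefix/suffix ion ambiguity that this sketch ignores.

```python
# Toy spectrum-graph DP: sweep node masses in ascending order (the graph
# is a DAG) and record one peptide spelling that reaches each mass.
AA = {'G': 57, 'A': 71}   # toy integer residue masses

def sequence_from_masses(masses, total):
    nodes = sorted(set(masses) | {0, total})
    nodeset = set(nodes)
    best = {0: ''}                       # mass -> one spelling reaching it
    for m in nodes:                      # ascending order = topological order
        if m not in best:
            continue
        for aa, w in AA.items():
            nxt = m + w
            if nxt in nodeset and nxt not in best:
                best[nxt] = best[m] + aa
    return best.get(total)

peptide = sequence_from_masses([57], 128)   # one observed prefix mass
```

Each node and edge is touched a constant number of times, giving the O(|V|+|E|) flavor of the paper's algorithm (without its noise handling or modified-residue search).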
|
cs/0101019
|
General Loss Bounds for Universal Sequence Prediction
|
cs.AI cs.LG math.ST stat.TH
|
The Bayesian framework is ideally suited for induction problems. The
probability of observing $x_t$ at time $t$, given past observations
$x_1...x_{t-1}$ can be computed with Bayes' rule if the true distribution $\mu$
of the sequences $x_1x_2x_3...$ is known. The problem, however, is that in many
cases one does not even have a reasonable estimate of the true distribution. In
order to overcome this problem a universal distribution $\xi$ is defined as a
weighted sum of distributions $\mu_i\in M$, where $M$ is any countable set of
distributions including $\mu$. This is a generalization of Solomonoff
induction, in which $M$ is the set of all enumerable semi-measures. Systems
which predict $y_t$, given $x_1...x_{t-1}$ and which receive loss $l_{x_t y_t}$
if $x_t$ is the true next symbol of the sequence are considered. It is proven
that using the universal $\xi$ as a prior is nearly as good as using the
unknown true distribution $\mu$. Furthermore, games of chance, defined as a
sequence of bets, observations, and rewards, are studied. The time needed to
reach the winning zone is bounded in terms of the relative entropy of $\mu$ and
$\xi$. Extensions to arbitrary alphabets, partial and delayed prediction, and
more active systems are discussed.
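A finite analogue of the universal mixture $\xi$ can be demonstrated directly: take a small class $M$ of Bernoulli models with uniform prior weights and predict by Bayes' rule; the mixture's next-symbol probability tracks the true model's. The model class, weights, and data below are illustrative.

```python
# Finite toy version of xi = sum_i w_i mu_i over Bernoulli(theta) models.
# Posterior weights follow from Bayes' rule; the mixture prediction is the
# posterior-weighted average of the models' predictions.

def mixture_predict(models, weights, seq):
    post = []
    for theta, w in zip(models, weights):
        like = 1.0
        for x in seq:                         # likelihood of the sequence
            like *= theta if x == 1 else (1 - theta)
        post.append(w * like)
    z = sum(post)
    post = [p / z for p in post]              # posterior over the class M
    # Mixture probability that the next symbol is 1.
    return sum(p * theta for p, theta in zip(post, models))

models = [0.2, 0.5, 0.8]
weights = [1 / 3, 1 / 3, 1 / 3]
seq = [1, 1, 1, 0, 1, 1, 1, 1]   # drawn from the theta = 0.8 coin
p_next = mixture_predict(models, weights, seq)
```

After eight observations the posterior already concentrates on the 0.8 coin, so the mixture's prediction is close to 0.8, illustrating why predicting with $\xi$ is nearly as good as predicting with the unknown true $\mu$.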
|
cs/0101030
|
Tree Contractions and Evolutionary Trees
|
cs.CE cs.DS
|
An evolutionary tree is a rooted tree where each internal vertex has at least
two children and where the leaves are labeled with distinct symbols
representing species. Evolutionary trees are useful for modeling the
evolutionary history of species. An agreement subtree of two evolutionary trees
is an evolutionary tree which is also a topological subtree of the two given
trees. We give an algorithm to determine the largest possible number of leaves
in any agreement subtree of two trees T_1 and T_2 with n leaves each. If the
maximum degree d of these trees is bounded by a constant, the time complexity
is O(n log^2(n)) and is within a log(n) factor of optimal. For general d, this
algorithm runs in O(n d^2 log(d) log^2(n)) time or alternatively in O(n d
sqrt(d) log^3(n)) time.
|
cs/0101031
|
Cavity Matchings, Label Compressions, and Unrooted Evolutionary Trees
|
cs.CE cs.DS
|
We present an algorithm for computing a maximum agreement subtree of two
unrooted evolutionary trees. It takes O(n^{1.5} log n) time for trees with
unbounded degrees, matching the best known time complexity for the rooted case.
Our algorithm allows the input trees to be mixed trees, i.e., trees that may
contain directed and undirected edges at the same time. Our algorithm adopts a
recursive strategy exploiting a technique called label compression. The
backbone of this technique is an algorithm that computes the maximum weight
matchings over many subgraphs of a bipartite graph as fast as it takes to
compute a single matching.
|
cs/0101034
|
Data Security Equals Graph Connectivity
|
cs.CR cs.DB cs.DS
|
To protect sensitive information in a cross tabulated table, it is a common
practice to suppress some of the cells in the table. This paper investigates
four levels of data security of a two-dimensional table concerning the
effectiveness of this practice. These four levels of data security protect the
information contained in, respectively, individual cells, individual rows and
columns, several rows or columns as a whole, and a table as a whole. The paper
presents efficient algorithms and NP-completeness results for testing and
achieving these four levels of data security. All these complexity results are
obtained by means of fundamental equivalences between the four levels of data
security of a table and four types of connectivity of a graph constructed from
that table.
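One well-known connection of this kind can be sketched: model rows and columns as the two sides of a bipartite graph with an edge per suppressed cell; roughly, a single suppressed cell is inferable from the published marginals exactly when its edge is a bridge of this graph. The table below is invented, and this simplified bridge test only illustrates the flavor of the equivalences, not the paper's four levels.

```python
# Suppression graph sketch: rows/columns are nodes, suppressed cells are
# edges. An edge is a bridge iff its endpoints are disconnected once the
# edge is removed (checked here by a plain BFS/DFS).

def is_bridge(edges, e):
    u, v = e
    rest = [x for x in edges if x != e]
    seen, frontier = {u}, [u]
    while frontier:
        n = frontier.pop()
        for a, b in rest:
            for s, t in ((a, b), (b, a)):
                if s == n and t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return v not in seen    # bridge: removing e disconnects u from v

# Four suppressed cells forming a cycle keep each other ambiguous; a lone
# suppressed cell in a fresh column is a bridge and hence inferable.
cycle = [('r1', 'c1'), ('r1', 'c2'), ('r2', 'c1'), ('r2', 'c2')]
lone = cycle + [('r1', 'c3')]
lone_inferable = is_bridge(lone, ('r1', 'c3'))
cycle_protected = not is_bridge(lone, ('r1', 'c1'))
```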
|
cs/0101036
|
The Generalized Universal Law of Generalization
|
cs.CV cs.AI math.PR physics.soc-ph
|
It has been argued by Shepard that there is a robust psychological law that
relates the distance between a pair of items in psychological space and the
probability that they will be confused with each other. Specifically, the
probability of confusion is a negative exponential function of the distance
between the pair of items. In experimental contexts, distance is typically
defined in terms of a multidimensional Euclidean space, but this assumption
seems unlikely to hold for complex stimuli. We show that, nonetheless, the
Universal Law of Generalization can be derived in the more complex setting of
arbitrary stimuli, using a much more universal measure of distance. This
universal distance is defined as the length of the shortest program that
transforms the representations of the two items of interest into one another:
the algorithmic information distance. It is universal in the sense that it
minorizes every computable distance: it is the smallest computable distance. We
show that the universal law of generalization holds with probability going to
one, provided the confusion probabilities are computable. We also give a
mathematically more appealing form of the law.
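The algorithmic information distance is uncomputable, but a standard practical proxy is the normalized compression distance built on a real compressor. The sketch below uses zlib and then applies the negative-exponential law p = exp(-d); the strings and the choice of zlib are illustrative, not from the paper.

```python
# Normalized compression distance (NCD) as a computable stand-in for the
# information distance, plugged into Shepard's negative-exponential law.
import math
import random
import zlib

def ncd(a: bytes, b: bytes) -> float:
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

def confusion_probability(a: bytes, b: bytes) -> float:
    return math.exp(-ncd(a, b))    # negative exponential of the distance

text = b'the quick brown fox jumps over the lazy dog. ' * 50
similar = text.replace(b'dog', b'cat')
unrelated = random.Random(0).randbytes(2000)   # deterministic noise

near = confusion_probability(text, similar)
far = confusion_probability(text, unrelated)
```

Similar items sit close under the compression distance and so get a high confusion probability; unrelated items sit near the maximum distance and get a low one, the qualitative shape the generalized law predicts.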
|
cs/0102002
|
On the Automated Classification of Web Sites
|
cs.IR
|
In this paper we discuss several issues related to automated text
classification of web sites. We analyze the nature of web content and metadata
in relation to requirements for text features. We find that HTML metatags are a
good source of text features, but are not in wide use despite their role in
search engine rankings. We present an approach for targeted spidering including
metadata extraction and opportunistic crawling of specific semantic hyperlinks.
We describe a system for automatically classifying web sites into industry
categories and present performance results based on different combinations of
text features and training data. This system can serve as the basis for a
generalized framework for automated metadata creation.
|
cs/0102003
|
Fast Pricing of European Asian Options with Provable Accuracy:
Single-stock and Basket Options
|
cs.CE
|
This paper develops three polynomial-time pricing techniques for European
Asian options with provably small errors, where the stock prices follow
binomial trees or trees of higher degree. The first technique is the first
known Monte Carlo algorithm with analytical error bounds suitable for pricing
single-stock options with meaningful confidence and speed. The second technique
is a general recursive bucketing-based scheme that can use the
Aingworth-Motwani-Oldham aggregation algorithm, Monte-Carlo simulation and
possibly others as the base-case subroutine. This scheme enables robust
trade-offs between accuracy and time over subtrees of different sizes. For
long-term options or high frequency price averaging, it can price single-stock
options with smaller errors in less time than the base-case algorithms
themselves. The third technique combines Fast Fourier Transform with
bucketing-based schemes for pricing basket options. This technique takes
polynomial time in the number of days and the number of stocks, and does not
add any errors to those already incurred in the companion bucketing scheme.
This technique assumes that the price of each underlying stock moves
independently.
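The first technique can be caricatured by a plain Monte Carlo pricer with a Hoeffding-style confidence bound (the paper's actual analytical error bounds are sharper); all market parameters below are invented.

```python
# Monte Carlo pricing of a European Asian call on a binomial tree, with a
# Hoeffding-style 95% half-width on the estimate. Discounting is omitted
# to keep the sketch minimal.
import math
import random

def mc_asian_call(s0, strike, up, down, p_up, steps, n_paths, seed=0):
    rng = random.Random(seed)
    payoffs = []
    for _ in range(n_paths):
        s, total = s0, 0.0
        for _ in range(steps):
            s *= up if rng.random() < p_up else down
            total += s
        payoffs.append(max(total / steps - strike, 0.0))   # Asian payoff
    est = sum(payoffs) / n_paths
    # Payoffs lie in [0, B] with B the best-case average minus the strike.
    b = s0 * up ** steps - strike
    half_width = b * math.sqrt(math.log(2 / 0.05) / (2 * n_paths))
    return est, half_width

price, err = mc_asian_call(100.0, 100.0, 1.05, 0.95, 0.5, 10, 2000)
```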
|
cs/0102008
|
Optimal Bid Sequences for Multiple-Object Auctions with Unequal Budgets
|
cs.CE cs.DM cs.DS
|
In a multiple-object auction, every bidder tries to win as many objects as
possible with a bidding algorithm. This paper studies position-randomized
auctions, which form a special class of multiple-object auctions where a
bidding algorithm consists of an initial bid sequence and an algorithm for
randomly permuting the sequence. We are especially concerned with situations
where some bidders know the bidding algorithms of others. For the case of only
two bidders, we give an optimal bidding algorithm for the disadvantaged bidder.
Our result generalizes previous work by allowing the bidders to have unequal
budgets. One might naturally anticipate that the optimal expected numbers of
objects won by the bidders would be proportional to their budgets.
Surprisingly, this is not true. Our new algorithm runs in optimal O(n) time in
a straightforward manner. The case with more than two bidders is open.
|
cs/0102010
|
The Enhanced Double Digest Problem for DNA Physical Mapping
|
cs.CE cs.DM cs.DS
|
The double digest problem is a common NP-hard approach to constructing
physical maps of DNA sequences. This paper presents a new approach called the
enhanced double digest problem. Although this new problem is also NP-hard, it
can be solved in linear time in certain theoretically interesting cases.
|
cs/0102011
|
A Price Dynamics in Bandwidth Markets for Point-to-point Connections
|
cs.NI cond-mat.soft cs.MA
|
We simulate a network of N routers and M network users making concurrent
point-to-point connections by buying and selling router capacity from each
other. The resources need to be acquired in complete sets, but there is only
one spot market for each router. In order to describe the internal dynamics of
the market, we model the observed prices by N-dimensional Ito-processes.
Modeling using stochastic processes is novel in this context of describing
interactions between end-users in a system with shared resources, and allows a
standard set of mathematical tools to be applied. The derived models can also
be used to price contingent claims on network capacity and thus to price
complex network services such as quality of service levels, multicast, etc.
|
cs/0102014
|
On the predictability of Rainfall in Kerala - An application of ABF
Neural Network
|
cs.NE cs.AI
|
Rainfall in Kerala State, the southern part of Indian Peninsula in particular
is caused by the two monsoons and the two cyclones every year. In general,
climate and rainfall are highly nonlinear phenomena in nature giving rise to
what is known as the `butterfly effect'. We nevertheless attempt to train an
ABF neural network on the time-series rainfall data and show for the first
time that, in spite of the fluctuations resulting from the nonlinearity in the
system, the trends in the rainfall pattern in this corner of the globe have
remained unaffected over the past 87 years from 1893 to 1980. We also
successfully filter out the chaotic part of the system and illustrate that its
effects are marginal over long term predictions.
|
cs/0102015
|
Non-convex cost functionals in boosting algorithms and methods for panel
selection
|
cs.NE cs.LG cs.NA math.NA
|
In this document we propose an improvement to the boosting techniques of
Friedman '99 through the use of a non-convex cost functional. The idea is
to introduce a correlation term to better deal with forecasting of additive
time series. The problem is discussed theoretically, to prove the existence of
a minimizing sequence, and numerically, to propose a new "ArgMin" algorithm.
The model has been used to perform the tourist-presence forecast for the
winter season 1999/2000 in Trentino (Italian Alps).
|
cs/0102018
|
An effective Procedure for Speeding up Algorithms
|
cs.CC cs.AI cs.LG
|
The provably asymptotically fastest algorithm within a factor of 5 for
formally described problems will be constructed. The main idea is to enumerate
all programs provably equivalent to the original problem by enumerating all
proofs. The algorithm could be interpreted as a generalization and improvement
of Levin search, which is, within a multiplicative constant, the fastest
algorithm for inverting functions. Blum's speed-up theorem is avoided by taking
into account only programs for which a correctness proof exists. Furthermore,
it is shown that the fastest program that computes a certain function is also
one of the shortest programs provably computing this function. To quantify this
statement, the definition of Kolmogorov complexity is extended, and two new
natural measures for the complexity of a function are defined.
|
cs/0102019
|
Easy and Hard Constraint Ranking in OT: Algorithms and Complexity
|
cs.CL cs.CC
|
We consider the problem of ranking a set of OT constraints in a manner
consistent with data.
We speed up Tesar and Smolensky's RCD algorithm to be linear on the number of
constraints. This finds a ranking so each attested form x_i beats or ties a
particular competitor y_i. We also generalize RCD so each x_i beats or ties all
possible competitors.
Alas, this more realistic version of learning has no polynomial algorithm
unless P=NP! Indeed, not even generation does. So one cannot improve
qualitatively upon brute force:
Merely checking that a single (given) ranking is consistent with given forms
is coNP-complete if the surface forms are fully observed and Delta_2^p-complete
if not. Indeed, OT generation is OptP-complete. As for ranking, determining
whether any consistent ranking exists is coNP-hard (but in Delta_2^p) if the
forms are fully observed, and Sigma_2^p-complete if not.
Finally, we show that generation and ranking are easier in derivational
theories: generation is in P, and ranking is NP-complete.
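Tesar and Smolensky's RCD, which the paper speeds up, admits a short sketch: repeatedly collect the constraints that prefer no loser into the next stratum, and discard the winner/loser pairs those constraints already decide. The two-constraint data set below is invented.

```python
# Recursive Constraint Demotion (RCD) sketch. Each data point is a pair of
# violation-count dicts (winner, loser); constraint c prefers the winner
# when w[c] < l[c] and prefers the loser when w[c] > l[c].

def rcd(constraints, pairs):
    strata, remaining, pool = [], list(pairs), set(constraints)
    while pool:
        # Constraints that prefer no loser on the still-unexplained data.
        stratum = {c for c in pool
                   if all(w[c] <= l[c] for w, l in remaining)}
        if not stratum:
            return None   # no consistent ranking exists
        strata.append(stratum)
        # Drop pairs already decided by some constraint in this stratum.
        remaining = [(w, l) for w, l in remaining
                     if not any(w[c] < l[c] for c in stratum)]
        pool -= stratum
    return strata

# One attested form: the winner violates B once, its competitor violates A.
ranking = rcd(['A', 'B'], [({'A': 0, 'B': 1}, {'A': 1, 'B': 0})])
```

Each pass over the data can place several constraints at once, and the paper's speed-up makes the total work linear in the number of constraints.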
|
cs/0102020
|
Multi-Syllable Phonotactic Modelling
|
cs.CL
|
This paper describes a novel approach to constructing phonotactic models. The
underlying theoretical approach to phonological description is the
multisyllable approach in which multiple syllable classes are defined that
reflect phonotactically idiosyncratic syllable subcategories. A new
finite-state formalism, OFS Modelling, is used as a tool for encoding,
automatically constructing and generalising phonotactic descriptions.
Language-independent prototype models are constructed which are instantiated on
the basis of data sets of phonological strings, and generalised with a
clustering algorithm. The resulting approach enables the automatic construction
of phonotactic models that encode arbitrarily close approximations of a
language's set of attested phonological forms. The approach is applied to the
construction of multi-syllable word-level phonotactic models for German,
English and Dutch.
|
cs/0102021
|
Taking Primitive Optimality Theory Beyond the Finite State
|
cs.CL
|
Primitive Optimality Theory (OTP) (Eisner, 1997a; Albro, 1998), a
computational model of Optimality Theory (Prince and Smolensky, 1993), employs
a finite state machine to represent the set of active candidates at each stage
of an Optimality Theoretic derivation, as well as weighted finite state
machines to represent the constraints themselves. For some purposes, however,
it would be convenient if the set of candidates were limited by some set of
criteria capable of being described only in a higher-level grammar formalism,
such as a Context Free Grammar, a Context Sensitive Grammar, or a Multiple
Context Free Grammar (Seki et al., 1991). Examples include reduplication and
phrasal stress models. Here we introduce a mechanism for OTP-like Optimality
Theory in which the constraints remain weighted finite state machines, but sets
of candidates are represented by higher-level grammars. In particular, we use
multiple context-free grammars to model reduplication in the manner of
Correspondence Theory (McCarthy and Prince, 1995), and develop an extended
version of the Earley Algorithm (Earley, 1970) to apply the constraints to a
reduplicating candidate set.
|
cs/0102022
|
Finite-State Phonology: Proceedings of the 5th Workshop of the ACL
Special Interest Group in Computational Phonology (SIGPHON)
|
cs.CL
|
Home page of the workshop proceedings, with pointers to the individually
archived papers. Includes front matter from the printed version of the
proceedings.
|
cs/0102026
|
Mathematical Model of Word Length on the Basis of the Cebanov-Fucks
Distribution with Uniform Parameter Distribution
|
cs.CL
|
The data on 13 typologically different languages have been processed using a
two-parameter word-length model based on the 1-displaced Poisson distribution
with a uniform parameter distribution. Statistical dependencies of the 2nd
parameter on the 1st one are
revealed for the German texts and genre of letters.
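For concreteness, the 1-displaced Poisson model assigns a word of length k >= 1 probability e^{-lam} lam^{k-1}/(k-1)!, and the maximum-likelihood estimate of lam is simply the mean word length minus one. The sample lengths below are invented; the paper's two-parameter model builds on this base distribution.

```python
# 1-displaced Poisson word-length model: pmf and its one-line MLE.
import math

def displaced_poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** (k - 1) / math.factorial(k - 1)

def fit_lambda(lengths):
    return sum(lengths) / len(lengths) - 1.0   # MLE: mean length minus one

lengths = [1, 2, 2, 3, 3, 3, 4, 5]   # invented word lengths (in syllables)
lam = fit_lambda(lengths)
```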
|
cs/0102027
|
Gene Expression Programming: a New Adaptive Algorithm for Solving
Problems
|
cs.AI cs.NE
|
Gene expression programming, a genotype/phenotype genetic algorithm (linear
and ramified), is presented here for the first time as a new technique for the
creation of computer programs. Gene expression programming uses character
linear chromosomes composed of genes structurally organized in a head and a
tail. The chromosomes function as a genome and are subjected to modification by
means of mutation, transposition, root transposition, gene transposition, gene
recombination, and one- and two-point recombination. The chromosomes encode
expression trees which are the object of selection. The creation of these
separate entities (genome and expression tree) with distinct functions allows
the algorithm to perform with high efficiency that greatly surpasses existing
adaptive techniques. The suite of problems chosen to illustrate the power and
versatility of gene expression programming includes symbolic regression,
sequence induction with and without constant creation, block stacking, cellular
automata rules for the density-classification problem, and two problems of
boolean concept learning: the 11-multiplexer and the GP rule problem.
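The genome-to-tree decoding can be sketched for a single gene: in the Karva-style breadth-first reading used by gene expression programming, each symbol's arity determines how many of the following symbols become its children. The function set and the sample genes below are illustrative.

```python
# Decode a linear GEP gene into an expression tree (breadth-first, arity
# driven) and evaluate it at a point.
ARITY = {'+': 2, '*': 2}   # function symbols; everything else is a terminal

def decode(gene):
    nodes = [[sym, []] for sym in gene]
    queue, idx = [nodes[0]], 1
    while queue and idx < len(nodes):
        sym, children = queue.pop(0)
        for _ in range(ARITY.get(sym, 0)):   # arity = number of children
            children.append(nodes[idx])
            queue.append(nodes[idx])
            idx += 1
    return nodes[0]

def evaluate(node, env):
    sym, children = node
    if sym == '+':
        return evaluate(children[0], env) + evaluate(children[1], env)
    if sym == '*':
        return evaluate(children[0], env) * evaluate(children[1], env)
    return env[sym]                          # terminal: look up its value

value = evaluate(decode('+a*ab'), {'a': 2, 'b': 3})   # tree for a + a*b
```

The separation the abstract emphasizes is visible here: mutation and recombination act on the flat gene string, while selection sees only the decoded expression tree.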
|
cs/0103002
|
Quantitative Neural Network Model of the Tip-of-the-Tongue Phenomenon
Based on Synthesized Memory-Psycholinguistic-Metacognitive Approach
|
cs.CL cs.AI q-bio.NC q-bio.QM
|
A new three-stage computer artificial neural network model of the
tip-of-the-tongue phenomenon is proposed. Each word's node is built from
several interconnected, learned auto-associative two-layer neural networks,
each of which represents one of the word's semantic, lexical, or phonological
components. The
model synthesizes memory, psycholinguistic, and metamemory approaches, bridges
speech errors and naming chronometry research traditions, and can explain
quantitatively many tip-of-the-tongue effects.
|
cs/0103003
|
Learning Policies with External Memory
|
cs.LG
|
In order for an agent to perform well in partially observable domains, it is
usually necessary for actions to depend on the history of observations. In this
paper, we explore a {\it stigmergic} approach, in which the agent's actions
include the ability to set and clear bits in an external memory, and the
external memory is included as part of the input to the agent. In this case, we
need to learn a reactive policy in a highly non-Markovian domain. We explore
two algorithms: SARSA(\lambda), which has had empirical success in partially
observable domains, and VAPS, a new algorithm due to Baird and Moore, with
convergence guarantees in partially observable domains. We compare the
performance of these two algorithms on benchmark problems.
|
cs/0103004
|
Rapid Application Evolution and Integration Through Document
Metamorphosis
|
cs.DB
|
The Harland document management system implements a data model in which
document (object) structure can be altered by mixin-style multiple inheritance
at any time. This kind of structural fluidity has long been supported by
knowledge-base management systems, but its use has primarily been in support of
reasoning and inference. In this paper, we report our experiences building and
supporting several non-trivial applications on top of this data model. Based on
these experiences, we argue that structural fluidity is convenient for
data-intensive applications other than knowledge-base management. Specifically,
we suggest that this flexible data model is a natural fit for the decoupled
programming methodology that arises naturally when using enterprise component
frameworks.
|