id
stringlengths 9
16
| title
stringlengths 4
278
| categories
stringlengths 5
104
| abstract
stringlengths 6
4.09k
|
|---|---|---|---|
cs/0103007
|
Two-parameter Model of Word Length "Language - Genre"
|
cs.CL
|
A two-parameter model of word length measured by the number of syllables
comprising it is proposed. The first parameter is dependent on language type,
the second one - on text genre and reflects the degree of completion of
synergetic processes of language optimization.
|
cs/0103010
|
Magical Number Seven Plus or Minus Two: Syntactic Structure Recognition
in Japanese and English Sentences
|
cs.CL
|
George A. Miller said that human beings have only seven chunks in short-term
memory, plus or minus two. We counted the number of bunsetsus (phrases) whose
modifiees are undetermined in each step of an analysis of the dependency
structure of Japanese sentences, and which therefore must be stored in
short-term memory. The number was roughly less than nine, the upper bound of
seven plus or minus two. We also obtained similar results with English
sentences under the assumption that human beings recognize a series of words,
such as a noun phrase (NP), as a unit. This indicates that if we assume that
the human cognitive units in Japanese and English are bunsetsu and NP
respectively, analysis will support Miller's $7 \pm 2$ theory.
|
cs/0103011
|
A Machine-Learning Approach to Estimating the Referential Properties of
Japanese Noun Phrases
|
cs.CL
|
The referential properties of noun phrases in the Japanese language, which
has no articles, are useful for article generation in Japanese-English machine
translation and for anaphora resolution in Japanese noun phrases. They are
generally classified as generic noun phrases, definite noun phrases, and
indefinite noun phrases. In the previous work, referential properties were
estimated by developing rules that used clue words. If two or more rules were
in conflict with each other, the category having the maximum total score given
by the rules was selected as the desired category. The score given by each rule
was established by hand, so the manpower cost was high. In this work, we
automatically adjusted these scores by using a machine-learning method and
succeeded in reducing the amount of manpower needed to adjust these scores.
|
cs/0103012
|
Meaning Sort - Three examples: dictionary construction, tagged corpus
construction, and information presentation system
|
cs.CL
|
It is often useful to sort words into an order that reflects relations among
their meanings as obtained by using a thesaurus. In this paper, we introduce a
method of arranging words semantically by using several types of `{\sf is-a}'
thesauri and a multi-dimensional thesaurus. We also describe three major
applications where a meaning sort is useful and show the effectiveness of a
meaning sort. Since there is no doubt that a word list in meaning-order is
easier to use than a word list in some random order, a meaning sort, which can
easily produce a word list in meaning-order, must be useful and effective.
|
cs/0103013
|
CRL at Ntcir2
|
cs.CL
|
We have developed systems of two types for NTCIR2. One is an enhenced version
of the system we developed for NTCIR1 and IREX. It submitted retrieval results
for JJ and CC tasks. A variety of parameters were tried with the system. It
used such characteristics of newspapers as locational information in the CC
tasks. The system got good results for both of the tasks. The other system is a
portable system which avoids free parameters as much as possible. The system
submitted retrieval results for JJ, JE, EE, EJ, and CC tasks. The system
automatically determined the number of top documents and the weight of the
original query used in automatic-feedback retrieval. It also determined
relevant terms quite robustly. For EJ and JE tasks, it used document expansion
to augment the initial queries. It achieved good results, except on the CC
tasks.
|
cs/0103015
|
Fitness Uniform Selection to Preserve Genetic Diversity
|
cs.AI cs.DC cs.LG q-bio
|
In evolutionary algorithms, the fitness of a population increases with time
by mutating and recombining individuals and by a biased selection of more fit
individuals. The right selection pressure is critical in ensuring sufficient
optimization progress on the one hand and in preserving genetic diversity to be
able to escape from local optima on the other. We propose a new selection
scheme, which is uniform in the fitness values. It generates selection pressure
towards sparsely populated fitness regions, not necessarily towards higher
fitness, as is the case for all other selection schemes. We show that the new
selection scheme can be much more effective than standard selection schemes.
|
cs/0103020
|
Belief Revision: A Critique
|
cs.AI cs.LO
|
We examine carefully the rationale underlying the approaches to belief change
taken in the literature, and highlight what we view as methodological problems.
We argue that to study belief change carefully, we must be quite explicit about
the ``ontology'' or scenario underlying the belief change process. This is
something that has been missing in previous work, with its focus on postulates.
Our analysis shows that we must pay particular attention to two issues that
have often been taken for granted: The first is how we model the agent's
epistemic state. (Do we use a set of beliefs, or a richer structure, such as an
ordering on worlds? And if we use a set of beliefs, in what language are these
beliefs are expressed?) We show that even postulates that have been called
``beyond controversy'' are unreasonable when the agent's beliefs include
beliefs about her own epistemic state as well as the external world. The second
is the status of observations. (Are observations known to be true, or just
believed? In the latter case, how firm is the belief?) Issues regarding the
status of observations arise particularly when we consider iterated belief
revision, and we must confront the possibility of revising by p and then by
not-p.
|
cs/0103022
|
Secure, Efficient Data Transport and Replica Management for
High-Performance Data-Intensive Computing
|
cs.DC cs.DB
|
An emerging class of data-intensive applications involve the geographically
dispersed extraction of complex scientific information from very large
collections of measured or computed data. Such applications arise, for example,
in experimental physics, where the data in question is generated by
accelerators, and in simulation science, where the data is generated by
supercomputers. So-called Data Grids provide essential infrastructure for such
applications, much as the Internet provides essential services for applications
such as e-mail and the Web. We describe here two services that we believe are
fundamental to any Data Grid: reliable, high-speed transporet and replica
management. Our high-speed transport service, GridFTP, extends the popular FTP
protocol with new features required for Data Grid applciations, such as
striping and partial file access. Our replica management service integrates a
replica catalog with GridFTP transfers to provide for the creation,
registration, location, and management of dataset replicas. We present the
design of both services and also preliminary performance results. Our
implementations exploit security and other services provided by the Globus
Toolkit.
|
cs/0103026
|
A Decision Tree of Bigrams is an Accurate Predictor of Word Sense
|
cs.CL
|
This paper presents a corpus-based approach to word sense disambiguation
where a decision tree assigns a sense to an ambiguous word based on the bigrams
that occur nearby. This approach is evaluated using the sense-tagged corpora
from the 1998 SENSEVAL word sense disambiguation exercise. It is more accurate
than the average results reported for 30 of 36 words, and is more accurate than
the best results for 19 of 36 words.
|
cs/0104005
|
Bootstrapping Structure using Similarity
|
cs.LG cs.CL
|
In this paper a new similarity-based learning algorithm, inspired by string
edit-distance (Wagner and Fischer, 1974), is applied to the problem of
bootstrapping structure from scratch. The algorithm takes a corpus of
unannotated sentences as input and returns a corpus of bracketed sentences. The
method works on pairs of unstructured sentences or sentences partially
bracketed by the algorithm that have one or more words in common. It finds
parts of sentences that are interchangeable (i.e. the parts of the sentences
that are different in both sentences). These parts are taken as possible
constituents of the same type. While this corresponds to the basic
bootstrapping step of the algorithm, further structure may be learned from
comparison with other (similar) sentences.
We used this method for bootstrapping structure from the flat sentences of
the Penn Treebank ATIS corpus, and compared the resulting structured sentences
to the structured sentences in the ATIS corpus. Similarly, the algorithm was
tested on the OVIS corpus. We obtained 86.04 % non-crossing brackets precision
on the ATIS corpus and 89.39 % non-crossing brackets precision on the OVIS
corpus.
|
cs/0104006
|
ABL: Alignment-Based Learning
|
cs.LG cs.CL
|
This paper introduces a new type of grammar learning algorithm, inspired by
string edit distance (Wagner and Fischer, 1974). The algorithm takes a corpus
of flat sentences as input and returns a corpus of labelled, bracketed
sentences. The method works on pairs of unstructured sentences that have one or
more words in common. When two sentences are divided into parts that are the
same in both sentences and parts that are different, this information is used
to find parts that are interchangeable. These parts are taken as possible
constituents of the same type. After this alignment learning step, the
selection learning step selects the most probable constituents from all
possible constituents.
This method was used to bootstrap structure on the ATIS corpus (Marcus et
al., 1993) and on the OVIS (Openbaar Vervoer Informatie Systeem (OVIS) stands
for Public Transport Information System.) corpus (Bonnema et al., 1997). While
the results are encouraging (we obtained up to 89.25 % non-crossing brackets
precision), this paper will point out some of the shortcomings of our approach
and will suggest possible solutions.
|
cs/0104007
|
Bootstrapping Syntax and Recursion using Alignment-Based Learning
|
cs.LG cs.CL
|
This paper introduces a new type of unsupervised learning algorithm, based on
the alignment of sentences and Harris's (1951) notion of interchangeability.
The algorithm is applied to an untagged, unstructured corpus of natural
language sentences, resulting in a labelled, bracketed version of the corpus.
Firstly, the algorithm aligns all sentences in the corpus in pairs, resulting
in a partition of the sentences consisting of parts of the sentences that are
similar in both sentences and parts that are dissimilar. This information is
used to find (possibly overlapping) constituents. Next, the algorithm selects
(non-overlapping) constituents. Several instances of the algorithm are applied
to the ATIS corpus (Marcus et al., 1993) and the OVIS (Openbaar Vervoer
Informatie Systeem (OVIS) stands for Public Transport Information System.)
corpus (Bonnema et al., 1997). Apart from the promising numerical results, the
most striking result is that even the simplest algorithm based on alignment
learns recursion.
|
cs/0104008
|
Event Indexing Systems for Efficient Selection and Analysis of HERA Data
|
cs.DB cs.IR
|
The design and implementation of two software systems introduced to improve
the efficiency of offline analysis of event data taken with the ZEUS Detector
at the HERA electron-proton collider at DESY are presented. Two different
approaches were made, one using a set of event directories and the other using
a tag database based on a commercial object-oriented database management
system. These are described and compared. Both systems provide quick direct
access to individual collision events in a sequential data store of several
terabytes, and they both considerably improve the event analysis efficiency. In
particular the tag database provides a very flexible selection mechanism and
can dramatically reduce the computing time needed to extract small subsamples
from the total event sample. Gains as large as a factor 20 have been obtained.
|
cs/0104009
|
Evaluating Recommendation Algorithms by Graph Analysis
|
cs.IR cs.DM cs.DS
|
We present a novel framework for evaluating recommendation algorithms in
terms of the `jumps' that they make to connect people to artifacts. This
approach emphasizes reachability via an algorithm within the implicit graph
structure underlying a recommender dataset, and serves as a complement to
evaluation in terms of predictive accuracy. The framework allows us to consider
questions relating algorithmic parameters to properties of the datasets. For
instance, given a particular algorithm `jump,' what is the average path length
from a person to an artifact? Or, what choices of minimum ratings and jumps
maintain a connected graph? We illustrate the approach with a common jump
called the `hammock' using movie recommender datasets.
|
cs/0104010
|
Type Arithmetics: Computation based on the theory of types
|
cs.CL
|
The present paper shows meta-programming turn programming, which is rich
enough to express arbitrary arithmetic computations. We demonstrate a type
system that implements Peano arithmetics, slightly generalized to negative
numbers. Certain types in this system denote numerals. Arithmetic operations on
such types-numerals - addition, subtraction, and even division - are expressed
as type reduction rules executed by a compiler. A remarkable trait is that
division by zero becomes a type error - and reported as such by a compiler.
|
cs/0104011
|
Potholes on the Royal Road
|
cs.NE nlin.AO
|
It is still unclear how an evolutionary algorithm (EA) searches a fitness
landscape, and on what fitness landscapes a particular EA will do well. The
validity of the building-block hypothesis, a major tenet of traditional genetic
algorithm theory, remains controversial despite its continued use to justify
claims about EAs. This paper outlines a research program to begin to answer
some of these open questions, by extending the work done in the royal road
project. The short-term goal is to find a simple class of functions which the
simple genetic algorithm optimizes better than other optimization methods, such
as hillclimbers. A dialectical heuristic for searching for such a class is
introduced. As an example of using the heuristic, the simple genetic algorithm
is compared with a set of hillclimbers on a simple subset of the
hyperplane-defined functions, the pothole functions.
|
cs/0104013
|
Shooting Over or Under the Mark: Towards a Reliable and Flexible
Anticipation in the Economy
|
cs.CE
|
The real monetary economy is grounded upon monetary flow equilibration or the
activity of actualizing monetary flow continuity at each economic agent except
for the central bank. Every update of monetary flow continuity at each agent
constantly causes monetary flow equilibration at the neighborhood agents. Every
monetary flow equilibration as the activity of shooting the mark identified as
monetary flow continuity turns out to be off the mark, and constantly generate
the similar activities in sequence. Monetary flow equilibration ceaselessly
reverberating in the economy performs two functions. One is to seek an
organization on its own, and the other is to perturb the ongoing organization.
Monetary flow equilibration as the agency of seeking and perturbing its
organization also serves as a means of predicting its behavior. The likely
organizational behavior could be the one that remains most robust against
monetary flow equilibration as an agency of applying perturbations.
|
cs/0104014
|
Tracing a Faint Fingerprint of the Invisible Hand?
|
cs.CE
|
Any economic agent constituting the monetary economy maintains the activity
of monetary flow equilibration for fulfilling the condition of monetary flow
continuity in the record, except at the central bank. At the same time,
monetary flow equilibration at one economic agent constantly induces at other
agents in the economy further flow disequilibrium to be eliminated
subsequently. We propose the rate of monetary flow disequilibration as a figure
measuring the progressive movement of the economy. The rate of disequilibration
was read out of both the Japanese and the United States monetary economy
recorded over the last fifty years.
|
cs/0104017
|
Local Search Techniques for Constrained Portfolio Selection Problems
|
cs.CE cs.AI
|
We consider the problem of selecting a portfolio of assets that provides the
investor a suitable balance of expected return and risk. With respect to the
seminal mean-variance model of Markowitz, we consider additional constraints on
the cardinality of the portfolio and on the quantity of individual shares. Such
constraints better capture the real-world trading system, but make the problem
more difficult to be solved with exact methods. We explore the use of local
search techniques, mainly tabu search, for the portfolio selection problem. We
compare and combine previous work on portfolio selection that makes use of the
local search approach and we propose new algorithms that combine different
neighborhood relations. In addition, we show how the use of randomization and
of a simple form of adaptiveness simplifies the setting of a large number of
critical parameters. Finally, we show how our techniques perform on public
benchmarks.
|
cs/0104018
|
Several new domain-type and boundary-type numerical discretization
schemes with radial basis function
|
cs.NA cs.CE
|
This paper is concerned with a few novel RBF-based numerical schemes
discretizing partial differential equations. For boundary-type methods, we
derive the indirect and direct symmetric boundary knot methods (BKM). The
resulting interpolation matrix of both is always symmetric irrespective of
boundary geometry and conditions. In particular, the direct BKM applies the
practical physical variables rather than expansion coefficients and becomes
very competitive to the boundary element method. On the other hand, based on
the multiple reciprocity principle, we invent the RBF-based boundary particle
method (BPM) for general inhomogeneous problems without a need using inner
nodes. The direct and symmetric BPM schemes are also developed.
For domain-type RBF discretization schemes, by using the Green integral we
develop a new Hermite RBF scheme called as the modified Kansa method (MKM),
which differs from the symmetric Hermite RBF scheme in that the MKM discretizes
both governing equation and boundary conditions on the same boundary nodes. The
local spline version of the MKM is named as the finite knot method (FKM). Both
MKM and FKM significantly reduce calculation errors at nodes adjacent to
boundary. In addition, the nonsingular high-order fundamental or general
solution is strongly recommended as the RBF in the domain-type methods and dual
reciprocity method approximation of particular solution relating to the BKM.
It is stressed that all the above discretization methods of boundary-type and
domain-type are symmetric, meshless, and integration-free. The spline-based
schemes will produce desirable symmetric sparse banded interpolation matrix. In
appendix, we present a Hermite scheme to eliminate edge effect on the RBF
geometric modeling and imaging.
|
cs/0104019
|
Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based
Adaptation
|
cs.CL
|
This paper presents a novel method of generating and applying hierarchical,
dynamic topic-based language models. It proposes and evaluates new cluster
generation, hierarchical smoothing and adaptive topic-probability estimation
techniques. These combined models help capture long-distance lexical
dependencies. Experiments on the Broadcast News corpus show significant
improvement in perplexity (10.5% overall and 33.5% on target vocabulary).
|
cs/0104020
|
Coaxing Confidences from an Old Friend: Probabilistic Classifications
from Transformation Rule Lists
|
cs.CL cs.AI
|
Transformation-based learning has been successfully employed to solve many
natural language processing problems. It has many positive features, but one
drawback is that it does not provide estimates of class membership
probabilities.
In this paper, we present a novel method for obtaining class membership
probabilities from a transformation-based rule list classifier. Three
experiments are presented which measure the modeling accuracy and cross-entropy
of the probabilistic classifier on unseen data and the degree to which the
output probabilities from the classifier can be used to estimate confidences in
its classification decisions.
The results of these experiments show that, for the task of text chunking,
the estimates produced by this technique are more informative than those
generated by a state-of-the-art decision tree.
|
cs/0104022
|
Microplanning with Communicative Intentions: The SPUD System
|
cs.CL
|
The process of microplanning encompasses a range of problems in Natural
Language Generation (NLG), such as referring expression generation, lexical
choice, and aggregation, problems in which a generator must bridge underlying
domain-specific representations and general linguistic representations. In this
paper, we describe a uniform approach to microplanning based on declarative
representations of a generator's communicative intent. These representations
describe the results of NLG: communicative intent associates the concrete
linguistic structure planned by the generator with inferences that show how the
meaning of that structure communicates needed information about some
application domain in the current discourse context. Our approach, implemented
in the SPUD (sentence planning using description) microplanner, uses the
lexicalized tree-adjoining grammar formalism (LTAG) to connect structure to
meaning and uses modal logic programming to connect meaning to context. At the
same time, communicative intent representations provide a resource for the
process of NLG. Using representations of communicative intent, a generator can
augment the syntax, semantics and pragmatics of an incomplete sentence
simultaneously, and can assess its progress on the various problems of
microplanning incrementally. The declarative formulation of communicative
intent translates into a well-defined methodology for designing grammatical and
conceptual resources which the generator can use to achieve desired
microplanning behavior in a specified domain.
|
cs/0105001
|
Correction of Errors in a Modality Corpus Used for Machine Translation
by Using Machine-learning Method
|
cs.CL
|
We performed corpus correction on a modality corpus for machine translation
by using such machine-learning methods as the maximum-entropy method. We thus
constructed a high-quality modality corpus based on corpus correction. We
compared several kinds of methods for corpus correction in our experiments and
developed a good method for corpus correction.
|
cs/0105002
|
Man [and Woman] vs. Machine: A Case Study in Base Noun Phrase Learning
|
cs.CL
|
A great deal of work has been done demonstrating the ability of machine
learning algorithms to automatically extract linguistic knowledge from
annotated corpora. Very little work has gone into quantifying the difference in
ability at this task between a person and a machine. This paper is a first step
in that direction.
|
cs/0105003
|
Rule Writing or Annotation: Cost-efficient Resource Usage for Base Noun
Phrase Chunking
|
cs.CL cs.AI
|
This paper presents a comprehensive empirical comparison between two
approaches for developing a base noun phrase chunker: human rule writing and
active learning using interactive real-time human annotation. Several novel
variations on active learning are investigated, and underlying cost models for
cross-modal machine learning comparison are presented and explored. Results
show that it is more efficient and more successful by several measures to train
a system using active learning annotation rather than hand-crafted rule writing
at a comparable level of human labor investment.
|
cs/0105004
|
Parallel implementation of the TRANSIMS micro-simulation
|
cs.CE
|
This paper describes the parallel implementation of the TRANSIMS traffic
micro-simulation. The parallelization method is domain decomposition, which
means that each CPU of the parallel computer is responsible for a different
geographical area of the simulated region. We describe how information between
domains is exchanged, and how the transportation network graph is partitioned.
An adaptive scheme is used to optimize load balancing. We then demonstrate how
computing speeds of our parallel micro-simulations can be systematically
predicted once the scenario and the computer architecture are known. This makes
it possible, for example, to decide if a certain study is feasible with a
certain computing budget, and how to invest that budget. The main ingredients
of the prediction are knowledge about the parallel implementation of the
micro-simulation, knowledge about the characteristics of the partitioning of
the transportation network graph, and knowledge about the interaction of these
quantities with the computer system. In particular, we investigate the
differences between switched and non-switched topologies, and the effects of 10
Mbit, 100 Mbit, and Gbit Ethernet. keywords: Traffic simulation, parallel
computing, transportation planning, TRANSIMS
|
cs/0105005
|
A Complete WordNet1.5 to WordNet1.6 Mapping
|
cs.CL
|
We describe a robust approach for linking already existing lexical/semantic
hierarchies. We use a constraint satisfaction algorithm (relaxation labelling)
to select --among a set of candidates-- the node in a target taxonomy that
bests matches each node in a source taxonomy. In this paper we present the
complete mapping of the nominal, verbal, adjectival and adverbial parts of
WordNet 1.5 onto WordNet 1.6.
|
cs/0105012
|
Joint and conditional estimation of tagging and parsing models
|
cs.CL
|
This paper compares two different ways of estimating statistical language
models. Many statistical NLP tagging and parsing models are estimated by
maximizing the (joint) likelihood of the fully-observed training data. However,
since these applications only require the conditional probability
distributions, these distributions can in principle be learnt by maximizing the
conditional likelihood of the training data. Perhaps somewhat surprisingly,
models estimated by maximizing the joint were superior to models estimated by
maximizing the conditional, even though some of the latter models intuitively
had access to ``more information''.
|
cs/0105014
|
Errata and supplements to: Orthonormal RBF Wavelet and Ridgelet-like
Series and Transforms for High-Dimensional Problems
|
cs.NA cs.CE
|
In recent years some attempts have been done to relate the RBF with wavelets
in handling high dimensional multiscale problems. To the author's knowledge,
however, the orthonormal and bi-orthogonal RBF wavelets are still missing in
the literature. By using the nonsingular general solution and singular
fundamental solution of differential operator, recently the present author,
refer. 3, made some substantial headway to derive the orthonormal RBF wavelets
series and transforms. The methodology can be generalized to create the RBF
wavelets by means of the orthogonal convolution kernel function of various
integral operators. In particular, it is stressed that the presented RBF
wavelets does not apply the tensor product to handle multivariate problems at
all.
This note is to correct some errata in reference 3 and also to supply a few
latest advances in the study of orthornormal RBF wavelet transforms.
|
cs/0105015
|
The alldifferent Constraint: A Survey
|
cs.PL cs.AI
|
The constraint of difference is known to the constraint programming community
since Lauriere introduced Alice in 1978. Since then, several solving strategies
have been designed for this constraint. In this paper we give both a practical
overview and an abstract comparison of these different strategies.
|
cs/0105016
|
Probabilistic top-down parsing and language modeling
|
cs.CL
|
This paper describes the functioning of a broad-coverage probabilistic
top-down parser, and its application to the problem of language modeling for
speech recognition. The paper first introduces key notions in language modeling
and probabilistic parsing, and briefly reviews some previous approaches to
using syntactic structure for language modeling. A lexicalized probabilistic
top-down parser is then presented, which performs very well, in terms of both
the accuracy of returned parses and the efficiency with which they are found,
relative to the best broad-coverage statistical parsers. A new language model
which utilizes probabilistic top-down parsing is then outlined, and empirical
results show that it improves upon previous work in test corpus perplexity.
Interpolation with a trigram model yields an exceptional improvement relative
to the improvement observed by other models, demonstrating the degree to which
the information captured by our parsing model is orthogonal to that captured by
a trigram model. A small recognition experiment also demonstrates the utility
of the model.
|
cs/0105017
|
Optimization Over Zonotopes and Training Support Vector Machines
|
cs.CG cs.AI
|
We make a connection between classical polytopes called zonotopes and Support
Vector Machine (SVM) classifiers. We combine this connection with the ellipsoid
method to give some new theoretical results on training SVMs. We also describe
some special properties of soft margin C-SVMs as parameter C goes to infinity.
|
cs/0105019
|
Robust Probabilistic Predictive Syntactic Processing
|
cs.CL
|
This thesis presents a broad-coverage probabilistic top-down parser, and its
application to the problem of language modeling for speech recognition. The
parser builds fully connected derivations incrementally, in a single pass from
left-to-right across the string. We argue that the parsing approach that we
have adopted is well-motivated from a psycholinguistic perspective, as a model
that captures probabilistic dependencies between lexical items, as part of the
process of building connected syntactic structures. The basic parser and
conditional probability models are presented, and empirical results are
provided for its parsing accuracy on both newspaper text and spontaneous
telephone conversations. Modifications to the probability model are presented
that lead to improved performance. A new language model which uses the output
of the parser is then defined. Perplexity and word error rate reduction are
demonstrated over trigram models, even when the trigram is trained on
significantly more data. Interpolation on a word-by-word basis with a trigram
model yields additional improvements.
|
cs/0105021
|
Solving Composed First-Order Constraints from Discrete-Time Robust
Control
|
cs.LO cs.AI cs.CE
|
This paper deals with a problem from discrete-time robust control which
requires the solution of constraints over the reals that contain both universal
and existential quantifiers. For solving this problem we formulate it as a
program in a (fictitious) constraint logic programming language with explicit
quantifier notation. This allows us to clarify the special structure of the
problem, and to extend an algorithm for computing approximate solution sets of
first-order constraints over the reals to exploit this structure. As a result
we can deal with inputs that are clearly out of reach for current symbolic
solvers.
|
cs/0105022
|
Multi-Channel Parallel Adaptation Theory for Rule Discovery
|
cs.AI
|
In this paper, we introduce a new machine learning theory based on
multi-channel parallel adaptation for rule discovery. This theory is
distinguished from the familiar parallel-distributed adaptation theory of
neural networks in terms of channel-based convergence to the target rules. We
show how to realize this theory in a learning system named CFRule. CFRule is a
parallel weight-based model, but it departs from traditional neural computing
in that its internal knowledge is comprehensible. Furthermore, when the model
converges upon training, each channel converges to a target rule. The model
adaptation rule is derived by multi-level parallel weight optimization based on
gradient descent. Since, however, gradient descent only guarantees local
optimization, a multi-channel regression-based optimization strategy is
developed to effectively deal with this problem. Formally, we prove that the
CFRule model can explicitly and precisely encode any given rule set. Also, we
prove a property related to asynchronous parallel convergence, which is a
critical element of the multi-channel parallel adaptation theory for rule
learning. Thanks to the quantizability nature of the CFRule model, rules can be
extracted completely and soundly via a threshold-based mechanism. Finally, the
practical application of the theory is demonstrated in DNA promoter recognition
and hepatitis prognosis prediction.
|
cs/0105023
|
Generating a 3D Simulation of a Car Accident from a Written Description
in Natural Language: the CarSim System
|
cs.CL
|
This paper describes a prototype system to visualize and animate 3D scenes
from car accident reports, written in French. The problem of generating such a
3D simulation can be divided into two subtasks: the linguistic analysis and the
virtual scene generation. As a means of communication between these two
modules, we first designed a template formalism to represent a written accident
report. The CarSim system first processes written reports, gathers relevant
information, and converts it into a formal description. Then, it creates the
corresponding 3D scene and animates the vehicles.
|
cs/0105025
|
Market-Based Reinforcement Learning in Partially Observable Worlds
|
cs.AI cs.LG cs.MA cs.NE
|
Unlike traditional reinforcement learning (RL), market-based RL is in
principle applicable to worlds described by partially observable Markov
Decision Processes (POMDPs), where an agent needs to learn short-term memories
of relevant previous events in order to execute optimal actions. Most previous
work, however, has focused on reactive settings (MDPs) instead of POMDPs. Here
we reimplement a recent approach to market-based RL and for the first time
evaluate it in a toy POMDP setting.
|
cs/0105026
|
Toward Natural Gesture/Speech Control of a Large Display
|
cs.CV cs.HC
|
In recent years because of the advances in computer vision research, free
hand gestures have been explored as means of human-computer interaction (HCI).
Together with improved speech processing technology it is an important step
toward natural multimodal HCI. However, inclusion of non-predefined continuous
gestures into a multimodal framework is a challenging problem. In this paper,
we propose a structured approach for studying patterns of multimodal language
in the context of a 2D-display control. We consider systematic analysis of
gestures from observable kinematical primitives to their semantics as pertinent
to a linguistic structure. Proposed semantic classification of co-verbal
gestures distinguishes six categories based on their spatio-temporal deixis. We
discuss evolution of a computational framework for gesture and speech
integration which was used to develop an interactive testbed (iMAP). The
testbed enabled elicitation of adequate, non-sequential, multimodal patterns in
a narrative mode of HCI. Conducted user studies illustrate significance of
accounting for the temporal alignment of gesture and speech parts in semantic
mapping. Furthermore, co-occurrence analysis of gesture/speech production
suggests syntactic organization of gestures at the lexical level.
|
cs/0105027
|
Bounds on sample size for policy evaluation in Markov environments
|
cs.LG cs.AI cs.CC
|
Reinforcement learning means finding the optimal course of action in
Markovian environments without knowledge of the environment's dynamics.
Stochastic optimization algorithms used in the field rely on estimates of the
value of a policy. Typically, the value of a policy is estimated from results
of simulating that very policy in the environment. This approach requires a
large amount of simulation as different points in the policy space are
considered. In this paper, we develop value estimators that utilize data
gathered when using one policy to estimate the value of using another policy,
resulting in much more data-efficient algorithms. We consider the question of
accumulating a sufficient experience and give PAC-style bounds.
|
cs/0105030
|
The OLAC Metadata Set and Controlled Vocabularies
|
cs.CL cs.DL
|
As language data and associated technologies proliferate and as the language
resources community rapidly expands, it has become difficult to locate and
reuse existing resources. Are there any lexical resources for such-and-such a
language? What tool can work with transcripts in this particular format? What
is a good format to use for linguistic data of this type? Questions like these
dominate many mailing lists, since web search engines are an unreliable way to
find language resources. This paper describes a new digital infrastructure for
language resource discovery, based on the Open Archives Initiative, and called
OLAC -- the Open Language Archives Community. The OLAC Metadata Set and the
associated controlled vocabularies facilitate consistent description and
focussed searching. We report progress on the metadata set and controlled
vocabularies, describing current issues and soliciting input from the language
resources community.
|
cs/0105032
|
Learning to Cooperate via Policy Search
|
cs.LG cs.MA
|
Cooperative games are those in which both agents share the same payoff
structure. Value-based reinforcement-learning algorithms, such as variants of
Q-learning, have been applied to learning cooperative games, but they only
apply when the game state is completely observable to both agents. Policy
search methods are a reasonable alternative to value-based methods for
partially observable environments. In this paper, we provide a gradient-based
distributed policy-search method for cooperative games and compare the notion
of local optimum to that of Nash equilibrium. We demonstrate the effectiveness
of this method experimentally in a small, partially observable simulated soccer
domain.
|
cs/0105035
|
Historical Dynamics of Lexical System as Random Walk Process
|
cs.CL
|
It is offered to consider word meanings changes in diachrony as
semicontinuous random walk with reflecting and swallowing screens. The basic
characteristics of word life cycle are defined. Verification of the model has
been realized on the data of Russian words distribution on various age periods.
|
cs/0105036
|
Disjunctive Logic Programs with Inheritance
|
cs.LO cs.AI
|
The paper proposes a new knowledge representation language, called DLP<,
which extends disjunctive logic programming (with strong negation) by
inheritance. The addition of inheritance enhances the knowledge modeling
features of the language providing a natural representation of default
reasoning with exceptions.
A declarative model-theoretic semantics of DLP< is provided, which is shown
to generalize the Answer Set Semantics of disjunctive logic programs.
The knowledge modeling features of the language are illustrated by encoding
classical nonmonotonic problems in DLP<.
The complexity of DLP< is analyzed, proving that inheritance does not cause
any computational overhead, as reasoning in DLP< has exactly the same
complexity as reasoning in disjunctive logic programming. This is confirmed by
the existence of an efficient translation from DLP< to plain disjunctive logic
programming. Using this translation, an advanced KR system supporting the DLP<
language has been implemented on top of the DLV system and has subsequently
been integrated into DLV.
|
cs/0105037
|
Integrating Prosodic and Lexical Cues for Automatic Topic Segmentation
|
cs.CL
|
We present a probabilistic model that uses both prosodic and lexical cues for
the automatic segmentation of speech into topically coherent units. We propose
two methods for combining lexical and prosodic information using hidden Markov
models and decision trees. Lexical information is obtained from a speech
recognizer, and prosodic features are extracted automatically from speech
waveforms. We evaluate our approach on the Broadcast News corpus, using the
DARPA-TDT evaluation metrics. Results show that the prosodic model alone is
competitive with word-based segmentation methods. Furthermore, we achieve a
significant reduction in error by combining the prosodic and word-based
knowledge sources.
|
cs/0106003
|
A note on radial basis function computing
|
cs.CE cs.CG
|
This note carries three purposes involving our latest advances on the radial
basis function (RBF) approach. First, we will introduce a new scheme employing
the boundary knot method (BKM) to nonlinear convection-diffusion problem. It is
stressed that the new scheme directly results in a linear BKM formulation of
nonlinear problems by using response point-dependent RBFs, which can be solved
by any linear solver. Then we only need to solve a single nonlinear algebraic
equation for desirable unknown at some inner node of interest. The numerical
results demonstrate high accuracy and efficiency of this nonlinear BKM
strategy. Second, we extend the concepts of distance function, which include
time-space and variable transformation distance functions. Finally, we
demonstrate that if the nodes are symmetrically placed, the RBF coefficient
matrices have either centrosymmetric or skew centrosymmetric structures. The
factorization features of such matrices lead to a considerable reduction in the
RBF computing effort. A simple approach is also presented to reduce the
ill-conditioning of RBF interpolation matrices in general cases.
|
cs/0106004
|
Soft Scheduling
|
cs.AI cs.PL
|
Classical notions of disjunctive and cumulative scheduling are studied from
the point of view of soft constraint satisfaction. Soft disjunctive scheduling
is introduced as an instance of soft CSP and preferences included in this
problem are applied to generate a lower bound based on existing discrete
capacity resource. Timetabling problems at Purdue University and Faculty of
Informatics at Masaryk University considering individual course requirements of
students demonstrate practical problems which are solved via proposed methods.
Implementation of general preference constraint solver is discussed and first
computational results for timetabling problem are presented.
|
cs/0106005
|
The Representation of Legal Contracts
|
cs.AI cs.CY
|
The paper outlines ongoing research on logic-based tools for the analysis and
representation of legal contracts of the kind frequently encountered in
large-scale engineering projects and complex, long-term trading agreements. We
consider both contract formation and contract performance, in each case
identifying the representational issues and the prospects for providing
automated support tools.
|
cs/0106006
|
A Constraint-Driven System for Contract Assembly
|
cs.AI
|
We present an approach for modelling the structure and coarse content of
legal documents with a view to providing automated support for the drafting of
contracts and contract database retrieval. The approach is designed to be
applicable where contract drafting is based on model-form contracts or on
existing examples of a similar type. The main features of the approach are: (1)
the representation addresses the structure and the interrelationships between
the constituent parts of contracts, but not the text of the document itself;
(2) the representation of documents is separated from the mechanisms that
manipulate it; and (3) the drafting process is subject to a collection of
explicitly stated constraints that govern the structure of the documents. We
describe the representation of document instances and of 'generic documents',
which are data structures used to drive the creation of new document instances,
and we show extracts from a sample session to illustrate the features of a
prototype system implemented in MacProlog.
|
cs/0106007
|
Modelling Contractual Arguments
|
cs.AI
|
One influential approach to assessing the "goodness" of arguments is offered
by the Pragma-Dialectical school (p-d) (Eemeren & Grootendorst 1992). This can
be compared with Rhetorical Structure Theory (RST) (Mann & Thompson 1988), an
approach that originates in discourse analysis. In p-d terms an argument is
good if it avoids committing a fallacy, whereas in RST terms an argument is
good if it is coherent. RST has been criticised (Snoeck Henkemans 1997) for
providing only a partially functional account of argument, and similar
criticisms have been raised in the Natural Language Generation (NLG)
community-particularly by Moore & Pollack (1992)- with regards to its account
of intentionality in text in general. Mann and Thompson themselves note that
although RST can be successfully applied to a wide range of texts from diverse
domains, it fails to characterise some types of text, most notably legal
contracts. There is ongoing research in the Artificial Intelligence and Law
community exploring the potential for providing electronic support to contract
negotiators, focusing on long-term, complex engineering agreements (see for
example Daskalopulu & Sergot 1997). This paper provides a brief introduction to
RST and illustrates its shortcomings with respect to contractual text. An
alternative approach for modelling argument structure is presented which not
only caters for contractual text, but also overcomes the aforementioned
limitations of RST.
|
cs/0106008
|
Computing Functional and Relational Box Consistency by Structured
Propagation in Atomic Constraint Systems
|
cs.PL cs.AI
|
Box consistency has been observed to yield exponentially better performance
than chaotic constraint propagation in the interval constraint system obtained
by decomposing the original expression into primitive constraints. The claim
was made that the improvement is due to avoiding decomposition. In this paper
we argue that the improvement is due to replacing chaotic iteration by a more
structured alternative.
To this end we distinguish the existing notion of box consistency from
relational box consistency. We show that from a computational point of view it
is important to maintain the functional structure in constraint systems that
are associated with a system of equations. So far, it has only been considered
computationally important that constraint propagation be fair. With the
additional structure of functional constraint systems, one can define and
implement computationally effective, structured, truncated constraint
propagations. The existing algorithm for box consistency is one such. Our
results suggest that there are others worth investigating.
|
cs/0106010
|
Modelling Legal Contracts as Processes
|
cs.AI cs.LO
|
This paper concentrates on the representation of the legal relations that
obtain between parties once they have entered a contractual agreement and their
evolution as the agreement progresses through time. Contracts are regarded as
process and they are analysed in terms of the obligations that are active at
various points during their life span. An informal notation is introduced that
summarizes conveniently the states of an agreement as it evolves over time.
Such a representation enables us to determine what the status of an agreement
is, given an event or a sequence of events that concern the performance of
actions by the agents involved. This is useful both in the context of contract
drafting (where parties might wish to preview how their agreement might evolve)
and in the context of contract performance monitoring (where parties might with
to establish what their legal positions are once their agreement is in force).
The discussion is based on an example that illustrates some typical patterns of
contractual obligations.
|
cs/0106011
|
Computational properties of environment-based disambiguation
|
cs.CL cs.HC
|
The standard pipeline approach to semantic processing, in which sentences are
morphologically and syntactically resolved to a single tree before they are
interpreted, is a poor fit for applications such as natural language
interfaces. This is because the environment information, in the form of the
objects and events in the application's run-time environment, cannot be used to
inform parsing decisions unless the input sentence is semantically analyzed,
but this does not occur until after parsing in the single-tree semantic
architecture. This paper describes the computational properties of an
alternative architecture, in which semantic analysis is performed on all
possible interpretations during parsing, in polynomial time.
|
cs/0106012
|
Computational Properties of Metaquerying Problems
|
cs.DB cs.CC
|
Metaquerying is a datamining technology by which hidden dependencies among
several database relations can be discovered. This tool has already been
successfully applied to several real-world applications. Recent papers provide
only preliminary results about the complexity of metaquerying. In this paper we
define several variants of metaquerying that encompass, as far as we know, all
variants defined in the literature. We study both the combined complexity and
the data complexity of these variants. We show that, under the combined
complexity measure, metaquerying is generally intractable (unless P=NP), lying
sometimes quite high in the complexity hierarchies (as high as NP^PP),
depending on the characteristics of the plausibility index. However, we are
able to single out some tractable and interesting metaquerying cases (whose
combined complexity is LOGCFL-complete). As for the data complexity of
metaquerying, we prove that, in general, this is in TC0, but lies within AC0 in
some simpler cases. Finally, we discuss implementation of metaqueries, by
providing algorithms to answer them.
|
cs/0106014
|
L.T.Kuzin: Research Program
|
cs.DM cs.AI cs.SE
|
Lev T. Kuzin (1928--1997) is one of the founders of modern cybernetics and
information science in Russia. He was awarded and honored the USSR State Prize
for inspiring vision into the future of technical cybernetics and his invention
and innovation of key technologies.
The last years he interested in the computational models of geometrical and
algebraic nature and their applications in various branches of computer science
and information technologies. In the recent years the interest in computation
models based on object notion has grown tremendously stimulating an interest to
Kuzin's ideas. This year of 50th Anniversary of Cybernetics and on the occasion
of his 70th birthday on September 12, 1998 seems especially appropriate for
discussing Kuzin's Research Program.
|
cs/0106015
|
Organizing Encyclopedic Knowledge based on the Web and its Application
to Question Answering
|
cs.CL
|
We propose a method to generate large-scale encyclopedic knowledge, which is
valuable for much NLP research, based on the Web. We first search the Web for
pages containing a term in question. Then we use linguistic patterns and HTML
structures to extract text fragments describing the term. Finally, we organize
extracted term descriptions based on word senses and domains. In addition, we
apply an automatically generated encyclopedia to a question answering system
targeting the Japanese Information-Technology Engineers Examination.
|
cs/0106016
|
File mapping Rule-based DBMS and Natural Language Processing
|
cs.CL cs.AI cs.DB cs.IR cs.LG cs.PL
|
This paper describes the system of storage, extract and processing of
information structured similarly to the natural language. For recursive
inference the system uses the rules having the same representation, as the
data. The environment of storage of information is provided with the File
Mapping (SHM) mechanism of operating system. In the paper the main principles
of construction of dynamic data structure and language for record of the
inference rules are stated; the features of available implementation are
considered and the description of the application realizing semantic
information retrieval on the natural language is given.
|
cs/0106021
|
Object-oriented solutions
|
cs.LO cs.DB cs.PL
|
In this paper are briefly outlined the motivations, mathematical ideas in
use, pre-formalization and assumptions, object-as-functor construction, `soft'
types and concept constructions, case study for concepts based on variable
domains, extracting a computational background, and examples of evaluations.
|
cs/0106023
|
Object-oriented tools for advanced applications
|
cs.LO cs.DB cs.PL
|
This paper contains a brief discussion of the Application Development
Environment (ADE) that is used to build database applications involving the
graphical user interface (GUI). ADE computing separates the database access and
the user interface. The variety of applications may be generated that
communicate with different and distinct desktop databases. The advanced
techniques allows to involve remote or stored procedures retrieval and call.
|
cs/0106024
|
Objects and their computational framework
|
cs.LO cs.DB cs.PL
|
Most of the object notions are embedded into a logical domain, especially
when dealing with a database theory. Thus, their properties within a
computational domain are not yet studied properly. The main topic of this paper
is to analyze different concepts of the distinct computational primitive frames
to extract the useful object properties and their possible advantages. Some
important metaoperators are used to unify the approaches and to establish their
possible correspondences.
|
cs/0106025
|
Information Integration and Computational Logic
|
cs.AI
|
Information Integration is a young and exciting field with enormous research
and commercial significance in the new world of the Information Society. It
stands at the crossroad of Databases and Artificial Intelligence requiring
novel techniques that bring together different methods from these fields.
Information from disparate heterogeneous sources often with no a-priori common
schema needs to be synthesized in a flexible, transparent and intelligent way
in order to respond to the demands of a query thus enabling a more informed
decision by the user or application program. The field although relatively
young has already found many practical applications particularly for
integrating information over the World Wide Web. This paper gives a brief
introduction of the field highlighting some of the main current and future
research issues and application areas. It attempts to evaluate the current and
potential role of Computational Logic in this and suggests some of the problems
where logic-based techniques could be used.
|
cs/0106026
|
Event Driven Computations for Relational Query Language
|
cs.LO cs.DB cs.PL
|
This paper deals with an extended model of computations which uses the
parameterized families of entities for data objects and reflects a preliminary
outline of this problem. Some topics are selected out, briefly analyzed and
arranged to cover a general problem. The authors intended more to discuss the
particular topics, their interconnection and computational meaning as a panel
proposal, so that this paper is not yet to be evaluated as a closed journal
paper. To save space all the technical and implementation features are left for
the future paper.
Data object is a schematic entity and modelled by the partial function. A
notion of type is extended by the variable domains which depend on events and
types. A variable domain is built from the potential and schematic individuals
and generates the valid families of types depending on a sequence of events.
Each valid type consists of the actual individuals which are actual relatively
the event or script. In case when a type depends on the script then
corresponding view for data objects is attached, otherwise a snapshot is
generated. The type thus determined gives an upper range for typed variables so
that the local ranges are event driven resulting is the families of actual
individuals. An expressive power of the query language is extended using the
extensional and intentional relations.
|
cs/0106027
|
Event Driven Objects
|
cs.LO cs.DB cs.SE
|
A formal consideration in this paper is given for the essential notations to
characterize the object that is distinguished in a problem domain. The distinct
object is represented by another idealized object, which is a schematic
element. When the existence of an element is significant, then a class of these
partial elements is dropped down into actual, potential and virtual objects.
The potential objects are gathered into the variable domains which are the
extended ranges for unbound variables. The families of actual objects are shown
to be parameterized with the types and events. The transitions between events
are shown to be driven by the scripts. A computational framework arises which
is described by the commutative diagrams.
|
cs/0106028
|
Pricing Virtual Paths with Quality-of-Service Guarantees as Bundle
Derivatives
|
cs.NI cs.CE
|
We describe a model of a communication network that allows us to price
complex network services as financial derivative contracts based on the spot
price of the capacity in individual routers. We prove a theorem of a Girsanov
transform that is useful for pricing linear derivatives on underlying assets,
which can be used to price many complex network services, and it is used to
price an option that gives access to one of several virtual channels between
two network nodes, during a specified future time interval. We give the
continuous time hedging strategy, for which the option price is independent of
the service providers attitude towards risk. The option price contains the
density function of a sum of lognormal variables, which has to be evaluated
numerically.
|
cs/0106029
|
Building Views with Description Logics in ADE: Application Development
Environment
|
cs.LO cs.DB cs.DS
|
Any of views is formally defined within description logics that were
established as a family of logics for modeling complex hereditary structures
and have a suitable expressive power. This paper considers the Application
Development Environment (ADE) over generalized variable concepts that are used
to build database applications involving the supporting views. The front-end
user interacts the database via separate ADE access mechanism intermediated by
view support. The variety of applications may be generated that communicate
with different and distinct desktop databases in a data warehouse. The advanced
techniques allows to involve remote or stored procedures retrieval and call.
|
cs/0106030
|
Logic, Individuals and Concepts
|
cs.LO cs.DB cs.DM cs.SE
|
This extended abstract gives a brief outline of the connections between the
descriptions and variable concepts. Thus, the notion of a concept is extended
to include both the syntax and semantics features. The evaluation map in use is
parameterized by a kind of computational environment, the index, giving rise to
indexed concepts. The concepts are inhabited into language by the descriptions
from the higher order logic. In general the idea of object-as-functor should
assist the designer to outline a programming tool in conceptual shell style.
|
cs/0106031
|
Complexity Results and Practical Algorithms for Logics in Knowledge
Representation
|
cs.LO cs.AI
|
Description Logics (DLs) are used in knowledge-based systems to represent and
reason about terminological knowledge of the application domain in a
semantically well-defined manner. In this thesis, we establish a number of
novel complexity results and give practical algorithms for expressive DLs that
provide different forms of counting quantifiers.
We show that, in many cases, adding local counting in the form of qualifying
number restrictions to DLs does not increase the complexity of the inference
problems, even if binary coding of numbers in the input is assumed. On the
other hand, we show that adding different forms of global counting restrictions
to a logic may increase the complexity of the inference problems dramatically.
We provide exact complexity results and a practical, tableau based algorithm
for the DL SHIQ, which forms the basis of the highly optimized DL system iFaCT.
Finally, we describe a tableau algorithm for the clique guarded fragment
(CGF), which we hope will serve as the basis for an efficient implementation of
a CGF reasoner.
|
cs/0106033
|
location.location.location: Internet Addresses as Evolving Property
|
cs.CY cs.HC cs.IR
|
I describe recent developments in the rules governing registration and
ownership of Internet and World Wide Web addresses or "domain names." I
consider the idea that "virtual" properties like domain names are more similar
to real estate than to trademarks. Therefore, it would be economically
efficient to grant domain name owners stronger rights than those of trademarks
and copyright holders.
|
cs/0106034
|
Solving equations in the relational algebra
|
cs.LO cs.DB
|
Enumerating all solutions of a relational algebra equation is a natural and
powerful operation which, when added as a query language primitive to the
nested relational algebra, yields a query language for nested relational
databases, equivalent to the well-known powerset algebra. We study
\emph{sparse} equations, which are equations with at most polynomially many
solutions. We look at their complexity, and compare their expressive power with
that of similar notions in the powerset algebra.
|
cs/0106035
|
Polymorphic type inference for the relational algebra
|
cs.LO cs.DB
|
We give a polymorphic account of the relational algebra. We introduce a
formalism of ``type formulas'' specifically tuned for relational algebra
expressions, and present an algorithm that computes the ``principal'' type for
a given expression. The principal type of an expression is a formula that
specifies, in a clear and concise manner, all assignments of types (sets of
attributes) to relation names, under which a given relational algebra
expression is well-typed, as well as the output type that expression will have
under each of these assignments. Topics discussed include complexity and
polymorphic expressive power.
|
cs/0106036
|
Convergence and Error Bounds for Universal Prediction of Nonbinary
Sequences
|
cs.LG cs.AI cs.CC math.PR
|
Solomonoff's uncomputable universal prediction scheme $\xi$ allows to predict
the next symbol $x_k$ of a sequence $x_1...x_{k-1}$ for any Turing computable,
but otherwise unknown, probabilistic environment $\mu$. This scheme will be
generalized to arbitrary environmental classes, which, among others, allows the
construction of computable universal prediction schemes $\xi$. Convergence of
$\xi$ to $\mu$ in a conditional mean squared sense and with $\mu$ probability 1
is proven. It is shown that the average number of prediction errors made by the
universal $\xi$ scheme rapidly converges to those made by the best possible
informed $\mu$ scheme. The schemes, theorems and proofs are given for general
finite alphabet, which results in additional complications as compared to the
binary case. Several extensions of the presented theory and results are
outlined. They include general loss functions and bounds, games of chance,
infinite alphabet, partial and delayed prediction, classification, and more
active systems.
|
cs/0106039
|
Iterative Residual Rescaling: An Analysis and Generalization of LSI
|
cs.CL cs.IR
|
We consider the problem of creating document representations in which
inter-document similarity measurements correspond to semantic similarity. We
first present a novel subspace-based framework for formalizing this task. Using
this framework, we derive a new analysis of Latent Semantic Indexing (LSI),
showing a precise relationship between its performance and the uniformity of
the underlying distribution of documents over topics. This analysis helps
explain the improvements gained by Ando's (2000) Iterative Residual Rescaling
(IRR) algorithm: IRR can compensate for distributional non-uniformity. A
further benefit of our framework is that it provides a well-motivated,
effective method for automatically determining the rescaling factor IRR depends
on, leading to further improvements. A series of experiments over various
settings and with several evaluation metrics validates our claims.
|
cs/0106040
|
Stacking classifiers for anti-spam filtering of e-mail
|
cs.CL cs.AI
|
We evaluate empirically a scheme for combining classifiers, known as stacked
generalization, in the context of anti-spam filtering, a novel cost-sensitive
application of text categorization. Unsolicited commercial e-mail, or "spam",
floods mailboxes, causing frustration, wasting bandwidth, and exposing minors
to unsuitable content. Using a public corpus, we show that stacking can improve
the efficiency of automatically induced anti-spam filters, and that such
filters can be used in real-life applications.
|
cs/0106043
|
Using the Distribution of Performance for Studying Statistical NLP
Systems and Corpora
|
cs.CL
|
Statistical NLP systems are frequently evaluated and compared on the basis of
their performances on a single split of training and test data. Results
obtained using a single split are, however, subject to sampling noise. In this
paper we argue in favour of reporting a distribution of performance figures,
obtained by resampling the training data, rather than a single number. The
additional information from distributions can be used to make statistically
quantified statements about differences across parameter settings, systems, and
corpora.
|
cs/0106044
|
A Sequential Model for Multi-Class Classification
|
cs.AI cs.CL cs.LG
|
Many classification problems require decisions among a large number of
competing classes. These tasks, however, are not handled well by general
purpose learning methods and are usually addressed in an ad-hoc fashion. We
suggest a general approach -- a sequential learning model that utilizes
classifiers to sequentially restrict the number of competing classes while
maintaining, with high probability, the presence of the true outcome in the
candidates set. Some theoretical and computational properties of the model are
discussed and we argue that these are important in NLP-like domains. The
advantages of the model are illustrated in an experiment in part-of-speech
tagging.
|
cs/0106046
|
Expressing the cone radius in the relational calculus with real
polynomial constraints
|
cs.DB cs.LO
|
We show that there is a query expressible in first-order logic over the reals
that returns, on any given semi-algebraic set A, for every point a radius
around which A is conical. We obtain this result by combining famous results
from calculus and real algebraic geometry, notably Sard's theorem and Thom's
first isotopy lemma, with recent algorithmic results by Rannou.
|
cs/0106047
|
Modeling informational novelty in a conversational system with a hybrid
statistical and grammar-based approach to natural language generation
|
cs.CL
|
We present a hybrid statistical and grammar-based system for surface natural
language generation (NLG) that uses grammar rules, conditions on using those
grammar rules, and corpus statistics to determine the word order. We also
describe how this surface NLG module is implemented in a prototype
conversational system, and how it attempts to model informational novelty by
varying the word order. Using a combination of rules and statistical
information, the conversational system expresses the novel information
differently than the given information, based on the run-time dialog state. We
also discuss our plans for evaluating the generation strategy.
|
cs/0106054
|
Software Toolkit for Building Embedded and Distributed Knowledge-based
Systems
|
cs.AI cs.DC cs.MA
|
The paper discusses the basic principles and the architecture of the software
toolkit for constructing knowledge-based systems which can be used
cooperatively over computer networks and also embedded into larger software
systems in different ways. Presented architecture is based on frame knowledge
representation and production rules, which also allows to interface high-level
programming languages and relational databases by exposing corresponding
classes or database tables as frames. Frames located on the remote computers
can also be transparently accessed and used in inference, and the dynamic
knowledge for specific frames can also be transferred over the network. The
issues of implementation of such a system are addressed, which use Java
programming language, CORBA and XML for external knowledge representation.
Finally, some applications of the toolkit are considered, including e-business
approach to knowledge sharing, intelligent web behaviours, etc.
|
cs/0106055
|
A Seamless Integration of Association Rule Mining with Database Systems
|
cs.DB
|
The need for Knowledge and Data Discovery Management Systems (KDDMS) that
support ad hoc data mining queries has been long recognized. A significant
amount of research has gone into building tightly coupled systems that
integrate association rule mining with database systems. In this paper, we
describe a seamless integration scheme for database queries and association
rule discovery using a common query optimizer for both. Query trees of
expressions in an extended algebra are used for internal representation in the
optimizer. The algebraic representation is flexible enough to deal with
constrained association rule queries and other variations of association rule
specifications. We propose modularization to simplify the query tree for
complex tasks in data mining. It paves the way for making use of existing
algorithms for constructing query plans in the optimization process. How the
integration scheme we present will facilitate greater user control over the
data mining process is also discussed. The work described in this paper forms
part of a larger project for fully integrating data mining with database
management.
|
cs/0106059
|
CHR as grammar formalism. A first report
|
cs.PL cs.CL
|
Grammars written as Constraint Handling Rules (CHR) can be executed as
efficient and robust bottom-up parsers that provide a straightforward,
non-backtracking treatment of ambiguity. Abduction with integrity constraints
as well as other dynamic hypothesis generation techniques fit naturally into
such grammars and are exemplified for anaphora resolution, coordination and
text interpretation.
|
cs/0107002
|
Enhancing Constraint Propagation with Composition Operators
|
cs.AI
|
Constraint propagation is a general algorithmic approach for pruning the
search space of a CSP. In a uniform way, K. R. Apt has defined a computation as
an iteration of reduction functions over a domain. He has also demonstrated the
need for integrating static properties of reduction functions (commutativity
and semi-commutativity) to design specialized algorithms such as AC3 and DAC.
We introduce here a set of operators for modeling compositions of reduction
functions. Two of the major goals are to tackle parallel computations, and
dynamic behaviours (such as slow convergence).
|
cs/0107005
|
The Role of Conceptual Relations in Word Sense Disambiguation
|
cs.CL
|
We explore many ways of using conceptual distance measures in Word Sense
Disambiguation, starting with the Agirre-Rigau conceptual density measure. We
use a generalized form of this measure, introducing many (parameterized)
refinements and performing an exhaustive evaluation of all meaningful
combinations. We finally obtain a 42% improvement over the original algorithm,
and show that measures of conceptual distance are not worse indicators for
sense disambiguation than measures based on word-coocurrence (exemplified by
the Lesk algorithm). Our results, however, reinforce the idea that only a
combination of different sources of knowledge might eventually lead to accurate
word sense disambiguation.
|
cs/0107006
|
Looking Under the Hood : Tools for Diagnosing your Question Answering
Engine
|
cs.CL
|
In this paper we analyze two question answering tasks : the TREC-8 question
answering task and a set of reading comprehension exams. First, we show that
Q/A systems perform better when there are multiple answer opportunities per
question. Next, we analyze common approaches to two subproblems: term overlap
for answer sentence identification, and answer typing for short answer
extraction. We present general tools for analyzing the strengths and
limitations of techniques for these subproblems. Our results quantify the
limitations of both term overlap and answer typing to distinguish between
competing answer candidates.
|
cs/0107007
|
The Risk Profile Problem for Stock Portfolio Optimization
|
cs.CE cs.DM cs.DS
|
This work initiates research into the problem of determining an optimal
investment strategy for investors with different attitudes towards the
trade-offs of risk and profit. The probability distribution of the return
values of the stocks that are considered by the investor are assumed to be
known, while the joint distribution is unknown. The problem is to find the best
investment strategy in order to minimize the probability of losing a certain
percentage of the invested capital based on different attitudes of the
investors towards future outcomes of the stock market.
For portfolios made up of two stocks, this work shows how to exactly and
quickly solve the problem of finding an optimal portfolio for aggressive or
risk-averse investors, using an algorithm based on a fast greedy solution to a
maximum flow problem. However, an investor looking for an average-case
guarantee (so is neither aggressive or risk-averse) must deal with a more
difficult problem. In particular, it is #P-complete to compute the distribution
function associated with the average-case bound. On the positive side,
approximate answers can be computed by using random sampling techniques similar
to those for high-dimensional volume estimation. When k>2 stocks are
considered, it is proved that a simple solution based on the same flow concepts
as the 2-stock algorithm would imply that P = NP, so is highly unlikely. This
work gives approximation algorithms for this case as well as exact algorithms
for some important special cases.
|
cs/0107012
|
Three-Stage Quantitative Neural Network Model of the Tip-of-the-Tongue
Phenomenon
|
cs.CL cs.AI q-bio.NC q-bio.QM
|
A new three-stage computer artificial neural network model of the
tip-of-the-tongue phenomenon is shortly described, and its stochastic nature
was demonstrated. A way to calculate strength and appearance probability of
tip-of-the-tongue states, neural network mechanism of feeling-of-knowing
phenomenon are proposed. The model synthesizes memory, psycholinguistic, and
metamemory approaches, bridges speech errors and naming chronometry research
traditions. A model analysis of a tip-of-the-tongue case from Anton Chekhov's
short story 'A Horsey Name' is performed. A new 'throw-up-one's-arms effect' is
defined.
|
cs/0107013
|
The Logic Programming Paradigm and Prolog
|
cs.PL cs.AI
|
This is a tutorial on logic programming and Prolog appropriate for a course
on programming languages for students familiar with imperative programming.
|
cs/0107014
|
Transformations of CCP programs
|
cs.PL cs.AI cs.LO
|
We introduce a transformation system for concurrent constraint programming
(CCP). We define suitable applicability conditions for the transformations
which guarantee that the input/output CCP semantics is preserved also when
distinguishing deadlocked computations from successful ones and when
considering intermediate results of (possibly) non-terminating computations.
The system allows us to optimize CCP programs while preserving their intended
meaning: In addition to the usual benefits that one has for sequential
declarative languages, the transformation of concurrent programs can also lead
to the elimination of communication channels and of synchronization points, to
the transformation of non-deterministic computations into deterministic ones,
and to the crucial saving of computational space. Furthermore, since the
transformation system preserves the deadlock behavior of programs, it can be
used for proving deadlock freeness of a given program wrt a class of queries.
To this aim it is sometimes sufficient to apply our transformations and to
specialize the resulting program wrt the given queries in such a way that the
obtained program is trivially deadlock free.
|
cs/0107016
|
Introduction to the CoNLL-2001 Shared Task: Clause Identification
|
cs.CL
|
We describe the CoNLL-2001 shared task: dividing text into clauses. We give
background information on the data sets, present a general overview of the
systems that have taken part in the shared task and briefly discuss their
performance.
|
cs/0107017
|
Learning Computational Grammars
|
cs.CL
|
This paper reports on the "Learning Computational Grammars" (LCG) project, a
postdoc network devoted to studying the application of machine learning
techniques to grammars suitable for computational use. We were interested in a
more systematic survey to understand the relevance of many factors to the
success of learning, esp. the availability of annotated data, the kind of
dependencies in the data, and the availability of knowledge bases (grammars).
We focused on syntax, esp. noun phrase (NP) syntax.
|
cs/0107018
|
Combining a self-organising map with memory-based learning
|
cs.CL
|
Memory-based learning (MBL) has enjoyed considerable success in corpus-based
natural language processing (NLP) tasks and is thus a reliable method of
getting a high-level of performance when building corpus-based NLP systems.
However there is a bottleneck in MBL whereby any novel testing item has to be
compared against all the training items in memory base. For this reason there
has been some interest in various forms of memory editing whereby some method
of selecting a subset of the memory base is employed to reduce the number of
comparisons. This paper investigates the use of a modified self-organising map
(SOM) to select a subset of the memory items for comparison. This method
involves reducing the number of comparisons to a value proportional to the
square root of the number of training items. The method is tested on the
identification of base noun-phrases in the Wall Street Journal corpus, using
sections 15 to 18 for training and section 20 for testing.
|
cs/0107019
|
Applying Natural Language Generation to Indicative Summarization
|
cs.CL
|
The task of creating indicative summaries that help a searcher decide whether
to read a particular document is a difficult task. This paper examines the
indicative summarization task from a generation perspective, by first analyzing
its required content via published guidelines and corpus analysis. We show how
these summaries can be factored into a set of document features, and how an
implemented content planner uses the topicality document feature to create
indicative multidocument query-based summaries.
|
cs/0107020
|
Transformation-Based Learning in the Fast Lane
|
cs.CL
|
Transformation-based learning has been successfully employed to solve many
natural language processing problems. It achieves state-of-the-art performance
on many natural language processing tasks and does not overtrain easily.
However, it does have a serious drawback: the training time is often
intorelably long, especially on the large corpora which are often used in NLP.
In this paper, we present a novel and realistic method for speeding up the
training time of a transformation-based learner without sacrificing
performance. The paper compares and contrasts the training time needed and
performance achieved by our modified learner with two other systems: a standard
transformation-based learner, and the ICA system \cite{hepple00:tbl}. The
results of these experiments show that our system is able to achieve a
significant improvement in training time while still achieving the same
performance as a standard transformation-based learner. This is a valuable
contribution to systems and algorithms which utilize transformation-based
learning at any part of the execution.
|
cs/0107021
|
Multidimensional Transformation-Based Learning
|
cs.CL
|
This paper presents a novel method that allows a machine learning algorithm
following the transformation-based learning paradigm \cite{brill95:tagging} to
be applied to multiple classification tasks by training jointly and
simultaneously on all fields. The motivation for constructing such a system
stems from the observation that many tasks in natural language processing are
naturally composed of multiple subtasks which need to be resolved
simultaneously; also tasks usually learned in isolation can possibly benefit
from being learned in a joint framework, as the signals for the extra tasks
usually constitute inductive bias.
The proposed algorithm is evaluated in two experiments: in one, the system is
used to jointly predict the part-of-speech and text chunks/baseNP chunks of an
English corpus; and in the second it is used to learn the joint prediction of
word segment boundaries and part-of-speech tagging for Chinese. The results
show that the simultaneous learning of multiple tasks does achieve an
improvement in each task upon training the same tasks sequentially. The
part-of-speech tagging result of 96.63% is state-of-the-art for individual
systems on the particular train/test split.
|
cs/0107026
|
Annotated revision programs
|
cs.AI cs.LO
|
Revision programming is a formalism to describe and enforce updates of belief
sets and databases. That formalism was extended by Fitting who assigned
annotations to revision atoms. Annotations provide a way to quantify the
confidence (probability) that a revision atom holds. The main goal of our paper
is to reexamine the work of Fitting, argue that his semantics does not always
provide results consistent with intuition, and to propose an alternative
treatment of annotated revision programs. Our approach differs from that
proposed by Fitting in two key aspects: we change the notion of a model of a
program and we change the notion of a justified revision. We show that under
this new approach fundamental properties of justified revisions of standard
revision programs extend to the annotated case.
|
cs/0107027
|
Fixed-parameter complexity of semantics for logic programs
|
cs.LO cs.AI
|
A decision problem is called parameterized if its input is a pair of strings.
One of these strings is referred to as a parameter. The problem: given a
propositional logic program P and a non-negative integer k, decide whether P
has a stable model of size no more than k, is an example of a parameterized
decision problem with k serving as a parameter. Parameterized problems that are
NP-complete often become solvable in polynomial time if the parameter is fixed.
The problem to decide whether a program P has a stable model of size no more
than k, where k is fixed and not a part of input, can be solved in time
O(mn^k), where m is the size of P and n is the number of atoms in P. Thus, this
problem is in the class P. However, algorithms with the running time given by a
polynomial of order k are not satisfactory even for relatively small values of
k.
The key question then is whether significantly better algorithms (with the
degree of the polynomial not dependent on k) exist. To tackle it, we use the
framework of fixed-parameter complexity. We establish the fixed-parameter
complexity for several parameterized decision problems involving models,
supported models and stable models of logic programs. We also establish the
fixed-parameter complexity for variants of these problems resulting from
restricting attention to Horn programs and to purely negative programs. Most of
the problems considered in the paper have high fixed-parameter complexity.
Thus, it is unlikely that fixing bounds on models (supported models, stable
models) will lead to fast algorithms to decide the existence of such models.
|
cs/0107028
|
Propositional satisfiability in answer-set programming
|
cs.AI cs.LO
|
We show that propositional logic and its extensions can support answer-set
programming in the same way stable logic programming and disjunctive logic
programming do. To this end, we introduce a logic based on the logic of
propositional schemata and on a version of the Closed World Assumption. We call
it the extended logic of propositional schemata with CWA (PS+, in symbols). An
important feature of this logic is that it supports explicit modeling of
constraints on cardinalities of sets. In the paper, we characterize the class
of problems that can be solved by finite PS+ theories. We implement a
programming system based on the logic PS+ and design and implement a solver for
processing theories in PS+. We present encouraging performance results for our
approach --- we show it to be competitive with smodels, a state-of-the-art
answer-set programming system based on stable logic programming.
|
cs/0107029
|
aspps --- an implementation of answer-set programming with propositional
schemata
|
cs.AI cs.LO
|
We present an implementation of an answer-set programming paradigm, called
aspps (short for answer-set programming with propositional schemata). The
system aspps is designed to process PS+ theories. It consists of two basic
modules. The first module, psgrnd, grounds an PS+ theory. The second module,
referred to as aspps, is a solver. It computes models of ground PS+ theories.
|
cs/0107032
|
Coupled Clustering: a Method for Detecting Structural Correspondence
|
cs.LG cs.CL cs.IR
|
This paper proposes a new paradigm and computational framework for
identification of correspondences between sub-structures of distinct composite
systems. For this, we define and investigate a variant of traditional data
clustering, termed coupled clustering, which simultaneously identifies
corresponding clusters within two data sets. The presented method is
demonstrated and evaluated for detecting topical correspondences in textual
corpora.
|
cs/0107033
|
Yet another zeta function and learning
|
cs.LG cs.DM math.PR
|
We study the convergence speed of the batch learning algorithm, and compare
its speed to that of the memoryless learning algorithm and of learning with
memory (as analyzed in joint work with N. Komarova). We obtain precise results
and show in particular that the batch learning algorithm is never worse than
the memoryless learning algorithm (at least asymptotically). Its performance
vis-a-vis learning with full memory is less clearcut, and depends on
certainprobabilistic assumptions. These results necessitate theintroduction of
the moment zeta function of a probability distribution and the study of some of
its properties.
|
cs/0108003
|
The Partial Evaluation Approach to Information Personalization
|
cs.IR cs.PL
|
Information personalization refers to the automatic adjustment of information
content, structure, and presentation tailored to an individual user. By
reducing information overload and customizing information access,
personalization systems have emerged as an important segment of the Internet
economy. This paper presents a systematic modeling methodology - PIPE
(`Personalization is Partial Evaluation') - for personalization.
Personalization systems are designed and implemented in PIPE by modeling an
information-seeking interaction in a programmatic representation. The
representation supports the description of information-seeking activities as
partial information and their subsequent realization by partial evaluation, a
technique for specializing programs. We describe the modeling methodology at a
conceptual level and outline representational choices. We present two
application case studies that use PIPE for personalizing web sites and describe
how PIPE suggests a novel evaluation criterion for information system designs.
Finally, we mention several fundamental implications of adopting the PIPE model
for personalization and when it is (and is not) applicable.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.