| id | title | categories | abstract |
|---|---|---|---|
cs/9809028
|
Separating Dependency from Constituency in a Tree Rewriting System
|
cs.CL
|
In this paper we present a new tree-rewriting formalism called Link-Sharing
Tree Adjoining Grammar (LSTAG) which is a variant of synchronous TAGs. Using
LSTAG we define an approach towards coordination where linguistic dependency is
distinguished from the notion of constituency. Explicitly distinguishing
dependency from constituency gives a better formal understanding of how
coordination is represented, compared to previous approaches that use
tree-rewriting systems which conflate the two notions.
|
cs/9809029
|
Incremental Parser Generation for Tree Adjoining Grammars
|
cs.CL
|
This paper describes the incremental generation of parse tables for the
LR-type parsing of Tree Adjoining Languages (TALs). First, a lazy generation
of LR-type parsers for TALs is defined, in which parse tables are created by
need while parsing. We then describe an incremental parser generator for TALs
which responds to modifications of the input grammar by updating the parse
tables built so far.
|
cs/9809032
|
Stable models and an alternative logic programming paradigm
|
cs.LO cs.AI
|
In this paper we reexamine the place and role of stable model semantics in
logic programming and contrast it with a least Herbrand model approach to Horn
programs. We demonstrate that inherent features of stable model semantics
naturally lead to a logic programming system that offers an interesting
alternative to more traditional logic programming styles of Horn logic
programming, stratified logic programming and logic programming with
well-founded semantics. The proposed approach is based on the interpretation of
program clauses as constraints. In this setting programs do not describe a
single intended model, but a family of stable models. These stable models
encode solutions to the constraint satisfaction problem described by the
program. Our approach imposes restrictions on the syntax of logic programs. In
particular, function symbols are eliminated from the language. We argue that
the resulting logic programming system is well-attuned to problems in the class
NP, has a well-defined domain of applications, and an emerging methodology of
programming. We point out that what makes the whole approach viable is recent
progress in implementations of algorithms to compute stable models of
propositional logic programs.
|
cs/9809033
|
Efficient Retrieval of Similar Time Sequences Using DFT
|
cs.DB
|
We propose an improvement of the known DFT-based indexing technique for fast
retrieval of similar time sequences. We use the last few Fourier coefficients
in the distance computation without storing them in the index since every
coefficient at the end is the complex conjugate of a coefficient at the
beginning and as strong as its counterpart. We show analytically that this
observation can accelerate the search time of the index by more than a factor
of two. This result was confirmed by our experiments, which were carried out on
real stock prices and synthetic data.
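
The symmetry argument lends itself to a short sketch. A minimal illustration in Python/NumPy, assuming an orthonormally scaled DFT so that frequency-domain distances match time-domain ones; this is not the authors' implementation:

```python
import numpy as np

def dft_index_distance(x, y, k=3):
    """Lower-bounding Euclidean distance between two real sequences using
    only the first k DFT coefficients, as stored in the index. For a real
    signal, coefficient n-j is the complex conjugate of coefficient j, so
    each stored coefficient (apart from the DC term) also stands in for
    its mirror: its squared contribution is simply doubled. A minimal
    sketch of the idea in the abstract, not the authors' code.
    """
    n = len(x)
    X = np.fft.fft(x) / np.sqrt(n)       # orthonormal scaling: Parseval holds
    Y = np.fft.fft(y) / np.sqrt(n)
    d2 = abs(X[0] - Y[0]) ** 2           # DC term counted once
    for j in range(1, k):
        d2 += 2 * abs(X[j] - Y[j]) ** 2  # coefficient j and its twin n-j
    return np.sqrt(d2)
```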
|
cs/9809034
|
Semantics and Conversations for an Agent Communication Language
|
cs.MA cs.AI
|
We address the issues of semantics and conversations for agent communication
languages and the Knowledge Query Manipulation Language (KQML) in particular.
Based on ideas from speech act theory, we present a semantic description for
KQML that associates ``cognitive'' states of the agent with the use of the
language's primitives (performatives). We have used this approach to describe
the semantics for the whole set of reserved KQML performatives. Building on the
semantics, we devise the conversation policies, i.e., a formal description of
how KQML performatives may be combined into KQML exchanges (conversations),
using a Definite Clause Grammar. Our research offers methods for a speech act
theory-based semantic description of a language of communication acts and for
the specification of the protocols associated with these acts. Languages of
communication acts address the issue of communication among software
applications at a level of abstraction that is useful to the emerging software
agents paradigm.
|
cs/9809036
|
Document Archiving, Replication and Migration Container for Mobile Web
Users
|
cs.MA cs.MM
|
With the increasing use of mobile workstations for a wide variety of tasks
and associated information needs, and with many variations of available
networks, access to data becomes a prime consideration. This paper discusses
issues of workstation mobility and proposes a solution wherein the data
structures are accessed in an encapsulated form - through the Portable File
System (PFS) wrapper. The paper discusses an implementation of the Portable
File System, highlighting the architecture and commenting upon performance of
an experimental system. Although investigations have been focused upon mobile
access of WWW documents, this technique could be applied to any mobile data
access situation.
|
cs/9809049
|
Aspects of Evolutionary Design by Computers
|
cs.NE
|
This paper examines the four main types of Evolutionary Design by computers:
Evolutionary Design Optimisation, Evolutionary Art, Evolutionary Artificial
Life Forms and Creative Evolutionary Design. Definitions for all four areas are
provided. A review of current work in each of these areas is given, with
examples of the types of applications that have been tackled. The different
properties and requirements of each are examined. Descriptions of typical
representations and evolutionary algorithms are provided and examples of
designs evolved using these techniques are shown. The paper then discusses how
the boundaries of these areas are beginning to merge, resulting in four new
'overlapping' types of Evolutionary Design: Integral Evolutionary Design,
Artificial Life Based Evolutionary Design, Aesthetic Evolutionary AL and
Aesthetic Evolutionary Design. Finally, the last part of the paper discusses
some common problems faced by creators of Evolutionary Design systems,
including: interdependent elements in designs, epistasis, and constraint
handling.
|
cs/9809050
|
A Freely Available Morphological Analyzer, Disambiguator and Context
Sensitive Lemmatizer for German
|
cs.CL
|
In this paper we present Morphy, an integrated tool for German morphology,
part-of-speech tagging and context-sensitive lemmatization. Its large lexicon
of more than 320,000 word forms plus its ability to process German compound
nouns guarantee a wide morphological coverage. Syntactic ambiguities can be
resolved with a standard statistical part-of-speech tagger. By using the output
of the tagger, the lemmatizer can determine the correct root even for ambiguous
word forms. The complete package is freely available and can be downloaded from
the World Wide Web.
|
cs/9809051
|
Spoken Language Dialogue Systems and Components: Best practice in
development and evaluation (DISC 24823) - Periodic Progress Report 1: Basic
Details of the Action
|
cs.CL cs.SE
|
The DISC project aims to (a) build an in-depth understanding of the
state-of-the-art in spoken language dialogue systems (SLDSs) and components
development and evaluation with the purpose of (b) developing a first best
practice methodology in the field. The methodology will be accompanied by (c) a
series of development and evaluation support tools. To the limited extent
possible within the duration of the project, the draft versions of the
methodology and the tools will be (d) tested by SLDS developers from industry
and research, and will be (e) packaged to best suit their needs. In the first
year of DISC, (a) has been accomplished, and (b) and (c) have started. A
proposal to complete the work described above by adding 12 months to the 18
months of the present project was submitted to Esprit Long-Term Research in
March 1998.
|
cs/9809106
|
Processing Unknown Words in HPSG
|
cs.CL
|
The lexical acquisition system presented in this paper incrementally updates
linguistic properties of unknown words inferred from their surrounding context
by parsing sentences with an HPSG grammar for German. We employ a gradual,
information-based concept of ``unknownness'' providing a uniform treatment for
the range of completely known to maximally unknown lexical entries. ``Unknown''
information is viewed as revisable information, which is either generalizable
or specializable. Updating takes place after parsing, which only requires a
modified lexical lookup. Revisable pieces of information are identified by
grammar-specified declarations which provide access paths into the parse
feature structure. The updating mechanism revises the corresponding places in
the lexical feature structures iff the context actually provides new
information. For revising generalizable information, type union is required. A
worked-out example demonstrates the inferential capacity of our implemented
system.
|
cs/9809107
|
Computing Declarative Prosodic Morphology
|
cs.CL
|
This paper describes a computational, declarative approach to prosodic
morphology that uses inviolable constraints to denote small finite candidate
sets which are filtered by a restrictive incremental optimization mechanism.
The new approach is illustrated with an implemented fragment of Modern Hebrew
verbs couched in MicroCUF, an expressive constraint logic formalism. For
generation and parsing of word forms, I propose a novel off-line technique to
eliminate run-time optimization. It produces a finite-state oracle that
efficiently restricts the constraint interpreter's search space. As a
byproduct, unknown words can be analyzed without special mechanisms. Unlike
pure finite-state transducer approaches, this hybrid setup allows for more
expressivity in constraints to specify e.g. token identity for reduplication or
arithmetic constraints for phonetics.
|
cs/9809108
|
Learning Nested Agent Models in an Information Economy
|
cs.MA cs.AI
|
We present our approach to the problem of how an agent, within an economic
Multi-Agent System, can determine when it should behave strategically (i.e.
learn and use models of other agents), and when it should act as a simple
price-taker. We provide a framework for the incremental implementation of
modeling capabilities in agents, and a description of the forms of knowledge
required. The agents were implemented and different populations simulated in
order to learn more about their behavior and the merits of using and learning
agent models. Our results show, among other lessons, how savvy buyers can avoid
being ``cheated'' by sellers, how price volatility can be used to
quantitatively predict the benefits of deeper models, and how specific types of
agent populations influence system behavior.
|
cs/9809110
|
Similarity-Based Models of Word Cooccurrence Probabilities
|
cs.CL cs.AI cs.LG
|
In many applications of natural language processing (NLP) it is necessary to
determine the likelihood of a given word combination. For example, a speech
recognizer may need to determine which of the two word combinations ``eat a
peach'' and ``eat a beach'' is more likely. Statistical NLP methods determine
the likelihood of a word combination from its frequency in a training corpus.
However, the nature of language is such that many word combinations are
infrequent and do not occur in any given corpus. In this work we propose a
method for estimating the probability of such previously unseen word
combinations using available information on ``most similar'' words.
We describe probabilistic word association models based on distributional
word similarity, and apply them to two tasks, language modeling and pseudo-word
disambiguation. In the language modeling task, a similarity-based model is used
to improve probability estimates for unseen bigrams in a back-off language
model. The similarity-based method yields a 20% perplexity improvement in the
prediction of unseen bigrams and statistically significant reductions in
speech-recognition error.
We also compare four similarity-based estimation methods against back-off and
maximum-likelihood estimation methods on a pseudo-word sense disambiguation
task in which we controlled for both unigram and bigram frequency to avoid
giving too much weight to easy-to-disambiguate high-frequency configurations.
The similarity-based methods perform up to 40% better on this particular task.
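
The general similarity-based scheme can be stated compactly. A hedged sketch, where `neighbors`, `sim`, and `cond_prob` are hypothetical lookups standing in for the paper's distributional-similarity machinery:

```python
def similarity_bigram_prob(w1, w2, neighbors, sim, cond_prob):
    """Similarity-based estimate for an unseen bigram (w1, w2): a weighted
    average of the conditional probabilities P(w2 | w1') over the words w1'
    most similar to w1. A compact sketch of the general scheme, not the
    paper's exact model.
    """
    near = list(neighbors(w1))
    total = sum(sim(w1, w) for w in near)          # normalizing constant
    return sum(sim(w1, w) / total * cond_prob(w2, w) for w in near)
```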
|
cs/9809111
|
Evolution of Neural Networks to Play the Game of Dots-and-Boxes
|
cs.NE cs.LG
|
Dots-and-Boxes is a child's game which remains analytically unsolved. We
implement and evolve artificial neural networks to play this game, evaluating
them against simple heuristic players. Our networks do not evaluate or predict
the final outcome of the game, but rather recommend moves at each stage.
Superior generalisation of play by co-evolved populations is found, and a
comparison is made with networks trained by back-propagation using simple
heuristics as an oracle.
|
cs/9809112
|
On the Evaluation and Comparison of Taggers: The Effect of Noise in
Testing Corpora
|
cs.CL
|
This paper addresses the issue of {\sc pos} tagger evaluation. Such
evaluation is usually performed by comparing the tagger output with a reference
test corpus, which is assumed to be error-free. Currently used corpora contain
noise which causes the obtained performance to be a distortion of the real
value. We analyze to what extent this distortion may invalidate the comparison
between taggers or the measure of the improvement given by a new system. The
main conclusion is that a more rigorously designed testing experiment is
needed to reliably evaluate and compare tagger accuracies.
|
cs/9809113
|
Improving Tagging Performance by Using Voting Taggers
|
cs.CL
|
We present a bootstrapping method to develop an annotated corpus, which is
especially useful for languages with few available resources. The method is
being applied to develop a corpus of Spanish of over 5Mw. The method consists
of taking advantage of the collaboration of two different POS taggers. The
cases in which both taggers agree exhibit a higher accuracy and are used to
retrain the taggers.
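
A minimal sketch of such agreement-based bootstrapping, under the assumption that each tagger is a callable from a token list to a tag list (a hypothetical interface):

```python
def bootstrap_corpus(sentences, tagger_a, tagger_b):
    """Agreement-based bootstrapping sketch: sentences on which two
    independent POS taggers produce identical taggings are taken as
    reliable and collected for retraining both taggers.
    """
    agreed = []
    for tokens in sentences:
        tags_a, tags_b = tagger_a(tokens), tagger_b(tokens)
        if tags_a == tags_b:                  # full-sentence agreement
            agreed.append(list(zip(tokens, tags_a)))
    return agreed   # extra supervised material for the next training round
```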
|
cs/9809121
|
Using Local Optimality Criteria for Efficient Information Retrieval with
Redundant Information Filters
|
cs.IR cs.AI
|
We consider information retrieval when the data, for instance multimedia, is
computationally expensive to fetch. Our approach uses "information filters" to
considerably narrow the universe of possibilities before retrieval. We are
especially interested in redundant information filters that save time over more
general but more costly filters. Efficient retrieval requires that decisions
be made about the necessity, order, and concurrent processing of proposed
filters (an "execution plan"). We develop simple polynomial-time local criteria
for optimal execution plans, and show that most forms of concurrency are
suboptimal with information filters. Although the general problem of finding an
optimal execution plan is likely exponential in the number of filters, we show
experimentally that our local optimality criteria, used in a polynomial-time
algorithm, nearly always find the global optimum with 15 filters or fewer, a
sufficient number of filters for most applications. Our methods do not require
special hardware and avoid the high processor idleness that is characteristic
of massively parallel solutions to this problem. We apply our ideas to an
important application, information retrieval of captioned data using
natural-language understanding, a problem for which the natural-language
processing can be the bottleneck if not implemented well.
|
cs/9809122
|
Practical algorithms for on-line sampling
|
cs.LG cs.DS
|
One of the core applications of machine learning to knowledge discovery
consists of building a function (a hypothesis) from a given amount of data (for
instance a decision tree or a neural network) such that we can use it
afterwards to predict new instances of the data. In this paper, we focus on a
particular situation where we assume that the hypothesis we want to use for
prediction is very simple, and thus, the hypothesis class is of feasible size.
We study the problem of how to determine which of the hypotheses in the class
is almost the best one. We present two on-line sampling algorithms for
selecting hypotheses, give theoretical bounds on the number of necessary
examples, and analyze them experimentally. We compare them with the simple
batch sampling approach commonly used and show that in most situations our
algorithms use far fewer examples.
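
For concreteness, a sketch of one plausible on-line sampling scheme; the stopping rule here is a generic Hoeffding-style race and only illustrates the idea, not the paper's two algorithms:

```python
import math

def online_select(hypotheses, sample, epsilon=0.05, delta=0.05):
    """On-line sampling sketch: draw one labeled example at a time and stop
    once a Hoeffding bound certifies that the current leader is within
    epsilon of the best hypothesis with probability at least 1 - delta.
    Assumes at least two hypotheses; `sample` yields (x, label) pairs.
    """
    errors = {h: 0 for h in hypotheses}
    n = 0
    while True:
        x, label = sample()                    # one fresh labeled example
        n += 1
        for h in hypotheses:
            errors[h] += (h(x) != label)
        bound = math.sqrt(math.log(2 * len(hypotheses) / delta) / (2 * n))
        best, second = sorted(errors.values())[:2]
        if (second - best) / n > 2 * bound or 2 * bound < epsilon:
            return min(errors, key=errors.get)
```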
|
cs/9809123
|
A role of constraint in self-organization
|
cs.NE cs.CG
|
In this paper we introduce a neural network model of self-organization. This
model uses a variation of the Hebb rule for updating its synaptic weights, and
reliably converges to an equilibrium state. The key to the convergence is the
update rule, which constrains the total synaptic weight and thereby appears to
make the model stable. We investigate the role of the constraint and show that
it is indeed the constraint that makes the model stable. To analyze this
setting, we propose a simple probabilistic game that models the neural network
and the self-organization process. We then investigate the characteristics of
this game, namely, the probability that the game becomes stable and the number
of steps it takes.
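
A minimal sketch of such a constrained Hebbian step, assuming nonnegative weights normalized to unit total; the paper's exact rule may differ:

```python
import numpy as np

def constrained_hebb_update(w, x, y, eta=0.01):
    """One Hebb-style update followed by renormalization so that the total
    synaptic weight is held constant, the constraint the abstract credits
    with making the model stable. `x` is the input vector, `y` the unit's
    output; nonnegative weights are assumed.
    """
    w = w + eta * y * x          # plain Hebbian correlation term
    return w / w.sum()           # rescale: total synaptic weight stays 1
```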
|
cs/9810002
|
Pre-fetching tree-structured data in distributed memory
|
cs.DC cs.DB
|
A distributed heap storage manager has been implemented on the Fujitsu AP1000
multicomputer. The performance of various pre-fetching strategies is
experimentally compared. Subjective programming benefits and objective
performance benefits of up to 10% in pre-fetching are found for certain
applications, but not for all. The performance benefits of pre-fetching depend
on the specific data structure and access patterns. We suggest that the
pre-fetching strategy be placed dynamically under the control of the
application.
|
cs/9810003
|
A Linear Shift Invariant Multiscale Transform
|
cs.CV
|
This paper presents a multiscale decomposition algorithm. Unlike standard
wavelet transforms, the proposed operator is both linear and shift invariant.
The central idea is to obtain shift invariance by averaging the aligned wavelet
transform projections over all circular shifts of the signal. It is shown how
the same transform can be obtained by a linear filter bank.
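
A small sketch of the averaging idea, using a one-level decimated Haar detail band: decimation is what breaks shift invariance, and averaging over all circular shifts restores it. Illustrative cycle-spinning in spirit, not the paper's exact transform:

```python
import numpy as np

def shift_invariant_detail(x, h=(2 ** -0.5, -(2 ** -0.5))):
    """Compute a decimated one-level (Haar) wavelet detail band for every
    circular shift of the signal, undo each shift, and average. The mean
    is linear in x and shift invariant by construction. Assumes len(x)
    is even.
    """
    n = len(x)
    acc = np.zeros(n)
    for s in range(n):
        xs = np.roll(x, s)                        # shifted input
        d = np.convolve(xs, h, mode="same")[::2]  # filter + downsample by 2
        up = np.zeros(n)
        up[::2] = d                               # re-expand for alignment
        acc += np.roll(up, -s)                    # undo the shift
    return acc / n
```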
|
cs/9810005
|
Anytime Coalition Structure Generation with Worst Case Guarantees
|
cs.MA cs.AI
|
Coalition formation is a key topic in multiagent systems. One would prefer a
coalition structure that maximizes the sum of the values of the coalitions, but
often the number of coalition structures is too large to allow exhaustive
search for the optimal one. But then, can the coalition structure found via a
partial search be guaranteed to be within a bound from optimum? We show that
none of the previous coalition structure generation algorithms can establish
any bound because they search fewer nodes than a threshold that we show
necessary for establishing a bound. We present an algorithm that establishes a
tight bound within this minimal amount of search, and show that any other
algorithm would have to search strictly more. The fraction of nodes needed to
be searched approaches zero as the number of agents grows. If additional time
remains, our anytime algorithm searches further, and establishes a
progressively lower tight bound. Surprisingly, just searching one more node
drops the bound in half. As desired, our algorithm lowers the bound rapidly
early on, and exhibits diminishing returns to computation. It also drastically
outperforms its obvious contenders. Finally, we show how to distribute the
desired search across self-interested manipulative agents.
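
For scale, the count that makes exhaustive search infeasible is the Bell number of the agent set; a standard recurrence and a few values:

```latex
B_n = \sum_{k=0}^{n-1} \binom{n-1}{k} B_k, \qquad
B_1 = 1,\quad B_5 = 52,\quad B_{10} = 115{,}975,
```

which grows super-exponentially in the number of agents $n$.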
|
cs/9810012
|
Ultrametric Distance in Syntax
|
cs.CL q-bio.NC
|
Phrase structure trees have a hierarchical structure. In many subjects, most
notably in taxonomy, such tree structures have been studied using ultrametrics.
Here syntactical hierarchical phrase trees are subject to a similar analysis,
which is much simpler as the branching structure is more readily discernible
and switched. The occurrence of hierarchical structure elsewhere in linguistics
is mentioned. The phrase tree can be represented by a matrix, and the elements
of the matrix can be represented by triangles. The height at which branching
occurs is not prescribed in previous syntactic models, but it is by using the
ultrametric matrix. The ambiguity of which branching height to choose is
resolved by postulating that branching occurs at the lowest height available.
An ultrametric produces a measure of the complexity of sentences: presumably
the complexity of sentences increases as a language is acquired, so that this
can be tested. All ultrametric triangles are equilateral or isosceles; here it
is shown that X-bar structure implies that there are no equilateral triangles.
Restricting attention to simple syntax, a minimum ultrametric distance between
lexical categories is calculated. This ultrametric distance is shown to be
different from the matrix obtained from features. It is shown that the
definition of c-command can be replaced by an equivalent ultrametric
definition. The new definition invokes a minimum distance between nodes, and
this is more aesthetically satisfying than previous varieties of definitions.
From the new definition of c-command follows a new definition of government.
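
For reference, the defining strong-triangle (ultrametric) inequality behind these claims, together with the standard isosceles consequence the abstract appeals to:

```latex
d(x,z) \;\le\; \max\{\, d(x,y),\; d(y,z) \,\},
\qquad\text{and}\qquad
d(x,y) < d(y,z) \;\Longrightarrow\; d(x,z) = d(y,z),
```

so every ultrametric triangle is isosceles (or equilateral) with its two longest sides equal.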
|
cs/9810014
|
Resources for Evaluation of Summarization Techniques
|
cs.CL
|
We report on two corpora to be used in the evaluation of component systems
for the tasks of (1) linear segmentation of text and (2) summary-directed
sentence extraction. We present characteristics of the corpora, methods used in
the collection of user judgments, and an overview of the application of the
corpora to evaluating the component system. Finally, we discuss the problems
and issues with construction of the test set which apply broadly to the
construction of evaluation resources for language technologies.
|
cs/9810015
|
Restrictions on Tree Adjoining Languages
|
cs.CL
|
Several methods are known for parsing languages generated by Tree Adjoining
Grammars (TAGs) in O(n^6) worst case running time. In this paper we investigate
which restrictions on TAGs and TAG derivations are needed in order to lower
this O(n^6) time complexity, without introducing large runtime constants, and
without losing any of the generative power needed to capture the syntactic
constructions in natural language that can be handled by unrestricted TAGs. In
particular, we describe an algorithm for parsing a strict subclass of TAG in
O(n^5), and attempt to show that this subclass retains enough generative power
to make it useful in the general case.
|
cs/9810016
|
SYNERGY: A Linear Planner Based on Genetic Programming
|
cs.AI
|
In this paper we describe SYNERGY, which is a highly parallelizable, linear
planning system that is based on the genetic programming paradigm. Rather than
reasoning about the world it is planning for, SYNERGY uses artificial
selection, recombination and fitness measure to generate linear plans that
solve conjunctive goals. We ran SYNERGY on several domains (e.g., the briefcase
problem and a few variants of the robot navigation problem), and the
experimental results show that our planner is capable of handling problem
instances that are one to two orders of magnitude larger than the ones solved
by UCPOP. To reduce search and to enhance the
expressive power of SYNERGY, we also propose two major extensions to our
planning system: a formalism for using hierarchical planning operators, and a
framework for planning in dynamic environments.
|
cs/9810017
|
General Theory of Image Normalization
|
cs.CV
|
We give a systematic, abstract formulation of the image normalization method
as applied to a general group of image transformations, and then illustrate the
abstract analysis by applying it to the hierarchy of viewing transformations of
a planar object.
|
cs/9810018
|
A Proof Theoretic View of Constraint Programming
|
cs.AI cs.PL
|
We provide here a proof theoretic account of constraint programming that
attempts to capture the essential ingredients of this programming style. We
exemplify it by presenting proof rules for linear constraints over interval
domains, and illustrate their use by analyzing the constraint propagation
process for the {\tt SEND + MORE = MONEY} puzzle. We also show how this
approach allows one to build new constraint solvers.
|
cs/9810020
|
Computational Geometry Column 33
|
cs.CG cs.AI cs.GR
|
Several recent SIGGRAPH papers on surface simplification are described.
|
cs/9811003
|
A Winnow-Based Approach to Context-Sensitive Spelling Correction
|
cs.LG cs.CL
|
A large class of machine-learning problems in natural language require the
characterization of linguistic context. Two characteristic properties of such
problems are that their feature space is of very high dimensionality, and their
target concepts refer to only a small subset of the features in the space.
Under such conditions, multiplicative weight-update algorithms such as Winnow
have been shown to have exceptionally good theoretical properties. We present
an algorithm combining variants of Winnow and weighted-majority voting, and
apply it to a problem in the aforementioned class: context-sensitive spelling
correction. This is the task of fixing spelling errors that happen to result in
valid words, such as substituting "to" for "too", "casual" for "causal", etc.
We evaluate our algorithm, WinSpell, by comparing it against BaySpell, a
statistics-based method representing the state of the art for this task. We
find: (1) When run with a full (unpruned) set of features, WinSpell achieves
accuracies significantly higher than BaySpell was able to achieve in either the
pruned or unpruned condition; (2) When compared with other systems in the
literature, WinSpell exhibits the highest performance; (3) The primary reason
that WinSpell outperforms BaySpell is that WinSpell learns a better linear
separator; (4) When run on a test set drawn from a different corpus than the
training set was drawn from, WinSpell is better able than BaySpell to adapt,
using a strategy we will present that combines supervised learning on the
training set with unsupervised learning on the (noisy) test set.
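
For reference, the core multiplicative update that Winnow-family algorithms share; a textbook sketch, not WinSpell itself:

```python
def winnow_step(w, active, y, theta, alpha=1.5):
    """One mistake-driven step of Winnow. `w` maps feature indices to
    weights (initialized to 1), `active` is the set of features present
    in the example, `y` is +1 or -1, and `theta` is the threshold.
    """
    score = sum(w.setdefault(i, 1.0) for i in active)
    prediction = 1 if score >= theta else -1
    if prediction != y:                     # Winnow updates only on mistakes
        for i in active:
            w[i] *= alpha if y > 0 else 1.0 / alpha   # promote or demote
    return prediction
```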
|
cs/9811004
|
Does Meaning Evolve?
|
cs.CL q-bio.PE
|
A common method of making a theory more understandable is to compare it to
another theory that has been better developed. Radical interpretation is a
theory which attempts to explain how communication has meaning. Radical
interpretation is treated as another time-dependent theory and compared to the
time-dependent theory of biological evolution. The main reason for doing this
is to find the nature of the time dependence; producing analogs between the two
theories is a necessary prerequisite to this and brings up many problems. Once
the nature of the time dependence is better known, it might allow the
underlying mechanism to be uncovered. Several similarities and differences are
uncovered; there appear to be more differences than similarities.
|
cs/9811006
|
Machine Learning of Generic and User-Focused Summarization
|
cs.CL cs.LG
|
A key problem in text summarization is finding a salience function which
determines what information in the source should be included in the summary.
This paper describes the use of machine learning on a training corpus of
documents and their abstracts to discover salience functions which describe
what combination of features is optimal for a given summarization task. The
method addresses both "generic" and user-focused summaries.
|
cs/9811008
|
Translating near-synonyms: Possibilities and preferences in the
interlingua
|
cs.CL
|
This paper argues that an interlingual representation must explicitly
represent some parts of the meaning of a situation as possibilities (or
preferences), not as necessary or definite components of meaning (or
constraints). Possibilities enable the analysis and generation of nuance,
something required for faithful translation. Furthermore, the representation of
the meaning of words, especially of near-synonyms, is crucial, because it
specifies which nuances words can convey in which contexts.
|
cs/9811009
|
Choosing the Word Most Typical in Context Using a Lexical Co-occurrence
Network
|
cs.CL
|
This paper presents a partial solution to a component of the problem of
lexical choice: choosing the synonym most typical, or expected, in context. We
apply a new statistical approach to representing the context of a word through
lexical co-occurrence networks. The implementation was trained and evaluated on
a large corpus, and results show that the inclusion of second-order
co-occurrence relations improves the performance of our implemented lexical
choice program.
|
cs/9811010
|
Learning to Resolve Natural Language Ambiguities: A Unified Approach
|
cs.CL cs.LG
|
We analyze a few of the commonly used statistics-based and machine learning
algorithms for natural language disambiguation tasks and observe that they can
be re-cast as learning linear separators in the feature space. Each of the
methods makes a priori assumptions, which it employs, given the data, when
searching for its hypothesis. Nevertheless, as we show, it searches a space
that is as rich as the space of all linear separators. We use this to build an
argument for a data driven approach which merely searches for a good linear
separator in the feature space, without further assumptions on the domain or a
specific problem.
We present such an approach - a sparse network of linear separators,
utilizing the Winnow learning algorithm - and show how to use it in a variety
of ambiguity resolution problems. The learning approach presented is
attribute-efficient and, therefore, appropriate for domains having a very
large number of attributes.
In particular, we present an extensive experimental comparison of our
approach with other methods on several well studied lexical disambiguation
tasks such as context-sensitive spelling correction, prepositional phrase
attachment and part of speech tagging. In all cases we show that our approach
either outperforms other methods tried for these tasks or performs comparably
to the best.
|
cs/9811013
|
The Asilomar Report on Database Research
|
cs.DB cs.DL
|
The database research community is rightly proud of success in basic
research, and its remarkable record of technology transfer. Now the field needs
to radically broaden its research focus to attack the issues of capturing,
storing, analyzing, and presenting the vast array of online data. The database
research community should embrace a broader research agenda -- broadening the
definition of database management to embrace all the content of the Web and
other online data stores, and rethinking our fundamental assumptions in light
of technology shifts. To accelerate this transition, we recommend changing the
way research results are evaluated and presented. In particular, we advocate
encouraging more speculative and long-range work, moving conferences to a
poster format, and publishing all research literature on the Web.
|
cs/9811016
|
Comparing a statistical and a rule-based tagger for German
|
cs.CL
|
In this paper we present the results of comparing a statistical tagger for
German based on decision trees and a rule-based Brill-Tagger for German. We
used the same training corpus (and therefore the same tag-set) to train both
taggers. We then applied the taggers to the same test corpus and compared their
respective behavior and in particular their error rates. Both taggers perform
similarly with an error rate of around 5%. From the detailed error analysis it
can be seen that the rule-based tagger has more problems with unknown words
than the statistical tagger, while the results are reversed for tokens that
are ambiguous between many tags. If the unknown words are fed into the taggers
with the help of an external lexicon (such as the Gertwol system), the error
rate of the rule-based tagger drops to 4.7%, and the respective rate of the
statistical tagger drops to around 3.7%. Combining the taggers by using the
output of one
tagger to help the other did not lead to any further improvement.
|
cs/9811018
|
P-model Alternative to the T-model
|
cs.CL q-bio.NC
|
Standard linguistic analysis of syntax uses the T-model. This model requires
the ordering: D-structure $>$ S-structure $>$ LF. Between each of these
representations there is movement which alters the order of the constituent
words; movement is achieved using the principles and parameters of syntactic
theory. Psychological serial models do not accommodate the T-model
immediately, so a new model, called the P-model, is introduced here. It is
argued
that the LF representation should be replaced by a variant of Frege's three
qualities. In the F-representation the order of elements is not necessarily the
same as that in LF and it is suggested that the correct ordering is:
F-representation $>$ D-structure $>$ S-structure. Within this framework
movement originates as the outcome of emphasis applied to the sentence.
|
cs/9811019
|
Locked and Unlocked Polygonal Chains in 3D
|
cs.CG cs.DS cs.RO
|
In this paper, we study movements of simple polygonal chains in 3D. We say
that an open, simple polygonal chain can be straightened if it can be
continuously reconfigured to a straight sequence of segments in such a manner
that both the length of each link and the simplicity of the chain are
maintained throughout the movement. The analogous concept for closed chains is
convexification: reconfiguration to a planar convex polygon. Chains that cannot
be straightened or convexified are called locked. While there are open chains
in 3D that are locked, we show that if an open chain has a simple orthogonal
projection onto some plane, it can be straightened. For closed chains, we show
that there are unknotted but locked closed chains, and we provide an algorithm
for convexifying a planar simple polygon in 3D with a polynomial number of
moves.
|
cs/9811022
|
Exploiting Syntactic Structure for Language Modeling
|
cs.CL
|
The paper presents a language model that develops syntactic structure and
uses it to extract meaningful information from the word history, thus enabling
the use of long distance dependencies. The model assigns probability to every
joint sequence of words and binary parse structure with headword annotation,
and operates in a left-to-right manner, making it usable for automatic speech
recognition. The model, its probabilistic parameterization, and a set of
experiments meant to evaluate its predictive power are presented; an
improvement over standard trigram modeling is achieved.
|
cs/9811024
|
The Essence of Constraint Propagation
|
cs.AI
|
We show that several constraint propagation algorithms (also called (local)
consistency, consistency enforcing, Waltz, filtering or narrowing algorithms)
are instances of algorithms that deal with chaotic iteration. To this end we
propose a simple abstract framework that allows us to classify and compare
these algorithms and to establish in a uniform way their basic properties.
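
The shared skeleton can be made concrete in a few lines. A sketch of a generic chaotic-iteration loop, assuming each propagator is a monotone narrowing function on a dictionary of finite variable domains:

```python
def chaotic_iteration(domains, propagators):
    """Keep applying narrowing functions until a common fixpoint is
    reached. Each propagator maps a dict of variable domains to a dict of
    equal or smaller domains; monotone narrowing over finite domains
    guarantees termination. A sketch of the abstract framework, not its
    full theory.
    """
    changed = True
    while changed:
        changed = False
        for narrow in propagators:
            new = narrow(domains)
            if new != domains:        # some variable's domain shrank
                domains, changed = new, True
    return domains
```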
|
cs/9811025
|
A Structured Language Model
|
cs.CL
|
The paper presents a language model that develops syntactic structure and
uses it to extract meaningful information from the word history, thus enabling
the use of long distance dependencies. The model assigns probability to every
joint sequence of words - binary-parse-structure with headword annotation. The
model, its probabilistic parametrization, and a set of experiments meant to
evaluate its predictive power are presented.
|
cs/9811029
|
A Human-machine interface for teleoperation of arm manipulators in a
complex environment
|
cs.RO cs.AI
|
This paper discusses the feasibility of using configuration space (C-space)
as a means of visualization and control in operator-guided real-time motion of
a robot arm manipulator. The motivation is to improve performance of the human
operator in tasks involving the manipulator motion in an environment with
obstacles. Unlike some other motion planning tasks, operators are known to make
expensive mistakes in such tasks, even in a simpler two-dimensional case. They
have difficulty learning better procedures and their performance improves very
little with practice. Using an example of a two-dimensional arm manipulator, we
show that translating the problem into C-space improves the operator
performance rather remarkably, by an order of magnitude compared to the usual
workspace control. An interface that makes the transfer possible is described,
and an example of its use in a virtual environment is shown.
|
cs/9811030
|
Generating Segment Durations in a Text-To-Speech System: A Hybrid
Rule-Based/Neural Network Approach
|
cs.NE cs.HC
|
A combination of a neural network with rule firing information from a
rule-based system is used to generate segment durations for a text-to-speech
system. The system shows a slight improvement in performance over a neural
network system without the rule firing information. Synthesized speech using
segment durations was accepted by listeners as having about the same quality as
speech generated using segment durations extracted from natural speech.
|
cs/9811031
|
Speech Synthesis with Neural Networks
|
cs.NE cs.HC
|
Text-to-speech conversion has traditionally been performed either by
concatenating short samples of speech or by using rule-based systems to convert
a phonetic representation of speech into an acoustic representation, which is
then converted into speech. This paper describes a system that uses a
time-delay neural network (TDNN) to perform this phonetic-to-acoustic mapping,
with another neural network to control the timing of the generated speech. The
neural network system requires less memory than a concatenation system, and
performed well in tests comparing it to commercial systems using other
technologies.
|
cs/9811032
|
Text-To-Speech Conversion with Neural Networks: A Recurrent TDNN
Approach
|
cs.NE cs.HC
|
This paper describes the design of a neural network that performs the
phonetic-to-acoustic mapping in a speech synthesis system. The use of a
time-domain neural network architecture limits discontinuities that occur at
phone boundaries. Recurrent data input also helps smooth the output parameter
tracks. Independent testing has demonstrated that the voice quality produced by
this system compares favorably with speech from existing commercial
text-to-speech systems.
|
cs/9812001
|
A Probabilistic Approach to Lexical Semantic Knowledge Acquisition and
Structural Disambiguation
|
cs.CL
|
In this thesis, I address the problem of automatically acquiring lexical
semantic knowledge, especially that of case frame patterns, from large corpus
data and using the acquired knowledge in structural disambiguation. The
approach I adopt has the following characteristics: (1) dividing the problem
into three subproblems: case slot generalization, case dependency learning, and
word clustering (thesaurus construction), (2) viewing each subproblem as one
of statistical estimation and defining probability models for each subproblem,
(3) adopting the Minimum Description Length (MDL) principle as the learning
strategy, (4) employing efficient learning algorithms, and (5) viewing the
disambiguation problem as that of statistical prediction. Major contributions
of this thesis include: (1) formalization of the lexical knowledge acquisition
problem, (2) development of a number of learning methods for lexical knowledge
acquisition, and (3) development of a high-performance disambiguation method.
|
cs/9812002
|
Training Reinforcement Neurocontrollers Using the Polytope Algorithm
|
cs.NE
|
A new training algorithm is presented for delayed reinforcement learning
problems that does not assume the existence of a critic model and employs the
polytope optimization algorithm to adjust the weights of the action network so
that a simple direct measure of the training performance is maximized.
Experimental results from the application of the method to the pole balancing
problem indicate improved training performance compared with critic-based and
genetic reinforcement approaches.
|
cs/9812003
|
Neural Network Methods for Boundary Value Problems Defined in
Arbitrarily Shaped Domains
|
cs.NE cond-mat.dis-nn cs.NA math-ph math.MP math.NA physics.comp-ph
|
Partial differential equations (PDEs) with Dirichlet boundary conditions
defined on boundaries with simple geometry have been successfully treated using
sigmoidal multilayer perceptrons in previous works. This article deals with the
case of complex boundary geometry, where the boundary is determined by a number
of points that belong to it and are closely located, so as to offer a
reasonable representation. Two networks are employed: a multilayer perceptron
and a radial basis function network. The latter is used to account for the
satisfaction of the boundary conditions. The method has been successfully
tested on two-dimensional and three-dimensional PDEs and has yielded accurate
solutions.
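
The two-network construction suggests a simple decomposition. A hedged sketch of one plausible form of the trial solution; all names and the exact form are illustrative:

```python
import numpy as np

def trial_solution(x, mlp, boundary_pts, rbf_weights, sigma=0.1):
    """psi(x) = N(x) + sum_j c_j exp(-|x - b_j|^2 / sigma^2).
    The multilayer perceptron N (any callable here) is trained to satisfy
    the PDE inside the domain, while the radial basis function term is
    fitted so that psi matches the Dirichlet data at the scattered
    boundary points b_j.
    """
    x = np.asarray(x, dtype=float)
    rbf = sum(c * np.exp(-np.sum((x - np.asarray(b)) ** 2) / sigma ** 2)
              for c, b in zip(rbf_weights, boundary_pts))
    return mlp(x) + rbf
```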
|
cs/9812004
|
Name Strategy: Its Existence and Implications
|
cs.CL cs.AI math.HO
|
It is argued that colour name strategy, object name strategy, and chunking
strategy in memory are all aspects of the same general phenomenon, called
stereotyping. It is pointed out that the Berlin-Kay universal partial ordering
of colours and the frequency of traffic accidents classified by colour are
surprisingly similar. Some consequences of the existence of a name strategy for
the philosophy of language and mathematics are discussed. It is argued that
real valued quantities occur {\it ab initio}. The implication of real valued
truth quantities is that the {\bf Continuum Hypothesis} of pure mathematics is
side-stepped. The existence of name strategy shows that thought/sememes and
talk/phonemes can be separate, and this vindicates the assumption of thought
occurring before talk used in psycholinguistic speech production models.
|
cs/9812005
|
Optimal Multi-Paragraph Text Segmentation by Dynamic Programming
|
cs.CL
|
There exist several methods of calculating a similarity curve, or a sequence
of similarity values, representing the lexical cohesion of successive text
constituents, e.g., paragraphs. Methods for deciding the locations of fragment
boundaries are, however, scarce. We propose a fragmentation method based on
dynamic programming. The method is theoretically sound and guaranteed to
provide an optimal splitting on the basis of a similarity curve, a preferred
fragment length, and a defined cost function. The method is especially useful
when control over fragment size is important.
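
The optimal-substructure recursion can be sketched directly. The cost terms below (a quadratic length penalty plus the similarity across each cut) are illustrative stand-ins for the paper's cost function:

```python
def segment(sim, pref_len, lam=1.0):
    """Dynamic-programming fragmentation sketch: cost[i] is the best cost
    of splitting the first i paragraphs; each candidate fragment (j, i]
    pays a penalty for deviating from the preferred length plus the
    cohesion it cuts. `sim[k]` is the similarity between paragraphs k
    and k+1.
    """
    n = len(sim) + 1                      # number of paragraphs
    cost = [0.0] + [float("inf")] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            cut = sim[j - 1] if j > 0 else 0.0   # cutting cohesive text costs
            c = cost[j] + lam * (i - j - pref_len) ** 2 + cut
            if c < cost[i]:
                cost[i], back[i] = c, j
    bounds, i = [], n
    while i > 0:
        bounds.append(i)
        i = back[i]
    return bounds[::-1]                   # paragraph indices ending fragments
```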
|
cs/9812006
|
A High Quality Text-To-Speech System Composed of Multiple Neural
Networks
|
cs.NE cs.HC
|
While neural networks have been employed to handle several different
text-to-speech tasks, ours is the first system to use neural networks
throughout, for both linguistic and acoustic processing. We divide the
text-to-speech task into three subtasks, a linguistic module mapping from text
to a linguistic representation, an acoustic module mapping from the linguistic
representation to speech, and a video module mapping from the linguistic
representation to animated images. The linguistic module employs a
letter-to-sound neural network and a postlexical neural network. The acoustic
module employs a duration neural network and a phonetic neural network. The
visual neural network is employed in parallel to the acoustic module to drive a
talking head. The use of neural networks that can be retrained on the
characteristics of different voices and languages affords our system a degree
of adaptability and naturalness heretofore unavailable.
|
cs/9812010
|
Towards a computational theory of human daydreaming
|
cs.AI
|
This paper examines the phenomenon of daydreaming: spontaneously recalling or
imagining personal or vicarious experiences in the past or future. The
following important roles of daydreaming in human cognition are postulated:
plan preparation and rehearsal, learning from failures and successes, support
for processes of creativity, emotion regulation, and motivation.
A computational theory of daydreaming and its implementation as the program
DAYDREAMER are presented. DAYDREAMER consists of 1) a scenario generator based
on relaxed planning, 2) a dynamic episodic memory of experiences used by the
scenario generator, 3) a collection of personal goals and control goals which
guide the scenario generator, 4) an emotion component in which daydreams
initiate, and are initiated by, emotional states arising from goal outcomes,
and 5) domain knowledge of interpersonal relations and common everyday
occurrences.
The role of emotions and control goals in daydreaming is discussed. Four
control goals commonly used in guiding daydreaming are presented:
rationalization, failure/success reversal, revenge, and preparation. The role
of episodic memory in daydreaming is considered, including how daydreamed
information is incorporated into memory and later used. An initial version of
DAYDREAMER which produces several daydreams (in English) is currently running.
|
cs/9812013
|
The Self-Organizing Symbiotic Agent
|
cs.NE cs.CC
|
In [N. A. Baas, Emergence, Hierarchies, and Hyper-structures, in C.G. Langton
ed., Artificial Life III, Addison Wesley, 1994.] a general framework for the
study of Emergence and hyper-structure was presented. This approach is mostly
concerned with the description of such systems. In this paper we will try to
bring forth a different aspect of this model we feel will be useful in the
engineering of agent-based solutions, namely the symbiotic approach. In this
approach, a self-organizing method of dividing the more complex "main problem"
into a hyper-structure of "sub-problems" with the aim of reducing complexity is
desired. A description of the general problem will be given along with some
instances of related work. This paper is intended to serve as an introductory
challenge for general solutions to the described problem.
|
cs/9812014
|
An Adaptive Agent Oriented Software Architecture
|
cs.DC cs.MA
|
A new approach to software design based on an agent-oriented architecture is
presented. Unlike current research, we consider software to be designed and
implemented with this methodology in mind. In this approach agents are
considered adaptively communicating concurrent modules which are divided into a
white box module responsible for the communications and learning, and a black
box which contains the independent specialized processes of the agent. A
distributed learning policy is also introduced for adaptability.
|
cs/9812017
|
A reusable iterative optimization software library to solve
combinatorial problems with approximate reasoning
|
cs.AI
|
Real world combinatorial optimization problems such as scheduling are
typically too complex to solve with exact methods. Additionally, the problems
often have to observe vaguely specified constraints of different importance,
the available data may be uncertain, and compromises between antagonistic
criteria may be necessary. We present a combination of approximate reasoning
based constraints and iterative optimization based heuristics that help to
model and solve such problems in a framework of C++ software libraries called
StarFLIP++. While initially developed to schedule continuous caster units in
steel plants, we present in this paper results from reusing the library
components in a shift scheduling system for the workforce of an industrial
production plant.
|
cs/9812018
|
A Flexible Shallow Approach to Text Generation
|
cs.CL
|
In order to support the efficient development of NL generation systems, two
orthogonal methods are currently being pursued: (1) reusable, general,
and linguistically motivated surface realization components, and (2) simple,
task-oriented template-based techniques. In this paper we argue that, from an
application-oriented perspective, the benefits of both are still limited. In
order to improve this situation, we suggest and evaluate shallow generation
methods associated with increased flexibility. We advocate a close connection
between domain-motivated and linguistic ontologies that supports the quick
adaptation to new tasks and domains, rather than the reuse of general
resources. Our method is especially designed for generating reports with
limited linguistic variations.
|
cs/9812021
|
Forgetting Exceptions is Harmful in Language Learning
|
cs.CL cs.LG
|
We show that in language learning, contrary to received wisdom, keeping
exceptional training instances in memory can be beneficial for generalization
accuracy. We investigate this phenomenon empirically on a selection of
benchmark natural language processing tasks: grapheme-to-phoneme conversion,
part-of-speech tagging, prepositional-phrase attachment, and base noun phrase
chunking. In a first series of experiments we combine memory-based learning
with training set editing techniques, in which instances are edited based on
their typicality and class prediction strength. Results show that editing
exceptional instances (with low typicality or low class prediction strength)
tends to harm generalization accuracy. In a second series of experiments we
compare memory-based learning and decision-tree learning methods on the same
selection of tasks, and find that decision-tree learning often performs worse
than memory-based learning. Moreover, the decrease in performance can be linked
to the degree of abstraction from exceptions (i.e., pruning or eagerness). We
provide explanations for both results in terms of the properties of the natural
language processing tasks and the learning algorithms.
|
cs/9812022
|
Hypertree Decompositions and Tractable Queries
|
cs.DB cs.AI
|
Several important decision problems on conjunctive queries (CQs) are
NP-complete in general but become tractable, and actually highly
parallelizable, if restricted to acyclic or nearly acyclic queries. Examples
are the evaluation of Boolean CQs and query containment. These problems were
shown tractable for conjunctive queries of bounded treewidth and of bounded
degree of cyclicity. The so far most general concept of nearly acyclic queries
was the notion of queries of bounded query-width introduced by Chekuri and
Rajaraman (1997). While CQs of bounded query width are tractable, it remained
unclear whether such queries are efficiently recognizable. Chekuri and
Rajaraman stated as an open problem whether for each constant k it can be
determined in polynomial time if a query has query width less than or equal to
k. We give a negative answer by proving this problem NP-complete (specifically,
for k=4). In order to circumvent this difficulty, we introduce the new concept
of hypertree decomposition of a query and the corresponding notion of hypertree
width. We prove: (a) for each k, the class of queries with query width bounded
by k is properly contained in the class of queries whose hypertree width is
bounded by k; (b) unlike query width, constant hypertree-width is efficiently
recognizable; (c) Boolean queries of constant hypertree width can be
efficiently evaluated.
|
cs/9901001
|
TDLeaf(lambda): Combining Temporal Difference Learning with Game-Tree
Search
|
cs.LG cs.AI
|
In this paper we present TDLeaf(lambda), a variation on the TD(lambda)
algorithm that enables it to be used in conjunction with minimax search. We
present some experiments in both chess and backgammon which demonstrate its
utility and provide comparisons with TD(lambda) and another less radical
variant, TD-directed(lambda). In particular, our chess program, ``KnightCap,''
used TDLeaf(lambda) to learn its evaluation function while playing on the Free
Internet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating
to a 2100 rating in just 308 games. We discuss some of the reasons for this
success and the relationship between our results and Tesauro's results in
backgammon.
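
The update the abstract describes has a compact form. A sketch with illustrative parameter values; `grads` and `leaf_evals` are assumed to come from the minimax search:

```python
import numpy as np

def tdleaf_update(w, grads, leaf_evals, alpha=0.01, lam=0.7):
    """TDLeaf(lambda) weight update: temporal differences are taken between
    the minimax principal-variation *leaf* evaluations of successive
    positions and propagated backwards with decay lambda. `leaf_evals[t]`
    is the leaf evaluation at move t and `grads[t]` its gradient w.r.t.
    the weights.
    """
    T = len(leaf_evals) - 1
    for t in range(T):
        delta = sum(lam ** (j - t) * (leaf_evals[j + 1] - leaf_evals[j])
                    for j in range(t, T))        # discounted future TD errors
        w = w + alpha * delta * np.asarray(grads[t])
    return w
```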
|
cs/9901002
|
KnightCap: A chess program that learns by combining TD(lambda) with
game-tree search
|
cs.LG cs.AI
|
In this paper we present TDLeaf(lambda), a variation on the TD(lambda)
algorithm that enables it to be used in conjunction with game-tree search. We
present some experiments in which our chess program ``KnightCap'' used
TDLeaf(lambda) to learn its evaluation function while playing on the Free
Internet Chess Server (FICS, fics.onenet.net). The main success we report is
that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games
and 3 days of play. As a reference, a rating of 1650 corresponds to about level
B human play (on a scale from E (1000) to A (1800)), while 2150 is human master
level. We discuss some of the reasons for this success, principal among them
being the use of on-line play, rather than self-play.
|
cs/9901003
|
Fixpoint 3-valued semantics for autoepistemic logic
|
cs.LO cs.AI
|
The paper presents a constructive fixpoint semantics for autoepistemic logic
(AEL). We introduce a derivation operator and define the semantics as its
least fixpoint. The fixpoint characterizes a unique but possibly three-valued
belief set of an autoepistemic theory: it may be three-valued in the sense
that, for some formulas, the least fixpoint does not specify whether they are
believed or not. We show that complete fixpoints of the derivation operator
correspond to Moore's stable expansions. In the case of modal representations
of logic programs our least fixpoint semantics expresses well-founded
semantics or 3-valued Fitting-Kunen semantics (depending on the embedding
used). We show that, computationally, our semantics is simpler than the
semantics proposed by Moore (assuming that the polynomial hierarchy does not
collapse).
|
cs/9901004
|
On the geometry of similarity search: dimensionality curse and
concentration of measure
|
cs.IR cs.CG cs.DB cs.DS
|
We suggest that the curse of dimensionality affecting the similarity-based
search in large datasets is a manifestation of the phenomenon of concentration
of measure on high-dimensional structures. We prove that, under certain
geometric assumptions on the query domain $\Omega$ and the dataset $X$, if
$\Omega$ satisfies the so-called concentration property, then for most query
points $x^\ast$ the ball of radius $(1+\epsilon)\,d_X(x^\ast)$ centred at $x^\ast$
contains either all points of $X$ or else at least $C_1\exp(-C_2\epsilon^2 n)$ of
them. Here $d_X(x^\ast)$ is the distance from $x^\ast$ to the nearest neighbour
in $X$ and $n$ is the dimension of $\Omega$.
|
cs/9901005
|
An Empirical Approach to Temporal Reference Resolution (journal version)
|
cs.CL
|
Scheduling dialogs, during which people negotiate the times of appointments,
are common in everyday life. This paper reports the results of an in-depth
empirical investigation of resolving explicit temporal references in scheduling
dialogs. There are four phases of this work: data annotation and evaluation,
model development, system implementation and evaluation, and model evaluation
and analysis. The system and model were developed primarily on one set of data,
and then applied later to a much more complex data set, to assess the
generalizability of the model for the task being performed. Many different
types of empirical methods are applied to pinpoint the strengths and weaknesses
of the approach. Detailed annotation instructions were developed and an
intercoder reliability study was performed, showing that naive annotators can
reliably perform the targeted annotations. A fully automatic system has been
developed and evaluated on unseen test data, with good results on both data
sets. We adopt a pure realization of a recency-based focus model to identify
precisely when it is and is not adequate for the task being addressed. In
addition to system results, an in-depth evaluation of the model itself is
presented, based on detailed manual annotations. The results are that few
errors occur specifically due to the model of focus being used, and the set of
anaphoric relations defined in the model are low in ambiguity for both data
sets.
|
cs/9901008
|
Fast Computational Algorithms for the Discrete Wavelet Transform and
Applications of Localized Orthonormal Bases in Signal Classification
|
cs.MS cs.CE
|
We construct an algorithm for implementing the discrete wavelet transform by
means of matrices in SO_2(R) for orthonormal compactly supported wavelets and
matrices in SL_m(R), m >= 2, for compactly supported biorthogonal wavelets. We
show that in 1 dimension the total operation count using this algorithm can be
reduced to about 50% of that of the conventional convolution and
downsampling-by-two operation for both orthonormal and biorthogonal filters. In the special case
of biorthogonal symmetric odd-odd filters, we show an implementation yielding a
total operation count of about 38% of the conventional method. In 2 dimensions
we show an implementation of this algorithm yielding a reduction in the total
operation count of about 70% when the filters are orthonormal, a reduction of
about 62% for general biorthogonal filters, and a reduction of about 70% if the
filters are symmetric odd-odd length filters. We further extend these results
to 3 dimensions. We also show how the SO_2(R)-method for implementing the
discrete wavelet transform may be exploited to compute short FIR filters, and
we construct edge mappings where we try to improve upon the degree of
preservation of regularity in the conventional methods. We also consider a
two-class waveform discrimination problem. A statistical space-frequency
analysis is performed on a training data set using the LDB algorithm of
N. Saito and R. Coifman. The success of the algorithm on this particular problem is
evaluated on a disjoint test data set.
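
In the simplest orthonormal case (Haar) the rotation-based implementation is
easy to exhibit: one analysis step is a single SO_2(R) rotation applied to
sample pairs, and it reproduces the conventional convolve-and-downsample
result. A toy sketch (the paper's construction and operation counts concern
general compactly supported filters):

```python
# One Haar analysis step as an SO_2(R) rotation on sample pairs,
# checked against the conventional convolution + downsampling-by-two.
import numpy as np

x = np.arange(8, dtype=float)

# Rotation form: a single angle-pi/4 rotation on pairs (x[2k], x[2k+1]).
c = s = 1 / np.sqrt(2)                        # cos and sin of pi/4
pairs = x.reshape(-1, 2)
approx = c * pairs[:, 0] + s * pairs[:, 1]    # low-pass channel
detail = -s * pairs[:, 0] + c * pairs[:, 1]   # high-pass channel

# Conventional form: convolve with Haar filters, keep every 2nd sample.
h = np.array([1.0, 1.0]) / np.sqrt(2)         # low-pass filter
g = np.array([-1.0, 1.0]) / np.sqrt(2)        # high-pass filter
approx_conv = np.convolve(x, h[::-1])[1::2]
detail_conv = np.convolve(x, g[::-1])[1::2]

assert np.allclose(approx, approx_conv)
assert np.allclose(detail, detail_conv)
```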
|
cs/9901012
|
Extremal problems in logic programming and stable model computation
|
cs.LO cs.AI
|
We study the following problem: given a class of logic programs C, determine
the maximum number of stable models of a program from C. We establish the
maximum for the class of all logic programs with at most n clauses, and for the
class of all logic programs of size at most n. We also characterize the
programs for which the maxima are attained. We obtain similar results for the
class of all disjunctive logic programs with at most n clauses, each of length
at most m, and for the class of all disjunctive logic programs of size at most
n. Our results on logic programs have direct implications for the design of
algorithms to compute stable models. Several such algorithms, similar in spirit
to the Davis-Putnam procedure, are described in the paper. Our results imply
that there is an algorithm that finds all stable models of a program with n
clauses after considering the search space of size O(3^{n/3}) in the worst
case. Our results also provide some insights into the question of
representability of families of sets as families of stable models of logic
programs.
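
The O(3^{n/3}) search space matches the standard extremal construction in
which the clauses come in triples, each triple contributing three stable
models. A brute-force sketch, assuming the usual Gelfond-Lifschitz reduct
definition of stable models (the paper's algorithms are far more efficient
than this enumeration):

```python
# For each triple of atoms {a,b,c} the clauses
#   a :- not b, not c.   b :- not a, not c.   c :- not a, not b.
# contribute exactly three stable models ({a}, {b}, {c}), so k triples
# (n = 3k clauses) yield 3^k stable models.
from itertools import product

def stable_models(atoms, clauses):
    """clauses: list of (head, positive_body, negative_body)."""
    models = []
    for bits in product([False, True], repeat=len(atoms)):
        cand = {a for a, b in zip(atoms, bits) if b}
        # Gelfond-Lifschitz reduct: drop clauses blocked by cand,
        # strip negative literals from the rest.
        reduct = [(h, pos) for h, pos, neg in clauses
                  if not (set(neg) & cand)]
        # least model of the (negation-free) reduct:
        lm, changed = set(), True
        while changed:
            changed = False
            for h, pos in reduct:
                if set(pos) <= lm and h not in lm:
                    lm.add(h)
                    changed = True
        if lm == cand:
            models.append(cand)
    return models

k = 2
atoms = [f"{x}{i}" for i in range(k) for x in "abc"]
clauses = [(f"{x}{i}", [], [f"{y}{i}" for y in "abc" if y != x])
           for i in range(k) for x in "abc"]
print(len(stable_models(atoms, clauses)))  # -> 9 == 3**2
```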
|
cs/9901014
|
Minimum Description Length Induction, Bayesianism, and Kolmogorov
Complexity
|
cs.LG cs.AI cs.CC cs.IT cs.LO math.IT math.PR physics.data-an
|
The relationship between the Bayesian approach and the minimum description
length approach is established. We sharpen and clarify the general modeling
principles MDL and MML, abstracted as the ideal MDL principle and defined from
Bayes's rule by means of Kolmogorov complexity. The basic condition under which
the ideal principle should be applied is encapsulated as the Fundamental
Inequality, which in broad terms states that the principle is valid when the
data are random relative to every contemplated hypothesis, and these
hypotheses are in turn random relative to the (universal) prior. Basically, the ideal
principle states that the prior probability associated with the hypothesis
should be given by the algorithmic universal probability, and the sum of the
log universal probability of the model plus the log of the probability of the
data given the model should be minimized. If we restrict the model class to the
finite sets then application of the ideal principle turns into Kolmogorov's
minimal sufficient statistic. In general we show that data compression is
almost always the best strategy, both in hypothesis identification and
prediction.
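
The prescription in the final sentences can be written compactly. A sketch in
standard notation (m the universal discrete semimeasure, K prefix Kolmogorov
complexity; H ranges over the contemplated hypotheses and D is the data), with
the two forms agreeing up to additive constants by the coding theorem:

```latex
% Ideal MDL as stated above: pick the hypothesis H minimizing the
% two-part code length, with the universal prior m(H) = 2^{-K(H)}.
H_{\mathrm{MDL}} \;=\; \arg\min_{H} \bigl( -\log m(H) - \log P(D \mid H) \bigr)
                 \;=\; \arg\min_{H} \bigl( K(H) - \log P(D \mid H) \bigr)
```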
|
cs/9901016
|
Representation Theory for Default Logic
|
cs.LO cs.AI
|
Default logic can be regarded as a mechanism to represent families of belief
sets of a reasoning agent. As such, it is inherently second-order. In this
paper, we study the problem of representability of a family of theories as the
set of extensions of a default theory. We give a complete solution to the
representability by means of normal default theories. We obtain partial results
on representability by arbitrary default theories. We construct examples of
denumerable families of non-including theories that are not representable. We
also study the concept of equivalence between default theories.
|
cs/9902001
|
Compacting the Penn Treebank Grammar
|
cs.CL
|
Treebanks, such as the Penn Treebank (PTB), offer a simple approach to
obtaining a broad coverage grammar: one can simply read the grammar off the
parse trees in the treebank. While such a grammar is easy to obtain, a
square-root rate of growth of the rule set with corpus size suggests that the
derived grammar is far from complete and that much more treebanked text would
be required to obtain a complete grammar, if one exists at some limit. However,
we offer an alternative explanation in terms of the underspecification of
structures within the treebank. This hypothesis is explored by applying an
algorithm to compact the derived grammar by eliminating redundant rules --
rules whose right hand sides can be parsed by other rules. The size of the
resulting compacted grammar, which is significantly less than that of the full
treebank grammar, is shown to approach a limit. However, such a compacted
grammar does not yield very good performance figures. A version of the
compaction algorithm taking rule probabilities into account is proposed, which
is argued to be more linguistically motivated. Combined with simple
thresholding, this method can be used to give a 58% reduction in grammar size
without significant change in parsing performance, and can produce a 69%
reduction with some gain in recall, but a loss in precision.
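
A sketch of the redundancy test driving the compaction step, under the
simplifying reading that a rule is redundant if its right-hand side can be
parsed by the remaining rules; this is a naive memoized derivation search, not
the paper's implementation:

```python
# Redundancy test: can the rule's RHS be derived from its LHS using
# only the other rules?

def derives(rules, lhs, rhs):
    """Can `lhs` derive the symbol string `rhs` using `rules`?"""
    memo, in_progress = {}, set()

    def derive(sym, target):
        key = (sym, target)
        if key in memo:
            return memo[key]
        if key in in_progress:
            return False                      # cut left-recursive cycles
        in_progress.add(key)
        ok = target == (sym,) or any(
            head == sym and split(tuple(body), target)
            for head, body in rules)
        in_progress.discard(key)
        if ok:
            memo[key] = True                  # cache positive results only
        return ok

    def split(body, target):
        # match `body` symbols against every segmentation of `target`
        if not body:
            return not target
        return any(derive(body[0], target[:i]) and split(body[1:], target[i:])
                   for i in range(1, len(target) + 1))

    return derive(lhs, tuple(rhs))

def redundant(rules, rule):
    others = [r for r in rules if r != rule]
    return derives(others, rule[0], rule[1])

# NP -> DT NN PP is redundant given NP -> DT NN and NP -> NP PP:
rules = [("NP", ("DT", "NN")), ("NP", ("NP", "PP")),
         ("NP", ("DT", "NN", "PP"))]
print(redundant(rules, ("NP", ("DT", "NN", "PP"))))  # -> True
```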
|
cs/9902002
|
Automatic Identification of Subjects for Textual Documents in Digital
Libraries
|
cs.DL cs.CL
|
The number of electronic documents on the Internet is growing very quickly, and
how to identify subjects for documents effectively has become an important
issue. Past research has focused on the behavior of nouns in documents.
Although subjects are composed of nouns, nouns are not the only constituents
that determine which nouns are subjects. Based on the assumption that texts are
well organized and event-driven, nouns and verbs together contribute to the
process of subject identification. This paper considers four factors -- 1) word
importance, 2) word frequency, 3) word co-occurrence, and 4) word distance --
and proposes a model to identify subjects for textual documents. The preliminary
proposes a model to identify subjects for textual documents. The preliminary
experiments show that the performance of the proposed model is close to that of
human beings.
|
cs/9902005
|
Mutual Search
|
cs.DS cs.CC cs.DB cs.DC cs.DM cs.IR
|
We introduce a search problem called ``mutual search'' where $k$ agents,
arbitrarily distributed over $n$ sites, are required to locate one another by
posing queries of the form ``Anybody at site $i$?''. We ask for the least
number of queries that is necessary and sufficient. For the case of two agents
using deterministic protocols we obtain the following worst-case results: In an
oblivious setting (where all pre-planned queries are executed) there are no
savings: $n-1$ queries are required and are sufficient. In a nonoblivious
setting we can exploit the paradigm of ``no news is also news'' to obtain
significant savings: in the synchronous case $0.586n$ queries suffice and
$0.536n$ queries are required; in the asynchronous case $0.896n$ queries
suffice and a fortiori $0.536n$ queries are required; for $o(\sqrt{n})$ agents
using a deterministic protocol less than $n$ queries suffice; there is a simple
randomized protocol for two agents with worst-case expected $0.5n$ queries and
all randomized protocols require at least $0.125n$ worst-case expected queries.
The graph-theoretic framework we formulate for expressing and analyzing
algorithms for this problem may be of independent interest.
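
A toy model of the query primitive, together with the trivial oblivious
protocol attaining the $n-1$ bound quoted above; the sub-linear nonoblivious
protocols from the abstract are substantially cleverer and are not reproduced
here:

```python
# Toy model of the mutual-search query primitive for two agents on
# n sites, with the trivial oblivious protocol (one agent probes all
# other sites in a fixed order).

def mutual_search_oblivious(my_site, other_site, n):
    """Return the number of queries until the other agent is found."""
    queries = 0
    for site in range(n):
        if site == my_site:
            continue                  # no need to query one's own site
        queries += 1                  # "Anybody at site `site`?"
        if site == other_site:
            return queries
    raise AssertionError("other agent must be at one of the n sites")

# Worst case over placements is n-1 queries, matching the oblivious bound.
n = 10
worst = max(mutual_search_oblivious(a, b, n)
            for a in range(n) for b in range(n) if a != b)
print(worst)  # -> 9 == n - 1
```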
|
cs/9902006
|
A Discipline of Evolutionary Programming
|
cs.NE cs.AI cs.CC cs.DS cs.LG cs.MA
|
Genetic fitness optimization using small populations or small population
updates across generations generally suffers from randomly diverging
evolutions. We propose a notion of highly probable fitness optimization through
feasible evolutionary computing runs on small size populations. Based on
rapidly mixing Markov chains, the approach pertains to most types of
evolutionary genetic algorithms, genetic programming and the like. We establish
that for systems having associated rapidly mixing Markov chains and appropriate
stationary distributions the new method finds optimal programs (individuals)
with probability almost 1. To make the method useful would require a structured
design methodology where the development of the program and the guarantee of
the rapidly mixing property go hand in hand. We analyze a simple example to
show that the method is implementable. More significant examples require
theoretical advances, for example with respect to the Metropolis filter.
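
For concreteness, a generic Metropolis-style acceptance step of the kind the
closing sentence alludes to; this is a textbook sketch, not the paper's
construction, whose point is proving rapid mixing for such chains:

```python
# Generic Metropolis filter: accept a mutated individual with
# probability min(1, exp(delta_fitness / temperature)).
import math, random

random.seed(0)

def metropolis_step(individual, fitness, mutate, temperature):
    candidate = mutate(individual)
    delta = fitness(candidate) - fitness(individual)
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        return candidate              # accept improvement or lucky move
    return individual                 # reject: keep current individual

# Toy run: maximize the number of 1-bits in a 20-bit string.
fitness = sum
mutate = lambda bits: [b ^ (random.random() < 0.05) for b in bits]
x = [0] * 20
for _ in range(2000):
    x = metropolis_step(x, fitness, mutate, temperature=0.5)
print(fitness(x))  # close to 20 after a short run
```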
|
cs/9902015
|
Resource Discovery in Trilogy
|
cs.DL cs.AI cs.MA
|
Trilogy is a collaborative project whose key aim is the development of an
integrated virtual laboratory to support research training within each
institution and collaborative projects between the partners. In this paper, the
architecture and underpinning platform of the system are described, with
particular emphasis placed on the structure and integration of the distributed
database. A key element is the ontology that provides the multi-agent system
with a conceptualisation specification of the domain; this ontology is
explained, accompanied by a discussion of how such a system is integrated and
used within the virtual laboratory. Although telecommunications, and in
particular broadband networks, are used as exemplars in this paper, the
underlying system principles are applicable to any domain where a combination
of experimental and literature-based resources is required.
|
cs/9902017
|
Not Available
|
cs.DL cs.DB
|
withdrawn by author
|
cs/9902018
|
ZBroker: A Query Routing Broker for Z39.50 Databases
|
cs.DL cs.DB
|
A query routing broker is a software agent that determines, from a large set
of accessible information sources, the ones most relevant to a user's
information need. As the number of information sources on the Internet
increases dramatically, future users will have to rely on query routing brokers
to select a small number of information sources to query without incurring
excessive query processing overhead. In this paper, we describe a query routing
broker known as ZBroker, developed for bibliographic database servers that
support the Z39.50
protocol. ZBroker samples the content of each bibliographic database by using
training queries and their results, and summarizes the bibliographic database
content into a knowledge base. We present the design and implementation of
ZBroker and describe its Web-based user interface.
|
cs/9902021
|
Visualization of Retrieved Documents using a Presentation Server
|
cs.DL cs.IR
|
In any search-based digital library (DL) system dealing with a non-trivial
number of documents, users are often required to go through a long list of
short document descriptions in order to identify what they are looking for. To
tackle the problem, a variety of document organization algorithms and/or
visualization techniques have been used to guide users in selecting relevant
documents. Since these techniques require heavy computation, however, we
developed a presentation server designed to serve as an intermediary between
retrieval servers and clients equipped with a visualization interface. In
addition, we designed our own visual interface by which users can view a set of
documents from different perspectives through layers of document maps. We
finally ran experiments to show that the visual interface, in conjunction with
the presentation server, indeed helps users in selecting relevant documents
from the retrieval results.
|
cs/9902024
|
Algorithms of Two-Level Parallelization for DSMC of Unsteady Flows in
Molecular Gasdynamics
|
cs.CE cs.PF
|
A general scheme of two-level parallelization (TLP) for direct simulation
Monte Carlo of unsteady gas flows on shared-memory multiprocessor computers is
described. A highly efficient algorithm of parallel independent runs is used
at the first level; data parallelization is employed at the second. Two
versions of the TLP algorithm are elaborated, with static and dynamic load
balancing. The method of dynamic processor reallocation is used for dynamic
load balancing. Two unsteady gasdynamic problems were used to study the speedup
and efficiency of the algorithms, and the conditions under which the algorithms
can be applied efficiently have been determined.
|
cs/9902025
|
An Efficient Mean Field Approach to the Set Covering Problem
|
cs.NE
|
A mean field feedback artificial neural network algorithm is developed and
explored for the set covering problem. A convenient encoding of the inequality
constraints is achieved by means of a multilinear penalty function. An
approximate energy minimum is obtained by iterating a set of mean field
equations, in combination with annealing. The approach is numerically tested
against a set of publicly available test problems with sizes ranging up to
5x10^3 rows and 10^6 columns. When comparing the performance with exact results
for sizes where these are available, the approach yields results within a few
percent of the optimal solutions. Comparisons with other approximate methods
also come out well, in particular given the very low CPU consumption required
-- typically a few seconds. Arbitrary problems can be processed using the
algorithm via a public domain server.
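
A sketch of a mean-field annealing loop consistent with the description above;
the energy function, penalty weight and annealing schedule are illustrative
assumptions, not the paper's exact choices:

```python
# Mean-field annealing for set covering with a multilinear penalty:
# assumed energy E(v) = sum_i c_i v_i + A * sum_rows prod_{i in row}(1 - v_i)
import numpy as np

def mean_field_set_cover(rows, costs, A=5.0, t_start=2.0, t_end=0.05):
    n = len(costs)
    v = np.full(n, 0.5)                      # neutral starting point
    T = t_start
    while T > t_end:
        for _ in range(30):                  # mean-field sweeps at this T
            for i in range(n):
                # dE/dv_i: item cost minus coverage gain on rows with i
                grad = costs[i]
                for row in rows:
                    if i in row:
                        grad -= A * np.prod([1 - v[j] for j in row if j != i])
                v[i] = 1 / (1 + np.exp(grad / T))   # mean-field update
        T *= 0.9                             # annealing schedule
    return {i for i in range(n) if v[i] > 0.5}

rows = [{0, 1}, {1, 2}, {2, 3}]              # each row must be covered
costs = [1.0, 1.0, 1.0, 1.0]
print(mean_field_set_cover(rows, costs))     # -> {1, 2}, covering all rows
```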
|
cs/9902026
|
Probabilistic Inductive Inference: A Survey
|
cs.LG cs.CC cs.LO math.LO
|
Inductive inference is a recursion-theoretic theory of learning, first
developed by E. M. Gold (1967). This paper surveys developments in
probabilistic inductive inference. We mainly focus on finite inference of
recursive functions, since this simple paradigm has produced the most
interesting (and most complex) results.
|
cs/9902027
|
Autocatalytic Theory of Meaning
|
cs.CL adap-org nlin.AO
|
Recently it has been argued that autocatalytic theory could be applied to the
origin of culture. Here, a possible application to a theory of meaning in the
philosophy of language known as radical interpretation is commented upon and
compared with previous applications.
|
cs/9902028
|
A Scrollbar-based Visualization for Document Navigation
|
cs.IR cs.HC
|
We are interested in questions of improving user control in best-match
text-retrieval systems, specifically questions as to whether simple
visualizations that nonetheless go beyond the minimal ones generally available
can significantly help users. Recently, we have been investigating ways to help
users decide, given a set of documents retrieved by a query, which documents and
passages are worth closer examination. We built a document viewer incorporating
a visualization centered around a novel content-displaying scrollbar and color
term highlighting, and studied whether the visualization is helpful to
non-expert searchers. Participants' reaction to the visualization was very
positive, while the objective results were inconclusive.
|
cs/9902029
|
The "Fodor"-FODOR fallacy bites back
|
cs.CL
|
The paper argues that Fodor and Lepore are misguided in their attack on
Pustejovsky's Generative Lexicon, largely because their argument rests on a
traditional, but implausible and discredited, view of the lexicon on which it
is effectively empty of content, a view that stands in the long line of
explaining word meaning (a) by ostension and then (b) explaining it by means of
a vacuous symbol in a lexicon, often the word itself after typographic
transmogrification. (a) and (b) both share the wrong belief that to a word must
correspond a simple entity that is its meaning. I then turn to the semantic
rules that Pustejovsky uses and argue first that, although they have novel
features, they are in a well-established Artificial Intelligence tradition of
explaining meaning by reference to structures that mention other structures
assigned to words that may occur in close proximity to the first. It is argued
that Fodor and Lepore's view that there cannot be such rules is without
foundation, and indeed systems using such rules have proved their practical
worth in computational systems. Their justification descends from a line of
argument, whose high points were probably Wittgenstein and Quine, that meaning
is not to be understood by simple links to the world, ostensive or otherwise,
but by the relationship of whole cultural representational structures to each
other and to the world as a whole.
|
cs/9902030
|
Is Word Sense Disambiguation just one more NLP task?
|
cs.CL
|
This paper compares the tasks of part-of-speech (POS) tagging and
word-sense-tagging or disambiguation (WSD), and argues that the tasks are not
related by fineness of grain or anything like that, but are quite different
kinds of task, particularly because there is nothing in POS corresponding to
sense novelty. The paper also argues for the reintegration of sub-tasks that
are being separated for evaluation.
|
cs/9903002
|
An Algebraic Programming Style for Numerical Software and its
Optimization
|
cs.SE cs.AI cs.CE cs.MS
|
The abstract mathematical theory of partial differential equations (PDEs) is
formulated in terms of manifolds, scalar fields, tensors, and the like, but
these algebraic structures are hardly recognizable in actual PDE solvers. The
general aim of the Sophus programming style is to bridge the gap between theory
and practice in the domain of PDE solvers. Its main ingredients are a library
of abstract datatypes corresponding to the algebraic structures used in the
mathematical theory and an algebraic expression style similar to the expression
style used in the mathematical theory. Because of its emphasis on abstract
datatypes, Sophus is most naturally combined with object-oriented languages or
other languages supporting abstract datatypes. The resulting source code
patterns are beyond the scope of current compiler optimizations, but are
sufficiently specific for a dedicated source-to-source optimizer. The limited,
domain-specific, character of Sophus is the key to success here. This kind of
optimization has been tested on computationally intensive Sophus style code
with promising results. The general approach may be useful for other styles and
in other application domains as well.
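
A minimal sketch of the style's central idea: an abstract datatype mirroring
an algebraic structure, so that solver code reads like the mathematics. Class
and method names here are hypothetical illustrations, not the Sophus library's
actual API:

```python
# A discretized scalar field as an abstract datatype with ring
# operations, so expressions mirror the continuous mathematics.
import numpy as np

class ScalarField:
    """A discretized scalar field supporting algebraic operations."""
    def __init__(self, values, dx):
        self.values = np.asarray(values, dtype=float)
        self.dx = dx

    def __add__(self, other):
        return ScalarField(self.values + other.values, self.dx)

    def __mul__(self, other):
        return ScalarField(self.values * other.values, self.dx)

    def derivative(self):
        # central differences; one of many possible discretizations
        return ScalarField(np.gradient(self.values, self.dx), self.dx)

# Solver code now mirrors the abstract formula u*u_x + u:
x = np.linspace(0.0, 1.0, 101)
u = ScalarField(np.sin(x), dx=x[1] - x[0])
expression = u * u.derivative() + u
print(expression.values[:3])
```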
|
cs/9903003
|
A Formal Framework for Linguistic Annotation
|
cs.CL
|
`Linguistic annotation' covers any descriptive or analytic notations applied
to raw language data. The basic data may be in the form of time functions --
audio, video and/or physiological recordings -- or it may be textual. The added
notations may include transcriptions of all sorts (from phonetic features to
discourse structures), part-of-speech and sense tagging, syntactic analysis,
`named entity' identification, co-reference annotation, and so on. While there
are several ongoing efforts to provide formats and tools for such annotations
and to publish annotated linguistic databases, the lack of widely accepted
standards is becoming a critical problem. Proposed standards, to the extent
they exist, have focussed on file formats. This paper focuses instead on the
logical structure of linguistic annotations. We survey a wide variety of
existing annotation formats and demonstrate a common conceptual core, the
annotation graph. This provides a formal framework for constructing,
maintaining and searching linguistic annotations, while remaining consistent
with many alternative data structures and file formats.
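
A minimal sketch of the annotation-graph idea: nodes carry optional time
offsets, and labeled arcs span intervals of the signal. Field names are
illustrative; the paper gives the formal definition:

```python
# Annotation graph sketch: node id -> time offset; arcs carry a type
# and a label and span the interval between their endpoint nodes.
from dataclasses import dataclass, field

@dataclass
class AnnotationGraph:
    node_times: dict = field(default_factory=dict)  # node id -> offset (s)
    arcs: list = field(default_factory=list)        # (src, dst, type, label)

    def annotate(self, src, dst, kind, label):
        self.arcs.append((src, dst, kind, label))

    def spans(self, kind):
        """All annotations of one type with their time intervals."""
        return [(self.node_times.get(s), self.node_times.get(d), lab)
                for s, d, k, lab in self.arcs if k == kind]

g = AnnotationGraph(node_times={0: 0.00, 1: 0.32, 2: 0.51})
g.annotate(0, 1, "word", "hello")
g.annotate(1, 2, "word", "world")
g.annotate(0, 2, "discourse", "greeting")
print(g.spans("word"))  # [(0.0, 0.32, 'hello'), (0.32, 0.51, 'world')]
```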
|
cs/9903007
|
Some Remarks on the Geometry of Grammar
|
cs.CL cs.LO
|
This paper, following (Dymetman:1998), presents an approach to grammar
description and processing based on the geometry of cancellation diagrams, a
concept which plays a central role in combinatorial group theory
(Lyndon-Schuppe:1977). The focus here is on the geometric intuitions and on
relating group-theoretical diagrams to the traditional charts associated with
context-free grammars and type-0 rewriting systems. The paper is structured as
follows. We begin in Section 1 by analyzing charts in terms of constructs
called cells, which are a geometrical counterpart to rules. Then we move in
Section 2 to a presentation of cancellation diagrams and show how they can be
used computationally. In Section 3 we give a formal algebraic presentation of
the concept of group computation structure, which is based on the standard
notions of free group and conjugacy. We then relate in Section 4 the geometric
and the algebraic views of computation by using the fundamental theorem of
combinatorial group theory (Rotman:1994). In Section 5 we study in more detail
the relationship between the two views on the basis of a simple grammar stated
as a group computation structure. In Section 6 we extend this grammar to handle
non-local constructs such as relative pronouns and quantifiers. We conclude in
Section 7 with some brief notes on the differences between normal submonoids
and normal subgroups, group computation versus rewriting systems, and the use
of group morphisms to study the computational complexity of parsing and
generation.
|
cs/9903008
|
Empirically Evaluating an Adaptable Spoken Dialogue System
|
cs.CL
|
Recent technological advances have made it possible to build real-time,
interactive spoken dialogue systems for a wide variety of applications.
However, when users do not respect the limitations of such systems, performance
typically degrades. Although users differ with respect to their knowledge of
system limitations, and although different dialogue strategies make system
limitations more apparent to users, most current systems do not try to improve
performance by adapting dialogue behavior to individual users. This paper
presents an empirical evaluation of TOOT, an adaptable spoken dialogue system
for retrieving train schedules on the web. We conduct an experiment in which 20
users carry out 4 tasks with both adaptable and non-adaptable versions of TOOT,
resulting in a corpus of 80 dialogues. The values for a wide range of
evaluation measures are then extracted from this corpus. Our results show that
adaptable TOOT generally outperforms non-adaptable TOOT, and that the utility
of adaptation depends on TOOT's initial dialogue strategies.
|
cs/9903011
|
A complete anytime algorithm for balanced number partitioning
|
cs.DS cond-mat.dis-nn cs.AI
|
Given a set of numbers, the balanced partitioning problem is to divide them
into two subsets, so that the sums of the numbers in the two subsets are as
nearly equal as possible, subject to the constraint that the cardinalities of the
subsets be within one of each other. We combine the balanced largest
differencing method (BLDM) and Korf's complete Karmarkar-Karp algorithm to get
a new algorithm that optimally solves the balanced partitioning problem. For
numbers with twelve significant digits or less, the algorithm can optimally
solve balanced partitioning problems of arbitrary size in practice. For numbers
with greater precision, it first returns the BLDM solution, then continues to
find better solutions as time allows.
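
A sketch of the differencing idea behind BLDM: pair up the sorted numbers and
run Karmarkar-Karp differencing on the pairwise differences, which yields the
discrepancy of a balanced partition. Subset reconstruction and the complete
anytime search are omitted from this sketch:

```python
# BLDM value sketch: pair adjacent sorted numbers, then apply
# Karmarkar-Karp differencing to the pairwise differences.
import heapq

def karmarkar_karp(numbers):
    """Return the partition discrepancy produced by differencing."""
    heap = [-x for x in numbers]          # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)          # two largest remaining
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))    # commit them to opposite sides
    return -heap[0] if heap else 0

def bldm_value(numbers):
    s = sorted(numbers, reverse=True)
    if len(s) % 2:
        s.append(0)                       # pad to even cardinality
    diffs = [s[i] - s[i + 1] for i in range(0, len(s), 2)]
    return karmarkar_karp(diffs)

print(bldm_value([8, 7, 6, 5, 4]))
# -> 2; the complete anytime algorithm would continue and find the
# optimal discrepancy 0 ({8, 7} versus {6, 5, 4}).
```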
|
cs/9903016
|
Modeling Belief in Dynamic Systems, Part II: Revision and Update
|
cs.AI
|
The study of belief change has been an active area in philosophy and AI. In
recent years two special cases of belief change, belief revision and belief
update, have been studied in detail. In a companion paper (Friedman & Halpern,
1997), we introduce a new framework to model belief change. This framework
combines temporal and epistemic modalities with a notion of plausibility,
allowing us to examine the change of beliefs over time. In this paper, we show
how belief revision and belief update can be captured in our framework. This
allows us to compare the assumptions made by each method, and to better
understand the principles underlying them. In particular, it shows that Katsuno
and Mendelzon's notion of belief update (Katsuno & Mendelzon, 1991a) depends on
several strong assumptions that may limit its applicability in artificial
intelligence. Finally, our analysis allows us to identify a notion of minimal
change that underlies a broad range of belief change operations including
revision and update.
|
cs/9903017
|
SIMMUNE, a tool for simulating and analyzing immune system behavior
|
cs.MA q-bio
|
We present a new approach to the simulation and analysis of immune system
behavior. The simulations that can be done with our software package called
SIMMUNE are based on immunological data that describe the behavior of immune
system agents (cells, molecules) on a microscopic (i.e. agent-agent
interaction) scale by defining cellular stimulus-response mechanisms. Since the
behavior of the agents in SIMMUNE can be very flexibly configured, its
application is not limited to immune system simulations. We outline the
principles of SIMMUNE's multiscale analysis of emergent structure within the
simulated immune system that allow the identification of immunological contexts
using minimal a priori assumptions about the higher level organization of the
immune system.
|
cs/9904001
|
A Proposal for the Establishment of Review Boards - a flexible approach
to the selection of academic knowledge
|
cs.CY cs.DL cs.IR
|
Paper journals use a small number of trusted academics to select information
on behalf of all their readers. This inflexibility in the selection was
justified due to the expense of publishing. The advent of cheap distribution
via the internet allows a new trade-off between time and expense and the
flexibility of the selection process. This paper explores one such possible
process, one where the role of mark-up and archiving is separated from that of
review. The idea is that authors publish their papers on their own web pages or
in a public paper archive, and a board of reviewers judges each paper on a
number of different criteria. The detailed results of the reviews are stored in such a
way as to enable readers to use these judgements to find the papers they want
using search engines on the web. Thus, instead of journals using generic
selection criteria, readers can set their own to suit their needs. The resulting
system might be even cheaper than web-journals to implement.
|
cs/9904002
|
A geometric framework for modelling similarity search
|
cs.IR cs.DB cs.DS
|
The aim of this paper is to propose a geometric framework for modelling
similarity search in large and multidimensional data spaces of general nature,
which seems to be flexible enough to address such issues as analysis of
complexity, indexability, and the `curse of dimensionality.' Such a framework
is provided by the concept of the so-called similarity workload, which is a
probability metric space $\Omega$ (query domain) with a distinguished finite
subspace $X$ (dataset), together with an assembly of concepts, techniques, and
results from metric geometry. They include such notions as metric transform,
$\epsilon$-entropy, and the phenomenon of concentration of measure on
high-dimensional structures. In particular, we discuss the relevance of the
latter to understanding the curse of dimensionality. As some of those concepts
and techniques are being currently reinvented by the database community, it
seems desirable to try to bridge the gap between database research and the
relevant work already done in geometry and analysis.
|
cs/9904003
|
The Structure of Weighting Coefficient Matrices of Harmonic Differential
Quadrature and Its Applications
|
cs.CE cs.NA math.NA
|
The structure of weighting coefficient matrices of Harmonic Differential
Quadrature (HDQ) is found to be either centrosymmetric or skew centrosymmetric
depending on the order of the corresponding derivatives. The properties of both
matrices are briefly discussed in this paper. It is noted that the
computational effort of the harmonic quadrature for some problems can be
further reduced up to 75 per cent by using the properties of the
above-mentioned matrices.
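
The saving is easy to check numerically: a centrosymmetric matrix (J A J = A,
with J the exchange matrix) block-diagonalizes into two half-size blocks, so
one full-size solve becomes two half-size solves, roughly a 75 per cent
reduction in work. A sketch:

```python
# Centrosymmetric A = [[B, C], [J C J, J B J]] splits into the two
# half-size blocks B + C J and B - C J under an orthogonal change of
# basis, which is the source of the up-to-75% saving mentioned above.
import numpy as np

m = 4                                    # half size; A is 2m x 2m
rng = np.random.default_rng(1)
B, C = rng.standard_normal((2, m, m))
J = np.eye(m)[::-1]                      # exchange (flip) matrix
A = np.block([[B, C], [J @ C @ J, J @ B @ J]])
J2 = np.eye(2 * m)[::-1]
assert np.allclose(J2 @ A @ J2, A)       # A is centrosymmetric

b = rng.standard_normal(2 * m)
b1, b2 = b[:m], b[m:]
# Two m x m solves replace one 2m x 2m solve:
y1 = np.linalg.solve(B + C @ J, (b1 + J @ b2) / np.sqrt(2))
y2 = np.linalg.solve(B - C @ J, (b1 - J @ b2) / np.sqrt(2))
x = np.concatenate([y1 + y2, J @ (y1 - y2)]) / np.sqrt(2)
assert np.allclose(A @ x, b)             # matches the full-size solve
```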
|
cs/9904004
|
Mixing Metaphors
|
cs.CL cs.AI
|
Mixed metaphors have been neglected in recent metaphor research. This paper
suggests that such neglect is short-sighted. Though mixing is a more complex
phenomenon than straight metaphors, the same kinds of reasoning and knowledge
structures are required. This paper provides an analysis of both parallel and
serial mixed metaphors within the framework of an AI system which is already
capable of reasoning about straight metaphorical manifestations and argues that
the processes underlying mixing are central to metaphorical meaning. Therefore,
any theory of metaphors must be able to account for mixing.
|
cs/9904006
|
Jacobian matrix: a bridge between linear and nonlinear polynomial-only
problems
|
cs.CE cs.NA math.NA
|
By using the Hadamard matrix product concept, this paper introduces two
generalized matrix formulation forms of numerical analogue of nonlinear
differential operators. The SJT matrix-vector product approach is found to be a
simple, efficient and accurate technique in the calculation of the Jacobian
matrix of the nonlinear discretization by finite difference, finite volume,
collocation, dual reciprocity BEM or radial functions based numerical methods.
We also present and prove a simple underlying relationship (Theorem 3.1)
between general nonlinear analogue polynomials and their corresponding Jacobian
matrices, which forms the basis of this paper. By means of Theorem 3.1,
stability analysis of numerical solutions of nonlinear initial value problems
can be easily handled based on the well-known results for linear problems.
Theorem 3.1 also leads naturally to the straightforward extension of various
linear iterative algorithms such as the SOR, Gauss-Seidel and Jacobi methods to
nonlinear algebraic equations. Since an exact alternative of the quasi-Newton
equation is established via Theorem 3.1, we derive a modified BFGS quasi-Newton
method. A simple formula is also given to examine the deviation between the
approximate and exact Jacobian matrices. Furthermore, in order to avoid the
evaluation of the Jacobian matrix and its inverse, the pseudo-Jacobian matrix
is introduced with a general applicability of any nonlinear systems of
equations. It should be pointed out that a large class of real-world nonlinear
problems can be modeled or numerically discretized as polynomial-only algebraic
systems of equations. The results presented here are in general applicable to
all these problems. This paper can be considered as a starting point in the
research of nonlinear computation and analysis from an innovative viewpoint.
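
An illustration of the kind of relationship Theorem 3.1 describes, here for
the quadratic case F(u) = A(u o u), with o the Hadamard product, whose Jacobian
is 2 A diag(u); the exact general statement is in the paper. Checked against
finite differences:

```python
# For F(u) = A (u o u), the Jacobian is 2 A diag(u): no numerical
# differentiation is needed for polynomial-only nonlinearities.
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
u = rng.standard_normal(n)

F = lambda v: A @ (v * v)                # polynomial-only nonlinearity
# broadcasting u across columns gives (A diag(u))_{ij} = A_{ij} u_j:
J_exact = 2 * A * u

h = 1e-6
J_fd = np.column_stack([(F(u + h * e) - F(u - h * e)) / (2 * h)
                        for e in np.eye(n)])
assert np.allclose(J_exact, J_fd, atol=1e-5)
```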
|
cs/9904007
|
The Study on the Nonlinear Computations of the DQ and DC Methods
|
cs.CE cs.NA math.NA
|
This paper points out that the differential quadrature (DQ) and differential
cubature (DC) methods due to their global domain property are more efficient
for nonlinear problems than the traditional numerical techniques such as finite
element and finite difference methods. By introducing the Hadamard product of
matrices, we obtain an explicit matrix formulation for the DQ and DC solutions
of nonlinear differential and integro-differential equations. Due to its
simplicity and flexibility, the present Hadamard product approach makes the DQ
and DC methods much easier to use. Many studies on the Hadamard product can
be fully exploited for the DQ and DC nonlinear computations. Furthermore, we
first present the SJT product of a matrix and a vector to compute accurately and
efficiently the Frechet derivative matrix in the Newton-Raphson method for the
solution of the nonlinear formulations. We also propose a simple approach to
simplify the DQ or DC formulations for some nonlinear differential operators
and thus the computational efficiency of these methods is improved
significantly. We give the matrix multiplication formulas to compute
efficiently the weighting coefficient matrices of the DC method. The spherical
harmonics are suggested as the test functions in the DC method to handle the
nonlinear differential equations occurring in global and hemispheric weather
forecasting problems. Some examples are analyzed to demonstrate the simplicity
and efficiency of the presented techniques. It is emphasized that the
innovations presented are applicable to the nonlinear computations of other numerical
methods as well.
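
A sketch of the explicit matrix formulation for a typical nonlinear term:
with a differentiation matrix D (here a plain periodic central-difference
stand-in for a DQ weighting coefficient matrix), the discrete analogue of
u u_x is simply the Hadamard product u o (D u):

```python
# Hadamard-product form of the nonlinear term u * u_x on a grid.
import numpy as np

n = 200
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

# periodic central-difference differentiation matrix (a stand-in for
# the DQ weighting coefficient matrix)
D = np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
D[0, -1], D[-1, 0] = -1.0, 1.0
D /= 2 * dx

u = np.sin(x)
nonlinear_term = u * (D @ u)             # Hadamard form of u * u_x
assert np.allclose(nonlinear_term, np.sin(x) * np.cos(x), atol=1e-3)
```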
|
cs/9904008
|
Transducers from Rewrite Rules with Backreferences
|
cs.CL
|
Context sensitive rewrite rules have been widely used in several areas of
natural language processing, including syntax, morphology, phonology and speech
processing. Kaplan and Kay, Karttunen, and Mohri & Sproat have given various
algorithms to compile such rewrite rules into finite-state transducers. The
present paper extends this work by allowing a limited form of backreferencing
in such rules. The explicit use of backreferencing leads to more elegant and
general solutions.
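
A small illustration of the kind of rewrite rule with a backreference that the
paper considers, mimicked here with Python's regex engine for exposition only;
the paper's contribution is compiling such rules into finite-state
transducers, which ordinary regexes do not provide:

```python
# rewrite rule "x x -> x x [dup]" for any word x, via backreference \1
import re

rule = re.compile(r"\b(\w+) \1\b")
print(rule.sub(r"\1 \1 [dup]", "the the cat sat sat down"))
# -> "the the [dup] cat sat sat [dup] down"
```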
|
cs/9904009
|
An ascription-based approach to speech acts
|
cs.CL
|
The two principal areas of natural language processing research in pragmatics
are belief modelling and speech act processing. Belief modelling is the
development of techniques to represent the mental attitudes of a dialogue
participant. The latter approach, speech act processing, based on speech act
theory, involves viewing dialogue in planning terms. Utterances in a dialogue
are modelled as steps in a plan where understanding an utterance involves
deriving the complete plan a speaker is attempting to achieve. However,
previous speech act based approaches have been limited by a reliance upon
relatively simplistic belief modelling techniques and their relationship to
planning and plan recognition. In particular, such techniques assume
precomputed nested belief structures. In this paper, we will present an
approach to speech act processing based on novel belief modelling techniques
where nested beliefs are propagated on demand.
|
cs/9904018
|
A Computational Memory and Processing Model for Prosody
|
cs.CL
|
This paper links prosody to the information in a text and how it is processed
by the speaker. It describes the operation and output of LOQ, a text-to-speech
implementation that includes a model of limited attention and working memory.
Attentional limitations are key. Varying the attentional parameter in the
simulations varies in turn what counts as given and new in a text, and
therefore, the intonational contours with which it is uttered. Currently, the
system produces prosody in three different styles: child-like, adult
expressive, and knowledgeable. This prosody also exhibits differences within
each style -- no two simulations are alike. The limited resource approach
captures some of the stylistic and individual variety found in natural prosody.
|
cs/9904021
|
Hadamard product nonlinear formulation of Galerkin and finite element
methods
|
cs.CE cs.NA math.NA
|
A novel nonlinear formulation of the finite element and Galerkin methods is
presented here, which leads to the Hadamard product expression of the resultant
nonlinear algebraic analogue. The presented formulation attains the advantages
of weak formulation in the standard finite element and Galerkin schemes and
avoids the costly repeated numerical integration of the Jacobian matrix via the
recently developed SJT product approach. This also provides the possibility of
decoupled nonlinear computations.
|