| id | title | categories | abstract |
|---|---|---|---|
cmp-lg/9708013
|
Explanation-Based Learning of Data Oriented Parsing
|
cmp-lg cs.CL
|
This paper presents a new view of Explanation-Based Learning (EBL) of natural
language parsing. Rather than employing EBL for specializing parsers by
inferring new ones, this paper suggests employing EBL for learning how to
reduce ambiguity only partially.
The present method consists of an EBL algorithm for learning partial-parsers,
and a parsing algorithm which combines partial-parsers with existing
``full-parsers". The learned partial-parsers, implementable as Cascades of
Finite State Transducers (CFSTs), recognize and combine constituents
efficiently, prohibiting spurious overgeneration. The parsing algorithm
combines a learned partial-parser with a given full-parser such that the role
of the full-parser is limited to combining the constituents, recognized by the
partial-parser, and to recognizing unrecognized portions of the input sentence.
Besides the reduction of the parse-space prior to disambiguation, the present
method provides a way for refining existing disambiguation models that learn
stochastic grammars from tree-banks.
We exhibit encouraging empirical results using a pilot implementation:
parse-space is reduced substantially with minimal loss of coverage. The speedup
gain for disambiguation models is exemplified by experiments with the DOP
model.
|
cmp-lg/9709001
|
The Complexity of Recognition of Linguistically Adequate Dependency
Grammars
|
cmp-lg cs.CL
|
Results of computational complexity exist for a wide range of phrase
structure-based grammar formalisms, while there is an apparent lack of such
results for dependency-based formalisms. We here adapt a result on the
complexity of ID/LP-grammars to the dependency framework. Contrary to previous
studies on heavily restricted dependency grammars, we prove that recognition
(and thus, parsing) of linguistically adequate dependency grammars is
NP-complete.
|
cmp-lg/9709002
|
Learning Methods for Combining Linguistic Indicators to Classify Verbs
|
cmp-lg cs.CL
|
Fourteen linguistically-motivated numerical indicators are evaluated for
their ability to categorize verbs as either states or events. The values for
each indicator are computed automatically across a corpus of text. To improve
classification performance, machine learning techniques are employed to combine
multiple indicators. Three machine learning methods are compared for this task:
decision tree induction, a genetic algorithm, and log-linear regression.
|
cmp-lg/9709003
|
Combining Multiple Methods for the Automatic Construction of
Multilingual WordNets
|
cmp-lg cs.CL
|
This paper explores the automatic construction of a multilingual Lexical
Knowledge Base from preexisting lexical resources. First, a set of automatic
and complementary techniques for linking Spanish words collected from
monolingual and bilingual MRDs to English WordNet synsets is described.
Second, we show how the resulting data provided by each method are then
combined to produce a preliminary version of a Spanish WordNet with an
accuracy of over 85%. Applying these combinations increases the number of
extracted connections by 40% without losing accuracy. Both coarse-grained (class level)
and fine-grained (synset assignment level) confidence ratios are used and
evaluated. Finally, the results for the whole process are presented.
|
cmp-lg/9709004
|
Integrating a Lexical Database and a Training Collection for Text
Categorization
|
cmp-lg cs.CL
|
Automatic text categorization is a complex and useful task for many natural
language processing applications. Recent approaches to text categorization
focus more on algorithms than on resources involved in this operation. In
contrast to this trend, we present an approach based on the integration of
widely available resources such as lexical databases and training collections to
overcome current limitations of the task. Our approach makes use of WordNet
synonymy information to increase evidence for poorly trained categories. When
testing a direct categorization, a WordNet based one, a training algorithm, and
our integrated approach, the latter exhibits better performance than any of
the others. Incidentally, the performance of the WordNet based approach is
comparable with that of the training approach.
|
cmp-lg/9709005
|
A generation algorithm for f-structure representations
|
cmp-lg cs.CL
|
This paper shows that previously reported generation algorithms run into
problems when dealing with f-structure representations. A generation algorithm
that is suitable for this type of representation is presented: the Semantic
Kernel Generation (SKG) algorithm. The SKG method has the same processing
strategy as the Semantic Head Driven generation (SHDG) algorithm and relies on
the assumption that it is possible to compute the Semantic Kernel (SK) and non
Semantic Kernel (Non-SK) information for each input structure.
|
cmp-lg/9709006
|
Semantic Processing of Out-Of-Vocabulary Words in a Spoken Dialogue
System
|
cmp-lg cs.CL
|
One of the most important causes of failure in spoken dialogue systems is
usually neglected: the problem of words that are not covered by the system's
vocabulary (out-of-vocabulary or OOV words). In this paper a methodology is
described for the detection, classification and processing of OOV words in an
automatic train timetable information system. The various extensions that had
to be made to the different modules of the system are reported, resulting
in the design of appropriate dialogue strategies, as are encouraging evaluation
results for the new versions of the word recogniser and the linguistic
processor.
|
cmp-lg/9709007
|
Using WordNet to Complement Training Information in Text Categorization
|
cmp-lg cs.CL
|
Automatic Text Categorization (TC) is a complex and useful task for many
natural language applications, and is usually performed through the use of a
set of manually classified documents, a training collection. We suggest the
utilization of additional resources like lexical databases to increase the
amount of information that TC systems make use of, and thus, to improve their
performance. Our approach integrates WordNet information with two training
approaches through the Vector Space Model. The training approaches we test are
the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning)
algorithms. Results obtained from evaluation show that the integration of
WordNet clearly outperforms training approaches, and that an integrated
technique can effectively address the classification of low frequency
categories.
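As a hedged illustration of the Rocchio step in a vector-space setting (term vectors as dicts; weights and coefficients hypothetical, not taken from the paper):

```python
def rocchio(profile, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move a category profile toward relevant documents and away from others."""
    terms = set(profile) | {t for d in relevant + nonrelevant for t in d}
    new = {}
    for t in terms:
        pos = sum(d.get(t, 0.0) for d in relevant) / max(len(relevant), 1)
        neg = sum(d.get(t, 0.0) for d in nonrelevant) / max(len(nonrelevant), 1)
        new[t] = alpha * profile.get(t, 0.0) + beta * pos - gamma * neg
    return new

# Hypothetical term-weight vectors for one category.
profile = {"bank": 0.2}
relevant = [{"bank": 0.5, "loan": 0.4}, {"loan": 0.6}]
nonrelevant = [{"river": 0.7, "bank": 0.3}]
print(rocchio(profile, relevant, nonrelevant))
```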
|
cmp-lg/9709008
|
Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy
|
cmp-lg cs.CL
|
This paper presents a new approach for measuring semantic similarity/distance
between words and concepts. It combines a lexical taxonomy structure with
corpus statistical information so that the semantic distance between nodes in
the semantic space constructed by the taxonomy can be better quantified with
the computational evidence derived from a distributional analysis of corpus
data. Specifically, the proposed measure is a combined approach that inherits
the edge-based approach of the edge counting scheme, which is then enhanced by
the node-based approach of the information content calculation. When tested on
a common data set of word pair similarity ratings, the proposed approach
outperforms other computational models. It gives the highest correlation value
(r = 0.828) with a benchmark based on human similarity judgements, whereas an
upper bound (r = 0.885) is observed when human subjects replicate the same
task.
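The abstract does not spell the measure out; as a hedged illustration of how corpus-derived information content can be combined with a taxonomy, the sketch below uses the simplified node-based distance dist(c1, c2) = IC(c1) + IC(c2) - 2 IC(lcs(c1, c2)) over a toy taxonomy (all data hypothetical):

```python
import math

# Toy IS-A taxonomy: child -> parent (hypothetical data).
parent = {"cat": "feline", "feline": "animal", "dog": "canine",
          "canine": "animal", "animal": None}

# Corpus frequencies; each concept's count includes its descendants.
freq = {"cat": 10, "dog": 12, "feline": 15, "canine": 18, "animal": 100}
total = freq["animal"]

def ic(c):
    """Information content: -log p(c), estimated from corpus counts."""
    return -math.log(freq[c] / total)

def ancestors(c):
    out = []
    while c is not None:
        out.append(c)
        c = parent[c]
    return out

def lcs(c1, c2):
    """Lowest common subsumer of two concepts in the taxonomy."""
    a1 = ancestors(c1)
    return next(c for c in ancestors(c2) if c in a1)

def distance(c1, c2):
    # Simplified combined measure: the taxonomy picks the LCS,
    # corpus statistics weight the path through it.
    return ic(c1) + ic(c2) - 2 * ic(lcs(c1, c2))

print(distance("cat", "dog"))  # larger value = less similar
```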
|
cmp-lg/9709009
|
Evaluating Parsing Schemes with Entropy Indicators
|
cmp-lg cs.CL
|
This paper introduces an objective metric for evaluating a parsing scheme. It
is based on Shannon's original work with letter sequences, which can be
extended to part-of-speech tag sequences. It is shown that this regular-language
model is inadequate for natural language, but a representation is
used that models language slightly higher in the Chomsky hierarchy.
We show how the entropy of parsed and unparsed sentences can be measured. If
the entropy of the parsed sentence is lower, this indicates that some of the
structure of the language has been captured.
We apply this entropy indicator to support one particular parsing scheme that
effects a top down segmentation. This approach could be used to decompose the
parsing task into computationally more tractable subtasks. It also lends itself
to the extraction of predicate/argument structure.
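As a rough illustration of the indicator (not the paper's exact formulation), the sketch below computes per-symbol Shannon entropy for a flat tag sequence and for a hypothetical segmented version of the same sentence:

```python
import math
from collections import Counter

def entropy_per_symbol(seq):
    """Shannon entropy (bits per symbol) of a sequence, from unigram counts."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Unparsed: a flat POS tag sequence (hypothetical example).
tags = ["DT", "NN", "VB", "DT", "JJ", "NN", "IN", "DT", "NN"]

# Parsed: the same sentence after a top-down segmentation, with each
# segment replaced by a single constituent label.
segments = ["NP", "VB", "NP", "PP"]

print(entropy_per_symbol(tags), entropy_per_symbol(segments))
```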
|
cmp-lg/9709010
|
Message-Passing Protocols for Real-World Parsing -- An Object-Oriented
Model and its Preliminary Evaluation
|
cmp-lg cs.CL
|
We argue for a performance-based design of natural language grammars and
their associated parsers in order to meet the constraints imposed by real-world
NLP. Our approach incorporates declarative and procedural knowledge about
language and language use within an object-oriented specification framework. We
discuss several message-passing protocols for parsing and provide reasons for
sacrificing completeness of the parse in favor of efficiency based on a
preliminary empirical evaluation.
|
cmp-lg/9709011
|
Off-line Parsability and the Well-foundedness of Subsumption
|
cmp-lg cs.CL
|
Typed feature structures (TFSs) are used extensively for the specification of
linguistic information in many formalisms. The subsumption relation orders TFSs
by their information content. We prove that subsumption of acyclic TFSs is
well-founded, whereas in the presence of cycles general TFS subsumption is not
well-founded. We show an application of this result for parsing, where the
well-foundedness of subsumption is used to guarantee termination for grammars
that are off-line parsable. We define a new version of off-line parsability
that is less strict than the existing one; thus termination is guaranteed for
parsing with a larger set of grammars.
|
cmp-lg/9709012
|
Using Single Layer Networks for Discrete, Sequential Data: An Example
from Natural Language Processing
|
cmp-lg cs.CL
|
A natural language parser which has been successfully implemented is
described. This is a hybrid system, in which neural networks operate within a
rule based framework. It can be accessed via telnet for users to try on their
own text. (For details, contact the author.) Tested on technical manuals, the
parser finds the subject and head of the subject in over 90% of declarative
sentences.
The neural processing components belong to the class of Generalized Single
Layer Networks (GSLN). In general, supervised, feed-forward networks need more
than one layer to process data. However, in some cases data can be
pre-processed with a non-linear transformation, and then presented in a
linearly separable form for subsequent processing by a single layer net. Such
networks offer advantages of functional transparency and operational speed.
For our parser, the initial stage of processing maps linguistic data onto a
higher order representation, which can then be analysed by a single layer
network. This transformation is supported by information theoretic analysis.
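A hedged illustration of the general principle rather than the parser itself: XOR is not linearly separable, but after a simple non-linear feature map (here, adding a product term) a single-layer perceptron learns it:

```python
import numpy as np

# XOR: not linearly separable in the raw input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

def expand(x):
    """Non-linear pre-processing: add the product term x1*x2."""
    return np.array([x[0], x[1], x[0] * x[1], 1.0])  # trailing 1.0 = bias

H = np.array([expand(x) for x in X])

# Train a single-layer perceptron on the expanded representation.
w = np.zeros(H.shape[1])
for _ in range(20):
    for h, t in zip(H, y):
        pred = 1 if w @ h > 0 else 0
        w += (t - pred) * h  # perceptron update rule

print([1 if w @ h > 0 else 0 for h in H])  # [0, 1, 1, 0]
```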
|
cmp-lg/9709013
|
An Abstract Machine for Unification Grammars
|
cmp-lg cs.CL
|
This work describes the design and implementation of an abstract machine,
Amalia, for the linguistic formalism ALE, which is based on typed feature
structures. This formalism is one of the most widely accepted in computational
linguistics and has been used for designing grammars in various linguistic
theories, most notably HPSG. Amalia is composed of data structures and a set of
instructions, augmented by a compiler from the grammatical formalism to the
abstract instructions, and a (portable) interpreter of the abstract
instructions. The effect of each instruction is defined using a low-level
language that can be executed on ordinary hardware.
The advantages of the abstract machine approach are twofold. From a
theoretical point of view, the abstract machine gives a well-defined
operational semantics to the grammatical formalism. This ensures that grammars
specified using our system are endowed with a well-defined meaning. It makes it
possible, for example, to formally verify the correctness of a compiler for
HPSG, given an independent definition. From a practical point of view, Amalia
is the first system that employs a direct compilation scheme for unification
grammars that are based on typed feature structures. The use of Amalia results
in much improved performance over existing systems.
In order to test the machine on a realistic application, we have developed a
small-scale, HPSG-based grammar for a fragment of the Hebrew language, using
Amalia as the development platform. This is the first application of HPSG to a
Semitic language.
|
cmp-lg/9709014
|
Amalia -- A Unified Platform for Parsing and Generation
|
cmp-lg cs.CL
|
Contemporary linguistic theories (in particular, HPSG) are declarative in
nature: they specify constraints on permissible structures, not how such
structures are to be computed. Grammars designed under such theories are,
therefore, suitable for both parsing and generation. However, practical
implementations of such theories don't usually support bidirectional processing
of grammars. We present a grammar development system that includes a compiler
of grammars (for parsing and generation) to abstract machine instructions, and
an interpreter for the abstract machine language. The generation compiler
inverts input grammars (designed for parsing) to a form more suitable for
generation. The compiled grammars are then executed by the interpreter using
one control strategy, regardless of whether the grammar is the original or the
inverted version. We thus obtain a unified, efficient platform for developing
reversible grammars.
|
cmp-lg/9709015
|
Segmentation of Expository Texts by Hierarchical Agglomerative
Clustering
|
cmp-lg cs.CL
|
We propose a method for segmentation of expository texts based on
hierarchical agglomerative clustering. The method uses paragraphs as the basic
segments for identifying hierarchical discourse structure in the text, applying
lexical similarity between them as the proximity test. Linear segmentation can
be induced from the identified structure through application of two simple
rules. However, the hierarchy can also be used for intelligent exploration of
the text. The proposed segmentation algorithm is evaluated against an accepted
linear segmentation method and shows comparable results.
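A minimal sketch of the clustering step under stated assumptions (bag-of-words paragraphs, cosine similarity as the proximity test, merging restricted to adjacent segments; all data hypothetical):

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a if w in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def segment(paragraphs):
    """Agglomeratively merge the most similar *adjacent* segments."""
    segs = [Counter(p.lower().split()) for p in paragraphs]
    spans = [(i, i) for i in range(len(segs))]
    merges = []
    while len(segs) > 1:
        sims = [cosine(segs[i], segs[i + 1]) for i in range(len(segs) - 1)]
        i = max(range(len(sims)), key=sims.__getitem__)
        merges.append((spans[i], spans[i + 1], sims[i]))
        segs[i:i + 2] = [segs[i] + segs[i + 1]]        # merge word counts
        spans[i:i + 2] = [(spans[i][0], spans[i + 1][1])]
    return merges  # low-similarity merges suggest linear segment breaks

paras = ["the cat sat on the mat",
         "the cat chased a mouse",
         "stock prices fell sharply",
         "markets reacted to the stock news"]
for m in segment(paras):
    print(m)
```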
|
cmp-lg/9710001
|
Use of Weighted Finite State Transducers in Part of Speech Tagging
|
cmp-lg cs.CL
|
This paper addresses issues in part of speech disambiguation using
finite-state transducers and presents two main contributions to the field. One
of them is the use of finite-state machines for part of speech tagging.
Linguistic and statistical information is represented in terms of weights on
transitions in weighted finite-state transducers. Another contribution is the
successful combination of techniques -- linguistic and statistical -- for word
disambiguation, compounded with the notion of word classes.
|
cmp-lg/9710002
|
Tagging French Without Lexical Probabilities -- Combining Linguistic
Knowledge And Statistical Learning
|
cmp-lg cs.CL
|
This paper explores morpho-syntactic ambiguities for French to develop a
strategy for part-of-speech disambiguation that a) reflects the complexity of
French as an inflected language, b) optimizes the estimation of probabilities,
c) allows the user flexibility in choosing a tagset. The problem in extracting
lexical probabilities from a limited training corpus is that the statistical
model may not necessarily represent the use of a particular word in a
particular context. In a highly morphologically inflected language, this
argument is particularly serious since a word can be tagged with a large number
of parts of speech. Due to the lack of sufficient training data, we argue
against estimating lexical probabilities to disambiguate parts of speech in
unrestricted texts. Instead, we use the strength of contextual probabilities
along with a feature we call ``genotype'', a set of tags associated with a
word. Using this knowledge, we have built a part-of-speech tagger that combines
linguistic and statistical approaches: contextual information is disambiguated
by linguistic rules and n-gram probabilities on parts of speech only are
estimated in order to disambiguate the remaining ambiguous tags.
|
cmp-lg/9710003
|
Disambiguating with Controlled Disjunctions
|
cmp-lg cs.CL
|
In this paper, we propose a disambiguating technique called controlled
disjunctions. This extension of the so-called named disjunctions relies on the
relations existing between feature values (covariation, control, etc.). We show
that controlled disjunctions can implement different kinds of ambiguities in a
consistent and homogeneous way. We describe the integration of controlled
disjunctions into an HPSG feature structure representation. Finally, we present
a direct implementation by means of delayed evaluation and we develop an
example within the functional programming paradigm.
|
cmp-lg/9710004
|
Parsing syllables: modeling OT computationally
|
cmp-lg cs.CL
|
In this paper, I propose to implement syllabification in OT as a parser. I
propose several innovations that result in a finite and small candidate set.
The candidate set problem is handled with several moves: i) MAX and DEP
violations are not hypothesized by the parser, ii) candidates are encoded
locally, and iii) EVAL is applied constraint by constraint.
The parser I propose is implemented in Prolog. It has a number of desirable
consequences. First, it runs and thus provides an existence proof that
syllabification can be implemented in OT. Second, constraints are implemented
as finite-state transducers. Third, the parser makes several interesting claims
about the phonological properties of so-called nonrecoverable insertions and
deletions. Fourth, the implementation suggests particular reformulations of
some of the benchmark constraints in the OT arsenal, e.g. *COMPLEX, PARSE,
ONSET, and NOCODA.
|
cmp-lg/9710005
|
Attaching Multiple Prepositional Phrases: Generalized Backed-off
Estimation
|
cmp-lg cs.CL
|
There has recently been considerable interest in the use of lexically-based
statistical techniques to resolve prepositional phrase attachments. To our
knowledge, however, these investigations have only considered the problem of
attaching the first PP, i.e., in a [V NP PP] configuration. In this paper, we
consider one technique which has been successfully applied to this problem,
backed-off estimation, and demonstrate how it can be extended to deal with the
problem of multiple PP attachment. The multiple PP attachment introduces two
related problems: sparser data (since multiple PPs are naturally rarer), and
greater syntactic ambiguity (more attachment configurations which must be
distinguished). We present an algorithm which solves this problem through
re-use of the relatively rich data obtained from first PP training, in
resolving subsequent PP attachments.
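The estimator itself is not given in the abstract; the sketch below illustrates the generic back-off idea it builds on, falling back from specific tuple counts to less specific ones (all counts hypothetical):

```python
from collections import Counter

# Hypothetical training counts over attachment decisions.
# Keys become less specific at each back-off level.
counts3 = {("saw", "man", "with"): Counter({"noun": 3, "verb": 1})}
counts2 = {("saw", "with"): Counter({"verb": 5, "noun": 2})}
counts1 = {"with": Counter({"noun": 40, "verb": 35})}

def attach(verb, noun, prep):
    """Back off from the most specific evidence to the least specific."""
    for table, key in ((counts3, (verb, noun, prep)),
                       (counts2, (verb, prep)),
                       (counts1, prep)):
        c = table.get(key)
        if c:
            best = max(c, key=c.get)
            return best, c[best] / sum(c.values())
    return "noun", 0.5  # default: low attachment

print(attach("saw", "man", "with"))    # uses full-tuple counts
print(attach("ate", "pizza", "with"))  # backs off to the preposition alone
```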
|
cmp-lg/9710006
|
Learning Features that Predict Cue Usage
|
cmp-lg cs.CL
|
Our goal is to identify the features that predict the occurrence and
placement of discourse cues in tutorial explanations in order to aid in the
automatic generation of explanations. Previous attempts to devise rules for
text generation were based on intuition or small numbers of constructed
examples. We apply a machine learning program, C4.5, to induce decision trees
for cue occurrence and placement from a corpus of data coded for a variety of
features previously thought to affect cue usage. Our experiments enable us to
identify the features with most predictive power, and show that machine
learning can be used to induce decision trees useful for text generation.
|
cmp-lg/9710007
|
A Corpus-Based Investigation of Definite Description Use
|
cmp-lg cs.CL
|
We present the results of a study of definite description use in written
texts aimed at assessing the feasibility of annotating corpora with information
about definite description interpretation. We ran two experiments, in which
subjects were asked to classify the uses of definite descriptions in a corpus
of 33 newspaper articles, containing a total of 1412 definite descriptions. We
measured the agreement among annotators about the classes assigned to definite
descriptions, as well as the agreement about the antecedent assigned to those
definites that the annotators classified as being related to an antecedent in
the text. The most interesting result of this study from a corpus annotation
perspective was the rather low agreement (K=0.63) that we obtained using
versions of Hawkins' and Prince's classification schemes; better results
(K=0.76) were obtained using the simplified scheme proposed by Fraurud that
includes only two classes, first-mention and subsequent-mention. The agreement
about antecedents was also not complete. These findings raise questions
concerning the strategy of evaluating systems for definite description
interpretation by comparing their results with a standardized annotation. From
a linguistic point of view, the most interesting observations were the great
number of discourse-new definites in our corpus (in one of our experiments,
about 50% of the definites in the collection were classified as discourse-new,
30% as anaphoric, and 18% as associative/bridging) and the presence of
definites which did not seem to require a complete disambiguation.
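The K values above are chance-corrected agreement coefficients; for two annotators the computation reduces to Cohen's kappa, sketched below on hypothetical labels:

```python
from collections import Counter

def kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(labels_a)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_exp = sum(ca[k] / n * cb[k] / n for k in ca)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical first-mention vs. subsequent-mention annotations.
a = ["first", "subseq", "first", "first", "subseq", "first"]
b = ["first", "subseq", "subseq", "first", "subseq", "first"]
print(round(kappa(a, b), 2))
```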
|
cmp-lg/9710008
|
Probabilistic Event Categorization
|
cmp-lg cs.CL
|
This paper describes the automation of a new text categorization task. The
categories assigned in this task are more syntactically, semantically, and
contextually complex than those typically assigned by fully automatic systems
that process unseen test data. Our system for assigning these categories is a
probabilistic classifier, developed with a recent method for formulating a
probabilistic model from a predefined set of potential features. This paper
focuses on feature selection. It presents a number of fully automatic features.
It identifies and evaluates various approaches to organizing collocational
properties into features, and presents the results of experiments covarying
type of organization and type of property. We find that one organization is not
best for all kinds of properties, so this is an experimental parameter worth
investigating in NLP systems. In addition, the results suggest a way to take
advantage of properties that are low frequency but strongly indicative of a
class. The problems of recognizing and organizing the various kinds of
contextual information required to perform a linguistically complex
categorization task have rarely been systematically investigated in NLP.
|
cmp-lg/9711001
|
Probabilistic Constraint Logic Programming
|
cmp-lg cs.CL
|
This paper addresses two central problems for probabilistic processing
models: parameter estimation from incomplete data and efficient retrieval of
most probable analyses. These questions have been answered satisfactorily only
for probabilistic regular and context-free models. We address these problems
for a more expressive probabilistic constraint logic programming model. We
present a log-linear probability model for probabilistic constraint logic
programming. On top of this model we define an algorithm to estimate the
parameters and to select the properties of log-linear models from incomplete
data. This algorithm is an extension of the improved iterative scaling
algorithm of Della-Pietra, Della-Pietra, and Lafferty (1995). Our algorithm
applies to log-linear models in general and is accompanied by suitable
approximation methods when applied to large data spaces. Furthermore, we
present an approach for searching for most probable analyses of the
probabilistic constraint logic programming model. This method can be applied to
the ambiguity resolution problem in natural language processing applications.
|
cmp-lg/9711002
|
Approximating Context-Free Grammars with a Finite-State Calculus
|
cmp-lg cs.CL
|
Although adequate models of human language for syntactic analysis and
semantic interpretation are of at least context-free complexity, for
applications such as speech processing in which speed is important finite-state
models are often preferred. These requirements may be reconciled by using the
more complex grammar to automatically derive a finite-state approximation which
can then be used as a filter to guide speech recognition or to reject many
hypotheses at an early stage of processing. A method is presented here for
calculating such finite-state approximations from context-free grammars. It is
essentially different from the algorithm introduced by Pereira and Wright
(1991; 1996), is faster in some cases, and has the advantage of being
open-ended and adaptable.
|
cmp-lg/9711003
|
Probabilistic Parsing Using Left Corner Language Models
|
cmp-lg cs.CL
|
We introduce a novel parser based on a probabilistic version of a left-corner
parser. The left-corner strategy is attractive because rule probabilities can
be conditioned on both top-down goals and bottom-up derivations. We develop the
underlying theory and explain how a grammar can be induced from analyzed data.
We show that the left-corner approach provides an advantage over simple
top-down probabilistic context-free grammars in parsing the Wall Street Journal
using a grammar induced from the Penn Treebank. We also conclude that the Penn
Treebank provides a fairly weak testbed due to the flatness of its bracketings
and to the obvious overgeneration and undergeneration of its induced grammar.
|
cmp-lg/9711004
|
Variation and Synthetic Speech
|
cmp-lg cs.CL
|
We describe the approach to linguistic variation taken by the Motorola speech
synthesizer. A pan-dialectal pronunciation dictionary is described, which
serves as the training data for a neural network based letter-to-sound
converter. Subsequent to dictionary retrieval or letter-to-sound generation,
pronunciations are submitted to a neural network based postlexical module. The
postlexical module has been trained on aligned dictionary pronunciations and
hand-labeled narrow phonetic transcriptions. This architecture permits the
learning of individual postlexical variation, and can be retrained for each
speaker whose voice is being modeled for synthesis. Learning variation in this
way can result in greater naturalness for the synthetic speech that is produced
by the system.
|
cmp-lg/9711005
|
Some apparently disjoint aims and requirements for grammar development
environments: the case of natural language generation
|
cmp-lg cs.CL
|
Grammar development environments (GDE's) for analysis and for generation have
not yet come together. Despite the fact that analysis-oriented GDE's (such as
ALEP) may include some possibility of sentence generation, the development
techniques and kinds of resources suggested are apparently not those required
for practical, large-scale natural language generation work. Indeed, there is
no use of `standard' (i.e., analysis-oriented) GDE's in current
projects/applications targeting the generation of fluent, coherent texts. This
unsatisfactory situation requires some analysis and explanation, which this
paper attempts using as an example an extensive GDE for generation. The support
provided for distributed large-scale grammar development, multilinguality, and
resource maintenance is discussed and contrasted with analysis-oriented
approaches.
|
cmp-lg/9711006
|
Contextual Information and Specific Language Models for Spoken Language
Understanding
|
cmp-lg cs.CL
|
In this paper we explain how contextual expectations are generated and used
in the task-oriented spoken language understanding system Dialogos. The hard
task of recognizing spontaneous speech on the telephone may greatly benefit
from the use of specific language models during the recognition of callers'
utterances. By 'specific language models' we mean a set of language models that
are trained on contextually appropriate data, and that are used during
different states of the dialogue on the basis of the information sent to the
acoustic level by the dialogue management module. In this paper we describe how
the specific language models are obtained on the basis of contextual
information. The experimental results we report show that recognition and
understanding performance is improved thanks to the use of specific language
models.
|
cmp-lg/9711007
|
Language Modelling For Task-Oriented Domains
|
cmp-lg cs.CL
|
This paper is focused on the language modelling for task-oriented domains and
presents an accurate analysis of the utterances acquired by the Dialogos spoken
dialogue system. Dialogos allows access to the Italian Railways timetable by
using the telephone over the public network. The language modelling aspects of
specificity and behaviour on rare events are studied. A technique for making a
language model more robust, based on sentences generated by grammars, is
presented. Experimental results show the benefit of the proposed technique. The
performance gain of language models created using grammars over standard ones
is greater when the amount of training material is limited.
Therefore this technique can be especially advantageous for the development
of language models in a new domain.
|
cmp-lg/9711008
|
On the use of expectations for detecting and repairing human-machine
miscommunication
|
cmp-lg cs.CL
|
In this paper I describe how miscommunication problems are dealt with in the
spoken language system DIALOGOS. The dialogue module of the system exploits
dialogic expectations in a twofold way: to model what future user utterances
might be about (predictions), and to account for how the user's next utterance may
be related to previous ones in the ongoing interaction (pragmatic-based
expectations). The analysis starts from the hypothesis that the occurrence of
miscommunication is concomitant with two pragmatic phenomena: the deviation of
the user from the expected behaviour and the generation of a conversational
implicature. A preliminary evaluation of a large amount of interactions between
subjects and DIALOGOS shows that the system performance is enhanced by the use
of both predictions and pragmatic-based expectations.
|
cmp-lg/9711009
|
Towards an Improved Performance Measure for Language Models
|
cmp-lg cs.CL
|
In this paper a first attempt at deriving an improved performance measure for
language models, the probability ratio measure (PRM), is described. In a proof
of concept experiment, it is shown that PRM correlates better with recognition
accuracy and can lead to better recognition results when used as the
optimisation criterion of a clustering algorithm. In spite of the approximations
and limitations of this preliminary work, the results are very encouraging and
should justify more work along the same lines.
|
cmp-lg/9711010
|
Application-driven automatic subgrammar extraction
|
cmp-lg cs.CL
|
The space and run-time requirements of broad coverage grammars appear
unreasonably large for many applications in relation to the relative simplicity of
the task at hand. On the other hand, handcrafted development of
application-dependent grammars is in danger of duplicating work which is then
difficult to re-use in other contexts of application. To overcome this problem,
we present in this paper a procedure for the automatic extraction of
application-tuned consistent subgrammars from proven large-scale generation
grammars. The procedure has been implemented for large-scale systemic grammars
and builds on the formal equivalence between systemic grammars and typed
unification based grammars. Its evaluation for the generation of encyclopedia
entries is described, and directions of future development, applicability, and
extensions are discussed.
|
cmp-lg/9711011
|
The effect of alternative tree representations on tree bank grammars
|
cmp-lg cs.CL
|
The performance of PCFGs estimated from tree banks is sensitive to the
particular way in which linguistic constructions are represented as trees in
the tree bank. This paper presents a theoretical analysis of the effect of
different tree representations for PP attachment on PCFG models, and introduces
a new methodology for empirically examining such effects using tree
transformations. It shows that one transformation, which copies the label of a
parent node onto the labels of its children, can improve the performance of a
PCFG model in terms of labelled precision and recall on held out data from 73%
(precision) and 69% (recall) to 80% and 79% respectively. It also points out
that if only maximum likelihood parses are of interest then many productions
can be ignored, since they are subsumed by combinations of other productions in
the grammar. In the Penn II tree bank grammar, almost 9% of productions are
subsumed in this way.
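A minimal sketch of the parent-annotation transformation on trees encoded as nested lists (example tree hypothetical):

```python
def annotate_parents(tree, parent=None):
    """Copy each node's parent label onto its own label (NP -> NP^S)."""
    if isinstance(tree, str):          # leaf: a word, left unchanged
        return tree
    label, children = tree[0], tree[1:]
    new_label = f"{label}^{parent}" if parent else label
    return [new_label] + [annotate_parents(c, label) for c in children]

t = ["S", ["NP", "she"], ["VP", ["V", "saw"], ["NP", "stars"]]]
print(annotate_parents(t))
# ['S', ['NP^S', 'she'], ['VP^S', ['V^VP', 'saw'], ['NP^VP', 'stars']]]
```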
|
cmp-lg/9711012
|
Proof Nets and the Complexity of Processing Center-Embedded
Constructions
|
cmp-lg cs.CL
|
This paper shows how proof nets can be used to formalize the notion of
``incomplete dependency'' used in psycholinguistic theories of the
unacceptability of center-embedded constructions. Such theories of human
language processing can usually be restated in terms of geometrical constraints
on proof nets. The paper ends with a discussion of the relationship between
these constraints and incremental semantic interpretation.
|
cmp-lg/9711013
|
Features as Resources in R-LFG
|
cmp-lg cs.CL
|
This paper introduces a non-unification-based version of LFG called R-LFG
(Resource-based Lexical Functional Grammar), which combines elements from both
LFG and Linear Logic. The paper argues that a resource sensitive account
provides a simpler treatment of many linguistic uses of non-monotonic devices
in LFG, such as existential constraints and constraint equations.
|
cmp-lg/9711014
|
Type-driven semantic interpretation and feature dependencies in R-LFG
|
cmp-lg cs.CL
|
Once one has enriched LFG's formal machinery with the linear logic mechanisms
needed for semantic interpretation as proposed by Dalrymple et al., it is
natural to ask whether these make any existing components of LFG redundant. As
Dalrymple and her colleagues note, LFG's f-structure completeness and coherence
constraints fall out as a by-product of the linear logic machinery they propose
for semantic interpretation, thus making those f-structure mechanisms
redundant. Given that linear logic machinery or something like it is
independently needed for semantic interpretation, it seems reasonable to
explore the extent to which it is capable of handling feature structure
constraints as well.
R-LFG represents the extreme position that all linguistically required
feature structure dependencies can be captured by the resource-accounting
machinery of a linear or similar logic independently needed for semantic
interpretation, making LFG's unification machinery redundant. The goal is to
show that LFG linguistic analyses can be expressed as clearly and perspicuously
using the smaller set of mechanisms of R-LFG as they can using the much larger
set of unification-based mechanisms in LFG: if this is the case then we will
have shown that positing these extra f-structure mechanisms is not
linguistically warranted.
|
cmp-lg/9712001
|
Applying Explanation-based Learning to Control and Speeding-up Natural
Language Generation
|
cmp-lg cs.CL
|
This paper presents a method for the automatic extraction of subgrammars to
control and speed up natural language generation (NLG). The method is based on
explanation-based learning (EBL). The main advantage of the proposed new
method for NLG is that the complexity of the grammatical decision making
process during NLG can be vastly reduced, because the EBL method supports the
adaptation of an NLG system to a particular use of a language.
|
cmp-lg/9712002
|
Machine Learning of User Profiles: Representational Issues
|
cmp-lg cs.CL cs.LG
|
As more information becomes available electronically, tools for finding
information of interest to users become increasingly important. The goal of
the research described here is to build a system for generating comprehensible
user profiles that accurately capture user interest with minimum user
interaction. The research described here focuses on the importance of a
suitable generalization hierarchy and representation for learning profiles
which are predictively accurate and comprehensible. In our experiments we
evaluated both traditional features based on weighted term vectors as well as
subject features corresponding to categories which could be drawn from a
thesaurus. Our experiments, conducted in the context of a content-based
profiling system for on-line newspapers on the World Wide Web (the IDD News
Browser), demonstrate the importance of a generalization hierarchy and the
promise of combining natural language processing techniques with machine
learning (ML) to address an information retrieval (IR) problem.
|
cmp-lg/9712003
|
Context as a Spurious Concept
|
cmp-lg cs.CL
|
I take issue with AI formalizations of context, primarily the formalization
by McCarthy and Buvac, that regard context as an undefined primitive whose
formalization can be the same in many different kinds of AI tasks. In
particular, any theory of context in natural language must take the special
nature of natural language into account and cannot regard context simply as an
undefined primitive. I show that there is no such thing as a coherent theory of
context simpliciter -- context pure and simple -- and that context in natural
language is not the same kind of thing as context in KR. In natural language,
context is constructed by the speaker and the interpreter, and both have
considerable discretion in so doing. Therefore, a formalization based on
pre-defined contexts and pre-defined `lifting axioms' cannot account for how
context is used in real-world language.
|
cmp-lg/9712004
|
Multi-document Summarization by Graph Search and Matching
|
cmp-lg cs.CL
|
We describe a new method for summarizing similarities and differences in a
pair of related documents using a graph representation for text. Concepts
denoted by words, phrases, and proper names in the document are represented
positionally as nodes in the graph along with edges corresponding to semantic
relations between items. Given a perspective in terms of which the pair of
documents is to be summarized, the algorithm first uses a spreading activation
technique to discover, in each document, nodes semantically related to the
topic. The activated graphs of each document are then matched to yield a graph
corresponding to similarities and differences between the pair, which is
rendered in natural language. An evaluation of these techniques has been
carried out.
|
cmp-lg/9712005
|
Topic Graph Generation for Query Navigation: Use of Frequency Classes
for Topic Extraction
|
cmp-lg cs.CL
|
To make an interactive guidance mechanism for document retrieval systems, we
developed a user-interface which presents users with a visualized map of topics
at each stage of the retrieval process. Topic words are automatically extracted by
frequency analysis and the strength of the relationships between topic words is
measured by their co-occurrence. A major factor affecting a user's impression
of a given topic word graph is the balance between common topic words and
specific topic words. By using frequency classes for topic word extraction, we
made it possible to select a well-balanced set of topic words, and to adjust the
balance of common and specific topic words.
|
cmp-lg/9712006
|
"I don't believe in word senses"
|
cmp-lg cs.CL
|
Word sense disambiguation assumes word senses. Within the lexicography and
linguistics literature, they are known to be very slippery entities. The paper
looks at problems with existing accounts of `word sense' and describes the
various kinds of ways in which a word's meaning can deviate from its core
meaning. An analysis is presented in which word senses are abstractions from
clusters of corpus citations, in accordance with current lexicographic
practice. The corpus citations, not the word senses, are the basic objects in
the ontology. The corpus citations will be clustered into senses according to
the purposes of whoever or whatever does the clustering. In the absence of such
purposes, word senses do not exist.
Word sense disambiguation also needs a set of word senses to disambiguate
between. In most recent work, the set has been taken from a general-purpose
lexical resource, with the assumption that the lexical resource describes the
word senses of English/French/..., between which NLP applications will need to
disambiguate. The implication of the paper is, by contrast, that word senses
exist only relative to a task.
|
cmp-lg/9712007
|
Foreground and Background Lexicons and Word Sense Disambiguation for
Information Extraction
|
cmp-lg cs.CL
|
Lexicon acquisition from machine-readable dictionaries and corpora is
currently a dynamic field of research, yet it is often not clear how lexical
information so acquired can be used, or how it relates to structured meaning
representations. In this paper I look at this issue in relation to Information
Extraction (hereafter IE), and one subtask for which both lexical and general
knowledge are required, Word Sense Disambiguation (WSD). The analysis is based
on the widely-used, but little-discussed distinction between an IE system's
foreground lexicon, containing the domain's key terms which map onto the
database fields of the output formalism, and the background lexicon, containing
the remainder of the vocabulary. For the foreground lexicon, human lexicography
is required. For the background lexicon, automatic acquisition is appropriate.
For the foreground lexicon, WSD will occur as a by-product of finding a
coherent semantic interpretation of the input. WSD techniques as discussed in
recent literature are suited only to the background lexicon. Once the
foreground/background distinction is developed, there is a match between what
is possible, given the state of the art in WSD, and what is required, for
high-quality IE.
|
cmp-lg/9712008
|
What is word sense disambiguation good for?
|
cmp-lg cs.CL
|
Word sense disambiguation has developed as a sub-area of natural language
processing, as if, like parsing, it was a well-defined task which was a
pre-requisite to a wide range of language-understanding applications. First, I
review earlier work which shows that a set of senses for a word is only ever
defined relative to a particular human purpose, and that a view of word senses
as part of the linguistic furniture lacks theoretical underpinnings. Then, I
investigate whether and how word sense ambiguity is in fact a problem for
different varieties of NLP application.
|
cmp-lg/9712009
|
Speech Repairs, Intonational Boundaries and Discourse Markers: Modeling
Speakers' Utterances in Spoken Dialog
|
cmp-lg cs.CL
|
In this thesis, we present a statistical language model for resolving speech
repairs, intonational boundaries and discourse markers. Rather than finding the
best word interpretation for an acoustic signal, we redefine the speech
recognition problem to so that it also identifies the POS tags, discourse
markers, speech repairs and intonational phrase endings (a major cue in
determining utterance units). Adding these extra elements to the speech
recognition problem actually allows it to better predict the words involved,
since we are able to make use of the predictions of boundary tones, discourse
markers and speech repairs to better account for what word will occur next.
Furthermore, we can take advantage of acoustic information, such as silence
information, which tends to co-occur with speech repairs and intonational
phrase endings, that current language models can only regard as noise in the
acoustic signal. The output of this language model is a much fuller account of
the speaker's turn, with part-of-speech assigned to each word, intonation
phrase endings and discourse markers identified, and speech repairs detected
and corrected. In fact, the identification of the intonational phrase endings,
discourse markers, and resolution of the speech repairs allows the speech
recognizer to model the speaker's utterances, rather than simply the words
involved, and thus it can return a more meaningful analysis of the speaker's
turn for later processing.
|
cmp-lg/9712010
|
Orthographic Structuring of Human Speech and Texts: Linguistic
Application of Recurrence Quantification Analysis
|
cmp-lg cs.CL
|
A methodology based upon recurrence quantification analysis is proposed for
the study of orthographic structure of written texts. Five different
orthographic data sets (20th century Italian poems, 20th century American
poems, contemporary Swedish poems with their corresponding Italian
translations, Italian speech samples, and American speech samples) were
subjected to recurrence quantification analysis, a procedure which has been
found to be diagnostically useful in the quantitative assessment of ordered
series in fields such as physics, molecular dynamics, physiology, and general
signal processing. Recurrence quantification was developed from recurrence
plots as applied to the analysis of nonlinear, complex systems in the physical
sciences, and is based on the computation of a distance matrix of the elements
of an ordered series (in this case the letters constituting selected speech and
poetic texts). From a strictly mathematical view, the results show the
possibility of demonstrating invariance between different language exemplars
despite the apparent low-level of coding (orthography). Comparison with the
actual texts confirms the ability of the method to reveal recurrent structures,
and their complexity. Using poems as a reference standard for judging speech
complexity, the technique exhibits language independence, order dependence and
freedom from pure statistical characteristics of studied sequences, as well as
consistency with easily identifiable texts. Such studies may provide
phenomenological markers of hidden structure as coded by the purely
orthographic level.
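A hedged sketch of the core computation, using the simplest possible recurrence criterion (two positions recur when their letter windows match exactly; real RQA thresholds a distance matrix over embedded vectors):

```python
import numpy as np

def recurrence_rate(text, embed=2):
    """Percent recurrence of an orthographic sequence.

    Embed the letter sequence in overlapping windows of length `embed`;
    two positions recur when their windows match exactly.
    """
    s = [c for c in text.lower() if c.isalpha()]
    windows = [tuple(s[i:i + embed]) for i in range(len(s) - embed + 1)]
    n = len(windows)
    r = np.array([[w1 == w2 for w2 in windows] for w1 in windows])
    np.fill_diagonal(r, False)            # exclude the trivial self-matches
    return 100.0 * r.sum() / (n * (n - 1))

print(recurrence_rate("the cat sat on the mat"))
```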
|
cmp-lg/9801001
|
Hierarchical Non-Emitting Markov Models
|
cmp-lg cs.CL
|
We describe a simple variant of the interpolated Markov model with
non-emitting state transitions and prove that it is strictly more powerful than
any Markov model. More importantly, the non-emitting model outperforms the
classic interpolated model on natural language texts under a wide range of
experimental conditions, with only a modest increase in computational
requirements. The non-emitting model is also much less prone to overfitting.
Keywords: Markov model, interpolated Markov model, hidden Markov model,
mixture modeling, non-emitting state transitions, state-conditional
interpolation, statistical language model, discrete time series, Brown corpus,
Wall Street Journal.
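For context, a sketch of the classic interpolated bigram model that the non-emitting variant is compared against (corpus and interpolation weight hypothetical; the weight would normally be estimated, e.g. by EM):

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
lam = 0.7  # interpolation weight (hypothetical, not estimated here)

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
N = len(corpus)

def p_interp(w, prev):
    """P(w | prev) = lam * P_ML(w | prev) + (1 - lam) * P_ML(w)."""
    p_bi = bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0
    p_uni = unigrams[w] / N
    return lam * p_bi + (1 - lam) * p_uni

print(p_interp("cat", "the"))  # mixes bigram and unigram evidence
```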
|
cmp-lg/9801002
|
Identifying Discourse Markers in Spoken Dialog
|
cmp-lg cs.CL
|
In this paper, we present a method for identifying discourse marker usage in
spontaneous speech based on machine learning. Discourse markers are denoted by
special POS tags, and thus the process of POS tagging can be used to identify
discourse markers. By incorporating POS tagging into language modeling,
discourse markers can be identified during speech recognition, in which the
timeliness of the information can be used to help predict the following words.
We contrast this approach with an alternative machine learning approach
proposed by Litman (1996). This paper also argues that discourse markers can be
used to help the hearer predict the role that the upcoming utterance plays in
the dialog. Thus discourse markers should provide valuable evidence for
automatic dialog act prediction.
|
cmp-lg/9801003
|
Do not forget: Full memory in memory-based learning of word
pronunciation
|
cmp-lg cs.CL
|
Memory-based learning, keeping full memory of learning material, appears to be a
viable approach to learning NLP tasks, and is often superior in generalisation
accuracy to eager learning approaches that abstract from learning material.
Here we investigate three partial memory-based learning approaches which remove
from memory specific task instance types estimated to be exceptional. The three
approaches each implement one heuristic function for estimating exceptionality
of instance types: (i) typicality, (ii) class prediction strength, and (iii)
friendly-neighbourhood size. Experiments are performed with the memory-based
learning algorithm IB1-IG trained on English word pronunciation. We find that
removing instance types with low prediction strength (ii) is the only tested
method which does not seriously harm generalisation accuracy. We conclude that
keeping full memory of types rather than tokens, and excluding minority
ambiguities appear to be the only performance-preserving optimisations of
memory-based learning.
|
cmp-lg/9801004
|
Modularity in inductively-learned word pronunciation systems
|
cmp-lg cs.CL
|
In leading morpho-phonological theories and state-of-the-art text-to-speech
systems it is assumed that word pronunciation cannot be learned or performed
without in-between analyses at several abstraction levels (e.g., morphological,
graphemic, phonemic, syllabic, and stress levels). We challenge this assumption
for the case of English word pronunciation. Using IGTree, an inductive-learning
decision-tree algorithm, we train and test three word-pronunciation systems in
which the number of abstraction levels (implemented as sequenced modules) is
reduced from five, via three, to one. The latter system, classifying letter
strings directly as mapping to phonemes with stress markers, yields
significantly better generalisation accuracies than the two multi-module
systems. Analyses of empirical results indicate that positive utility effects
of sequencing modules are outweighed by cascading errors passed on between
modules.
|
cmp-lg/9801005
|
A General, Sound and Efficient Natural Language Parsing Algorithm based
on Syntactic Constraints Propagation
|
cmp-lg cs.CL
|
This paper presents a new context-free parsing algorithm based on a
bidirectional strictly horizontal strategy which incorporates strong top-down
predictions (derivations and adjacencies). From a functional point of view, the
parser is able to propagate syntactic constraints reducing parsing ambiguity.
From a computational perspective, the algorithm includes different techniques
aimed at the improvement of the manipulation and representation of the
structures used.
|
cmp-lg/9802001
|
Look-Back and Look-Ahead in the Conversion of Hidden Markov Models into
Finite State Transducers
|
cmp-lg cs.CL
|
This paper describes the conversion of a Hidden Markov Model into a finite
state transducer that closely approximates the behavior of the stochastic
model. In some cases the transducer is equivalent to the HMM. This conversion
is especially advantageous for part-of-speech tagging because the resulting
transducer can be composed with other transducers that encode correction rules
for the most frequent tagging errors. The speed of tagging is also improved.
The described methods have been implemented and successfully tested.
|
cmp-lg/9802002
|
A Hybrid Environment for Syntax-Semantic Tagging
|
cmp-lg cs.CL
|
The thesis describes the application of the relaxation labelling algorithm to
NLP disambiguation. Language is modelled through context constraints inspired by
Constraint Grammars. The constraints enable the use of a real value stating
"compatibility". The technique is applied to POS tagging, Shallow Parsing and
Word Sense Disambiguation. Experiments and results are reported. The proposed
approach enables the use of multi-feature constraint models, the simultaneous
resolution of several NL disambiguation tasks, and the collaboration of
linguistic and statistical models.
|
cmp-lg/9803001
|
Automating Coreference: The Role of Annotated Training Data
|
cmp-lg cs.CL
|
We report here on a study of interannotator agreement in the coreference task
as defined by the Message Understanding Conference (MUC-6 and MUC-7). Based on
feedback from annotators, we clarified and simplified the annotation
specification. We then performed an analysis of disagreement among several
annotators, concluding that only 16% of the disagreements represented genuine
disagreement about coreference; the remainder of the cases were mostly
typographical errors or omissions, easily reconciled. Initially, we measured
interannotator agreement in the low 80s for precision and recall. To try to
improve upon this, we ran several experiments. In our final experiment, we
separated the tagging of candidate noun phrases from the linking of actual
coreferring expressions. This method shows promise - interannotator agreement
climbed to the low 90s - but it needs more extensive validation. These results
position the research community to broaden the coreference task to multiple
languages, and possibly to different kinds of coreference.
|
cmp-lg/9803002
|
Time, Tense and Aspect in Natural Language Database Interfaces
|
cmp-lg cs.CL
|
Most existing natural language database interfaces (NLDBs) were designed to
be used with database systems that provide very limited facilities for
manipulating time-dependent data, and they do not adequately support temporal
linguistic mechanisms (verb tenses, temporal adverbials, temporal subordinate
clauses, etc.). The database community is becoming increasingly interested in
temporal database systems, which are intended to store and manipulate in a
principled manner information not only about the present, but also about the
past and future. When interfacing to temporal databases, supporting temporal
linguistic mechanisms becomes crucial.
We present a framework for constructing natural language interfaces for
temporal databases (NLTDBs), that draws on research in tense and aspect
theories, temporal logics, and temporal databases. The framework consists of a
temporal intermediate representation language, called TOP, an HPSG grammar that
maps a wide range of questions involving temporal mechanisms to appropriate TOP
expressions, and a provably correct method for translating from TOP to TSQL2,
TSQL2 being a recently proposed temporal extension of the SQL database
language. This framework was employed to implement a prototype NLTDB using ALE
and Prolog.
|
cmp-lg/9803003
|
Nymble: a High-Performance Learning Name-finder
|
cmp-lg cs.CL
|
This paper presents a statistical, learned approach to finding names and
other non-recursive entities in text (as per the MUC-6 definition of the NE
task), using a variant of the standard hidden Markov model. We present our
justification for the problem and our approach, a detailed discussion of the
model itself and finally the successful results of this new approach.
|
cmp-lg/9804001
|
Graph Interpolation Grammars: a Rule-based Approach to the Incremental
Parsing of Natural Languages
|
cmp-lg cs.CL
|
Graph Interpolation Grammars are a declarative formalism with an operational
semantics. Their goal is to emulate salient features of the human parser, and
notably incrementality. The parsing process defined by GIGs incrementally
builds a syntactic representation of a sentence as each successive lexeme is
read. A GIG rule specifies a set of parse configurations that trigger its
application and an operation to perform on a matching configuration. Rules are
partly context-sensitive; furthermore, they are reversible, meaning that their
operations can be undone, which allows the parsing process to be
nondeterministic. These two factors confer enough expressive power to the
formalism for parsing natural languages.
|
cmp-lg/9804002
|
The Proper Treatment of Optimality in Computational Phonology
|
cmp-lg cs.CL
|
This paper presents a novel formalization of optimality theory. Unlike
previous treatments of optimality in computational linguistics, starting with
Ellison (1994), the new approach does not require any explicit marking and
counting of constraint violations. It is based on the notion of "lenient
composition," defined as the combination of ordinary composition and priority
union. If an underlying form has outputs that can meet a given constraint,
lenient composition enforces the constraint; if none of the output candidates
meet the constraint, lenient composition allows all of them. For the sake of
greater efficiency, we may "leniently compose" the GEN relation and all the
constraints into a single finite-state transducer that maps each underlying
form directly into its optimal surface realizations, and vice versa, without
ever producing any failing candidates. Seen from this perspective, optimality
theory is surprisingly similar to the two older strains of finite-state
phonology: classical rewrite systems and two-level models. In particular, the
ranking of optimality constraints corresponds to the ordering of rewrite rules.
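The effect of lenient composition is easy to reproduce on finite relations
represented as sets of (input, output) pairs; the sketch below assumes that
set-based encoding purely for illustration, whereas the paper works with
finite-state transducers.

    # Lenient composition on finite relations: where composing R with the
    # constraint C leaves outputs for an input, keep only those; where it
    # leaves none, fall back to all of R's outputs for that input.
    def compose(r, c):
        return {(x, z) for (x, y) in r for (y2, z) in c if y == y2}

    def lenient_compose(r, c):
        strict = compose(r, c)
        survivors = {x for (x, _) in strict}
        return strict | {(x, y) for (x, y) in r if x not in survivors}

    # Toy GEN and a *b filter (identity on b-free strings): "ab" keeps its
    # b-free candidate, while "ba", having no surviving candidate, keeps all.
    gen = {("ab", "ab"), ("ab", "a"), ("ba", "ba")}
    no_b = {(s, s) for s in ("a", "aa", "")}
    print(lenient_compose(gen, no_b))   # {('ab', 'a'), ('ba', 'ba')}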
|
cmp-lg/9804003
|
Treatment of Epsilon-Moves in Subset Construction
|
cmp-lg cs.CL
|
The paper discusses the problem of determinising finite-state automata
containing large numbers of epsilon-moves. Experiments with finite-state
approximations of natural language grammars often give rise to very large
automata with a very large number of epsilon-moves. The paper identifies three
subset construction algorithms which treat epsilon-moves. A number of
experiments have been performed which indicate that the algorithms differ
considerably in practice. Furthermore, the experiments suggest that the average
number of epsilon-moves per state can be used to predict which algorithm is
likely to perform best for a given input automaton.
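One member of this family of algorithms, determinisation with per-subset
epsilon-closure computed on the fly, can be sketched as follows; the paper
compares several variants, and this shows only one of them.

    # Subset construction over an automaton with epsilon-moves.
    # delta: (state, symbol) -> set of states; eps: state -> set of states.
    def eps_closure(states, eps):
        stack, seen = list(states), set(states)
        while stack:
            q = stack.pop()
            for r in eps.get(q, ()):
                if r not in seen:
                    seen.add(r)
                    stack.append(r)
        return frozenset(seen)

    def determinise(start, delta, eps, alphabet):
        q0 = eps_closure({start}, eps)
        dfa, todo = {}, [q0]
        while todo:
            subset = todo.pop()
            if subset in dfa:
                continue
            dfa[subset] = {}
            for a in alphabet:
                step = {r for q in subset for r in delta.get((q, a), ())}
                if step:
                    dfa[subset][a] = eps_closure(step, eps)
                    todo.append(dfa[subset][a])
        return q0, dfa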
|
cmp-lg/9804004
|
Corpus-Based Word Sense Disambiguation
|
cmp-lg cs.CL
|
Resolution of lexical ambiguity, commonly termed ``word sense
disambiguation'', is expected to improve the analytical accuracy for tasks
which are sensitive to lexical semantics. Such tasks include machine
translation, information retrieval, parsing, natural language understanding and
lexicography. Reflecting the growth in utilization of machine readable texts,
word sense disambiguation techniques have been explored variously in the
context of corpus-based approaches. Within one corpus-based framework, that is
the similarity-based method, systems use a database, in which example sentences
are manually annotated with correct word senses. Given an input, systems search
the database for the most similar example to the input. The lexical ambiguity
of a word contained in the input is resolved by selecting the sense annotation
of the retrieved example. In this research, we apply this method to the resolution
of verbal polysemy, in which the similarity between two examples is computed as
the weighted average of the similarity between complements governed by a target
polysemous verb. We explore similarity-based verb sense disambiguation focusing
on the following three methods. First, we propose a weighting schema for each
verb complement in the similarity computation. Second, in similarity-based
techniques, the overhead for manual supervision and searching the large-sized
database can be prohibitive. To resolve this problem, we propose a method for
selecting a small number of effective examples for system usage. Finally, the
efficiency of our system is highly dependent on the similarity computation
used. To maximize efficiency, we propose a method which integrates the
advantages of previous methods for similarity computation.
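The weighted-average similarity at the heart of the method can be sketched as
below; the complement-slot names, the weighting scheme and the word-level
similarity function are placeholders, since the paper derives its weights and
similarities from its own resources.

    # Similarity between two verb examples as a weighted average over the
    # complement slots they share; disambiguation picks the sense of the
    # most similar annotated example in the database.
    def example_similarity(ex1, ex2, weights, word_sim):
        shared = [s for s in ex1 if s in ex2 and s in weights]
        if not shared:
            return 0.0
        total = sum(weights[s] * word_sim(ex1[s], ex2[s]) for s in shared)
        return total / sum(weights[s] for s in shared)

    def disambiguate(input_ex, database, weights, word_sim):
        best = max(database, key=lambda d: example_similarity(
            input_ex, d["example"], weights, word_sim))
        return best["sense"]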
|
cmp-lg/9804005
|
On the existence of certain total recursive functions in nontrivial
axiom systems, I
|
cmp-lg cs.CL
|
We investigate the existence of a class of ZFC-provably total recursive unary
functions, given certain constraints, and apply some of those results to show
that, for $\Sigma_1$-sound set theory, ZFC $\not\vdash P < NP$.
|
cmp-lg/9805001
|
Valence Induction with a Head-Lexicalized PCFG
|
cmp-lg cs.CL
|
This paper presents an experiment in learning valences (subcategorization
frames) from a 50 million word text corpus, based on a lexicalized
probabilistic context free grammar. Distributions are estimated using a
modified EM algorithm. We evaluate the acquired lexicon both by comparison with
a dictionary and by entropy measures. Results show that our model produces
highly accurate frame distributions.
|
cmp-lg/9805002
|
Group Theory and Grammatical Description
|
cmp-lg cs.CL
|
This paper presents a model for linguistic description based on group theory.
A grammar in this model, or "G-grammar", is a collection of lexical expressions
which are products of logical forms, phonological forms, and their inverses.
Phrasal descriptions are obtained by forming products of lexical expressions
and by cancelling contiguous elements which are inverses of each other. We show
applications of this model to parsing and generation, long-distance movement,
and quantifier scoping. We believe that by moving from the free monoid over a
vocabulary V --- standard in formal language studies --- to the free group over
V, deep affinities between linguistic phenomena and classical algebra come to
the surface, and that the consequences of tapping the mathematical connections
thus established could be considerable.
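The mechanical core, cancellation of contiguous inverse elements in the free
group, is small enough to sketch; the pair encoding below is an illustrative
assumption, not the paper's notation.

    # Free-group reduction: an element is a (symbol, exponent) pair with
    # exponent +1 or -1; adjacent inverses cancel via a single stack pass.
    def reduce_word(word):
        stack = []
        for sym, exp in word:
            if stack and stack[-1] == (sym, -exp):
                stack.pop()          # x . x^-1 (or x^-1 . x) cancels
            else:
                stack.append((sym, exp))
        return stack

    # A product expecting an NP, followed by an NP, reduces to a bare S.
    print(reduce_word([("s", 1), ("np", -1), ("np", 1)]))   # [('s', 1)]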
|
cmp-lg/9805003
|
Models of Co-occurrence
|
cmp-lg cs.CL
|
A model of co-occurrence in bitext is a boolean predicate that indicates
whether a given pair of word tokens co-occur in corresponding regions of the
bitext space. Co-occurrence is a precondition for the possibility that two
tokens might be mutual translations. Models of co-occurrence are the glue that
binds methods for mapping bitext correspondence with methods for estimating
translation models into an integrated system for exploiting parallel texts.
Different models of co-occurrence are possible, depending on the kind of bitext
map that is available, the language-specific information that is available, and
the assumptions made about the nature of translational equivalence. Although
most statistical translation models are based on models of co-occurrence,
modeling co-occurrence correctly is more difficult than it may at first appear.
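In the simplest setting, where the bitext map is given as a list of aligned
segment pairs, the predicate reduces to the sketch below; the richer models
the article discusses relax exactly this segment-level assumption.

    # Boolean co-occurrence: two tokens co-occur iff they appear in
    # corresponding segments of the bitext map.
    def cooccur(u, v, aligned_segments):
        return any(u in src and v in tgt for src, tgt in aligned_segments)

    bitext = [(["the", "house"], ["la", "maison"]),
              (["a", "dog"], ["un", "chien"])]
    print(cooccur("house", "maison", bitext))   # True
    print(cooccur("house", "chien", bitext))    # False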
|
cmp-lg/9805004
|
Annotation Style Guide for the Blinker Project
|
cmp-lg cs.CL
|
This annotation style guide was created by and for the Blinker project at the
University of Pennsylvania. The Blinker project was named after the
``bilingual linker'' GUI, which was created to enable bilingual annotators to
``link'' word tokens that are mutual translations in parallel texts. The
parallel text chosen for this project was the Bible, because it is probably the
easiest text to obtain in electronic form in multiple languages. The languages
involved were English and French, because, of the languages with which the
project co-ordinator was familiar, these were the two for which a sufficient
number of annotators was likely to be found.
|
cmp-lg/9805005
|
Manual Annotation of Translational Equivalence: The Blinker Project
|
cmp-lg cs.CL
|
Bilingual annotators were paid to link roughly sixteen thousand corresponding
words between on-line versions of the Bible in modern French and modern
English. These annotations are freely available to the research community from
http://www.cis.upenn.edu/~melamed . The annotations can be used for several
purposes. First, they can be used as a standard data set for developing and
testing translation lexicons and statistical translation models. Second,
researchers in lexical semantics will be able to mine the annotations for
insights about cross-linguistic lexicalization patterns. Third, the annotations
can be used in research into certain recently proposed methods for monolingual
word-sense disambiguation. This paper describes the annotated texts, the
specially-designed annotation tool, and the strategies employed to increase the
consistency of the annotations. The annotation process was repeated five times
by different annotators. Inter-annotator agreement rates indicate that the
annotations are reasonably reliable and that the method is easy to replicate.
|
cmp-lg/9805006
|
Word-to-Word Models of Translational Equivalence
|
cmp-lg cs.CL
|
Parallel texts (bitexts) have properties that distinguish them from other
kinds of parallel data. First, most words translate to only one other word.
Second, bitext correspondence is noisy. This article presents methods for
biasing statistical translation models to reflect these properties. Analysis of
the expected behavior of these biases in the presence of sparse data predicts
that they will result in more accurate models. The prediction is confirmed by
evaluation with respect to a gold standard -- translation models that are
biased in this fashion are significantly more accurate than a baseline
knowledge-poor model. This article also shows how a statistical translation
model can take advantage of various kinds of pre-existing knowledge that might
be available about particular language pairs. Even the simplest kinds of
language-specific knowledge, such as the distinction between content words and
function words, are shown to reliably boost translation model performance on
some tasks. Statistical models that are informed by pre-existing knowledge
about the model domain combine the best of both the rationalist and empiricist
traditions.
|
cmp-lg/9805007
|
Parsing Inside-Out
|
cmp-lg cs.CL
|
The inside-outside probabilities are typically used for reestimating
Probabilistic Context Free Grammars (PCFGs), just as the forward-backward
probabilities are typically used for reestimating HMMs. I show several novel
uses, including improving parser accuracy by matching parsing algorithms to
evaluation criteria; speeding up DOP parsing by a factor of 500; and making
PCFG thresholding 30 times faster at a given accuracy level. I also give an elegant,
state-of-the-art grammar formalism, which can be used to compute inside-outside
probabilities; and a parser description formalism, which makes it easy to
derive inside-outside formulas and many others.
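For reference, the inside half of the computation for a PCFG in Chomsky
normal form is the standard dynamic program below; this is the textbook
recurrence, not the paper's own grammar or parser-description formalism.

    # inside[i, j, A] = probability that A derives words i..j.
    from collections import defaultdict

    def inside_probs(words, unary, binary):
        """unary: (A, word) -> prob; binary: (A, B, C) -> prob for A -> B C."""
        n = len(words)
        inside = defaultdict(float)
        for i, w in enumerate(words):
            for (a, word), p in unary.items():
                if word == w:
                    inside[i, i, a] += p
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span - 1
                for (a, b, c), p in binary.items():
                    for k in range(i, j):
                        inside[i, j, a] += (p * inside[i, k, b]
                                              * inside[k + 1, j, c])
        return inside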
|
cmp-lg/9805008
|
A Descriptive Characterization of Tree-Adjoining Languages (Full
Version)
|
cmp-lg cs.CL
|
Since the early Sixties and Seventies it has been known that the regular and
context-free languages are characterized by definability in the monadic
second-order theory of certain structures. More recently, these descriptive
characterizations have been used to obtain complexity results for constraint-
and principle-based theories of syntax and to provide a uniform model-theoretic
framework for exploring the relationship between theories expressed in
disparate formal terms. These results have been limited, to an extent, by the
lack of descriptive characterizations of language classes beyond the
context-free. Recently, we have shown that tree-adjoining languages (in a
mildly generalized form) can be characterized by recognition by automata
operating on three-dimensional tree manifolds, a three-dimensional analog of
trees. In this paper, we exploit these automata-theoretic results to obtain a
characterization of the tree-adjoining languages by definability in the monadic
second-order theory of these three-dimensional tree manifolds. This not only
opens the way to extending the tools of model-theoretic syntax to the level of
TALs, but provides a highly flexible mechanism for defining TAGs in terms of
logical constraints.
This is the full version of a paper to appear in the proceedings of
COLING-ACL'98 as a project note.
|
cmp-lg/9805009
|
Discovery of Linguistic Relations Using Lexical Attraction
|
cmp-lg cs.CL
|
This work has been motivated by two long term goals: to understand how humans
learn language and to build programs that can understand language. Using a
representation that makes the relevant features explicit is a prerequisite for
successful learning and understanding. Therefore, I chose to represent
relations between individual words explicitly in my model. Lexical attraction
is defined as the likelihood of such relations. I introduce a new class of
probabilistic language models named lexical attraction models which can
represent long distance relations between words and I formalize this new class
of models using information theory.
Within the framework of lexical attraction, I developed an unsupervised
language acquisition program that learns to identify linguistic relations in a
given sentence. The only explicitly represented linguistic knowledge in the
program is lexical attraction. There is no initial grammar or lexicon built in
and the only input is raw text. Learning and processing are interdigitated. The
processor uses the regularities detected by the learner to impose structure on
the input. This structure enables the learner to detect higher level
regularities. Using this bootstrapping procedure, the program was trained on
100 million words of Associated Press material and was able to achieve 60%
precision and 50% recall in finding relations between content-words. Using
knowledge of lexical attraction, the program can identify the correct relations
in syntactically ambiguous sentences such as ``I saw the Statue of Liberty
flying over New York.''
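Under the usual information-theoretic reading of lexical attraction as the
(pointwise) mutual information of a linked word pair, its estimation from the
processor's proposed links can be sketched as follows.

    # PMI(w1, w2) = log2( P(w1, w2) / (P(w1) * P(w2)) ), estimated from
    # counts of linked pairs and their marginals.
    import math
    from collections import Counter

    def pmi_table(pairs):
        pair_counts = Counter(pairs)
        left = Counter(w1 for w1, _ in pair_counts.elements())
        right = Counter(w2 for _, w2 in pair_counts.elements())
        n = sum(pair_counts.values())
        return {(w1, w2): math.log2(c * n / (left[w1] * right[w2]))
                for (w1, w2), c in pair_counts.items()}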
|
cmp-lg/9805010
|
Integrating Text Plans for Conciseness and Coherence
|
cmp-lg cs.CL
|
Our experience with a critiquing system shows that when the system detects
problems with the user's performance, multiple critiques are often produced.
Analysis of a corpus of actual critiques revealed that even though each
individual critique is concise and coherent, the set of critiques as a whole
may exhibit several problems that detract from conciseness and coherence, and
consequently assimilation. Thus a text planner was needed that could integrate
the text plans for individual communicative goals to produce an overall text
plan representing a concise, coherent message.
This paper presents our general rule-based system for accomplishing this
task. The system takes as input a set of individual text plans
represented as RST-style trees, and produces a smaller set of more complex
trees representing integrated messages that still achieve the multiple
communicative goals of the individual text plans. Domain-independent rules are
used to capture strategies across domains, while the facility for addition of
domain-dependent rules enables the system to be tuned to the requirements of a
particular domain. The system has been tested on a corpus of critiques in the
domain of trauma care.
|
cmp-lg/9805011
|
Automatic summarising: factors and directions
|
cmp-lg cs.CL
|
This position paper suggests that progress with automatic summarising demands
a better research methodology and a carefully focussed research strategy. In
order to develop effective procedures it is necessary to identify and respond
to the context factors, i.e. input, purpose, and output factors, that bear on
summarising and its evaluation. The paper analyses and illustrates these
factors and their implications for evaluation. It then argues that this
analysis, together with the state of the art and the intrinsic difficulty of
summarising, imply a nearer-term strategy concentrating on shallow, but not
surface, text analysis and on indicative summarising. This is illustrated with
current work, from which a potentially productive research programme can be
developed.
|
cmp-lg/9805012
|
Recognizing Syntactic Errors in the Writing of Second Language Learners
|
cmp-lg cs.CL
|
This paper reports on the recognition component of an intelligent tutoring
system that is designed to help foreign language speakers learn standard
English. The system models the grammar of the learner, with this instantiation
of the system tailored to signers of American Sign Language (ASL). We discuss
the theoretical motivations for the system, various difficulties that have been
encountered in the implementation, as well as the methods we have used to
overcome these problems. Our method of capturing ungrammaticalities involves
using mal-rules (also called 'error productions'). However, the straightforward
addition of some mal-rules causes significant performance problems with the
parser. For instance, the ASL population has a strong tendency to drop pronouns
and the auxiliary verb `to be'. Accounting for these as grammatical sentences
results in an explosion in the number of possible parses for each sentence.
This explosion, left unchecked, greatly hampers the performance of the system.
We discuss how this is handled by taking into account expectations from the
specific population (some of which are captured in our unique user model). The
different representations of lexical items at various points in the acquisition
process are modeled by using mal-rules, which obviates the need for multiple
lexicons. The grammar is evaluated on its ability to correctly diagnose
agreement problems in actual sentences produced by ASL native speakers.
|
cmp-lg/9806001
|
Learning Correlations between Linguistic Indicators and Semantic
Constraints: Reuse of Context-Dependent Descriptions of Entities
|
cmp-lg cs.CL
|
This paper presents the results of a study on the semantic constraints
imposed on lexical choice by certain contextual indicators. We show how such
indicators are computed and how correlations between them and the choice of a
noun phrase description of a named entity can be automatically established
using supervised learning. Based on this correlation, we have developed a
technique for automatic lexical choice of descriptions of entities in text
generation. We discuss the underlying relationship between the pragmatics of
choosing an appropriate description that serves a specific purpose in the
automatically generated text and the semantics of the description itself. We
present our work in the framework of the more general concept of reuse of
linguistic structures that are automatically extracted from large corpora. We
present a formal evaluation of our approach and we conclude with some thoughts
on potential applications of our method.
|
cmp-lg/9806002
|
Computing Dialogue Acts from Features with Transformation-Based Learning
|
cmp-lg cs.CL
|
To interpret natural language at the discourse level, it is very useful to
accurately recognize dialogue acts, such as SUGGEST, in identifying speaker
intentions. Our research explores the utility of a machine learning method
called Transformation-Based Learning (TBL) in computing dialogue acts, because
TBL has a number of advantages over alternative approaches for this
application. We have identified some extensions to TBL that are necessary in
order to address the limitations of the original algorithm and the particular
demands of discourse processing. We use a Monte Carlo strategy to increase the
applicability of the TBL method, and we select features of utterances that can
be used as input to improve the performance of TBL. Our system is currently
being tested on the VerbMobil corpora of spoken dialogues, producing promising
preliminary results.
|
cmp-lg/9806003
|
Lazy Transformation-Based Learning
|
cmp-lg cs.CL
|
We introduce a significant improvement for a relatively new machine learning
method called Transformation-Based Learning. By applying a Monte Carlo strategy
to randomly sample from the space of rules, rather than exhaustively analyzing
all possible rules, we drastically reduce the memory and time costs of the
algorithm, without compromising accuracy on unseen data. This enables
Transformation-Based Learning to apply to a wider range of domains, as it can
effectively consider a larger number of different features and feature
interactions in the data. In addition, the Monte Carlo improvement decreases
the labor demands on the human developer, who no longer needs to develop a
minimal set of rule templates to maintain tractability.
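The sampling step can be sketched with a deliberately simple rule shape
(previous-tag trigger); the rule space, scoring details and data structures
of the actual system differ, so treat this purely as an illustration of the
Monte Carlo idea.

    # Draw candidate rules from randomly chosen error sites instead of
    # enumerating every instantiation, then keep the best net scorer.
    # Assumes at least one tagging error remains.
    import random

    def net_score(rule, tags, gold):
        trig_prev, old, new = rule
        delta = 0
        for i in range(1, len(tags)):
            if tags[i] == old and tags[i - 1] == trig_prev:
                delta += (gold[i] == new) - (gold[i] == old)
        return delta

    def sample_best_rule(tags, gold, n_samples=500, seed=0):
        rng = random.Random(seed)
        errors = [i for i in range(1, len(tags)) if tags[i] != gold[i]]
        candidates = {(tags[i - 1], tags[i], gold[i])
                      for i in (rng.choice(errors) for _ in range(n_samples))}
        return max(candidates, key=lambda r: net_score(r, tags, gold))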
|
cmp-lg/9806004
|
Rationality, Cooperation and Conversational Implicature
|
cmp-lg cs.CL
|
Conversational implicatures are usually described as being licensed by the
disobeying or flouting of a Principle of Cooperation. However, the
specification of this principle has proved computationally elusive. In this
paper we suggest that a more useful concept is rationality. Such a concept can
be specified explicitly in planning terms, and we argue that speakers perform
utterances as part of the optimal plan for their particular communicative
goals. Such an assumption can be used by the hearer to infer conversational
implicatures implicit in the speaker's utterance.
|
cmp-lg/9806005
|
Eliminating deceptions and mistaken belief to infer conversational
implicature
|
cmp-lg cs.CL
|
Conversational implicatures are usually described as being licensed by the
disobeying or flouting of some principle by the speaker in cooperative
dialogue. However, such work has failed to distinguish cases of the speaker
flouting such a principle from cases where the speaker is either deceptive or
holds a mistaken belief. In this paper, we demonstrate how the three different
cases can be distinguished in terms of the beliefs ascribed to the speaker of
the utterance. We argue that in the act of distinguishing the speaker's
intention and ascribing such beliefs, the intended inference can be made by the
hearer. This theory is implemented in ViewGen, a pre-existing belief modelling
system used in a medical counselling domain.
|
cmp-lg/9806006
|
Dialogue Act Tagging with Transformation-Based Learning
|
cmp-lg cs.CL
|
For the task of recognizing dialogue acts, we are applying the
Transformation-Based Learning (TBL) machine learning algorithm. To circumvent a
sparse data problem, we extract values of well-motivated features of
utterances, such as speaker direction, punctuation marks, and a new feature,
called dialogue act cues, which we find to be more effective than cue phrases
and word n-grams in practice. We present strategies for constructing a set of
dialogue act cues automatically by minimizing the entropy of the distribution
of dialogue acts in a training corpus, filtering out irrelevant dialogue act
cues, and clustering semantically-related words. In addition, to address
limitations of TBL, we introduce a Monte Carlo strategy for training
efficiently and a committee method for computing confidence measures. These
ideas are combined in our working implementation, which labels held-out data as
accurately as any other reported system for the dialogue act tagging task.
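The entropy criterion for cue construction can be sketched as follows; this
ranks single words by how sharply they concentrate the dialogue act
distribution, leaving out the filtering and clustering steps described above.

    import math
    from collections import Counter, defaultdict

    def act_entropy(counts):
        n = sum(counts.values())
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def rank_cues(utterances):
        """utterances: list of (word_list, dialogue_act) pairs."""
        by_cue = defaultdict(Counter)
        for words, act in utterances:
            for w in set(words):
                by_cue[w][act] += 1
        # Lowest conditional entropy first: the most act-predictive cues.
        return sorted(by_cue, key=lambda w: act_entropy(by_cue[w]))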
|
cmp-lg/9806007
|
An Investigation of Transformation-Based Learning in Discourse
|
cmp-lg cs.CL
|
This paper presents results from the first attempt to apply
Transformation-Based Learning to a discourse-level Natural Language Processing
task. To address two limitations of the standard algorithm, we developed a
Monte Carlo version of Transformation-Based Learning to make the method
tractable for a wider range of problems without degradation in accuracy, and we
devised a committee method for assigning confidence measures to tags produced
by Transformation-Based Learning. The paper describes these advances, presents
experimental evidence that Transformation-Based Learning is as effective as
alternative approaches (such as Decision Trees and N-Grams) for a discourse
task called Dialogue Act Tagging, and argues that Transformation-Based Learning
has desirable features that make it particularly appealing for the Dialogue Act
Tagging task.
|
cmp-lg/9806008
|
Unlimited Vocabulary Grapheme to Phoneme Conversion for Korean TTS
|
cmp-lg cs.CL
|
This paper describes a grapheme-to-phoneme conversion method using phoneme
connectivity and CCV conversion rules. The method consists of mainly four
modules including morpheme normalization, phrase-break detection, morpheme to
phoneme conversion and phoneme connectivity check.
The morpheme normalization module replaces non-Korean symbols with standard
Korean graphemes. The phrase-break detector assigns phrase breaks using
part-of-speech (POS) information. In the morpheme-to-phoneme conversion module,
each morpheme in the phrase is converted into phonetic patterns by looking up
the morpheme phonetic pattern dictionary which contains candidate phonological
changes in boundaries of the morphemes. Graphemes within a morpheme are grouped
into CCV patterns and converted into phonemes by the CCV conversion rules. The
phoneme connectivity table supports grammaticality checking of two adjacent
phonetic morphemes.
In experiments with a corpus of 4,973 sentences, we achieved 99.9%
grapheme-to-phoneme conversion accuracy and 97.5% sentence-level conversion
accuracy. The full Korean TTS system is now being implemented using this
conversion method.
|
cmp-lg/9806009
|
Methods and Tools for Building the Catalan WordNet
|
cmp-lg cs.CL
|
In this paper we introduce the methodology used and the basic phases we
followed to develop the Catalan WordNet, and describe which lexical resources
were employed in building it. This methodology, as well as the tools we made
use of, has been designed in a general way so that it can be applied to any
other language.
|
cmp-lg/9806010
|
Towards a single proposal in spelling correction
|
cmp-lg cs.CL
|
The study presented here relies on the integrated use of different kinds of
knowledge in order to improve first-guess accuracy in non-word
context-sensitive correction for general unrestricted texts. State of the art
spelling correction systems, e.g. ispell, apart from detecting spelling errors,
also assist the user by offering a set of candidate corrections that are close
to the misspelled word. Based on the correction proposals of ispell, we built
several guessers, which were combined in different ways. Firstly, we evaluated
all possibilities and selected the best ones in a corpus with artificially
generated typing errors. Secondly, the best combinations were tested on texts
with genuine spelling errors. The results for the latter suggest that we can
expect automatic non-word correction for all the errors in a free running text
with 80% precision and a single proposal 98% of the time (1.02 proposals on
average).
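The combination step can be sketched as a weighted re-ranking of ispell's
candidate list; the guesser functions and weights are placeholders for the
knowledge sources and the tuning described above.

    # Re-rank candidate corrections by a weighted sum of guesser scores;
    # the first element of the result is the single proposal.
    def combine(candidates, guessers, weights):
        def total(cand):
            return sum(w * g(cand) for g, w in zip(guessers, weights))
        return sorted(candidates, key=total, reverse=True)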
|
cmp-lg/9806011
|
A Memory-Based Approach to Learning Shallow Natural Language Patterns
|
cmp-lg cs.CL
|
Recognizing shallow linguistic patterns, such as basic syntactic
relationships between words, is a common task in applied natural language and
text processing. The common practice for approaching this task is by tedious
manual definition of possible pattern structures, often in the form of regular
expressions or finite automata. This paper presents a novel memory-based
learning method that recognizes shallow patterns in new text based on a
bracketed training corpus. The training data are stored as-is, in efficient
suffix-tree data structures. Generalization is performed on-line at recognition
time by comparing subsequences of the new text to positive and negative
evidence in the corpus. This way, no information in the training is lost, as
can happen in other learning systems that construct a single generalized model
at the time of training. The paper presents experimental results for
recognizing noun phrase, subject-verb and verb-object patterns in English.
Since the learning approach enables easy porting to new domains, we plan to
apply it to syntactic patterns in other languages and to sub-language patterns
for information extraction.
|
cmp-lg/9806012
|
Bayesian Stratified Sampling to Assess Corpus Utility
|
cmp-lg cs.CL
|
This paper describes a method for asking statistical questions about a large
text corpus. We exemplify the method by addressing the question, "What
percentage of Federal Register documents are real documents, of possible
interest to a text researcher or analyst?" We estimate an answer to this
question by evaluating 200 documents selected from a corpus of 45,820 Federal
Register documents. Stratified sampling is used to reduce the sampling
uncertainty of the estimate from over 3100 documents to fewer than 1000. The
stratification is based on observed characteristics of real documents, while
the sampling procedure incorporates a Bayesian version of Neyman allocation. A
possible application of the method is to establish baseline statistics used to
estimate recall rates for information retrieval systems.
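The classical (non-Bayesian) core of the allocation step is easy to state:
stratum h receives a share of the sample budget proportional to N_h * sigma_h.
The Bayesian version replaces sigma_h with a posterior estimate, which the
sketch below omits.

    # Neyman allocation of a fixed sample budget across strata.
    def neyman_allocation(sizes, sigmas, budget):
        weights = [n * s for n, s in zip(sizes, sigmas)]
        total = sum(weights)
        return [round(budget * w / total) for w in weights]

    # Three hypothetical strata of Federal Register documents; the numbers
    # are made up for illustration.
    print(neyman_allocation([30000, 12000, 3820], [0.5, 0.1, 0.3], 200))
    # [173, 14, 13]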
|
cmp-lg/9806013
|
Can Subcategorisation Probabilities Help a Statistical Parser?
|
cmp-lg cs.CL
|
Research into the automatic acquisition of lexical information from corpora
is starting to produce large-scale computational lexicons containing data on
the relative frequencies of subcategorisation alternatives for individual
verbal predicates. However, the empirical question of whether this type of
frequency information can in practice improve the accuracy of a statistical
parser has not yet been answered. In this paper we describe an experiment with
a wide-coverage statistical grammar and parser for English and
subcategorisation frequencies acquired from ten million words of text which
shows that this information can significantly improve parse accuracy.
|
cmp-lg/9806014
|
Word Sense Disambiguation using Optimised Combinations of Knowledge
Sources
|
cmp-lg cs.CL
|
Word sense disambiguation algorithms, with few exceptions, have made use of
only one lexical knowledge source. We describe a system which performs
unrestricted word sense disambiguation (on all content words in free text) by
combining different knowledge sources: semantic preferences, dictionary
definitions and subject/domain codes along with part-of-speech tags. The
usefulness of these sources is optimised by means of a learning algorithm. We
also describe the creation of a new sense tagged corpus by combining existing
resources. Tested accuracy of our approach on this corpus exceeds 92%,
demonstrating the viability of all-word disambiguation rather than restricting
oneself to a small sample.
|
cmp-lg/9806015
|
Building Accurate Semantic Taxonomies from Monolingual MRDs
|
cmp-lg cs.CL
|
This paper presents a method that combines a set of unsupervised algorithms
in order to accurately build large taxonomies from any machine-readable
dictionary (MRD). Our aim is to profit from conventional MRDs, with no explicit
semantic coding. We propose a system that 1) performs fully automatic extraction
of taxonomic links from MRD entries and 2) ranks the extracted relations in such
a way that selective manual refinement is possible. Tested accuracy can reach
around 100% depending on the degree of coverage selected, showing that taxonomy
building is not limited to structured dictionaries such as LDOCE.
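A crude single-heuristic version of the extraction step, taking the head of
the definition's first noun phrase as the hypernym candidate, looks like the
sketch below; the system itself combines several unsupervised methods and
then ranks the links.

    import re

    def genus_candidate(definition):
        # Cut the definition at a comma or relative clause, then take the
        # last non-function token of what remains, under crude assumptions.
        head = re.split(r",| that | which | who ", definition)[0]
        tokens = [t for t in head.lower().split()
                  if t not in {"a", "an", "the", "kind", "type", "of"}]
        return tokens[-1] if tokens else None

    print(genus_candidate("a large wild animal that lives in Africa"))
    # animal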
|
cmp-lg/9806016
|
Using WordNet for Building WordNets
|
cmp-lg cs.CL
|
This paper summarises a set of methodologies and techniques for the fast
construction of multilingual WordNets. The English WordNet is used in this
approach as a backbone for Catalan and Spanish WordNets and as a lexical
knowledge resource for several subtasks.
|
cmp-lg/9806017
|
Anchoring a Lexicalized Tree-Adjoining Grammar for Discourse
|
cmp-lg cs.CL
|
We here explore a ``fully'' lexicalized Tree-Adjoining Grammar for discourse
that takes the basic elements of a (monologic) discourse to be not simply
clauses, but larger structures that are anchored on variously realized
discourse cues. This link with intra-sentential grammar suggests an account for
different patterns of discourse cues, while the different structures and
operations suggest three separate sources for elements of discourse meaning:
(1) a compositional semantics tied to the basic trees and operations; (2) a
presuppositional semantics carried by cue phrases that freely adjoin to trees;
and (3) general inference, that draws additional, defeasible conclusions that
flesh out what is conveyed compositionally.
|
cmp-lg/9806018
|
Never Look Back: An Alternative to Centering
|
cmp-lg cs.CL
|
I propose a model for determining the hearer's attentional state which
depends solely on a list of salient discourse entities (S-list). The ordering
among the elements of the S-list covers also the function of the
backward-looking center in the centering model. The ranking criteria for the
S-list are based on the distinction between hearer-old and hearer-new discourse
entities and incorporate preferences for inter- and intra-sentential anaphora.
The model is the basis for an algorithm which operates incrementally, word by
word.
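A simplified version of the S-list machinery can be sketched as below; the
paper's ranking is finer-grained (it also distinguishes mediated discourse
entities and uses positional criteria within and across sentences), so this
keeps only the hearer-old/hearer-new split with text position as tie-breaker.

    # Hearer-old entities outrank hearer-new ones; resolution walks the
    # ranked list and returns the first compatible entity.
    def insert_entity(s_list, entity):
        """entity: dict with 'name', 'hearer_old' (bool), 'position' (int)."""
        s_list.append(entity)
        s_list.sort(key=lambda e: (not e["hearer_old"], e["position"]))

    def resolve(compatible, s_list):
        for entity in s_list:
            if compatible(entity):
                return entity
        return None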
|
cmp-lg/9806019
|
An Empirical Investigation of Proposals in Collaborative Dialogues
|
cmp-lg cs.CL
|
We describe a corpus-based investigation of proposals in dialogue. First, we
describe our DRI compliant coding scheme and report our inter-coder reliability
results. Next, we test several hypotheses about what constitutes a well-formed
proposal.
|
cmp-lg/9806020
|
Textual Economy through Close Coupling of Syntax and Semantics
|
cmp-lg cs.CL
|
We focus on the production of efficient descriptions of objects, actions and
events. We define a type of efficiency, textual economy, that exploits the
hearer's recognition of inferential links to material elsewhere within a
sentence. Textual economy leads to efficient descriptions because the material
that supports such inferences has been included to satisfy independent
communicative goals, and is therefore overloaded in Pollack's sense. We argue
that achieving textual economy imposes strong requirements on the
representation and reasoning used in generating sentences. The representation
must support the generator's simultaneous consideration of syntax and
semantics. Reasoning must enable the generator to assess quickly and reliably
at any stage how the hearer will interpret the current sentence, with its
(incomplete) syntax and semantics. We show that these representational and
reasoning requirements are met in the SPUD system for sentence planning and
realization.
|
cmp-lg/9807001
|
Evaluating a Focus-Based Approach to Anaphora Resolution
|
cmp-lg cs.CL
|
We present an approach to anaphora resolution based on a focusing algorithm,
and implemented within an existing MUC (Message Understanding Conference)
Information Extraction system, allowing quantitative evaluation against a
substantial corpus of annotated real-world texts. Extensions to the basic
focusing mechanism can be easily tested, resulting in refinements to the
mechanism and resolution rules. Results are compared with the results of a
simpler heuristic-based approach.
|
cmp-lg/9807002
|
The Role of Verbs in Document Analysis
|
cmp-lg cs.CL
|
We present results of two methods for assessing the event profile of news
articles as a function of verb type. The unique contribution of this research
is the focus on the role of verbs, rather than nouns. Two algorithms are
presented and evaluated, one of which is shown to accurately discriminate
documents by type and semantic properties, i.e. the event profile. The initial
method, using WordNet (Miller et al. 1990), produced multiple
cross-classification of articles, primarily due to the bushy nature of the verb
tree coupled with the sense disambiguation problem. Our second approach, using
English Verb Classes and Alternations (EVCA; Levin 1993), showed that
monosemous categorization of the frequent verbs in WSJ made it possible to
usefully discriminate documents. For example, our results show that articles in
which communication verbs predominate tend to be opinion pieces, whereas
articles with a high percentage of agreement verbs tend to be about mergers or
legal cases. An evaluation is performed on the results using Kendall's Tau. We
present convincing evidence for using verb semantic classes as a discriminant
in document classification.
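The discrimination step reduces to profiling each article by the verb classes
it contains; the two-class lexicon below is a stand-in for the EVCA-based
verb lists the paper uses.

    from collections import Counter

    CLASS_OF = {"say": "communication", "announce": "communication",
                "agree": "agreement", "merge": "agreement"}

    def event_profile(verbs):
        counts = Counter(CLASS_OF[v] for v in verbs if v in CLASS_OF)
        total = sum(counts.values())
        return {cls: c / total for cls, c in counts.items()} if total else {}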
|
cmp-lg/9807003
|
Centering in Dynamic Semantics
|
cmp-lg cs.CL
|
Centering theory posits a discourse center, a distinguished discourse entity
that is the topic of a discourse. A simplified version of this theory is
developed in a Dynamic Semantics framework. In the resulting system, the
mechanism of center shift allows a simple, elegant analysis of a variety of
phenomena involving sloppy identity in ellipsis and ``paycheck pronouns''.
|
cmp-lg/9807004
|
Word Clustering and Disambiguation Based on Co-occurrence Data
|
cmp-lg cs.CL
|
We address the problem of clustering words (or constructing a thesaurus)
based on co-occurrence data, and using the acquired word classes to improve the
accuracy of syntactic disambiguation. We view this problem as that of
estimating a joint probability distribution specifying the joint probabilities
of word pairs, such as noun-verb pairs. We propose an efficient algorithm based
on the Minimum Description Length (MDL) principle for estimating such a
probability distribution. Our method is a natural extension of those proposed
in (Brown et al 92) and (Li & Abe 96), and overcomes their drawbacks while
retaining their advantages. We then combined this clustering method with the
disambiguation method of (Li & Abe 95) to derive a disambiguation method that
makes use of both automatically constructed thesauruses and a hand-made
thesaurus. The overall disambiguation accuracy achieved by our method is 85.2%,
which compares favorably against the accuracy (82.4%) obtained by the
state-of-the-art disambiguation method of (Brill & Resnik 94).
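The MDL objective being minimized can be sketched as the sum of a data code
length under the class-based model P(x, y) = P(C_x, C_y) P(x|C_x) P(y|C_y)
and a model code length; the (k/2) log n parameter cost is the usual MDL
convention, assumed here.

    import math

    def description_length(pair_counts, cluster_of, joint_cluster_prob,
                           within_prob):
        """pair_counts: {(noun, verb): count}; clusters partition the words."""
        n = sum(pair_counts.values())
        data_len = 0.0
        for (x, y), c in pair_counts.items():
            p = (joint_cluster_prob[cluster_of[x], cluster_of[y]]
                 * within_prob[x] * within_prob[y])
            data_len += -c * math.log2(p)
        k = len(joint_cluster_prob) + len(within_prob)   # free parameters
        return data_len + 0.5 * k * math.log2(n)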
|
cmp-lg/9807005
|
Graph Interpolation Grammars as Context-Free Automata
|
cmp-lg cs.CL
|
A derivation step in a Graph Interpolation Grammar has the effect of scanning
an input token. This feature, which aims at emulating the incrementality of the
natural parser, restricts the formal power of GIGs. This contrasts with the
fact that the derivation mechanism involves a context-sensitive device similar
to tree adjunction in TAGs. The combined effect of input-driven derivation and
restricted context-sensitivity would conceivably be unfortunate if it turned
out that Graph Interpolation Languages did not subsume Context Free Languages
while being partially context-sensitive. This report sets about examining
relations between CFGs and GIGs, and shows that GILs are a proper superclass of
CFLs. It also brings out a strong equivalence between CFGs and GIGs for the
class of CFLs. Thus, it lays the basis for meaningfully investigating the
amount of context-sensitivity supported by GIGs, but leaves this
investigation for further research.
|