| id | title | categories | abstract |
|---|---|---|---|
cmp-lg/9607002
|
Inducing Constraint Grammars
|
cmp-lg cs.CL
|
Constraint Grammar rules are induced from corpora. A simple scheme based on
local information, i.e., on lexical biases and next-neighbour contexts,
extended through the use of barriers, reached 87.3 percent precision (1.12
tags/word) at 98.2 percent recall. The results compare favourably with other
methods used for similar tasks, although they are by no means as good as the
results achieved using the original hand-written rules developed over several
years' time.
|
cmp-lg/9607003
|
Domain and Language Independent Feature Extraction for Statistical Text
Categorization
|
cmp-lg cs.CL
|
A generic system for text categorization is presented which uses a
representative text corpus to adapt the processing steps: feature extraction,
dimension reduction, and classification. Feature extraction automatically
learns features from the corpus by reducing actual word forms using statistical
information of the corpus and general linguistic knowledge. The dimension of
the feature vector is then reduced by a linear transformation that keeps the
essential information. The classification principle is a minimum least squares
approach based on polynomials. The described system can be readily adapted to
new domains or new languages. In application, the system is reliable, fast, and
runs completely automatically. It is shown that the text categorizer works
successfully both on text generated by document image analysis (DIA) and on
ground truth data.
|
cmp-lg/9607004
|
Integrating Syntactic and Prosodic Information for the Efficient
Detection of Empty Categories
|
cmp-lg cs.CL
|
We describe a number of experiments that demonstrate the usefulness of
prosodic information for a processing module which parses spoken utterances
with a feature-based grammar employing empty categories. We show that by
requiring certain prosodic properties from those positions in the input where
the presence of an empty category has to be hypothesized, a derivation can be
accomplished more efficiently. The approach has been implemented in the machine
translation project VERBMOBIL and results in a significant reduction of the
work-load for the parser.
|
cmp-lg/9607005
|
Head Automata and Bilingual Tiling: Translation with Minimal
Representations
|
cmp-lg cs.CL
|
We present a language model consisting of a collection of costed
bidirectional finite state automata associated with the head words of phrases.
The model is suitable for incremental application of lexical associations in a
dynamic programming search for optimal dependency tree derivations. We also
present a model and algorithm for machine translation involving optimal
``tiling'' of a dependency tree with entries of a costed bilingual lexicon.
Experimental results are reported comparing methods for assigning cost
functions to these models. We conclude with a discussion of the adequacy of
annotated linguistic strings as representations for machine translation.
|
cmp-lg/9607006
|
Head Automata for Speech Translation
|
cmp-lg cs.CL
|
This paper presents statistical language and translation models based on
collections of small finite state machines we call ``head automata''. The
models are intended to capture the lexical sensitivity of N-gram models and
direct statistical translation models, while at the same time taking account of
the hierarchical phrasal structure of language. Two types of head automata are
defined: relational head automata suitable for translation by transfer of
dependency trees, and head transducers suitable for direct recursive lexical
translation.
|
cmp-lg/9607007
|
Parallel Replacement in Finite State Calculus
|
cmp-lg cs.CL
|
This paper extends the calculus of regular expressions with new types of
replacement expressions that enhance the expressiveness of the simple replace
operator defined in Karttunen (1995). Parallel replacement allows multiple
replacements to apply simultaneously to the same input without interfering with
each other. We also allow a replacement to be constrained by any number of
alternative contexts. With these enhancements, the general replacement
expressions are more versatile than two-level rules for the description of
complex morphological alternations.
|
cmp-lg/9607008
|
From Submit to Submitted via Submission: On Lexical Rules in Large-Scale
Lexicon Acquisition
|
cmp-lg cs.CL
|
This paper deals with the discovery, representation, and use of lexical rules
(LRs) during large-scale semi-automatic computational lexicon acquisition. The
analysis is based on a set of LRs implemented and tested on the basis of
Spanish and English business- and finance-related corpora. We show that, though
the use of LRs is justified, they do not come cost-free. Semi-automatic output
checking is required, even with blocking and preemption procedures built in.
Nevertheless, large-scope LRs are justified because they facilitate the
unavoidable process of large-scale semi-automatic lexical acquisition. We also
argue that the place of LRs in the computational process is a complex issue.
|
cmp-lg/9607009
|
Semantic-based Transfer
|
cmp-lg cs.CL
|
This article presents a new semantic-based transfer approach developed and
applied within the Verbmobil Machine Translation project. We give an overview
of the declarative transfer formalism together with its procedural realization.
Our approach is discussed and compared with several other approaches from the
MT literature.
|
cmp-lg/9607010
|
Efficient Implementation of a Semantic-based Transfer Approach
|
cmp-lg cs.CL
|
This article gives an overview of a new semantic-based transfer approach
developed and applied within the Verbmobil Machine Translation project. We
present the declarative transfer formalism and discuss its implementation.
|
cmp-lg/9607011
|
Pattern-Based Context-Free Grammars for Machine Translation
|
cmp-lg cs.CL
|
This paper proposes the use of ``pattern-based'' context-free grammars as a
basis for building machine translation (MT) systems, which are now being
adopted as personal tools by a broad range of users in the cyberspace society.
We discuss major requirements for such tools, including easy customization for
diverse domains, the efficiency of the translation algorithm, and scalability
(incremental improvement in translation quality through user interaction), and
describe how our approach meets these requirements.
|
cmp-lg/9607012
|
MBT: A Memory-Based Part of Speech Tagger-Generator
|
cmp-lg cs.CL
|
We introduce a memory-based approach to part of speech tagging. Memory-based
learning is a form of supervised learning based on similarity-based reasoning.
The part of speech tag of a word in a particular context is extrapolated from
the most similar cases held in memory. Supervised learning approaches are
useful when a tagged corpus is available as an example of the desired output of
the tagger. Based on such a corpus, the tagger-generator automatically builds a
tagger which is able to tag new text the same way, diminishing development time
for the construction of a tagger considerably. Memory-based tagging shares this
advantage with other statistical or machine learning approaches. Additional
advantages specific to a memory-based approach include (i) the relatively small
tagged corpus size sufficient for training, (ii) incremental learning, (iii)
explanation capabilities, (iv) flexible integration of information in case
representations, (v) its non-parametric nature, (vi) reasonably good results on
unknown words without morphological analysis, and (vii) fast learning and
tagging. In this paper we show that a large-scale application of the
memory-based approach is feasible: we obtain a tagging accuracy that is on a
par with that of known statistical approaches, and with attractive space and
time complexity properties when using {\em IGTree}, a tree-based formalism for
indexing and searching huge case bases. The use of IGTree has the additional
advantage that the optimal context size for disambiguation is computed dynamically.
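The core mechanism of memory-based tagging can be sketched in a few lines; this toy version uses a three-word window and brute-force overlap search in place of the paper's richer case representations and IGTree indexing, and all words, tags, and data are illustrative:

```python
# Toy sketch of memory-based tagging: the tag of a word in context is taken
# from the most similar stored case, using simple feature-overlap similarity.
# Window shape, tags, and data are illustrative, not from the paper.

def make_cases(tagged_sentence):
    """tagged_sentence: list of (word, tag). A case is (prev, word, next)."""
    cases = []
    padded = [("<s>", None)] + tagged_sentence + [("</s>", None)]
    for i in range(1, len(padded) - 1):
        features = (padded[i - 1][0], padded[i][0], padded[i + 1][0])
        cases.append((features, padded[i][1]))
    return cases

def tag_word(memory, features):
    """Return the tag of the most similar stored case (overlap count)."""
    best = max(memory,
               key=lambda case: sum(a == b for a, b in zip(case[0], features)))
    return best[1]

memory = (make_cases([("the", "DET"), ("cat", "N"), ("sleeps", "V")])
          + make_cases([("a", "DET"), ("dog", "N"), ("barks", "V")]))
```

Given the unseen context `("the", "dog", "sleeps")`, the closest stored case is `("the", "cat", "sleeps")`, so the noun tag is extrapolated from memory rather than from explicit rules.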
|
cmp-lg/9607013
|
Unsupervised Discovery of Phonological Categories through Supervised
Learning of Morphological Rules
|
cmp-lg cs.CL
|
We describe a case study in the application of {\em symbolic machine
learning} techniques for the discovery of linguistic rules and categories. A
supervised rule induction algorithm is used to learn to predict the correct
diminutive suffix given the phonological representation of Dutch nouns. The
system produces rules which are comparable to rules proposed by linguists.
Furthermore, in the process of learning this morphological task, the phonemes
used are grouped into phonologically relevant categories. We discuss the
relevance of our method for linguistics and language technology.
|
cmp-lg/9607014
|
A Corpus Study of Negative Imperatives in Natural Language Instructions
|
cmp-lg cs.CL
|
In this paper, we define the notion of a preventative expression and discuss
a corpus study of such expressions in instructional text. We discuss our coding
schema, which takes into account both form and function features, and present
measures of inter-coder reliability for those features. We then discuss the
correlations that exist between the function and the form features.
|
cmp-lg/9607015
|
Learning Micro-Planning Rules for Preventative Expressions
|
cmp-lg cs.CL
|
Building text planning resources by hand is time-consuming and difficult.
Certainly, a number of planning architectures and their accompanying plan
libraries have been implemented, but while the architectures themselves may be
reused in a new domain, the library of plans typically cannot. One way to
address this problem is to use machine learning techniques to automate the
derivation of planning resources for new domains. In this paper, we apply this
technique to build micro-planning rules for preventative expressions in
instructional text.
|
cmp-lg/9607016
|
Beyond Word N-Grams
|
cmp-lg cs.CL
|
We describe, analyze, and evaluate experimentally a new probabilistic model
for word-sequence prediction in natural language based on prediction suffix
trees (PSTs). By using efficient data structures, we extend the notion of PST
to unbounded vocabularies. We also show how to use a Bayesian approach based on
recursive priors over all possible PSTs to efficiently maintain tree mixtures.
These mixtures have provably and practically better performance than almost any
single model. We evaluate the model on several corpora. The low perplexity
achieved by relatively small PST mixture models suggests that they may be an
advantageous alternative, both theoretically and practically, to the widely
used n-gram models.
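The flavor of suffix-based word prediction can be conveyed with a much-simplified sketch: store counts for context suffixes up to a fixed order and predict from the longest suffix of the history seen in training. The paper's Bayesian mixtures over all possible PSTs and its efficient data structures for unbounded vocabularies are not modeled here:

```python
# Simplified suffix-based predictor: counts next words for every context
# suffix up to max_order, and backs off to the longest matching suffix.
from collections import defaultdict

class SuffixPredictor:
    def __init__(self, max_order=2):
        self.max_order = max_order
        # context tuple -> {next word -> count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, words):
        for i, w in enumerate(words):
            for k in range(self.max_order + 1):
                if i - k < 0:
                    break
                self.counts[tuple(words[i - k:i])][w] += 1

    def predict(self, history):
        # back off from the longest stored suffix of the history
        for k in range(min(self.max_order, len(history)), -1, -1):
            context = tuple(history[len(history) - k:])
            if context in self.counts:
                dist = self.counts[context]
                return max(dist, key=dist.get)
        return None

m = SuffixPredictor(max_order=2)
m.train("the cat sat on the mat".split())
```

After training, `m.predict(["on", "the"])` uses the full bigram context, while an unseen history falls back to shorter suffixes, down to the empty context.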
|
cmp-lg/9607017
|
Natural Language Processing: Structure and Complexity
|
cmp-lg cs.CL
|
We introduce a method for analyzing the complexity of natural language
processing tasks, and for predicting the difficulty of new NLP tasks.
Our complexity measures are derived from the Kolmogorov complexity of a class
of automata --- {\it meaning automata}, whose purpose is to extract relevant
pieces of information from sentences. Natural language semantics is defined
only relative to the set of questions an automaton can answer.
The paper shows examples of complexity estimates for various NLP programs and
tasks, and some recipes for complexity management. It positions natural
language processing as a subdomain of software engineering, and lays down its
formal foundation.
|
cmp-lg/9607018
|
TSNLP - Test Suites for Natural Language Processing
|
cmp-lg cs.CL
|
The TSNLP project has investigated various aspects of the construction,
maintenance and application of systematic test suites as diagnostic and
evaluation tools for NLP applications. The paper summarizes the motivation and
main results of the project: besides the solid methodological foundation, TSNLP
has produced substantial multi-purpose and multi-user test suites for three
European languages together with a set of specialized tools that facilitate the
construction, extension, maintenance, retrieval, and customization of the test
data. As TSNLP results, including the data and technology, are made publicly
available, the project presents a valuable linguistic resource that has the
potential of providing a wide-spread pre-standard diagnostic and evaluation
tool for both developers and users of NLP applications.
|
cmp-lg/9607019
|
Mental State Adjectives: the Perspective of Generative Lexicon
|
cmp-lg cs.CL
|
This paper focusses on mental state adjectives and offers a unified analysis
in the theory of Generative Lexicon (Pustejovsky, 1991, 1995). We show that,
instead of enumerating the various syntactic constructions they enter into,
with the different senses which arise, it is possible to give them a rich typed
semantic representation which will explain both their semantic and syntactic
polymorphism.
|
cmp-lg/9607020
|
A Divide-and-Conquer Strategy for Parsing
|
cmp-lg cs.CL
|
In this paper, we propose a novel strategy which is designed to enhance the
accuracy of the parser by simplifying complex sentences before parsing. This
approach involves the separate parsing of the constituent sub-sentences within
a complex sentence. To achieve that, the divide-and-conquer strategy first
disambiguates the roles of the link words in the sentence and segments the
sentence based on these roles. The separate parse trees of the segmented
sub-sentences and the noun phrases within them are then synthesized to form the
final parse. To evaluate the effects of this strategy on parsing, we compare
the original performance of a dependency parser with the performance when it is
enhanced with the divide-and-conquer strategy. When tested on 600 sentences of
the IPSM'95 data sets, the enhanced parser achieved a considerable error
reduction of 21.2%.
|
cmp-lg/9607021
|
Morphological Analysis as Classification: an Inductive-Learning Approach
|
cmp-lg cs.CL
|
Morphological analysis is an important subtask in text-to-speech conversion,
hyphenation, and other language engineering tasks. The traditional approach to
performing morphological analysis is to combine a morpheme lexicon, sets of
(linguistic) rules, and heuristics to find a most probable analysis. In
contrast we present an inductive learning approach in which morphological
analysis is reformulated as a segmentation task. We report on a number of
experiments in which five inductive learning algorithms are applied to three
variations of the task of morphological analysis. Results show (i) that the
generalisation performance of the algorithms is good, and (ii) that the lazy
learning algorithm IB1-IG performs best on all three tasks. We conclude that
lazy learning of morphological analysis as a classification task is indeed a
viable approach; moreover, it has the strong advantages over the traditional
approach of avoiding the knowledge-acquisition bottleneck, being fast and
deterministic in learning and processing, and being language-independent.
|
cmp-lg/9607022
|
A Machine Learning Approach to the Classification of Dialogue Utterances
|
cmp-lg cs.CL
|
The purpose of this paper is to present a method for automatic classification
of dialogue utterances and the results of applying that method to a corpus.
Superficial features of a set of training utterances (which we will call cues)
are taken as the basis for finding relevant utterance classes and for
extracting rules for assigning these classes to new utterances. Each cue is
assumed to partially contribute to the communicative function of an utterance.
Instead of relying on subjective judgments for the tasks of finding classes and
rules, we opt for using machine learning techniques to guarantee objectivity.
|
cmp-lg/9607023
|
Phonological modeling for continuous speech recognition in Korean
|
cmp-lg cs.CL
|
A new scheme to represent phonological changes during continuous speech
recognition is suggested. A phonological tag coupled with its morphological tag
is designed to represent the conditions of Korean phonological changes. A
pairwise language model of these morphological and phonological tags is
implemented in a Korean speech recognition system. Performance of the model is
verified through the TDNN-based speech recognition experiments.
|
cmp-lg/9607024
|
Applying Winnow to Context-Sensitive Spelling Correction
|
cmp-lg cs.CL
|
Multiplicative weight-updating algorithms such as Winnow have been studied
extensively in the COLT literature, but only recently have people started to
use them in applications. In this paper, we apply a Winnow-based algorithm to a
task in natural language: context-sensitive spelling correction. This is the
task of fixing spelling errors that happen to result in valid words, such as
substituting {\it to\/} for {\it too}, {\it casual\/} for {\it causal}, and so
on. Previous approaches to this problem have been statistics-based; we compare
Winnow to one of the more successful such approaches, which uses Bayesian
classifiers. We find that: (1)~When the standard (heavily-pruned) set of
features is used to describe problem instances, Winnow performs comparably to
the Bayesian method; (2)~When the full (unpruned) set of features is used,
Winnow is able to exploit the new features and convincingly outperform Bayes;
and (3)~When a test set is encountered that is dissimilar to the training set,
Winnow is better than Bayes at adapting to the unfamiliar test set, using a
strategy we will present for combining learning on the training set with
unsupervised learning on the (noisy) test set.
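The multiplicative weight-updating rule the paper builds on can be illustrated with a minimal Winnow sketch for boolean features (e.g., context-word cues for a confusion set such as {to, too}); the threshold, promotion factor, and data below are illustrative choices, not values from the paper:

```python
# Minimal Winnow sketch for binary classification over boolean features.
THETA = 4.0   # decision threshold (often set near the number of features)
ALPHA = 2.0   # multiplicative promotion/demotion factor

def winnow_train(examples, n_features, epochs=10):
    """examples: list of (active_feature_indices, label), label in {0, 1}."""
    w = [1.0] * n_features                 # all weights start at 1
    for _ in range(epochs):
        for active, label in examples:
            score = sum(w[i] for i in active)
            pred = 1 if score >= THETA else 0
            if pred == 1 and label == 0:   # false positive: demote
                for i in active:
                    w[i] /= ALPHA
            elif pred == 0 and label == 1: # false negative: promote
                for i in active:
                    w[i] *= ALPHA
    return w

def winnow_predict(w, active):
    return 1 if sum(w[i] for i in active) >= THETA else 0

# Hypothetical data: feature 0 alone determines the label.
examples = [([0, 1], 1), ([1, 2], 0), ([0, 3], 1), ([2, 3], 0)]
w = winnow_train(examples, n_features=4)
```

Because updates are multiplicative and only touch active features, Winnow tolerates many irrelevant features, which is what lets it exploit the full unpruned feature set described above.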
|
cmp-lg/9607025
|
New Methods, Current Trends and Software Infrastructure for NLP
|
cmp-lg cs.CL
|
The increasing use of `new methods' in NLP, which the NeMLaP conference
series exemplifies, occurs in the context of a wider shift in the nature and
concerns of the discipline. This paper begins with a short review of this
context and significant trends in the field. The review motivates and leads to
a set of requirements for support software of general utility for NLP research
and development workers. A freely-available system designed to meet these
requirements is described (called GATE - a General Architecture for Text
Engineering). Information Extraction (IE), in the sense defined by the Message
Understanding Conferences (ARPA \cite{Arp95}), is an NLP application in which
many of the new methods have found a home (Hobbs \cite{Hob93}; Jacobs ed.
\cite{Jac92}). An IE system based on GATE is also available for research
purposes, and this is described. Lastly we review related work.
|
cmp-lg/9607026
|
Building Knowledge Bases for the Generation of Software Documentation
|
cmp-lg cs.CL
|
Automated text generation requires an underlying knowledge base from which to
generate, which is often difficult to produce. Software documentation is one
domain in which parts of this knowledge base may be derived automatically. In
this paper, we describe \drafter, an authoring support tool for generating
user-centred software documentation, and in particular, we describe how parts
of its required knowledge base can be obtained automatically.
|
cmp-lg/9607027
|
Learning Translation Rules From A Bilingual Corpus
|
cmp-lg cs.CL
|
This paper proposes a mechanism for learning pattern correspondences between
two languages from a corpus of translated sentence pairs. The proposed
mechanism uses analogical reasoning between two translations. Given a pair of
translations, the similar parts of the sentences in the source language must
correspond to the similar parts of the sentences in the target language.
Similarly, the different parts should correspond to the respective parts in the
translated sentences. The correspondences between the similarities, and also
between the differences, are learned in the form of translation rules. The system is tested
on a small training dataset and produced promising results for further
investigation.
|
cmp-lg/9607028
|
The Grammar of Sense: Is word-sense tagging much more than
part-of-speech tagging?
|
cmp-lg cs.CL
|
This squib claims that Large-scale Automatic Sense Tagging of text (LAST) can
be done at a high level of accuracy and with far less complexity and
computational effort than has been believed until now. Moreover, it can be done
for all open class words, and not just carefully selected opposed pairs as in
some recent work. We describe two experiments: one exploring the amount of
information relevant to sense disambiguation which is contained in the
part-of-speech field of entries in Longman Dictionary of Contemporary English
(LDOCE). Another, more practical, experiment attempts sense disambiguation of
all open class words in a text assigning LDOCE homographs as sense tags using
only part-of-speech information. We report that 92% of open class words can be
successfully tagged in this way. We plan to extend this work and to implement
an improved large-scale tagger, a description of which is included here.
|
cmp-lg/9607029
|
Design and Implementation of a Tactical Generator for Turkish, a Free
Constituent Order Language
|
cmp-lg cs.CL
|
This thesis describes a tactical generator for Turkish, a free constituent
order language, in which the order of the constituents may change according to
the information structure of the sentences to be generated. In the absence of
any information regarding the information structure of a sentence (i.e., topic,
focus, background, etc.), the constituents of the sentence obey a default
order, but the order is almost freely changeable, depending on the constraints
of the text flow or discourse. We have used a recursively structured finite
state machine for handling the changes in constituent order, implemented as a
right-linear grammar backbone. Our implementation environment is the GenKit
system, developed at Carnegie Mellon University--Center for Machine
Translation. Morphological realization has been implemented using an external
morphological analysis/generation component which performs concrete morpheme
selection and handles morphographemic processes.
|
cmp-lg/9607030
|
Using Multiple Sources of Information for Constraint-Based Morphological
Disambiguation
|
cmp-lg cs.CL
|
This thesis presents a constraint-based morphological disambiguation approach
that is applicable to languages with complex morphology--specifically
agglutinative languages with productive inflectional and derivational
morphological phenomena. For morphologically complex languages like Turkish,
automatic morphological disambiguation involves selecting, for each token, the
morphological parse(s) with the right set of inflectional and derivational
markers. Our system combines corpus independent hand-crafted constraint rules,
constraint rules that are learned via unsupervised learning from a training
corpus, and additional statistical information obtained from the corpus to be
morphologically disambiguated. The hand-crafted rules are linguistically
motivated and tuned to improve precision without sacrificing recall. In certain
respects, our approach has been motivated by Brill's recent work, but with the
observation that his transformational approach is not directly applicable to
languages like Turkish. Our approach also uses a novel approach to unknown word
processing by employing a secondary morphological processor which recovers any
relevant inflectional and derivational information from a lexical item whose
root is unknown. With this approach, well below 1% of the tokens remain
unknown in the texts we have experimented with. Our results indicate that by
combining these hand-crafted, statistical and learned information sources, we
can attain a recall of 96 to 97% with a corresponding precision of 93 to 94%,
and ambiguity of 1.02 to 1.03 parses per token.
|
cmp-lg/9607031
|
Compositional Semantics in Verbmobil
|
cmp-lg cs.CL
|
The paper discusses how compositional semantics is implemented in the
Verbmobil speech-to-speech translation system using LUD, a description language
for underspecified discourse representation structures. The description
language and its formal interpretation in DRT are described as well as its
implementation together with the architecture of the system's entire
syntactic-semantic processing module. We show that a linguistically sound
theory and formalism can be properly implemented in a system with (near)
real-time requirements.
|
cmp-lg/9607032
|
A Lexical Semantic Database for Verbmobil
|
cmp-lg cs.CL
|
This paper describes the development and use of a lexical semantic database
for the Verbmobil speech-to-speech machine translation system. The motivation
is to provide a common information source for the distributed development of
the semantics, transfer and semantic evaluation modules and to store lexical
semantic information application-independently.
The database is organized around a set of abstract semantic classes and has
been used to define the semantic contributions of the lemmata in the vocabulary
of the system, to automatically create semantic lexica and to check the
correctness of the semantic representations built up. The semantic classes are
modelled using an inheritance hierarchy. The database is implemented using the
lexicon formalism LeX4 developed during the project.
|
cmp-lg/9607033
|
Multiple Discourse Relations on the Sentential Level in Japanese
|
cmp-lg cs.CL
|
In the German government (BMBF) funded project Verbmobil, a semantic
formalism Language for Underspecified Discourse Representation Structures (LUD)
is used which describes several DRSs and allows for underspecification. Dealing
with Japanese poses challenging problems. In this paper, we show a treatment of
multiple discourse relation constructions on the sentential level, which are
common in Japanese but pose a problem for the formalism. The
problem is to distinguish discourse relations which take the widest scope
compared with other scope-taking elements on the one hand and to have them
underspecified among each other on the other hand. We also state a semantic
constraint on the resolution of multiple discourse relations which seems to
prevail over the syntactic c-command constraint.
|
cmp-lg/9607034
|
Using textual clues to improve metaphor processing
|
cmp-lg cs.CL
|
In this paper, we propose a textual clue approach to help metaphor detection,
in order to improve the semantic processing of this figure. Previous work in
the domain studied only semantic regularities, overlooking an obvious
set of regularities. A corpus-based analysis shows the existence of surface
regularities related to metaphors. These clues can be characterized by
syntactic structures and lexical markers. We present an object oriented model
for representing the textual clues that were found. This representation is
designed to help the choice of a semantic processing, in terms of possible
non-literal meanings. A prototype implementing this model is currently under
development, within an incremental approach allowing step-by-step evaluations.
\footnote{This work takes part in a research project sponsored by the
AUPELF-UREF (Francophone Agency For Education and Research)}
|
cmp-lg/9607035
|
Completeness of Compositional Translation for Context-Free Grammars
|
cmp-lg cs.CL
|
A machine translation system is said to be *complete* if all expressions that
are correct according to the source-language grammar can be translated into the
target language. This paper addresses the completeness issue for compositional
machine translation in general, and for compositional machine translation of
context-free grammars in particular. Conditions that guarantee translation
completeness of context-free grammars are presented.
|
cmp-lg/9607036
|
Connected Text Recognition Using Layered HMMs and Token Passing
|
cmp-lg cs.CL
|
We present a novel approach to lexical error recovery on textual input. An
advanced robust tokenizer has been implemented that can not only correct
spelling mistakes, but also recover from segmentation errors. Apart from the
orthographic considerations taken, the tokenizer also makes use of linguistic
expectations extracted from a training corpus. The idea is to arrange Hidden
Markov Models (HMMs) in multiple layers where the HMMs in each layer are
responsible for different aspects of the processing of the input. We report on
experimental evaluations with alternative probabilistic language models to
guide the lexical error recovery process.
|
cmp-lg/9607037
|
Automatic Construction of Clean Broad-Coverage Translation Lexicons
|
cmp-lg cs.CL
|
Word-level translational equivalences can be extracted from parallel texts by
surprisingly simple statistical techniques. However, these techniques are
easily fooled by {\em indirect associations} --- pairs of unrelated words whose
statistical properties resemble those of mutual translations. Indirect
associations pollute the resulting translation lexicons, drastically reducing
their precision. This paper presents an iterative lexicon cleaning method. On
each iteration, most of the remaining incorrect lexicon entries are filtered
out, without significant degradation in recall. This lexicon cleaning technique
can produce translation lexicons with recall and precision both exceeding 90\%,
as well as dictionary-sized translation lexicons that are over 99\% correct.
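One way indirect associations can be filtered, sketched here under the assumption of a competitive-linking-style step (the paper's actual scoring and iteration details may differ), is to accept the highest-scoring remaining word pair first and discard competing pairs involving either word, so an indirect association loses to the direct association that explains it:

```python
# Hedged sketch of competitive linking over association scores. The scores
# and word pairs below are invented for illustration.

def competitive_link(scores):
    """scores: dict (src, tgt) -> association score. Returns accepted links."""
    links = {}
    used_src, used_tgt = set(), set()
    # consider candidate pairs from strongest to weakest association
    for (src, tgt), score in sorted(scores.items(),
                                    key=lambda kv: kv[1], reverse=True):
        if src not in used_src and tgt not in used_tgt:
            links[src] = tgt             # accept the direct association
            used_src.add(src)
            used_tgt.add(tgt)            # block weaker, indirect competitors
    return links

# "chien"-"cat" and "chat"-"dog" are indirect associations that the
# stronger direct pairs should eliminate.
scores = {("chien", "dog"): 0.9, ("chien", "cat"): 0.5,
          ("chat", "cat"): 0.8, ("chat", "dog"): 0.6}
```

Here `competitive_link(scores)` keeps only `chien → dog` and `chat → cat`, discarding the lower-scoring indirect pairs without any language-specific knowledge.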
|
cmp-lg/9608001
|
Storage of Natural Language Sentences in a Hopfield Network
|
cmp-lg cs.CL
|
This paper looks at how the Hopfield neural network can be used to store and
recall patterns constructed from natural language sentences. As a pattern
recognition and storage tool, the Hopfield neural network has received much
attention. This attention however has been mainly in the field of statistical
physics due to the model's simple abstraction of spin glass systems. A
discussion is made of the differences, shown as bias and correlation, between
natural language sentence patterns and the randomly generated ones used in
previous experiments. Results are given for numerical simulations which show
the auto-associative competence of the network when trained with natural
language patterns.
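A minimal Hebbian Hopfield auto-associator conveys the storage-and-recall mechanism; the sentence-to-pattern encoding discussed in the paper is not shown, and the bipolar patterns below are arbitrary illustrations:

```python
# Hopfield network sketch: outer-product (Hebbian) learning over +/-1
# patterns, with synchronous threshold updates for recall.

def train(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                     # no self-connections
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, steps=10):
    n = len(state)
    s = list(state)
    for _ in range(steps):                     # synchronous updates, for brevity
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]
W = train([p1, p2])
```

Flipping one bit of `p1` and running `recall` restores the stored pattern, which is the auto-associative competence the experiments above measure; correlated (biased) sentence patterns degrade this capacity relative to random ones.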
|
cmp-lg/9608002
|
Controlling Functional Uncertainty
|
cmp-lg cs.CL
|
There have been two different methods for checking the satisfiability of
feature descriptions that use the functional uncertainty device,
namely~\cite{Kaplan:88CO} and \cite{Backofen:94JSC}. Although only the one in
\cite{Backofen:94JSC} solves the satisfiability problem completely, both
methods have their merits. But it may happen that in one single description,
there are parts where the first method is more appropriate, and other parts
where the second should be applied. In this paper, we present a common
framework that allows one to combine both methods. This is done by presenting a
set of rules for simplifying feature descriptions. The different methods are
described as different controls on this rule set, where a control specifies in
which order the different rules must be applied.
|
cmp-lg/9608003
|
Stylistic Variation in an Information Retrieval Experiment
|
cmp-lg cs.CL
|
Texts exhibit considerable stylistic variation. This paper reports an
experiment where a corpus of documents (N = 75 000) is analyzed using various
simple stylistic metrics. A subset (n = 1000) of the corpus has been previously
assessed to be relevant for answering given information retrieval queries. The
experiment shows that this subset differs significantly from the rest of the
corpus in terms of the stylistic metrics studied.
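The abstract does not list the exact metrics used, but simple stylistic metrics of the kind described might look like the following; all three measures here are illustrative assumptions, not the study's actual feature set:

```python
# Hypothetical examples of simple stylistic metrics: average word length,
# average sentence length, and type-token ratio. Tokenization is deliberately
# crude, for illustration only.
import re

def stylistic_metrics(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "avg_sent_len": len(words) / len(sentences),
        "type_token_ratio": len({w.lower() for w in words}) / len(words),
    }
```

Metrics like these are cheap to compute over tens of thousands of documents, which is what makes a corpus-scale comparison between the relevant subset and the rest feasible.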
|
cmp-lg/9608004
|
Patterns of Language - A Population Model for Language Structure
|
cmp-lg cs.CL
|
A key problem in the description of language structure is to explain its
contradictory properties of specificity and generality, the contrasting poles
of formulaic prescription and generative productivity. I argue that this is
possible if we accept analogy and similarity as the basic mechanisms of
structural definition. As a specific example I discuss how it would be possible
to use analogy to define a generative model of syntactic structure.
|
cmp-lg/9608005
|
CLEARS - An Education and Research Tool for Computational Semantics
|
cmp-lg cs.CL
|
The CLEARS (Computational Linguistics Education and Research for Semantics)
tool provides a graphical interface allowing interactive construction of
semantic representations in a variety of different formalisms, and using
several construction methods. CLEARS was developed as part of the FraCaS
project which was designed to encourage convergence between different semantic
formalisms, such as Montague-Grammar, DRT, and Situation Semantics. The CLEARS
system is freely available on the WWW from
http://coli.uni-sb.de/~clears/clears.html
|
cmp-lg/9608006
|
Grapheme-to-Phoneme Conversion using Multiple Unbounded Overlapping
Chunks
|
cmp-lg cs.CL
|
We present in this paper an original extension of two data-driven algorithms
for the transcription of a sequence of graphemes into the corresponding
sequence of phonemes. In particular, our approach generalizes the algorithm
proposed by Dedina and Nusbaum (D&N) (1991), which was originally promoted as
a model of the human ability to pronounce unknown words by analogy to familiar
lexical items. We will show that D&N's algorithm performs
comparatively poorly when evaluated on a realistic test set, and that our
extension allows us to improve substantially the performance of the
analogy-based model. We will also suggest that both algorithms can be
reformulated in a much more general framework, which allows us to anticipate
other useful extensions. However, considering the inability to define in these
models important notions like lexical neighborhood, we conclude that both
approaches fail to offer a proper model of the analogical processes involved in
reading aloud.
|
cmp-lg/9608007
|
Centering in Italian
|
cmp-lg cs.CL
|
This paper explores the correlation between centering and different forms of
pronominal reference in Italian, in particular zeros and overt pronouns in
subject position. Such correlations, that I had proposed in earlier work
(COLING 90), are verified through the analysis of a corpus of naturally
occurring texts. In the process, I extend my previous analysis in several ways,
for example by taking possessives and subordinates into account. I also provide
a more detailed analysis of the "continue" transition: more specifically, I
show that pronouns are used in a markedly different way in a "continue"
preceded by another "continue" or by a "shift", and in a "continue" preceded by
a "retain".
|
cmp-lg/9608008
|
The discourse functions of Italian subjects: a centering approach
|
cmp-lg cs.CL
|
This paper examines the discourse functions that different types of subjects
perform in Italian within the centering framework. I build on my previous work
(COLING90) that accounted for the alternation of null and strong pronouns in
subject position. I extend my previous analysis in several ways: for example, I
refine the notion of {\sc continue} and discuss the centering functions of full
NPs.
|
cmp-lg/9608009
|
Centering theory and the Italian pronominal system
|
cmp-lg cs.CL
|
In this paper, I give an account of some phenomena of pronominalization in
Italian in terms of centering theory. After a general introduction to the
Italian pronominal system, I will review centering, and then show how the
original rules have to be extended or modified. Finally, I will show that
centering does not account for two phenomena: first, the functional role of an
utterance may override the predictions of centering; second, a null subject can
be used to refer to a whole discourse segment.
|
cmp-lg/9608010
|
Fishing for Exactness
|
cmp-lg cs.CL
|
Statistical methods for automatically identifying dependent word pairs (i.e.
dependent bigrams) in a corpus of natural language text have traditionally been
performed using asymptotic tests of significance. This paper suggests that
Fisher's exact test is a more appropriate test due to the skewed and sparse
data samples typical of this problem. Both theoretical and experimental
comparisons between Fisher's exact test and a variety of asymptotic tests (the
t-test, Pearson's chi-square test, and Likelihood-ratio chi-square test) are
presented. These comparisons show that Fisher's exact test is more reliable in
identifying dependent word pairs. The usefulness of Fisher's exact test extends
to other problems in statistical natural language processing as skewed and
sparse data appears to be the rule in natural language. The experiment
presented in this paper was performed using PROC FREQ of the SAS System.
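For concreteness, the right-tail form of the test can be computed exactly from the hypergeometric distribution. The sketch below is a minimal stdlib-only illustration, not the SAS PROC FREQ routine the experiment actually used; the function name and table layout are my own:

```python
from math import comb

def fisher_right_tail(a, b, c, d):
    """Right-tail Fisher exact p-value for the 2x2 contingency table

                w2      not w2
        w1       a        b
        not w1   c        d

    i.e. the probability, under independence of w1 and w2, of seeing
    the bigram (w1, w2) together at least `a` times given the marginal
    totals. Computed exactly from the hypergeometric distribution.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c          # marginal totals for w1 and w2
    denom = comb(n, col1)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(row1, k) * comb(n - row1, col1 - k) / denom
    return p
```

A small p-value (e.g. fisher_right_tail(8, 2, 1, 5) ≈ 0.024) flags the pair as positively associated. A two-sided version would also accumulate equally extreme tables from the other tail, which is where implementations differ in detail.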
|
cmp-lg/9608011
|
Punctuation in Quoted Speech
|
cmp-lg cs.CL
|
Quoted speech is often set off by punctuation marks, in particular quotation
marks. Thus, it might seem that the quotation marks would be extremely useful
in identifying these structures in texts. Unfortunately, the situation is not
quite so clear. In this work, I will argue that quotation marks are not
adequate for either identifying or constraining the syntax of quoted speech.
More useful information comes from the presence of a quoting verb, which is
either a verb of saying or a punctual verb, and the presence of other
punctuation marks, usually commas. Using a lexicalized grammar, we can license
most quoting clauses as text adjuncts. A distinction will be made not between
direct and indirect quoted speech, but rather between adjunct and non-adjunct
quoting clauses.
|
cmp-lg/9608012
|
Multilingual Text Analysis for Text-to-Speech Synthesis
|
cmp-lg cs.CL
|
We present a model of text analysis for text-to-speech (TTS) synthesis based
on (weighted) finite-state transducers, which serves as the text-analysis
module of the multilingual Bell Labs TTS system. The transducers are
constructed using a lexical toolkit that allows declarative descriptions of
lexicons, morphological rules, numeral-expansion rules, and phonological rules,
inter alia. To date, the model has been applied to eight languages: Spanish,
Italian, Romanian, French, German, Russian, Mandarin and Japanese.
|
cmp-lg/9608013
|
A Word Grammar of Turkish with Morphophonemic Rules
|
cmp-lg cs.CL
|
In this thesis, morphological description of Turkish is encoded using the
two-level model. This description is made up of the phonological component that
contains the two-level morphophonemic rules, and the lexicon component which
lists the lexical items and encodes the morphotactic constraints. The word
grammar is expressed in tabular form. It includes the verbal and the nominal
paradigm. Vowel and consonant harmony, epenthesis, reduplication, etc. are
described in detail and coded in two-level notation. Loan-word phonology is
modelled separately.
The implementation makes use of Lexc/Twolc from Xerox. Mechanisms to
integrate the morphological analyzer with the lexical and syntactic components
are discussed, and a simple graphical user interface is provided. Work is
underway to use this model in a classroom setting for teaching Turkish
morphology to non-native speakers.
|
cmp-lg/9608014
|
Classifiers in Japanese-to-English Machine Translation
|
cmp-lg cs.CL
|
This paper proposes an analysis of classifiers into four major types: UNIT,
METRIC, GROUP and SPECIES, based on properties of both Japanese and English.
The analysis makes possible a uniform and straightforward treatment of noun
phrases headed by classifiers in Japanese-to-English machine translation, and
has been implemented in the MT system ALT-J/E. Although the analysis is based
on the characteristics of, and differences between, Japanese and English, it is
shown to be also applicable to the unrelated language Thai.
|
cmp-lg/9608015
|
Morphological Productivity in the Lexicon
|
cmp-lg cs.CL
|
In this paper we outline a lexical organization for Turkish that makes use of
lexical rules for inflections, derivations, and lexical category changes to
control the proliferation of lexical entries. Lexical rules handle changes in
grammatical roles, enforce type constraints, and control the mapping of
subcategorization frames in valency-changing operations. A lexical inheritance
hierarchy facilitates the enforcement of type constraints. Semantic
compositions in inflections and derivations are constrained by the properties
of the terms and predicates.
The design has been tested as part of an HPSG grammar for Turkish. In terms of
performance, run-time execution of the rules seems to be a far better
alternative than pre-compilation. The latter causes exponential growth in the
lexicon due to intensive use of inflections and derivations in Turkish.
|
cmp-lg/9608016
|
A Sign-Based Phrase Structure Grammar for Turkish
|
cmp-lg cs.CL
|
This study analyses Turkish syntax from an informational point of view.
Sign-based linguistic representation and the principles of HPSG (Head-driven
Phrase
Structure Grammar) theory are adapted to Turkish. The basic informational
elements are nested and inherently sorted feature structures called signs.
In the implementation, the logic programming tool ALE (Attribute Logic
Engine), which is primarily designed for implementing HPSG grammars, is used. A
type and structure hierarchy of the Turkish language is designed. Syntactic
phenomena such as subcategorization, relative clauses, constituent order
variation, adjuncts, nominal predicates and complement-modifier relations in
Turkish are analyzed.
A parser is designed and implemented in ALE.
|
cmp-lg/9608017
|
Automatic Alignment of English-Chinese Bilingual Texts of CNS News
|
cmp-lg cs.CL
|
In this paper we address a method to align English-Chinese bilingual news
reports from China News Service, combining both lexical and statistical
approaches. Because of the sentential structure differences between English and
Chinese, matching at the sentence level as in many other works may result in
frequent matching of several sentences en masse. In view of this, the current
work also attempts to create shorter alignment pairs by permitting finer
matching between clauses from both texts if possible. The current method is
based on statistical correlation between sentence or clause lengths of the two
texts, and at the same time uses obvious anchors such as numbers and place
names appearing frequently in the news reports as lexical cues.
|
cmp-lg/9608018
|
Algorithms for Speech Recognition and Language Processing
|
cmp-lg cs.CL
|
Speech processing requires very efficient methods and algorithms.
Finite-state transducers have been shown recently both to constitute a very
useful abstract model and to lead to highly efficient time and space algorithms
in this field. We present these methods and algorithms and illustrate them in
the case of speech recognition. In addition to classical techniques, we
describe many new algorithms such as minimization, global and local on-the-fly
determinization of weighted automata, and efficient composition of transducers.
These methods are currently used in large vocabulary speech recognition
systems. We then show how the same formalism and algorithms can be used in
text-to-speech applications and related areas of language processing such as
morphology, syntax, and local grammars, in a very efficient way. The tutorial
is self-contained and requires no specific computational or linguistic
knowledge other than classical results.
|
cmp-lg/9608019
|
Using sentence connectors for evaluating MT output
|
cmp-lg cs.CL
|
This paper elaborates on the design of a machine translation evaluation
method that aims to determine to what degree the meaning of an original text is
preserved in translation, without looking into the grammatical correctness of
its constituent sentences. The basic idea is to have a human evaluator take the
sentences of the translated text and, for each of these sentences, determine
the semantic relationship that exists between it and the sentence immediately
preceding it. In order to minimise evaluator dependence, relations between
sentences are expressed in terms of the conjuncts that can connect them, rather
than through explicit categories. For an n-sentence text this results in a list
of n-1 sentence-to-sentence relationships, which we call the text's
connectivity profile. This can then be compared to the connectivity profile of
the original text, and the degree of correspondence between the two would be a
measure for the quality of the translation.
A set of "essential" conjuncts was extracted for English and Japanese, and a
computer interface was designed to support the task of inserting the most
fitting conjuncts between sentence pairs. With these in place, several sets of
experiments were performed.
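As an illustration, the comparison of two connectivity profiles might be scored as a simple agreement fraction. The paper does not fix a particular correspondence measure, so the exact-match scoring and the function name below are assumptions:

```python
def connectivity_profile_agreement(translated, original):
    """Fraction of the n-1 sentence-to-sentence relations (each expressed
    as the conjunct an evaluator chose to connect a sentence pair) on
    which the translation's connectivity profile matches the original's.

    NOTE: exact-match scoring is a simplifying assumption; a refined
    measure might treat semantically close conjuncts as partial matches.
    """
    if len(translated) != len(original):
        raise ValueError("profiles must cover the same number of sentence pairs")
    if not translated:
        return 1.0  # degenerate one-sentence text: nothing to compare
    matches = sum(t == o for t, o in zip(translated, original))
    return matches / len(translated)
```

Under this scoring, a 4-sentence translation whose evaluator chose ["however", "therefore", "and"] against an original profile of ["but", "therefore", "and"] would score 2/3.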
|
cmp-lg/9608020
|
Phonetic Ambiguity: Approaches, Touchstones, Pitfalls and New
Approaches
|
cmp-lg cs.CL
|
Phonetic ambiguity and confusability are bugbears for any form of bottom-up
or data-driven approach to language processing. The question of when an input
is ``close enough'' to a target word pervades the entire problem spaces of
speech recognition, synthesis, language acquisition, speech compression, and
language representation, but the variety of representations that have been
applied is demonstrably inadequate for at least some aspects of the problem.
This paper reviews this inadequacy by examining several touchstone models in
phonetic ambiguity and relating them to the problems they were designed to
solve. A good solution would be, among other things, efficient, accurate,
precise, and universally applicable to representation of words, ideally usable
as a ``phonetic distance'' metric for direct measurement of the ``distance''
between word or utterance pairs. None of the proposed models can provide a
complete solution to the problem; in general, there is no algorithmic theory of
phonetic distance. It is unclear whether this is a weakness of our
representational technology or a more fundamental difficulty with the problem
statement.
|
cmp-lg/9608021
|
Isolated-Word Confusion Metrics and the PGPfone Alphabet
|
cmp-lg cs.CL
|
Although the confusion of individual phonemes and features has been studied
and analyzed since Miller and Nicely (1955), there has been little work done
on extending this to a predictive theory of word-level confusions. The PGPfone
alphabet is a good touchstone problem for developing such word-level confusion
metrics. This paper presents some difficulties encountered, along with their
proposed solutions, in extending phonetic confusion results to a
theoretical whole-word phonetic distance metric. The proposed solutions have
been used, in conjunction with a set of selection filters, in a genetic
algorithm to automatically generate appropriate word lists for a radio
alphabet. This work illustrates some principles and pitfalls that should be
addressed in any numeric theory of isolated word perception.
|
cmp-lg/9609001
|
Corrections and Higher-Order Unification
|
cmp-lg cs.CL
|
We propose an analysis of corrections which models some of the requirements
corrections place on context. We then show that this analysis naturally extends
to the interaction of corrections with pronominal anaphora on the one hand, and
(in)definiteness on the other. The analysis builds on previous
unification--based approaches to NL semantics and relies on Higher--Order
Unification with Equivalences, a form of unification which takes into account
not only syntactic beta-eta-identity but also denotational equivalence.
|
cmp-lg/9609002
|
Inferring Acceptance and Rejection in Dialogue by Default Rules of
Inference
|
cmp-lg cs.CL
|
This paper discusses the processes by which conversants in a dialogue can
infer whether their assertions and proposals have been accepted or rejected by
their conversational partners. It expands on previous work by showing that
logical consistency is a necessary indicator of acceptance, but that it is not
sufficient, and that logical inconsistency is sufficient as an indicator of
rejection, but it is not necessary. I show how conversants can use information
structure and prosody as well as logical reasoning in distinguishing between
acceptances and logically consistent rejections, and relate this work to
previous work on implicature and default reasoning by introducing three new
classes of rejection: {\sc implicature rejections}, {\sc epistemic rejections}
and {\sc deliberation rejections}. I show how these rejections are inferred as
a result of default inferences, which, by other analyses, would have been
blocked by the context. In order to account for these facts, I propose a model
of the common ground that allows these default inferences to go through, and
show how the model, originally proposed to account for the various forms of
acceptance, can also model all types of rejection.
|
cmp-lg/9609003
|
Cue Phrase Classification Using Machine Learning
|
cmp-lg cs.CL
|
Cue phrases may be used in a discourse sense to explicitly signal discourse
structure, but also in a sentential sense to convey semantic rather than
structural information. Correctly classifying cue phrases as discourse or
sentential is critical in natural language processing systems that exploit
discourse structure, e.g., for performing tasks such as anaphora resolution and
plan recognition. This paper explores the use of machine learning for
classifying cue phrases as discourse or sentential. Two machine learning
programs (Cgrendel and C4.5) are used to induce classification models from sets
of pre-classified cue phrases and their features in text and speech. Machine
learning is shown to be an effective technique for not only automating the
generation of classification models, but also for improving upon previous
results. When compared to manually derived classification models already in the
literature, the learned models often perform with higher accuracy and contain
new linguistic insights into the data. In addition, the ability to
automatically construct classification models makes it easier to comparatively
analyze the utility of alternative feature representations of the data.
Finally, the ease of retraining makes the learning approach more scalable and
flexible than manual methods.
|
cmp-lg/9609004
|
A Principled Framework for Constructing Natural Language Interfaces To
Temporal Databases
|
cmp-lg cs.CL
|
Most existing natural language interfaces to databases (NLIDBs) were designed
to be used with ``snapshot'' database systems, which provide very limited
facilities for manipulating time-dependent data. Consequently, most NLIDBs also
provide very limited support for the notion of time. The database community is
becoming increasingly interested in _temporal_ database systems. These are
intended to store and manipulate in a principled manner information not only
about the present, but also about the past and future.
This thesis develops a principled framework for constructing English NLIDBs
for _temporal_ databases (NLITDBs), drawing on research in tense and aspect
theories, temporal logics, and temporal databases. I first explore temporal
linguistic phenomena that are likely to appear in English questions to NLITDBs.
Drawing on existing linguistic theories of time, I formulate an account for a
large number of these phenomena that is simple enough to be embodied in
practical NLITDBs. Exploiting ideas from temporal logics, I then define a
temporal meaning representation language, TOP, and I show how the HPSG grammar
theory can be modified to incorporate the tense and aspect account of this
thesis, and to map a wide range of English questions involving time to
appropriate TOP expressions. Finally, I present and prove the correctness of a
method to translate from TOP to TSQL2, TSQL2 being a temporal extension of the
SQL-92 database language. This way, I establish a sound route from English
questions involving time to a general-purpose temporal database language, which
can act as a principled framework for building NLITDBs. To demonstrate that
this framework is workable, I employ it to develop a prototype NLITDB,
implemented using ALE and Prolog.
|
cmp-lg/9609005
|
Centering in Japanese Discourse
|
cmp-lg cs.CL
|
In this paper we propose a computational treatment of the resolution of zero
pronouns in Japanese discourse, using an adaptation of the centering algorithm.
We are able to factor language-specific dependencies into one parameter of the
centering algorithm. Previous analyses have stipulated that a zero pronoun and
its cospecifier must share a grammatical function property such as {\sc
Subject} or {\sc NonSubject}. We show that this property-sharing stipulation is
unneeded. In addition we propose the notion of {\sc topic ambiguity} within the
centering framework, which predicts some ambiguities that occur in Japanese
discourse. This analysis has implications for the design of
language-independent discourse modules for Natural Language systems. The
centering algorithm has been implemented in an HPSG Natural Language system
with both English and Japanese grammars.
|
cmp-lg/9609006
|
Japanese Discourse and the Process of Centering
|
cmp-lg cs.CL
|
This paper has three aims: (1) to generalize a computational account of the
discourse process called {\sc centering}, (2) to apply this account to
discourse processing in Japanese so that it can be used in computational
systems for machine translation or language understanding, and (3) to provide
some insights on the effect of syntactic factors in Japanese on discourse
interpretation. We argue that while discourse interpretation is an inferential
process, syntactic cues constrain this process, and demonstrate this argument
with respect to the interpretation of {\sc zeros}, unexpressed arguments of the
verb, in Japanese. The syntactic cues in Japanese discourse that we investigate
are the morphological markers for grammatical {\sc topic}, the postposition
{\it wa}, as well as those for grammatical functions such as {\sc subject},
{\em ga}, {\sc object}, {\em o} and {\sc object2}, {\em ni}. In addition, we
investigate the role of speaker's {\sc empathy}, which is the viewpoint from
which an event is described. This is syntactically indicated through the use of
verbal compounding, i.e. the auxiliary use of verbs such as {\it kureta, kita}.
Our results are based on a survey of native speakers of their interpretation of
short discourses, consisting of minimal pairs, varied by one of the above
factors. We demonstrate that these syntactic cues do indeed affect the
interpretation of {\sc zeros}, but that having previously been the {\sc topic}
and being realized as a {\sc zero} also contributes to the salience of a
discourse entity. We propose a discourse rule of {\sc zero topic assignment},
and show that {\sc centering} provides constraints on when a {\sc zero} can be
interpreted as the {\sc zero topic}.
|
cmp-lg/9609007
|
Discourse Coherence and Shifting Centers in Japanese Texts
|
cmp-lg cs.CL
|
In languages such as Japanese, the use of {\it zeros}, unexpressed arguments
of the verb, in utterances that shift the topic involves a risk that the
meaning intended by the speaker may not be transparent to the hearer. However,
this potentially undesirable conversational strategy often occurs in the course
of naturally-occurring discourse. In this chapter, I report on an empirical
study of 250 utterances with {\it zeros} in 20 Japanese newspaper articles.
Each utterance is analyzed in terms of centering transitions and the form in
which centers are realized by referring expressions. I also examine lexical
subcategorization information, and tense and aspect in order to test the
hypothesis that the speaker expects the hearer to use this information in
determining global discourse structure. I explain the occurrence of {\it zeros}
in {\sc retain} and {\sc rough-shift} centering transitions, by claiming that a
{\it zero} can only be used in these cases when the shift of centers is
supported by contextual information such as lexical semantics, tense and
aspect, and agreement features. I then propose an algorithm by which centering
can incorporate these observations to integrate centering with global discourse
structure, and thus enhance its ability for non-local pronoun resolution.
|
cmp-lg/9609008
|
Designing Statistical Language Learners: Experiments on Noun Compounds
|
cmp-lg cs.CL
|
The goal of this thesis is to advance the exploration of the statistical
language learning design space. In pursuit of that goal, the thesis makes two
main theoretical contributions: (i) it identifies a new class of designs by
specifying an architecture for natural language analysis in which probabilities
are given to semantic forms rather than to more superficial linguistic
elements; and (ii) it explores the development of a mathematical theory to
predict the expected accuracy of statistical language learning systems in terms
of the volume of data used to train them.
The theoretical work is illustrated by applying statistical language learning
designs to the analysis of noun compounds. Both syntactic and semantic analysis
of noun compounds are attempted using the proposed architecture. Empirical
comparisons demonstrate that the proposed syntactic model is significantly
better than those previously suggested, approaching the performance of human
judges on the same task, and that the proposed semantic model, the first
statistical approach to this problem, exhibits significantly better accuracy
than the baseline strategy. These results suggest that the new class of designs
identified is a promising one. The experiments also serve to highlight the need
for a widely applicable theory of data requirements.
|
cmp-lg/9609009
|
A Geometric Approach to Mapping Bitext Correspondence
|
cmp-lg cs.CL
|
The first step in most corpus-based multilingual NLP work is to construct a
detailed map of the correspondence between a text and its translation. Several
automatic methods for this task have been proposed in recent years. Yet even
the best of these methods can err by several typeset pages. The Smooth
Injective Map Recognizer (SIMR) is a new bitext mapping algorithm. SIMR's
errors are smaller than those of the previous front-runner by more than a
factor of 4. Its robustness has enabled new commercial-quality applications.
The greedy nature of the algorithm makes it independent of memory resources.
Unlike other bitext mapping algorithms, SIMR allows crossing correspondences to
account for word order differences. Its output can be converted quickly and
easily into a sentence alignment. SIMR's output has been used to align over 200
megabytes of the Canadian Hansards for publication by the Linguistic Data
Consortium.
|
cmp-lg/9609010
|
Automatic Detection of Omissions in Translations
|
cmp-lg cs.CL
|
ADOMIT is an algorithm for Automatic Detection of OMIssions in Translations.
The algorithm relies solely on geometric analysis of bitext maps and uses no
linguistic information. This property allows it to deal equally well with
omissions that do not correspond to linguistic units, such as might result from
word-processing mishaps. ADOMIT has proven itself by discovering many errors in
a hand-constructed gold standard for evaluating bitext mapping algorithms.
Quantitative evaluation on simulated omissions showed that, even with today's
poor bitext mapping technology, ADOMIT is a valuable quality control tool for
translators and translation bureaus.
|
cmp-lg/9610001
|
Death and Lightness: Using a Demographic Model to Find Support Verbs
|
cmp-lg cs.CL
|
Some verbs have a particular kind of binary ambiguity: they can carry their
normal, full meaning, or they can be merely acting as a prop for the nominal
object. It has been suggested that there is a detectable pattern in the
relationship between a verb acting as a prop (a \term{support verb}) and the
noun it supports.
The task this paper undertakes is to develop a model which identifies the
support verb for a particular noun, and by extension, when nouns are
enumerated, a model which disambiguates a verb with respect to its support
status. The paper sets up a basic model as a standard for comparison; it then
proposes a more complex model, and gives some results to support the model's
validity, comparing it with other similar approaches.
|
cmp-lg/9610002
|
Gathering Statistics to Aspectually Classify Sentences with a Genetic
Algorithm
|
cmp-lg cs.CL
|
This paper presents a method for large corpus analysis to semantically
classify an entire clause. In particular, we use cooccurrence statistics among
similar clauses to determine the aspectual class of an input clause. The
process examines linguistic features of clauses that are relevant to aspectual
classification. A genetic algorithm determines what combinations of linguistic
features to use for this task.
|
cmp-lg/9610003
|
Stochastic Attribute-Value Grammars
|
cmp-lg cs.CL
|
Probabilistic analogues of regular and context-free grammars are well-known
in computational linguistics, and currently the subject of intensive research.
To date, however, no satisfactory probabilistic analogue of attribute-value
grammars has been proposed: previous attempts have failed to define a correct
parameter-estimation algorithm.
In the present paper, I define stochastic attribute-value grammars and give a
correct algorithm for estimating their parameters. The estimation algorithm is
adapted from Della Pietra, Della Pietra, and Lafferty (1995). To estimate model
parameters, it is necessary to compute the expectations of certain functions
under random fields. In the application discussed by Della Pietra, Della
Pietra, and Lafferty (representing English orthographic constraints), Gibbs
sampling can be used to estimate the needed expectations. The fact that
attribute-value grammars generate constrained languages makes Gibbs sampling
inapplicable, but I show how a variant of Gibbs sampling, the
Metropolis-Hastings algorithm, can be used instead.
|
cmp-lg/9610004
|
A Faster Structured-Tag Word-Classification Method
|
cmp-lg cs.CL
|
Several methods have been proposed for processing a corpus to induce a tagset
for the sub-language represented by the corpus. This paper examines a
structured-tag word classification method introduced by McMahon (1994) and
discussed further by McMahon & Smith (1995) in cmp-lg/9503011. Two major
variations, (1) non-random initial assignment of words to classes and (2)
moving multiple words in parallel, together provide robust non-random results
with a speed increase of 200% to 450%, at the cost of quality slightly below
the average quality of McMahon's method. Two further variations, (3) retaining
information from less-frequent words and (4) avoiding reclustering closed
classes, are proposed for further study.
Note: The speed increases quoted above are relative to my implementation of
my understanding of McMahon's algorithm; this takes time measured in hours and
days on a home PC. A revised version of the McMahon & Smith (1995) paper has
appeared (June 1996) in Computational Linguistics 22(2):217-247; this refers
to a time of "several weeks" to cluster 569 words on a Sparc-IPC.
|
cmp-lg/9610005
|
Learning string edit distance
|
cmp-lg cs.CL
|
In many applications, it is necessary to determine the similarity of two
strings. A widely-used notion of string similarity is the edit distance: the
minimum number of insertions, deletions, and substitutions required to
transform one string into the other. In this report, we provide a stochastic
model for string edit distance. Our stochastic model allows us to learn a
string edit distance function from a corpus of examples. We illustrate the
utility of our approach by applying it to the difficult problem of learning the
pronunciation of words in conversational speech. In this application, we learn
a string edit distance with one fourth the error rate of the untrained
Levenshtein distance. Our approach is applicable to any string classification
problem that may be solved using a similarity function against a database of
labeled prototypes.
Keywords: string edit distance, Levenshtein distance, stochastic
transduction, syntactic pattern recognition, prototype dictionary, spelling
correction, string correction, string similarity, string classification, speech
recognition, pronunciation modeling, Switchboard corpus.
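The untrained Levenshtein distance used as the baseline above counts the minimum number of insertions, deletions, and substitutions; a minimal dynamic-programming sketch (not the report's stochastic model, which learns per-operation costs):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to transform string a into string b."""
    # prev[j] holds the distance between the processed prefix of a and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion of ca
                curr[j - 1] + 1,           # insertion of cb
                prev[j - 1] + (ca != cb)   # substitution (free if equal)
            ))
        prev = curr
    return prev[-1]
```

For example, `levenshtein("kitten", "sitting")` is 3 (two substitutions plus one insertion).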
|
cmp-lg/9610006
|
A Morphology-System and Part-of-Speech Tagger for German
|
cmp-lg cs.CL
|
This paper presents an integrated tool for German morphology and statistical
part-of-speech tagging which aims at making some well established methods
widely available. The software is very user friendly, runs on any PC and can be
downloaded as a complete package (including lexicon and documentation) from the
World Wide Web. Compared with other tagging systems, the tagger produces
similar results.
|
cmp-lg/9611001
|
OT SIMPLE - a construction-kit approach to Optimality Theory
implementation
|
cmp-lg cs.CL
|
This paper details a simple approach to the implementation of Optimality
Theory (OT, Prince and Smolensky 1993) on a computer, in part reusing standard
system software. In a nutshell, OT's GENerating source is implemented as a
BinProlog program interpreting a context-free specification of a GEN structural
grammar according to a user-supplied input form. The resulting set of textually
flattened candidate tree representations is passed to the CONstraint stage.
Constraints are implemented by finite-state transducers specified as `sed'
stream editor scripts that typically map ill-formed portions of the candidate
to violation marks. EVALuation of candidates reduces to simple sorting: the
violation-mark-annotated output leaving CON is fed into `sort', which orders
candidates on the basis of the violation vector column of each line, thereby
bringing the optimal candidate to the top. This approach gave rise to OT
SIMPLE, the first freely available software tool for the OT framework to
provide generic facilities for both GEN and CONstraint definition. Its
practical applicability is demonstrated by modelling the OT analysis of
apparent subtractive pluralization in Upper Hessian presented in Golston and
Wiese (1996).
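The evaluation-by-sorting idea can be mirrored in a few lines of Python: sorting candidates lexicographically on their violation vectors (highest-ranked constraint first) brings the optimal candidate to the top, just as `sort` does on the violation-mark column. The candidates and constraint counts below are invented for illustration, not taken from OT SIMPLE:

```python
# Each candidate carries a violation vector: one violation count per
# ranked constraint, with the highest-ranked constraint first.
candidates = [
    ("kan.ti", (0, 2)),  # illustrative candidate, (C1, C2) violations
    ("kant",   (1, 0)),
    ("ka.ti",  (0, 1)),
]

def optimal(cands):
    # Lexicographic comparison of violation vectors implements EVAL:
    # a single violation of a higher-ranked constraint outweighs any
    # number of violations of lower-ranked ones.
    return min(cands, key=lambda c: c[1])[0]
```

Here `optimal(candidates)` returns `"ka.ti"`, since (0, 1) precedes (0, 2) and (1, 0) lexicographically.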
|
cmp-lg/9611002
|
Unsupervised Language Acquisition
|
cmp-lg cs.CL
|
This thesis presents a computational theory of unsupervised language
acquisition, precisely defining procedures for learning language from ordinary
spoken or written utterances, with no explicit help from a teacher. The theory
is based heavily on concepts borrowed from machine learning and statistical
estimation. In particular, learning takes place by fitting a stochastic,
generative model of language to the evidence. Much of the thesis is devoted to
explaining conditions that must hold for this general learning strategy to
arrive at linguistically desirable grammars. The thesis introduces a variety of
technical innovations, among them a common representation for evidence and
grammars, and a learning strategy that separates the ``content'' of linguistic
parameters from their representation. Algorithms based on it suffer from few of
the search problems that have plagued other computational approaches to
language acquisition.
The theory has been tested on problems of learning vocabularies and grammars
from unsegmented text and continuous speech, and mappings between sound and
representations of meaning. It performs extremely well on various objective
criteria, acquiring knowledge that causes it to assign almost exactly the same
structure to utterances as humans do. This work has application to data
compression, language modeling, speech recognition, machine translation,
information retrieval, and other tasks that rely on either structural or
stochastic descriptions of language.
|
cmp-lg/9611003
|
Data-Oriented Language Processing. An Overview
|
cmp-lg cs.CL
|
During the last few years, a new approach to language processing has started
to emerge, which has become known under various labels such as "data-oriented
parsing", "corpus-based interpretation", and "tree-bank grammar" (cf. van den
Berg et al. 1994; Bod 1992-96; Bod et al. 1996a/b; Bonnema 1996; Charniak
1996a/b; Goodman 1996; Kaplan 1996; Rajman 1995a/b; Scha 1990-92; Sekine &
Grishman 1995; Sima'an et al. 1994; Sima'an 1995-96; Tugwell 1995). This
approach, which we will call "data-oriented processing" or "DOP", embodies the
assumption that human language perception and production works with
representations of concrete past language experiences, rather than with
abstract linguistic rules. The models that instantiate this approach therefore
maintain large corpora of linguistic representations of previously occurring
utterances. When processing a new input utterance, analyses of this utterance
are constructed by combining fragments from the corpus; the
occurrence-frequencies of the fragments are used to estimate which analysis is
the most probable one.
In this paper we give an in-depth discussion of a data-oriented processing
model which employs a corpus of labelled phrase-structure trees. Then we review
some other models that instantiate the DOP approach. Many of these models also
employ labelled phrase-structure trees, but use different criteria for
extracting fragments from the corpus or employ different disambiguation
strategies (Bod 1996b; Charniak 1996a/b; Goodman 1996; Rajman 1995a/b; Sekine &
Grishman 1995; Sima'an 1995-96); other models use richer formalisms for their
corpus annotations (van den Berg et al. 1994; Bod et al., 1996a/b; Bonnema
1996; Kaplan 1996; Tugwell 1995).
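The frequency-based estimation sketched above can be illustrated with a toy fragment table, in the style of the DOP1 estimator: a fragment's probability is its relative frequency among corpus fragments with the same root label, and a derivation's probability is the product of its fragment probabilities. The fragments and counts here are invented for illustration:

```python
from collections import defaultdict

def fragment_probs(counts):
    """counts: {(root_label, fragment): occurrence count}.
    Returns each fragment's relative frequency among all corpus
    fragments sharing its root label."""
    totals = defaultdict(int)
    for (root, _), n in counts.items():
        totals[root] += n
    return {frag: n / totals[frag[0]] for frag, n in counts.items()}

def derivation_prob(derivation, probs):
    """Probability of a derivation = product of its fragment probabilities."""
    p = 1.0
    for frag in derivation:
        p *= probs[frag]
    return p
```

With counts such as `{("NP", "NP -> Det N"): 3, ("NP", "NP -> N"): 1, ("S", "S -> NP VP"): 2}`, the fragment "NP -> Det N" gets probability 0.75, and a derivation using it together with "S -> NP VP" gets 1.0 x 0.75 = 0.75.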
|
cmp-lg/9611004
|
Nonuniform Markov models
|
cmp-lg cs.CL
|
A statistical language model assigns probability to strings of arbitrary
length. Unfortunately, it is not possible to gather reliable statistics on
strings of arbitrary length from a finite corpus. Therefore, a statistical
language model must decide that each symbol in a string depends on at most a
small, finite number of other symbols in the string. In this report we propose
a new way to model conditional independence in Markov models. The central
feature of our nonuniform Markov model is that it makes predictions of varying
lengths using contexts of varying lengths. Experiments on the Wall Street
Journal reveal that the nonuniform model performs slightly better than the
classic interpolated Markov model. This result is somewhat remarkable because
both models contain identical numbers of parameters whose values are estimated
in a similar manner. The only difference between the two models is how they
combine the statistics of longer and shorter strings.
Keywords: nonuniform Markov model, interpolated Markov model, conditional
independence, statistical language model, discrete time series.
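For contrast, the classic interpolated Markov model against which the nonuniform model is compared blends estimates from fixed-length contexts; a minimal bigram/unigram sketch, with an invented interpolation weight:

```python
def interpolated_prob(symbol, context, unigram, bigram, lam=0.7):
    """P(symbol | context) as a linear interpolation of bigram and
    unigram estimates -- the fixed-context baseline.  The nonuniform
    model differs in letting both context and prediction lengths vary."""
    p_bi = bigram.get((context, symbol), 0.0)
    p_uni = unigram.get(symbol, 0.0)
    return lam * p_bi + (1 - lam) * p_uni
```

For instance, with unigram P(b) = 0.5 and bigram P(b | a) = 1.0, the interpolated estimate is 0.7 x 1.0 + 0.3 x 0.5 = 0.85.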
|
cmp-lg/9611005
|
Integrating HMM-Based Speech Recognition With Direct Manipulation In A
Multimodal Korean Natural Language Interface
|
cmp-lg cs.CL
|
This paper presents a HMM-based speech recognition engine and its integration
into direct manipulation interfaces for Korean document editor. Speech
recognition can reduce typical tedious and repetitive actions which are
inevitable in standard GUIs (graphic user interfaces). Our system consists of a
general speech recognition engine called ABrain (Auditory Brain) and a
speech-commandable document editor called SHE (Simple Hearing Editor). ABrain
is a phoneme-based speech recognition engine which achieves a discrete command
recognition rate of up to 97%. SHE is a EuroBridge widget-based document editor
that supports speech commands as well as direct manipulation interfaces.
|
cmp-lg/9611006
|
A Framework for Natural Language Interfaces to Temporal Databases
|
cmp-lg cs.CL
|
Over the past thirty years, there has been considerable progress in the
design of natural language interfaces to databases. Most of this work has
concerned snapshot databases, in which there are only limited facilities for
manipulating time-varying information. The database community is becoming
increasingly interested in temporal databases, databases with special support
for time-dependent entries. We have developed a framework for constructing
natural language interfaces to temporal databases, drawing on research on
temporal phenomena within logic and linguistics. The central part of our
framework is a logic-like formal language, called TOP, which can capture the
semantics of a wide range of English sentences. We have implemented an
HPSG-based sentence analyser that converts a large set of English queries
involving time into TOP formulae, and have formulated a provably correct
procedure for translating TOP expressions into queries in the TSQL2 temporal
database language. In this way we have established a sound route from English
to a general-purpose temporal database language.
|
cmp-lg/9612001
|
Comparative Experiments on Disambiguating Word Senses: An Illustration
of the Role of Bias in Machine Learning
|
cmp-lg cs.CL
|
This paper describes an experimental comparison of seven different learning
algorithms on the problem of learning to disambiguate the meaning of a word
from context. The algorithms tested include statistical, neural-network,
decision-tree, rule-based, and case-based classification techniques. The
specific problem tested involves disambiguating six senses of the word ``line''
using the words in the current and preceding sentence as context. The
statistical and neural-network methods perform the best on this particular
problem and we discuss a potential reason for this observed difference. We also
discuss the role of bias in machine learning and its importance in explaining
performance differences observed on specific problems.
|
cmp-lg/9612002
|
Specialized Language Models using Dialogue Predictions
|
cmp-lg cs.CL
|
This paper analyses language modeling in spoken dialogue systems for
accessing a database. The use of several language models obtained by exploiting
dialogue predictions gives better results than the use of a single model for
the whole dialogue interaction. For this reason several models have been
created, each one for a specific system question, such as the request or the
confirmation of a parameter.
The use of dialogue-dependent language models increases the performance both
at the recognition and at the understanding level, especially on answers to
system requests. Moreover, other methods of increasing performance, such as
automatic clustering of vocabulary words or the use of better acoustic models
during recognition, do not affect the improvements given by dialogue-dependent
language models.
The system used in our experiments is Dialogos, the Italian spoken dialogue
system used for accessing railway timetable information over the telephone. The
experiments were carried out on a large corpus of dialogues collected using
Dialogos.
|
cmp-lg/9612003
|
Metrics for Evaluating Dialogue Strategies in a Spoken Language System
|
cmp-lg cs.CL
|
In this paper, we describe a set of metrics for the evaluation of different
dialogue management strategies in an implemented real-time spoken language
system. The set of metrics we propose offers useful insights in evaluating how
particular choices in the dialogue management can affect the overall quality of
the man-machine dialogue. The evaluation makes use of established metrics: the
transaction success, the contextual appropriateness of system answers, the
calculation of normal and correction turns in a dialogue. We also define a new
metric, implicit recovery, which allows us to measure the ability of a dialogue
manager to deal with errors at different levels of analysis. We report
evaluation data from several experiments, and we compare two different
approaches to dialogue repair strategies using the set of metrics we argue for.
|
cmp-lg/9612004
|
Dialogos: a Robust System for Human-Machine Spoken Dialogue on the
Telephone
|
cmp-lg cs.CL
|
This paper presents Dialogos, a real-time system for human-machine spoken
dialogue on the telephone in task-oriented domains. The system has been tested
in a large trial with inexperienced users and it has proved robust enough to
allow spontaneous interactions both for users who get good recognition
performance and for those who get lower scores. The robust behavior of the
system has been achieved by combining the use of specific language models
during the recognition phase of analysis, the tolerance toward spontaneous
speech phenomena, the activity of a robust parser, and the use of
pragmatic-based dialogue knowledge. This integration of the different modules
allows the system to deal with partial or total breakdowns of the different
levels of
analysis. We report the field trial data of the system and the evaluation
results of the overall system and of the submodules.
|
cmp-lg/9612005
|
Maximum Entropy Modeling Toolkit
|
cmp-lg cs.CL
|
The Maximum Entropy Modeling Toolkit supports parameter estimation and
prediction for statistical language models in the maximum entropy framework.
The maximum entropy framework provides a constructive method for obtaining the
unique conditional distribution p*(y|x) that satisfies a set of linear
constraints and maximizes the conditional entropy H(p|f) with respect to the
empirical distribution f(x). The maximum entropy distribution p*(y|x) also has
a unique parametric representation in the class of exponential models, as
p*(y|x) = r(y|x)/Z(x), where the numerator r(y|x) = prod_i alpha_i^g_i(x,y) is
a product of exponential weights, with alpha_i = exp(lambda_i), and the
denominator Z(x) = sum_y r(y|x) is required to satisfy the axioms of
probability.
This manual explains how to build maximum entropy models for discrete domains
with the Maximum Entropy Modeling Toolkit (MEMT). First we summarize the steps
necessary to implement a language model using the toolkit. Next we discuss the
executables provided by the toolkit and explain the file formats required by
the toolkit. Finally, we review the maximum entropy framework and apply it to
the problem of statistical language modeling.
Keywords: statistical language models, maximum entropy, exponential models,
improved iterative scaling, Markov models, triggers.
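Once the weights alpha_i = exp(lambda_i) are known, the parametric form p*(y|x) = r(y|x)/Z(x) can be computed directly; a minimal sketch, with illustrative feature functions and weights (not the toolkit's own API):

```python
import math

def maxent_dist(x, ys, features, lambdas):
    """p*(y|x) = prod_i alpha_i^g_i(x,y) / Z(x), with alpha_i = exp(lambda_i).

    features: list of functions g_i(x, y); lambdas: their weights.
    """
    def r(y):
        # Product of exponential weights, computed in log space.
        return math.exp(sum(l * g(x, y) for g, l in zip(features, lambdas)))
    z = sum(r(y) for y in ys)          # Z(x) normalizes over the outcomes
    return {y: r(y) / z for y in ys}
```

With a single indicator feature g(x, y) = [y == x] and lambda = log 3 (so alpha = 3), the distribution over {a, b} given x = a is {a: 0.75, b: 0.25}, since r(a) = 3, r(b) = 1, and Z = 4.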
|
cmp-lg/9701001
|
Exploiting Context to Identify Lexical Atoms -- A Statistical View of
Linguistic Context
|
cmp-lg cs.CL
|
Interpretation of natural language is inherently context-sensitive. Most
words in natural language are ambiguous and their meanings are heavily
dependent on the linguistic context in which they are used. The study of
lexical semantics cannot be separated from the notion of context. This paper
takes a contextual approach to lexical semantics and studies the linguistic
context of lexical atoms, or "sticky" phrases such as "hot dog". Since such
lexical atoms may occur frequently in unrestricted natural language text,
recognizing them is crucial for understanding naturally-occurring text. The
paper proposes several heuristic approaches to exploiting the linguistic
context to identify lexical atoms from arbitrary natural language text.
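One common statistical heuristic for this kind of "stickiness" (a generic stand-in, not necessarily one of the paper's proposed heuristics) is pointwise mutual information between adjacent words:

```python
import math

def pmi(bigram_count, count_w1, count_w2, n_tokens):
    """log2 of P(w1 w2) / (P(w1) * P(w2)).  A high score means the
    pair co-occurs far more often than its parts' frequencies predict,
    suggesting it behaves as a single lexical atom (e.g. "hot dog")
    rather than a free combination."""
    p_joint = bigram_count / n_tokens
    p1 = count_w1 / n_tokens
    p2 = count_w2 / n_tokens
    return math.log2(p_joint / (p1 * p2))
```

For example, a pair seen 10 times in a 1000-token corpus, where each word occurs 20 times, scores log2(25), about 4.64 bits.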
|
cmp-lg/9701002
|
Hybrid language processing in the Spoken Language Translator
|
cmp-lg cs.CL
|
The paper presents an overview of the Spoken Language Translator (SLT)
system's hybrid language-processing architecture, focussing on the way in which
rule-based and statistical methods are combined to achieve robust and efficient
performance within a linguistically motivated framework. In general, we argue
that rules are desirable in order to encode domain-independent linguistic
constraints and achieve high-quality grammatical output, while corpus-derived
statistics are needed if systems are to be efficient and robust; further, that
hybrid architectures are superior from the point of view of portability to
architectures which only make use of one type of information. We address the
topics of ``multi-engine'' strategies for robust translation; robust bottom-up
parsing using pruning and grammar specialization; rational development of
linguistic rule-sets using balanced domain corpora; and efficient supervised
training by interactive disambiguation. All work described is fully implemented
in the current version of the SLT-2 system.
|
cmp-lg/9701003
|
Generating Information-Sharing Subdialogues in Expert-User Consultation
|
cmp-lg cs.CL
|
In expert-consultation dialogues, it is inevitable that an agent will at
times have insufficient information to determine whether to accept or reject a
proposal by the other agent. This results in the need for the agent to initiate
an information-sharing subdialogue to form a set of shared beliefs within which
the agents can effectively re-evaluate the proposal. This paper presents a
computational strategy for initiating such information-sharing subdialogues to
resolve the system's uncertainty regarding the acceptance of a user proposal.
Our model determines when information-sharing should be pursued, selects a
focus of information-sharing among multiple uncertain beliefs, chooses the most
effective information-sharing strategy, and utilizes the newly obtained
information to re-evaluate the user proposal. Furthermore, our model is capable
of handling embedded information-sharing subdialogues.
|
cmp-lg/9701004
|
An Efficient Implementation of the Head-Corner Parser
|
cmp-lg cs.CL
|
This paper describes an efficient and robust implementation of a
bi-directional, head-driven parser for constraint-based grammars. This parser
is developed for the OVIS system: a Dutch spoken dialogue system in which
information about public transport can be obtained by telephone.
After a review of the motivation for head-driven parsing strategies, and
head-corner parsing in particular, a non-deterministic version of the
head-corner parser is presented. A memoization technique is applied to obtain a
fast parser. A goal-weakening technique is introduced which greatly improves
average case efficiency, both in terms of speed and space requirements.
I argue in favor of such a memoization strategy with goal-weakening over
ordinary chart parsers because it can be applied selectively and therefore
enormously reduces the space requirements of the parser, while no practical
loss in time-efficiency is observed. On the
contrary, experiments are described in which head-corner and left-corner
parsers implemented with selective memoization and goal weakening outperform
`standard' chart parsers. The experiments include the grammar of the OVIS
system and the Alvey NL Tools grammar.
Head-corner parsing is a mix of bottom-up and top-down processing. Certain
approaches towards robust parsing require purely bottom-up processing.
Therefore, it seems that head-corner parsing is unsuitable for such robust
parsing techniques. However, it is shown how underspecification (which arises
very naturally in a logic programming environment) can be used in the
head-corner parser to allow such robust parsing techniques. A particular robust
parsing model is described which is implemented in OVIS.
|
cmp-lg/9702001
|
SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis
Using Artificial Neural Networks
|
cmp-lg cs.CL
|
In this paper, we describe a so-called screening approach for learning robust
processing of spontaneously spoken language. A screening approach is a flat
analysis which uses shallow sequences of category representations for analyzing
an utterance at various syntactic, semantic and dialog levels. Rather than
using a deeply structured symbolic analysis, we use a flat connectionist
analysis. This screening approach aims at supporting speech and language
processing by using (1) data-driven learning and (2) robustness of
connectionist networks. In order to test this approach, we have developed the
SCREEN system which is based on this new robust, learned and flat analysis.
In this paper, we focus on a detailed description of SCREEN's architecture,
the flat syntactic and semantic analysis, the interaction with a speech
recognizer, and a detailed evaluation analysis of the robustness under the
influence of noisy or incomplete input. The main result of this paper is that
flat representations allow more robust processing of spontaneous spoken
language than deeply structured representations. In particular, we show how the
fault-tolerance and learning capability of connectionist networks can support a
flat analysis for providing more robust spoken-language processing within an
overall hybrid symbolic/connectionist framework.
|
cmp-lg/9702002
|
Automatic Extraction of Subcategorization from Corpora
|
cmp-lg cs.CL
|
We describe a novel technique and implemented system for constructing a
subcategorization dictionary from textual corpora. Each dictionary entry
encodes the relative frequency of occurrence of a comprehensive set of
subcategorization classes for English. An initial experiment, on a sample of 14
verbs which exhibit multiple complementation patterns, demonstrates that the
technique achieves accuracy comparable to previous approaches, which are all
limited to a highly restricted set of subcategorization classes. We also
demonstrate that a subcategorization dictionary built with the system improves
the accuracy of a parser by an appreciable amount.
|
cmp-lg/9702003
|
A Robust Text Processing Technique Applied to Lexical Error Recovery
|
cmp-lg cs.CL
|
This thesis addresses automatic lexical error recovery and tokenization of
corrupt text input. We propose a technique that can automatically correct
misspellings, segmentation errors and real-word errors in a unified framework
that uses both a model of language production and a model of the typing
behavior, and which makes tokenization part of the recovery process.
The typing process is modeled as a noisy channel where Hidden Markov Models
are used to model the channel characteristics. Weak statistical language models
are used to predict what sentences are likely to be transmitted through the
channel. These components are held together in the Token Passing framework
which provides the desired tight coupling between orthographic pattern matching
and linguistic expectation.
The system, CTR (Connected Text Recognition), has been tested on two corpora
derived from two different applications, a natural language dialogue system and
a transcription typing scenario. Experiments show that CTR can automatically
correct a considerable portion of the errors in the test sets without
introducing too much noise. The segmentation error correction rate is virtually
faultless.
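The noisy-channel decomposition described above selects the source string maximizing P(source) x P(observed | source); a toy word-level sketch, where the channel model is a simple edit-distance penalty (a stand-in for the thesis's HMM channel, with an invented penalty parameter):

```python
def edit_distance(a, b):
    """Plain Levenshtein distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def correct(observed, lexicon_probs, channel_penalty=0.1):
    """argmax_w P(w) * P(observed | w), approximating the channel
    as P(observed | w) = channel_penalty ** edit_distance(observed, w)."""
    return max(lexicon_probs,
               key=lambda w: lexicon_probs[w]
                             * channel_penalty ** edit_distance(observed, w))
```

With a toy lexicon {"the": 0.6, "they": 0.3, "he": 0.1}, the corrupt input "thw" is corrected to "the": one edit away and the most probable word.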
|
cmp-lg/9702004
|
An Annotation Scheme for Free Word Order Languages
|
cmp-lg cs.CL
|
We describe an annotation scheme and a tool developed for creating
linguistically annotated corpora for non-configurational languages. Since the
requirements for such a formalism differ from those posited for configurational
languages, several features have been added, influencing the architecture of
the scheme. The resulting scheme reflects a stratificational notion of
language, and makes only minimal assumptions about the interrelation of the
particular representational strata.
|
cmp-lg/9702005
|
Software Infrastructure for Natural Language Processing
|
cmp-lg cs.CL
|
We classify and review current approaches to software infrastructure for
research, development and delivery of NLP systems. The task is motivated by a
discussion of current trends in the field of NLP and Language Engineering. We
describe a system called GATE (a General Architecture for Text Engineering)
that provides a software infrastructure on top of which heterogeneous NLP
processing modules may be evaluated and refined individually, or may be
combined into larger application systems. GATE aims to support both researchers
and developers working on component technologies (e.g. parsing, tagging,
morphological analysis) and those working on developing end-user applications
(e.g. information extraction, text summarisation, document generation, machine
translation, and second language learning). GATE promotes reuse of component
technology, permits specialisation and collaboration in large-scale projects,
and allows for the comparison and evaluation of alternative technologies. The
first release of GATE is now available - see
http://www.dcs.shef.ac.uk/research/groups/nlp/gate/
|
cmp-lg/9702006
|
Information Extraction - A User Guide
|
cmp-lg cs.CL
|
This technical memo describes Information Extraction from the point-of-view
of a potential user of the technology. No knowledge of language processing is
assumed. Information Extraction is a process which takes unseen texts as input
and produces fixed-format, unambiguous data as output. This data may be used
directly for display to users, or may be stored in a database or spreadsheet
for later analysis, or may be used for indexing purposes in Information
Retrieval applications. See also http://www.dcs.shef.ac.uk/~hamish
|
cmp-lg/9702007
|
Natural Language Dialogue Service for Appointment Scheduling Agents
|
cmp-lg cs.CL
|
Appointment scheduling is a problem faced daily by many individuals and
organizations. Cooperating agent systems have been developed to partially
automate this task. In order to extend the circle of participants as far as
possible we advocate the use of natural language transmitted by e-mail. We
describe COSMA, a fully implemented German language server for existing
appointment scheduling agent systems. COSMA can cope with multiple dialogues in
parallel, and accounts for differences in dialogue behaviour between human and
machine agents. NL coverage of the sublanguage is achieved through both
corpus-based grammar development and the use of message extraction techniques.
|
cmp-lg/9702008
|
Sequential Model Selection for Word Sense Disambiguation
|
cmp-lg cs.CL
|
Statistical models of word-sense disambiguation are often based on a small
number of contextual features or on a model that is assumed to characterize the
interactions among a set of features. Model selection is presented as an
alternative to these approaches, where a sequential search of possible models
is conducted in order to find the model that best characterizes the
interactions among features. This paper expands existing model selection
methodology and presents the first comparative study of model selection search
strategies and evaluation criteria when applied to the problem of building
probabilistic classifiers for word-sense disambiguation.
|
cmp-lg/9702009
|
Fast Statistical Parsing of Noun Phrases for Document Indexing
|
cmp-lg cs.CL
|
Information Retrieval (IR) is an important application area of Natural
Language Processing (NLP) where one encounters the genuine challenge of
processing large quantities of unrestricted natural language text. While much
effort has been made to apply NLP techniques to IR, very few NLP techniques
have been evaluated on a document collection larger than several megabytes.
Many NLP techniques are simply not efficient enough, and not robust enough, to
handle a large amount of text. This paper proposes a new probabilistic model
for noun phrase parsing, and reports on the application of such a parsing
technique to enhance document indexing. The effectiveness of using syntactic
phrases provided by the parser to supplement single words for indexing is
evaluated with a 250 megabyte document collection. The experimental results
show that supplementing single words with syntactic phrases for indexing
consistently and significantly improves retrieval performance.
|
cmp-lg/9702010
|
Selective Sampling of Effective Example Sentence Sets for Word Sense
Disambiguation
|
cmp-lg cs.CL
|
This paper proposes an efficient example selection method for example-based
word sense disambiguation systems. To construct a practical size database, a
considerable overhead for manual sense disambiguation is required. Our method
is characterized by the reliance on the notion of the training utility: the
degree to which each example is informative for future example selection when
used for the training of the system. The system progressively collects examples
by selecting those with greatest utility. The paper reports the effectiveness
of our method through experiments on about one thousand sentences. Compared to
experiments with random example selection, our method reduced the overhead
without degrading the performance of the system.
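The utility-driven selection loop can be sketched abstractly. Here utility is approximated by the current classifier's uncertainty about each example, an illustrative stand-in for the paper's training-utility measure:

```python
def select_examples(pool, utility, batch_size=1):
    """Greedily pick the unlabeled examples with the greatest training
    utility for manual sense annotation."""
    ranked = sorted(pool, key=utility, reverse=True)
    return ranked[:batch_size]

def margin_utility(sense_probs):
    """A smaller margin between the top two sense probabilities means
    the system is less sure, so labeling the example is more informative."""
    top = sorted(sense_probs, reverse=True)
    return 1.0 - (top[0] - top[1])
```

For example, given one sentence scored (0.9, 0.1) and another scored (0.55, 0.45), the second is selected first, since its margin utility (0.9) exceeds the first's (0.2).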
|
cmp-lg/9702011
|
How much has information technology contributed to linguistics?
|
cmp-lg cs.CL
|
Information technology should have much to offer linguistics, not only
through the opportunities offered by large-scale data analysis and the stimulus
to develop formal computational models, but through the chance to use language
in systems for automatic natural language processing. The paper discusses these
possibilities in detail, and then examines the actual work that has been done.
It is evident that this has so far been primarily research within a new field,
computational linguistics, which is largely motivated by the demands, and
interest, of practical processing systems, and that information technology has
had rather little influence on linguistics at large. There are different
reasons for this, and not all good ones: information technology deserves more
attention from linguists.
|
cmp-lg/9702012
|
Design and Implementation of a Computational Lexicon for Turkish
|
cmp-lg cs.CL
|
All natural language processing systems (such as parsers, generators,
taggers) need to have access to a lexicon about the words in the language. This
thesis presents a lexicon architecture for natural language processing in
Turkish. Given a query form consisting of a surface form and other features
acting as restrictions, the lexicon produces feature structures containing
morphosyntactic, syntactic, and semantic information for all possible
interpretations of the surface form satisfying those restrictions. The lexicon
is based on contemporary approaches like feature-based representation,
inheritance, and unification. It makes use of two information sources: a
morphological processor and a lexical database containing all the open and
closed-class words of Turkish. The system has been implemented in SICStus
Prolog as a standalone module for use in natural language processing
applications.
|