| text (string, 17 to 3.36M chars) | source (string, 3 to 333 chars) | __index_level_0__ (int64, 0 to 518k) |
|---|---|---|
The title of a document has two roles: to give a compact summary and to draw the reader into reading the document. Conventional title generation focuses on finding key expressions from the author's wording in the document to give a compact summary, and pays little attention to the reader's interest. To make the title play its second role properly, it is indispensable to clarify the content (``what to say'') and wording (``how to say'') of titles that are effective in attracting the target reader's interest. In this article, we first identify typical content and wording of titles aimed at general readers in a comparative study between titles of technical papers and headlines rewritten for newspapers. Next, we describe the results of a questionnaire survey on the effects of the content and wording of titles on the reader's interest. The survey of general and knowledgeable readers shows both common and different tendencies in interest.
|
Analysis of Titles and Readers For Title Generation Centered on the
Readers
| 1,200
|
We present a broad-coverage Japanese grammar written in the HPSG formalism with MRS semantics. The grammar is created for use in real-world applications, so that robustness and performance issues play an important role. It is connected to a POS tagging and word segmentation tool. This grammar is being developed in a multilingual context, requiring MRS structures that are easily comparable across languages.
|
Efficient Deep Processing of Japanese
| 1,201
|
Diff is a software program that detects differences between two data sets and is useful in natural language processing. This paper shows several examples of the application of diff, including the detection of differences between two data sets, extraction of rewriting rules, merging of two data sets, and optimal matching of two data sets. Since diff comes with any standard UNIX system, it is readily available and very easy to use. Our studies showed that diff is a practical tool for research in natural language processing.
|
Using the DIFF Command for Natural Language Processing
| 1,202
|
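A minimal, hedged sketch of the rewriting-rule extraction described in the entry above, using Python's difflib, which implements the same longest-common-subsequence idea as UNIX diff; the example sentences are invented.

```python
import difflib

original = "the cat sat on the mat".split()
rewritten = "the cat sat quietly on a mat".split()

# SequenceMatcher finds the longest matching blocks, as diff does;
# the non-equal opcodes are candidate rewriting rules.
matcher = difflib.SequenceMatcher(a=original, b=rewritten)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(tag, original[i1:i2], "->", rewritten[j1:j2])
```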
This article studies the problem of assessing the relevance of each rule of a reference resolution system. The reference solver described here stems from a formal model of reference and is integrated in a reference processing workbench. Evaluation of reference resolution is essential, as it enables differential evaluation of individual rules. Numerical values of these measures are given and discussed for simple selection rules and other processing rules; such measures are then studied for numerical parameters.
|
Evaluation of Coreference Rules on Complex Narrative Texts
| 1,203
|
Reference resolution on extended texts (several thousand references) cannot be evaluated manually. An evaluation algorithm has been proposed for the MUC tests, using equivalence classes for the coreference relation. However, we show here that this algorithm is too indulgent, yielding good scores even for poor resolution strategies. We elaborate on the same formalism to propose two new evaluation algorithms, first comparing them with the MUC algorithm and then giving results on a variety of examples. A third algorithm using only distributional comparison of equivalence classes is finally described; it assesses the relative importance of recall vs. precision errors.
|
Three New Methods for Evaluating Reference Resolution
| 1,204
|
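A sketch (not the authors' code) of the MUC link-based recall of Vilain et al. (1995), the equivalence-class algorithm the entry above examines; the invented example shows the leniency being criticized: a strategy that lumps everything into one chain still scores perfectly.

```python
def muc_recall(key_chains, response_chains):
    numerator = denominator = 0
    for s in key_chains:
        # p(S): the partition of key chain S induced by the response chains
        parts = [s & r for r in response_chains if s & r]
        covered = set().union(*parts) if parts else set()
        num_parts = len(parts) + len(s - covered)  # unmatched mentions are singletons
        numerator += len(s) - num_parts
        denominator += len(s) - 1
    return numerator / denominator

key = [{1, 2, 3, 4, 5}]                 # the true coreference chain
lumped = [{1, 2, 3, 4, 5, 6, 7}]        # a poor strategy lumping everything together
print(muc_recall(key, lumped))          # 1.0 -- perfect recall despite the errors
```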
Anaphora resolution is envisaged in this paper as part of the reference resolution process. A general open architecture is proposed, which can be particularized and configured in order to simulate some classic anaphora resolution methods. With the aim of improving pronoun resolution, the system takes advantage of elementary cues about characters of the text, which are represented through a particular data structure. In its most robust configuration, the system uses only a general lexicon, a local morpho-syntactic parser and a dictionary of synonyms. A short comparative corpus analysis shows that narrative texts are the most suitable for testing such a system.
|
Cooperation between Pronoun and Reference Resolution for Unrestricted
Texts
| 1,205
|
A model for reference use in communication is proposed, from a representationist point of view. Both the sender and the receiver of a message handle representations of their common environment, including mental representations of objects. Reference resolution by a computer is viewed as the construction of object representations using referring expressions from the discourse, whereas often only coreference links between such expressions are looked for. Differences between these two approaches are discussed. The model has been implemented with elementary rules, and tested on complex narrative texts (hundreds to thousands of referring expressions). The results support the mental representations paradigm.
|
Reference Resolution Beyond Coreference: a Conceptual Frame and its
Application
| 1,206
|
In some contexts, well-formed natural language cannot be expected as input to information or communication systems. In these contexts, the use of grammar-independent input (sequences of uninflected semantic units, e.g. language-independent icons) can be an answer to the users' needs. A semantic analysis can be performed, based on lexical semantic knowledge: it is equivalent to a dependency analysis with no syntactic or morphological clues. However, this requires that an intelligent system be able to interpret this input with reasonable accuracy and in reasonable time. Here we propose a method allowing a purely semantic-based analysis of sequences of semantic units. It uses an algorithm inspired by the idea of ``chart parsing'' known in Natural Language Processing, which stores intermediate parsing results in order to bring the calculation time down. Compared with declarative logic programming, where the calculation time, left to a Prolog engine, is hyperexponential, this method brings the calculation time down to polynomial time, where the order depends on the valency of the predicates.
|
A Chart-Parsing Algorithm for Efficient Semantic Analysis
| 1,207
|
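A toy sketch of the chart idea above: analyses for each span of the semantic-unit sequence are computed once and memoized, keeping the combinatorics polynomial in practice. The two-slot lexicon and the argument-attachment rule are invented for illustration.

```python
from functools import lru_cache

LEXICON = {"drink": 2, "person": 0, "water": 0}   # invented: predicate -> valency
units = ["person", "drink", "water"]              # grammar-independent input

@lru_cache(maxsize=None)
def parses(i, j):
    """All analyses covering units[i:j]; memoization stores chart cells."""
    if j - i == 1:
        u = units[i]
        return {(u, LEXICON[u])}                  # (analysis, unfilled slots)
    found = set()
    for k in range(i + 1, j):                     # split point, as in chart parsing
        for lh, lv in parses(i, k):
            for rh, rv in parses(k, j):
                if lv > 0:                        # left head takes right argument
                    found.add((f"{lh}({rh})", lv - 1))
                if rv > 0:                        # right head takes left argument
                    found.add((f"{rh}({lh})", rv - 1))
    return found

print(parses(0, len(units)))   # includes ('drink(water)(person)', 0)
```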
In this paper, we discuss the utility and deficiencies of existing ontology resources for a number of language processing applications. We describe a technique for increasing the semantic type coverage of a specific ontology, the National Library of Medicine's UMLS, with robust finite-state methods used in conjunction with large-scale corpus analytics of the domain corpus. We call this technique "semantic rerendering" of the ontology. This research has been done in the context of Medstract, a joint Brandeis-Tufts effort aimed at developing tools for analyzing biomedical language (i.e., Medline), as well as creating targeted databases of bio-entities, biological relations, and pathway data for biological researchers. Motivating the current research is the need to have robust and reliable semantic typing of syntactic elements in the Medline corpus, in order to improve the overall performance of the information extraction applications mentioned above.
|
Rerendering Semantic Ontologies: Automatic Extensions to UMLS through
Corpus Analytics
| 1,208
|
We describe the CoNLL-2002 shared task: language-independent named entity recognition. We give background information on the data sets and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.
|
Introduction to the CoNLL-2002 Shared Task: Language-Independent Named
Entity Recognition
| 1,209
|
We present new results on the relation between purely symbolic context-free parsing strategies and their probabilistic counterparts. Such parsing strategies are seen as constructions of push-down devices from grammars. We show that preservation of probability distribution is possible under two conditions, viz. the correct-prefix property and the property of strong predictiveness. These results generalize existing results in the literature that were obtained by considering parsing strategies in isolation. From our general results we also derive negative results on so-called generalized LR parsing.
|
Probabilistic Parsing Strategies
| 1,210
|
Robert French has argued that a disembodied computer is incapable of passing a Turing Test that includes subcognitive questions. Subcognitive questions are designed to probe the network of cultural and perceptual associations that humans naturally develop as we live, embodied and embedded in the world. In this paper, I show how it is possible for a disembodied computer to answer subcognitive questions appropriately, contrary to French's claim. My approach to answering subcognitive questions is to use statistical information extracted from a very large collection of text. In particular, I show how it is possible to answer a sample of subcognitive questions taken from French, by issuing queries to a search engine that indexes about 350 million Web pages. This simple algorithm may shed light on the nature of human (sub-) cognition, but the scope of this paper is limited to demonstrating that French is mistaken: a disembodied computer can answer subcognitive questions.
|
Answering Subcognitive Turing Test Questions: A Reply to French
| 1,211
|
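A hedged sketch of the statistical approach in the entry above: rate a subcognitive association by comparing Web co-occurrence statistics. The counts below are invented stand-ins for a search engine's result totals; no real API or actual figures are implied.

```python
COUNTS = {  # hypothetical "number of pages matching query" values
    '"banana split"': 120_000,
    '"medicine"': 9_000_000,
    '"dessert"': 7_000_000,
    '"banana split" "medicine"': 900,
    '"banana split" "dessert"': 45_000,
}

def association(item: str, quality: str) -> float:
    """PMI-style association score from page counts."""
    joint = COUNTS[f'{item} {quality}']
    return joint / (COUNTS[item] * COUNTS[quality])

# Rating "banana split" as a dessert vs. as a medicine, in the spirit of
# French's subcognitive questions:
print(association('"banana split"', '"dessert"') >
      association('"banana split"', '"medicine"'))   # True
```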
In this paper we describe an algorithm for aligning sentences with their translations in a bilingual corpus using lexical information of the languages. Existing efficient algorithms ignore word identities and consider only the sentence lengths (Brown et al., 1991; Gale and Church, 1993). For a sentence in the source language text, the proposed algorithm picks the most likely translation from the target language text using lexical information and certain heuristics. It does not do statistical analysis using sentence lengths. The algorithm is language independent. It also aids in detecting addition and deletion of text in translations. The algorithm gives results comparable to the existing algorithms in most cases, while it does better in cases where statistical algorithms do not give good results.
|
An Algorithm for Aligning Sentences in Bilingual Corpora Using Lexical
Information
| 1,212
|
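A simplified sketch of the lexical alignment idea above: each source sentence is matched to the target sentence with the best bilingual-dictionary overlap inside a sliding window, and sentences with no match hint at added or deleted text. The dictionary and scoring are invented simplifications.

```python
def align(src_sents, tgt_sents, dictionary, window=3):
    """dictionary: source word -> set of possible target translations."""
    pairs, last = [], 0
    for i, src in enumerate(src_sents):
        best, best_score = None, 0.0
        lo, hi = max(0, last - window), min(len(tgt_sents), last + window + 1)
        for j in range(lo, hi):
            tgt_words = set(tgt_sents[j].split())
            hits = sum(1 for w in src.split()
                       if dictionary.get(w, set()) & tgt_words)
            score = hits / max(len(src.split()), 1)
            if score > best_score:
                best, best_score = j, score
        if best is not None:      # no match suggests added/deleted text
            pairs.append((i, best))
            last = best + 1
    return pairs

src = ["el gato duerme", "hola"]
tgt = ["the cat sleeps"]
dic = {"gato": {"cat"}, "duerme": {"sleeps"}, "hola": {"hello"}}
print(align(src, tgt, dic))   # [(0, 0)]; "hola" stays unaligned
```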
Compound words are a challenge for NLP applications such as machine translation (MT). We introduce methods to learn splitting rules from monolingual and parallel corpora. We evaluate them against a gold standard and measure their impact on the performance of statistical MT systems. Results show an accuracy of 99.1% and performance gains for MT of 0.039 BLEU on a German-English noun phrase translation task.
|
Empirical Methods for Compound Splitting
| 1,213
|
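A sketch, under invented frequencies, of the frequency-based splitting the entry above evaluates: prefer the split whose parts maximize the geometric mean of corpus frequencies. Filler letters and the other refinements of the learned rules are omitted here.

```python
from math import prod

FREQ = {"apfel": 500, "baum": 800, "apfelbaum": 6}   # invented corpus counts

def best_split(word, min_len=3):
    candidates = [[word]]                            # keeping the word whole
    for i in range(min_len, len(word) - min_len + 1):
        left, right = word[:i], word[i:]
        if FREQ.get(left, 0) > 0 and FREQ.get(right, 0) > 0:
            candidates.append([left, right])
    # geometric mean of part frequencies; unseen whole words default to 1
    return max(candidates,
               key=lambda ps: prod(FREQ.get(p, 1) for p in ps) ** (1 / len(ps)))

print(best_split("apfelbaum"))   # ['apfel', 'baum'] beats the unsplit word
```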
We address the text-to-text generation problem of sentence-level paraphrasing -- a phenomenon distinct from and more difficult than word- or phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.
|
Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence
Alignment
| 1,214
|
We show how to construct a channel-independent representation of speech that has propagated through a noisy reverberant channel. This is done by blindly rescaling the cepstral time series by a non-linear function, with the form of this scale function being determined by previously encountered cepstra from that channel. The rescaled form of the time series is an invariant property of it in the following sense: it is unaffected if the time series is transformed by any time-independent invertible distortion. Because a linear channel with stationary noise and impulse response transforms cepstra in this way, the new technique can be used to remove the channel dependence of a cepstral time series. In experiments, the method achieved greater channel-independence than cepstral mean normalization, and it was comparable to the combination of cepstral mean normalization and spectral subtraction, despite the fact that no measurements of channel noise or reverberations were required (unlike spectral subtraction).
|
Blind Normalization of Speech From Different Channels
| 1,215
|
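For comparison with the blind method above, here is a minimal sketch of cepstral mean normalization (CMN), the baseline the entry mentions: a stationary linear channel adds a near-constant offset to each cepstral coefficient, which subtracting the per-channel mean removes. The blind method generalizes this to nonlinear invertible distortions.

```python
import numpy as np

def cmn(cepstra):
    """cepstra: (num_frames, num_coeffs) array from one channel."""
    # Remove the per-coefficient mean over time for this channel.
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```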
We report on the current state of development of a document suite and its applications. This collection of tools for the flexible and robust processing of documents in German is based on the use of XML as a unifying formalism for encoding input and output data as well as process information. It is organized in modules with limited responsibilities that can easily be combined into pipelines to solve complex tasks. Strong emphasis is placed on a number of techniques to deal with lexical and conceptual gaps that are typical when starting a new application.
|
An XML based Document Suite
| 1,216
|
It is very costly to build up lexical resources and domain ontologies. Especially when confronted with a new application domain, lexical gaps and poor coverage of domain concepts are a problem for the successful exploitation of natural language document analysis systems that need and exploit such knowledge sources. In this paper we report on ongoing experiments with `bootstrapping techniques' for lexicon and ontology creation.
|
Exploiting Sublanguage and Domain Characteristics in a Bootstrapping
Approach to Lexicon and Ontology Creation
| 1,217
|
In this paper we describe an approach for the analysis of documents in German and English with a shared pool of resources. For the analysis of German documents we use a document suite, which supports the user in tasks like information retrieval and information extraction. The core of the document suite is based on our tool XDOC. We now want to exploit these methods for the analysis of English documents as well. For this aim we need a multilingual representation format for the resources. These resources must be transformed into a unified format, in which we can store additional information about linguistic characteristics of the language, depending on the analyzed documents. In this paper we describe our approach to such an exchange model for multilingual resources based on XML.
|
An Approach for Resource Sharing in Multilingual NLP
| 1,218
|
Factorization of statistical language models is the task of resolving the most discriminative model into factored models and determining a new model by combining them so as to provide a better estimate. Most previous work mainly focuses on factorizing models of sequential events, each of which allows only one factorization manner. To enable parallel factorization, which allows a model event to be resolved in more than one way at the same time, we propose a general framework: we adopt a backing-off lattice to reflect parallel factorizations and to define the paths along which a model is resolved into factored models, we use a mixture model to combine parallel paths in the lattice, and we generalize Katz's backing-off method to integrate all the mixture models obtained by traversing the entire lattice. Based on this framework, we formulate two types of model factorization that are used in natural language modeling.
|
Factorization of Language Models through Backing-Off Lattices
| 1,219
|
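Background sketch for the entry above: a Katz-style backed-off bigram estimate, the method the paper generalizes to lattices of parallel factorizations. For clarity this uses absolute discounting rather than Good-Turing discounts; counts are invented.

```python
def backoff_bigram(bigrams, unigrams, total, d=0.5):
    vocab = list(unigrams)

    def p_uni(w):
        return unigrams[w] / total

    def p(w, prev):
        seen = {v: c for (u, v), c in bigrams.items() if u == prev}
        h = sum(seen.values())                     # tokens observed after prev
        if w in seen:                              # discounted ML estimate
            return (seen[w] - d) / h
        reserved = d * len(seen) / h               # mass freed by discounting
        norm = sum(p_uni(v) for v in vocab if v not in seen)
        return reserved * p_uni(w) / norm          # back off to unigrams
    return p

unigrams = {"the": 4, "cat": 2, "sat": 2}
bigrams = {("the", "cat"): 2, ("cat", "sat"): 2}
p = backoff_bigram(bigrams, unigrams, total=8)
print(p("cat", "the"), p("sat", "the"))   # seen vs. backed-off probability
```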
We describe the CoNLL-2003 shared task: language-independent named entity recognition. We give background information on the data sets (English and German) and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.
|
Introduction to the CoNLL-2003 Shared Task: Language-Independent Named
Entity Recognition
| 1,220
|
This paper presents a machine learning approach to discourse planning in natural language generation. More specifically, we address the problem of learning the most natural ordering of facts in discourse plans for a specific domain. We discuss our methodology and how it was instantiated using two different machine learning algorithms. A quantitative evaluation performed in the domain of museum exhibit descriptions indicates that our approach performs significantly better than manually constructed ordering rules. Being retrainable, the resulting planners can be ported easily to other similar domains, without requiring language technology expertise.
|
Learning to Order Facts for Discourse Planning in Natural Language
Generation
| 1,221
|
k is the most important parameter in a text categorization system based on the k-Nearest Neighbor algorithm (kNN). In the classification process, the k nearest documents to the test one in the training set are determined first. Then, the prediction can be made according to the category distribution among these k nearest neighbors. Generally speaking, the class distribution in the training set is uneven. Some classes may have more samples than others. Therefore, the system performance is very sensitive to the choice of the parameter k, and it is very likely that a fixed k value will result in a bias toward large categories. To deal with these problems, we propose an improved kNN algorithm, which uses different numbers of nearest neighbors for different categories, rather than a fixed number across all categories. More samples (nearest neighbors) will be used for deciding whether a test document should be classified to a category that has more samples in the training set. Preliminary experiments on Chinese text categorization show that our method is less sensitive to the parameter k than the traditional one, and that it can properly classify documents belonging to smaller classes with a large k. The method is promising for cases where estimating the parameter k via cross-validation is not feasible.
|
An Improved k-Nearest Neighbor Algorithm for Text Categorization
| 1,222
|
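A sketch of the category-dependent k proposed above: a category with more training samples is judged with more neighbors, instead of one global k. The proportional rule and all data are illustrative simplifications.

```python
from collections import Counter

def classify(neighbor_labels, category_sizes, base_k=30):
    """neighbor_labels: labels of the training neighbors, nearest first."""
    largest = max(category_sizes.values())
    scores = {}
    for cat, size in category_sizes.items():
        k_c = max(1, round(base_k * size / largest))   # per-category k
        votes = Counter(neighbor_labels[:k_c])
        scores[cat] = votes[cat] / k_c                 # support within its own window
    return max(scores, key=scores.get)

sizes = {"politics": 5000, "sports": 4000, "philately": 200}
neighbors = ["philately", "philately", "politics", "sports"] + ["politics"] * 26
print(classify(neighbors, sizes))   # the small class wins despite a large base_k
```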
As interaction between autonomous agents, communication can be analyzed in game-theoretic terms. The meaning game is proposed to formalize the core of intended communication, in which the sender sends a message and the receiver attempts to infer the meaning intended by the sender. Basic issues involved in the game of natural language communication are discussed, such as salience, grammaticality, common sense, and common belief, together with some demonstration of the feasibility of a game-theoretic account of language.
|
Issues in Communication Game
| 1,223
|
The standard tabulation techniques for logic programming presuppose a fixed order of computation. Some data-driven control should be introduced in order to deal with diverse contexts. The present paper describes a data-driven method of constraint transformation with a sort of compilation that subsumes accessibility checking and last-call optimization, which characterize standard natural-language parsing techniques, semantic-head-driven generation, etc.
|
Parsing and Generation with Tabulation and Compilation
| 1,224
|
MPEG-7 (Moving Picture Experts Group Phase 7) is an XML-based international standard for the semantic description of multimedia content. This document discusses the Linguistic DS and related tools. The Linguistic DS is a tool, based on the GDA tag set (http://i-content.org/GDA/tagset.html), for semantic annotation of linguistic data in or associated with multimedia content. The current document reflects `Study of FPDAM - MPEG-7 MDS Extensions' issued in March 2003, and not most of MPEG-7 MDS, for which readers are referred to the first version of the MPEG-7 MDS document available from ISO (http://www.iso.org). Without that reference, however, this document should be mostly intelligible to those who are familiar with XML and linguistic theories. Comments are welcome and will be considered in the standardization process.
|
The Linguistic DS: Linguistic Description in MPEG-7
| 1,225
|
The world is passing through a major revolution called the information revolution, in which information and knowledge are becoming available to people in unprecedented amounts wherever and whenever they need them. Those societies which fail to take advantage of the new technology will be left behind, just as in the industrial revolution. The information revolution is based on two major technologies: computers and communication. These technologies have to be delivered in a COST EFFECTIVE manner, and in LANGUAGES accessible to people. One way to deliver them in a cost-effective manner is to make suitable technology choices, and to allow people to access them through shared resources. This could be done through street corner shops (for computer usage, e-mail etc.), schools, community centres and local library centres.
|
Collaborative Creation of Digital Content in Indian Languages
| 1,226
|
The world is passing through a major revolution called the information revolution, in which information and knowledge are becoming available to people in unprecedented amounts wherever and whenever they need them. Those societies which fail to take advantage of the new technology will be left behind, just as in the industrial revolution. The information revolution is based on two major technologies: computers and communication. These technologies have to be delivered in a COST EFFECTIVE manner, and in LANGUAGES accessible to people. One way to deliver them in a cost-effective manner is to make suitable technology choices (discussed later), and to allow people to access them through shared resources. This could be done through street corner shops (for computer usage, e-mail etc.), schools, community centres and local library centres.
|
Information Revolution
| 1,227
|
The anusaaraka system makes text in one Indian language accessible in another Indian language. In the anusaaraka approach, the load is so divided between man and computer that the language load is taken by the machine, and the interpretation of the text is left to the man. The machine presents an image of the source text in a language close to the target language. In the image, some constructions of the source language (which do not have equivalents) spill over to the output. Some special notation is also devised. The user, after some training, learns to read and understand the output. Because the Indian languages are close, the learning time for the output language is short, and is expected to be around 2 weeks. The output can also be post-edited by a trained user to make it grammatically correct in the target language. Style can also be changed, if necessary. Thus, in this scenario, it can function as a human-assisted translation system. Currently, anusaarakas are being built from Telugu, Kannada, Marathi, Bengali and Punjabi to Hindi. They can be built for all Indian languages in the near future. Everybody must pitch in to build such systems connecting all Indian languages, using the free software model.
|
Anusaaraka: Overcoming the Language Barrier in India
| 1,228
|
The anusaaraka system (a kind of machine translation system) makes text in one Indian language accessible through another Indian language. The machine presents an image of the source text in a language close to the target language. In the image, some constructions of the source language (which do not have equivalents in the target language) spill over to the output. Some special notation is also devised. Anusaarakas have been built for five language pairs: Telugu, Kannada, Marathi, Bengali and Punjabi to Hindi. They are available for use through e-mail servers. Anusaarakas follow the principle of substitutability and reversibility of the strings produced. This implies preservation of information while going from a source language to a target language. For narrow subject areas, specialized modules can be built by putting subject domain knowledge into the system, which produce good-quality grammatical output. However, it should be remembered that such modules will work only in narrow areas, and will sometimes go wrong. In such a situation, anusaaraka output will still remain useful.
|
Language Access: An Information Based Approach
| 1,229
|
The paper reports on efforts to create lexical resources pertaining to Indian languages, using the collaborative model. The lexical resources being developed are: (1) transfer lexicon and grammar from English to several Indian languages; (2) dependency tree bank of annotated corpora for several Indian languages, with the dependency trees based on the Paninian model; and (3) bilingual dictionary of 'core meanings'.
|
LERIL : Collaborative Effort for Creating Lexical Resources
| 1,230
|
This paper describes a test collection (benchmark data) for retrieval systems driven by spoken queries. This collection was produced in the subtask of the NTCIR-3 Web retrieval task, which was performed in a TREC-style evaluation workshop. The search topics and document collection for the Web retrieval task were used to produce spoken queries and language models for speech recognition, respectively. We used this collection to evaluate the performance of our retrieval system. Experimental results showed that (a) the use of target documents for language modeling and (b) enhancement of the vocabulary size in speech recognition were effective in improving the system performance.
|
Building a Test Collection for Speech-Driven Web Retrieval
| 1,231
|
We propose a cross-media lecture-on-demand system, in which users can selectively view specific segments of lecture videos by submitting text queries. Users can easily formulate queries by using the textbook associated with a target lecture, even if they cannot come up with effective keywords. Our system extracts the audio track from a target lecture video, generates a transcription by large vocabulary continuous speech recognition, and produces a text index. Experimental results showed that by adapting speech recognition to the topic of the lecture, the recognition accuracy increased and the retrieval accuracy was comparable with that obtained by human transcription.
|
A Cross-media Retrieval System for Lecture Videos
| 1,232
|
Spoken language can be used to provide insights into organisational processes; unfortunately, the transcription and coding stages are very time-consuming and expensive. The concept of partial transcription and coding is proposed, in which spoken language is indexed prior to any subsequent processing. The functional linguistic theory of texture is used to describe the effects of partial transcription on observational records. The standard used to encode transcript context and metadata is called CHAT, but a previous XML schema developed to implement it contains design assumptions that make it difficult to support, for example, partial transcription. This paper describes a more effective XML schema that overcomes many of these problems and is intended for use in applications that support the rapid development of spoken language deliverables.
|
Effective XML Representation for Spoken Language in Organisations
| 1,233
|
Special technologies need to be used to take advantage of, and overcome the challenges associated with, acquiring, transforming, storing, processing, and distributing spoken language resources in organisations. This paper introduces an application architecture consisting of tools and supporting utilities for indexing and transcription, and describes how these tools, together with downstream processing and distribution systems, can be integrated into a workflow. Two sample applications for this architecture are outlined: the analysis of decision-making processes in organisations and the deployment of systems development methods by designers in the field.
|
Application Architecture for Spoken Language Resources in Organisational
Settings
| 1,234
|
Frequency counts are a measure of how much use a language makes of a linguistic unit, such as a phoneme or word. However, what is often important is not the units themselves, but the contrasts between them. A measure is therefore needed for how much use a language makes of a contrast, i.e. the functional load (FL) of the contrast. We generalize previous work in linguistics and speech recognition and propose a family of measures for the FL of several phonological contrasts, including phonemic oppositions, distinctive features, suprasegmentals, and phonological rules. We then test these measures for robustness to changes of corpora. Finally, we provide examples in Cantonese, Dutch, English, German and Mandarin, in the context of historical linguistics, language acquisition and speech recognition. More information can be found at http://dinoj.info/research/fload
|
Measuring the Functional Load of Phonological Contrasts
| 1,235
|
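A sketch of one member of the measure family described above: the functional load of a phonemic opposition as the relative drop in corpus entropy when the two phonemes are merged. The mini-corpus is invented.

```python
from collections import Counter
from math import log2

def entropy(words):
    counts = Counter(words)
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

def functional_load(words, p1, p2):
    merged = [w.replace(p2, p1) for w in words]   # neutralize the contrast
    h = entropy(words)
    return (h - entropy(merged)) / h

corpus = ["pat", "bat", "pin", "bin", "tap"] * 10
print(functional_load(corpus, "p", "b"))   # merging p/b collapses minimal pairs
```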
The article verifies that the list of words selected by formal statistical methods (frequency and functional genre unrestrictedness) is not a conglomerate of unrelated words: it forms a system of interrelated items that can be called the "lexical base of the language". This selected list of words covers all spheres of human activity. To verify this statement, an invariant synoptical scheme common to ideographic dictionaries of different languages was determined.
|
Lexical Base as a Compressed Language Model of the World (on the
material of the Ukrainian language)
| 1,236
|
We present a novel, type-logical analysis of _polarity sensitivity_: how negative polarity items (like "any" and "ever") or positive ones (like "some") are licensed or prohibited. It takes not just scopal relations but also linear order into account, using the programming-language notions of delimited continuations and evaluation order, respectively. It thus achieves greater empirical coverage than previous proposals.
|
Polarity sensitivity and evaluation order in type-logical grammar
| 1,237
|
This paper describes the Patent Retrieval Task in the Fourth NTCIR Workshop, and the test collections produced in this task. We perform the invalidity search task, in which each participant group searches a patent collection for the patents that can invalidate the demand in an existing claim. We also perform the automatic patent map generation task, in which the patents associated with a specific topic are organized in a multi-dimensional matrix.
|
Test Collections for Patent-to-Patent Retrieval and Patent Map
Generation in NTCIR-4 Workshop
| 1,238
|
A probabilistic model for the computer-based generation of a machine translation system on the basis of English-Russian parallel text corpora is suggested. The model is trained using parallel text corpora with pre-aligned source and target sentences. The training of the model results in a bilingual dictionary of words and "word blocks" with the relevant translation probabilities.
|
A Probabilistic Model of Machine Translation
| 1,239
|
We consider the problem of modeling the content structure of texts within a specific domain, in terms of the topics the texts address and the order in which these topics appear. We first present an effective knowledge-lean method for learning content models from un-annotated documents, utilizing a novel adaptation of algorithms for Hidden Markov Models. We then apply our method to two complementary tasks: information ordering and extractive summarization. Our experiments show that incorporating content models in these applications yields substantial improvement over previously-proposed methods.
|
Catching the Drift: Probabilistic Content Models, with Applications to
Generation and Summarization
| 1,240
|
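A compact, invented-numbers sketch of the content-model idea above applied to information ordering: topics act as HMM states, and a candidate ordering of sentences (reduced here to pre-assigned topics, with emissions omitted) is scored by the learned transitions.

```python
from math import log

TRANS = {("intro", "method"): 0.7, ("intro", "result"): 0.3,
         ("method", "result"): 0.8, ("method", "method"): 0.2,
         ("result", "result"): 1.0}
START = {"intro": 0.9, "method": 0.05, "result": 0.05}

def order_logprob(topic_sequence):
    lp = log(START[topic_sequence[0]])
    for a, b in zip(topic_sequence, topic_sequence[1:]):
        lp += log(TRANS.get((a, b), 1e-9))   # unseen transitions get a tiny floor
    return lp

print(order_logprob(["intro", "method", "result"]))   # natural order scores high
print(order_logprob(["result", "intro", "method"]))   # scrambled order scores low
```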
This paper describes a standalone, publicly available implementation of the Resolution of Anaphora Procedure (RAP) given by Lappin and Leass (1994). The RAP algorithm resolves third person pronouns and lexical anaphors, and identifies pleonastic pronouns. Our implementation, JavaRAP, fills a current need in anaphora resolution research by providing a reference implementation that can be benchmarked against current algorithms. The implementation uses the output of the standard, publicly available Charniak (2000) parser as input, and generates a list of anaphora-antecedent pairs as output. Alternatively, an in-place annotation or substitution of the anaphors with their antecedents can be produced. Evaluation on the MUC-6 co-reference task shows that JavaRAP has an accuracy of 57.9%, similar to the performance given previously in the literature (e.g., Preiss 2002).
|
A Public Reference Implementation of the RAP Anaphora Resolution
Algorithm
| 1,241
|
This paper discusses the problems and possibility of collecting bee dance data in a linguistic corpus and using linguistic instruments such as Zipf's law and entropy statistics to decide whether the dance carries information of any kind. We describe this against the historical background of attempts to analyse non-human communication systems.
|
Building a linguistic corpus from bee dance data
| 1,242
|
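A sketch of the corpus instruments mentioned above applied to a symbol sequence (e.g. coded dance segments): a least-squares Zipf rank-frequency slope and the unigram entropy. The sequence is invented for illustration.

```python
from collections import Counter
from math import log2

def zipf_slope(symbols):
    freqs = sorted(Counter(symbols).values(), reverse=True)
    points = [(log2(rank), log2(f)) for rank, f in enumerate(freqs, start=1)]
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    return (sum((x - mx) * (y - my) for x, y in points)
            / sum((x - mx) ** 2 for x, _ in points))   # log-log regression slope

def entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

dance = list("ababcabdabaceabab")
print(zipf_slope(dance), entropy(dance))  # a slope near -1 suggests Zipf-like structure
```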
We report on a recently initiated project which aims at building a multi-layered parallel treebank of English and German. Particular attention is devoted to a dedicated predicate-argument layer which is used for aligning translationally equivalent sentences of the two languages. We describe both our conceptual decisions and aspects of their technical realisation. We discuss some selected problems and conclude with a few remarks on how this project relates to similar projects in the field.
|
Annotating Predicate-Argument Structure for a Parallel Treebank
| 1,243
|
Designers of statistical machine translation (SMT) systems have begun to employ tree-structured translation models. Systems involving tree-structured translation models tend to be complex. This article aims to reduce the conceptual complexity of such systems, in order to make them easier to design, implement, debug, use, study, understand, explain, modify, and improve. In service of this goal, the article extends the theory of semiring parsing to arrive at a novel abstract parsing algorithm with five functional parameters: a logic, a grammar, a semiring, a search strategy, and a termination condition. The article then shows that all the common algorithms that revolve around tree-structured translation models, including hierarchical alignment, inference for parameter estimation, translation, and structured evaluation, can be derived by generalizing two of these parameters -- the grammar and the logic. The article culminates with a recipe for using such generalized parsers to train, apply, and evaluate an SMT system that is driven by tree-structured translation models.
|
Statistical Machine Translation by Generalized Parsing
| 1,244
|
We are developing an automatic method to compile an encyclopedic corpus from the Web. In our previous work, paragraph-style descriptions for a term are extracted from Web pages and organized based on domains. However, these descriptions are independent and do not comprise a condensed text as in hand-crafted encyclopedias. To resolve this problem, we propose a summarization method, which produces a single text from multiple descriptions. The resultant summary concisely describes a term from different viewpoints. We also show the effectiveness of our method by means of experiments.
|
Summarizing Encyclopedic Term Descriptions on the Web
| 1,245
|
We are developing a cross-media information retrieval system, in which users can view specific segments of lecture videos by submitting text queries. To produce a text index, the audio track is extracted from a lecture video and a transcription is generated by automatic speech recognition. In this paper, to improve the quality of our retrieval system, we extensively investigate the effects of adapting acoustic and language models on speech recognition. We perform an MLLR-based method to adapt an acoustic model. To obtain a corpus for language model adaptation, we use the textbook for a target lecture to search a Web collection for the pages associated with the lecture topic. We show the effectiveness of our method by means of experiments.
|
Unsupervised Topic Adaptation for Lecture Speech Retrieval
| 1,246
|
We integrate automatic speech recognition (ASR) and question answering (QA) to realize a speech-driven QA system, and evaluate its performance. We adapt an N-gram language model to natural language questions, so that the input to our system can be recognized with high accuracy. We target WH-questions, which consist of a topic part and a fixed phrase used to ask about something. We first produce a general N-gram model intended to recognize the topic, and emphasize the counts of the N-grams that correspond to the fixed phrases. Given a transcription by the ASR engine, the QA engine extracts answer candidates from target documents. We propose a passage retrieval method robust against recognition errors in the transcription. We use the QA test collection produced in NTCIR, which is a TREC-style evaluation workshop, and show the effectiveness of our method by means of experiments.
|
Effects of Language Modeling on Speech-driven Question Answering
| 1,247
|
This paper describes a novel method of compiling ranked tagging rules into a deterministic finite-state device called a bimachine. The rules are formulated in the framework of regular rewrite operations and allow unrestricted regular expressions in both left and right rule contexts. The compiler is illustrated by an application within a speech synthesis system.
|
A Bimachine Compiler for Ranked Tagging Rules
| 1,248
|
The Metaphone algorithm applies the phonetic encoding of orthographic sequences to simplify words prior to comparison. While Metaphone has been highly successful for the English language, for which it was designed, it may not be applied directly to Ethiopian languages. The paper details how the principles of Metaphone can be applied to Ethiopic script and uses Amharic as a case study. Match results improve as specific considerations are made for Amharic writing practices. Results are shown to improve further when common errors from Amharic input methods are considered.
|
Application of the Double Metaphone Algorithm to Amharic Orthography
| 1,249
|
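A toy illustration of the Metaphone principle discussed above: collapse orthographic variants to a shared phonetic key before matching. The folding table below is a made-up Latin-transliteration fragment for illustration only, not the paper's actual Amharic rules.

```python
FOLD = {"ts": "s", "sz": "s", "ph": "f", "h": ""}   # hypothetical variant -> canonical

def phonetic_key(word):
    w = word.lower()
    # Apply longer patterns first so "ts" is folded before the bare "h" rule.
    for variant, canon in sorted(FOLD.items(), key=lambda kv: -len(kv[0])):
        w = w.replace(variant, canon)
    return w

# Spelling variants receive the same key, so they match despite differences:
print(phonetic_key("tsehay") == phonetic_key("sehay"))   # True
```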
The aim of this paper is to present the R&D activities carried out at Neurosoft S.A. regarding the development of proofing tools for Modern Greek. First, we focus on infrastructure issues that we faced during our initial steps. Subsequently, we describe the most important insights from three proofing tools developed by Neurosoft, i.e. the spelling checker, the hyphenator and the thesaurus, outlining their efficiencies and inefficiencies. Finally, we discuss some improvement ideas and give our future directions.
|
Proofing Tools Technology at Neurosoft S.A.
| 1,250
|
A way of extracting French verbal chunks, inflected and infinitive, is explored and tested on a real corpus. Declarative morphological and local grammar rules specifying chunks and some simple contextual structures are used, relying on limited lexical information and some simple heuristic/statistical properties obtained from restricted corpora. The specific goals, the architecture and formalism of the system, the linguistic information on which it relies, and the results obtained on a real corpus are presented.
|
Verbal chunk extraction in French using limited resources
| 1,251
|
The existence of a Dictionary in electronic form for Modern Greek (MG) is mandatory if one is to process MG at the morphological and syntactic levels since MG is a highly inflectional language with marked stress and a spelling system with many characteristics carried over from Ancient Greek. Moreover, such a tool becomes necessary if one is to create efficient and sophisticated NLP applications with substantial linguistic backing and coverage. The present paper will focus on the deployment of such an electronic dictionary for Modern Greek, which was built in two phases: first it was constructed to be the basis for a spelling correction schema and then it was reconstructed in order to become the platform for the deployment of a wider spectrum of NLP tools.
|
An electronic dictionary as a basis for NLP tools: The Greek case
| 1,252
|
While alignment of texts on the sentential level is often seen as being too coarse, and word alignment as being too fine-grained, bi- or multilingual texts which are aligned on a level in-between are a useful resource for many purposes. Starting from a number of examples of non-literal translations, which tend to make alignment difficult, we describe an alignment model which copes with these cases by explicitly coding them. The model is based on predicate-argument structures and thus covers the middle ground between sentence and word alignment. The model is currently used in a recently initiated project of a parallel English-German treebank (FuSe), which can in principle be extended with additional languages.
|
A Model for Fine-Grained Alignment of Multilingual Texts
| 1,253
|
Sentiment analysis seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as "thumbs up" or "thumbs down". To determine this sentiment polarity, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs; this greatly facilitates incorporation of cross-sentence contextual constraints.
|
A Sentimental Education: Sentiment Analysis Using Subjectivity
Summarization Based on Minimum Cuts
| 1,254
|
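A sketch of the minimum-cut formulation described above: each sentence is a node between a "subjective" source and an "objective" sink, with individual subjectivity scores on the terminal edges and association scores between nearby sentences. The scores are invented; networkx supplies the max-flow/min-cut routine.

```python
import networkx as nx

subj_score = {1: 0.9, 2: 0.6, 3: 0.1}     # P(sentence is subjective), invented
assoc = {(1, 2): 0.5, (2, 3): 0.5}        # proximity-based association, invented

G = nx.DiGraph()
for s, p in subj_score.items():
    G.add_edge("source", s, capacity=p)          # cost of calling s objective
    G.add_edge(s, "sink", capacity=1 - p)        # cost of calling s subjective
for (a, b), w in assoc.items():
    G.add_edge(a, b, capacity=w)                 # cost of separating a and b
    G.add_edge(b, a, capacity=w)

cut_value, (subjective, objective) = nx.minimum_cut(G, "source", "sink")
print(subjective - {"source"})   # sentences kept for polarity classification
```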
The paper gives a brief review of the expectation-maximization algorithm (Dempster et al., 1977) in the comprehensible framework of discrete mathematics. In Section 2, two prominent estimation methods, relative-frequency estimation and maximum-likelihood estimation, are presented. Section 3 is dedicated to the expectation-maximization algorithm and a simpler variant, the generalized expectation-maximization algorithm. In Section 4, two loaded dice are rolled. A more interesting example is presented in Section 5: the estimation of probabilistic context-free grammars.
|
A Tutorial on the Expectation-Maximization Algorithm Including
Maximum-Likelihood Estimation and EM Training of Probabilistic Context-Free
Grammars
| 1,255
|
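A runnable sketch in the spirit of the two-loaded-dice example (Section 4) above: EM for a mixture of two dice, where the die identity of each roll is the hidden variable. The true parameters and the initialization are invented.

```python
import random

random.seed(0)
TRUE_A = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]   # die A favors face 1
TRUE_B = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]   # die B favors face 6
rolls = [random.choices(range(6),
                        weights=TRUE_A if random.random() < 0.6 else TRUE_B)[0]
         for _ in range(5000)]

pi = 0.5                                   # mixing weight, to be estimated
A = [0.3, 0.14, 0.14, 0.14, 0.14, 0.14]    # asymmetric start breaks symmetry
B = [0.14, 0.14, 0.14, 0.14, 0.14, 0.3]
for _ in range(50):
    # E-step: posterior responsibility that each roll came from die A
    resp = [pi * A[r] / (pi * A[r] + (1 - pi) * B[r]) for r in rolls]
    # M-step: re-estimate mixing weight and face probabilities
    pi = sum(resp) / len(rolls)
    na = sum(resp)
    for face in range(6):
        wa = sum(g for g, r in zip(resp, rolls) if r == face)
        A[face] = wa / na
        B[face] = (rolls.count(face) - wa) / (len(rolls) - na)

print(round(pi, 2), [round(p, 2) for p in A], [round(p, 2) for p in B])
```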
We briefly review the inside-outside and EM algorithms for probabilistic context-free grammars. As a result, we formally prove that inside-outside estimation is a dynamic-programming variant of EM. This is interesting in its own right, but even more so when considered in a theoretical context, since the well-known convergence behavior of inside-outside estimation has been confirmed by many experiments but apparently has never been formally proved. However, being a version of EM, inside-outside estimation also inherits the good convergence behavior of EM. Therefore, the as yet imperfect line of argumentation can be transformed into a coherent proof.
|
Inside-Outside Estimation Meets Dynamic EM
| 1,256
|
Several Networks of Excellence have been set up in the framework of the European FP5 research program. Among these Networks of Excellence, the NEMIS project focuses on the field of Text Mining. Within this field, document processing and visualization was identified as one of the key topics and the WG1 working group was created in the NEMIS project, to carry out a detailed survey of techniques associated with the text mining process and to identify the relevant research topics in related research areas. In this document we present the results of this comprehensive survey. The report includes a description of the current state-of-the-art and practice, a roadmap for follow-up research in the identified areas, and recommendations for anticipated technological development in the domain of text mining.
|
State of the Art, Evaluation and Recommendations regarding "Document
Processing and Visualization Techniques"
| 1,257
|
Contrary to standard approaches to topic annotation, the technique used in this work does not centrally rely on some sort of -- possibly statistical -- keyword extraction. In fact, the proposed annotation algorithm uses a large-scale semantic database -- the EDR Electronic Dictionary -- that provides a concept hierarchy based on hyponym and hypernym relations. This concept hierarchy is used to generate a synthetic representation of the document by aggregating the words present in topically homogeneous document segments into a set of concepts best preserving the document's content. This new extraction technique uses an unexplored approach to topic selection. Instead of using semantic similarity measures based on a semantic resource, the latter is processed to extract the part of the conceptual hierarchy relevant to the document content. This conceptual hierarchy is then searched to extract the most relevant set of concepts to represent the topics discussed in the document. Notice that this algorithm is able to extract generic concepts that are not directly present in the document.
|
Thematic Annotation: extracting concepts out of documents
| 1,258
|
In this paper we describe a biography summarization system using sentence classification and ideas from information retrieval. Although the individual techniques are not new, assembling and applying them to generate multi-document biographies is new. Our system was evaluated in DUC2004. It is among the top performers in Task 5 (short summaries focused by person questions).
|
Multi-document Biography Summarization
| 1,259
|
A general-audience introduction to the area of "sentiment analysis", the computational treatment of subjective, opinion-oriented language (an example application is determining whether a review is "thumbs up" or "thumbs down"). Some challenges, applications to business-intelligence tasks, and potential future directions are described.
|
A Matter of Opinion: Sentiment Analysis and Business Intelligence
(position paper)
| 1,260
|
This article describes the results of a systematic in-depth study of the criteria used for word sense disambiguation. Our study is based on 60 target words: 20 nouns, 20 adjectives and 20 verbs. Our results are not always in line with some practices in the field. For example, we show that omitting non-content words decreases performance and that bigrams yield better results than unigrams.
|
Word sense disambiguation criteria: a systematic study
| 1,261
|
The goal of this work is to recover articulatory information from the speech signal by acoustic-to-articulatory inversion. One of the main difficulties with inversion is that the problem is underdetermined, and inversion methods generally offer no guarantee on the phonetic realism of the inverse solutions. A way to address this issue is to use additional phonetic constraints. Knowledge of the phonetic characteristics of French vowels enables the derivation of reasonable articulatory domains in the space of Maeda parameters: given the formant frequencies (F1, F2, F3) of a speech sample, and thus the vowel identity, an "ideal" articulatory domain can be derived. The space of formant frequencies is partitioned into vowels, using either speaker-specific data or generic information on formants. Then, to each articulatory vector can be associated a phonetic score varying with the distance to the "ideal domain" associated with the corresponding vowel. Inversion experiments were conducted on isolated vowels and vowel-to-vowel transitions. Articulatory parameters were compared with those obtained without using these constraints and those measured from X-ray data.
|
Using phonetic constraints in acoustic-to-articulatory inversion
| 1,262
|
This paper presents an "elitist approach" for automatically extracting well-realized speech sounds with high confidence. The elitist approach uses a speech recognition system based on Hidden Markov Models (HMM). The HMMs are trained on speech sounds that are systematically well-detected in an iterative procedure. The results show that, by using the HMM models defined in the training phase, the speech recognizer reliably detects specific speech sounds with a small error rate.
|
An elitist approach for extracting automatically well-realized speech
sounds with high confidence
| 1,263
|
In this paper we propose some new measures of language development using network analyses, inspired by the recent surge of interest in network studies of many real-world systems. Children's and caretakers' speech data from a longitudinal study are represented as a series of networks, word forms being taken as nodes and collocation of words as links. Measures of the properties of the networks, such as size, connectivity, and hub and authority analyses, allow us to make quantitative comparisons so as to reveal different paths of development. For example, the asynchrony of development in network size and average degree suggests that children cannot simply be classified as early talkers or late talkers by one or two measures. Children follow different paths in a multi-dimensional space. They may develop faster in one dimension but more slowly in another. The network approach requires little preprocessing of words or analysis of sentence structures, and the characteristics of words and their usage emerge from the network and are independent of any grammatical presumptions. We show that the change of the two articles "the" and "a" in their roles as important nodes in the network reflects the progress of children's syntactic development: the two articles often start in children's networks as hubs and later shift to authorities, while they are constantly authorities in the adult networks. The network analyses provide a new approach to studying language development, and at the same time language development also presents a rich area for network theories to explore.
|
Analyzing language development from a network approach
| 1,264
|
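A sketch of the network construction and hub/authority analysis described above: word forms are nodes, adjacent words in an utterance are linked in speech order, and HITS scores show which words act as hubs versus authorities. The toy utterances are invented.

```python
import networkx as nx

utterances = [["the", "dog", "ran"], ["a", "dog", "ate"], ["the", "cat", "ran"]]
G = nx.DiGraph()
for utt in utterances:
    for w1, w2 in zip(utt, utt[1:]):
        G.add_edge(w1, w2)          # collocation link, in speech order

hubs, authorities = nx.hits(G)
print(sorted(hubs, key=hubs.get, reverse=True)[:3])   # e.g. articles as hubs
print(G.number_of_nodes(), nx.density(G))             # size and connectivity
```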
This paper presents the TermSciences portal, which deals with the implementation of a conceptual model that uses the recent ISO 16642 standard (Terminological Markup Framework). This standard turns out to be suitable for concept modeling, since it allows the original resources to be organized by concepts and the various terms for a given concept to be associated. Additional structuring is produced by sharing conceptual relationships, that is, cross-linking of resource results through the introduction of semantic relations that may initially have been missing.
|
Unification of multi-lingual scientific terminological resources using
the ISO 16642 standard. The TermSciences initiative
| 1,265
|
A number of serious reasons will convince an increasing number of researchers to store their relevant material in centers which we will call "language resource archives". These combine the duty of taking care of long-term preservation with the task of giving different user groups access to their material. Access here is meant in the sense that active interaction with the data will be made possible, to support the integration of new data, new versions, or commentaries of all sorts. Modern language resource archives will have to adhere to a number of basic principles to fulfill all requirements, and they will have to be involved in federations to create joint language resource domains, making it even simpler for researchers to access the data. This paper makes an attempt to formulate the essential pillars language resource archives have to adhere to.
|
Foundations of Modern Language Resource Archives
| 1,266
|
This paper describes an interdisciplinary approach which brings together the fields of corpus linguistics and translation studies. It presents ongoing work on the creation of a corpus resource in which translation shifts are explicitly annotated. Translation shifts denote departures from formal correspondence between source and target text, i.e. deviations that have occurred during the translation process. A resource in which such shifts are annotated in a systematic way will make it possible to study those phenomena that need to be addressed if machine translation output is to resemble human translation. The resource described in this paper contains English source texts (parliamentary proceedings) and their German translations. The shift annotation is based on predicate-argument structures and proceeds in two steps: first, predicates and their arguments are annotated monolingually in a straightforward manner. Then, the corresponding English and German predicates and arguments are aligned with each other. Whenever a shift - mainly grammatical or semantic - has occurred, the alignment is tagged accordingly.
|
Building a resource for studying translation shifts
| 1,267
|
Diagrammatic, analogical or iconic representations are often contrasted with linguistic or logical representations, in which the shape of the symbols is arbitrary. The aim of this paper is to make a case for the usefulness of diagrams in inferential knowledge representation systems. Although commonly used, diagrams have for a long time suffered from the reputation of being only a heuristic tool or a mere support for intuition. The first part of this paper is an historical background paying tribute to the logicians, psychologists and computer scientists who put an end to this formal prejudice against diagrams. The second part is a discussion of their characteristics as opposed to those of linguistic forms. The last part is aimed at reviving the interest for heterogeneous representation systems including both linguistic and diagrammatic representations.
|
Raisonner avec des diagrammes : perspectives cognitives et
computationnelles
| 1,268
|
Studies of different term extractors on a corpus of the biomedical domain revealed decreasing performances when applied to highly technical texts. The difficulty or impossibility of customising them to new domains is an additional limitation. In this paper, we propose to use external terminologies to influence generic linguistic data in order to augment the quality of the extraction. The tool we implemented exploits testified terms at different steps of the process: chunking, parsing and extraction of term candidates. Experiments reported here show that, using this method, more term candidates can be acquired with a higher level of reliability. We further describe the extraction process involving endogenous disambiguation implemented in the term extractor YaTeA.
|
Improving Term Extraction with Terminological Resources
| 1,269
|
The paper aims at emphasizing that, even relaxed, the hypothesis of compositionality has to face many problems when used for interpreting natural language texts. Rather than fixing these problems within the compositional framework, we believe that a more radical change is necessary, and propose another approach.
|
Challenging the principle of compositionality in interpreting natural
language texts
| 1,270
|
The paper concerns the understanding of plurals in the framework of Artificial Intelligence and emphasizes the role of time. The construction of collection(s) and their evolution across time is often crucial and has to be accounted for. The paper contrasts a "de dicto" collection, which can be considered as persisting across situations even if its members change, with a "de re" collection, whose composition does not vary through time. It expresses different criteria for choosing between the two interpretations (de re and de dicto) depending on the context of enunciation.
|
The role of time in considering collections
| 1,271
|
We present a new, unique and freely available parallel corpus containing European Union (EU) documents of mostly legal nature. It is available in all 20 official EU languages, with additional documents being available in the languages of the EU candidate countries. The corpus consists of almost 8,000 documents per language, with an average size of nearly 9 million words per language. Pair-wise paragraph alignment information produced by two different aligners (Vanilla and HunAlign) is available for all 190+ language pair combinations. Most texts have been manually classified according to the EUROVOC subject domains, so that the collection can also be used to train and test multi-label classification algorithms and keyword-assignment software. The corpus is encoded in XML, according to the Text Encoding Initiative Guidelines. Due to the large number of parallel texts in many languages, the JRC-Acquis is particularly suitable for carrying out all types of cross-language research, as well as for testing and benchmarking text analysis software across different languages (for instance for alignment, sentence splitting and term extraction).
|
The JRC-Acquis: A multilingual aligned parallel corpus with 20+
languages
| 1,272
|
DepAnn is an interactive annotation tool for dependency treebanks, providing both graphical and text-based annotation interfaces. The tool is aimed at the semi-automatic creation of treebanks. It aids the manual inspection and correction of automatically created parses, making the annotation process faster and less error-prone. A novel feature of the tool is that it enables the user to view outputs from several parsers as the basis for creating the final tree to be saved to the treebank. DepAnn uses TIGER-XML, an XML-based general encoding format, both for representing the parser outputs and for saving the annotated treebank. The tool includes an automatic consistency checker for sentence structures. In addition, it enables users to build structures manually, add comments on the annotations, modify the tagsets, and mark sentences for further revision.
|
DepAnn - An Annotation Tool for Dependency Treebanks
| 1,273
|
The few available French resources for evaluating linguistic models or algorithms on linguistic levels other than morpho-syntax are either insufficient from a quantitative as well as a qualitative point of view, or not freely accessible. Based on this observation, the FREEBANK project intends to create French corpora constructed using manually revised output from a hybrid Constraint Grammar parser and annotated on several linguistic levels (structure, morpho-syntax, syntax, coreference), with the objective of making them available on-line for research purposes. We therefore focus on using standard annotation schemes, on integrating existing resources, and on maintenance allowing for continuous enrichment of the annotations. Prior to the actual presentation of the prototype that has been implemented, this paper describes a generic model for the organization and deployment of a linguistic resource archive, in compliance with the various works currently conducted within international standardization initiatives (TEI and ISO/TC 37/SC 4).
|
Un modèle générique d'organisation de corpus en ligne: application
à la FReeBank
| 1,274
|
While great effort has gone into the development of fully integrated modular understanding systems, little research has focused on the problem of unifying existing linguistic formalisms with cognitive processing models. The Situated Constructional Interpretation Model is one of these attempts. In this model, the notion of "construction" has been adapted in order to mimic the behavior of Production Systems. The Construction Grammar approach establishes a model of the relations between linguistic forms and meaning by means of constructions. The latter can be considered as pairings from a topologically structured space to an unstructured space, in some way a special kind of production rule.
|
Scaling Construction Grammar up to Production Systems: the SCIM
| 1,275
|
MontyLingua, an integral part of ConceptNet, which is currently the largest commonsense knowledge base, is an English text processor developed in the Python programming language at the MIT Media Lab. The main feature of MontyLingua is its coverage of all aspects of English text processing, from raw input text to semantic meanings and summary generation; yet each component in MontyLingua is loosely coupled to the others at the architectural and code level, which enables individual components to be used independently or substituted. However, there has been no review exploring the role of MontyLingua in recent research work utilizing it. This paper reviews the use of and roles played by MontyLingua and its components in research work published in 19 articles between October 2004 and August 2006. We observed a diversified use of MontyLingua in many different areas, both generic and domain-specific. Although use of the text summarizing component has not been observed, we are optimistic that it will have a crucial role in managing the current trend of information overload in future research.
|
An Anthological Review of Research Utilizing MontyLingua, a Python-Based
End-to-End Text Processor
| 1,276
|
This paper introduces how human languages can be studied in light of recent developments in network theories. There are two directions of exploration. One is to study networks existing in the language system. Various lexical networks can be built based on different relationships between words, be they semantic or syntactic. Recent studies have shown that these lexical networks exhibit small-world and scale-free features. The other direction of exploration is to study networks of language users (i.e. social networks of people in the linguistic community) and their role in language evolution. Social networks also show small-world and scale-free features, which cannot be captured by random or regular network models. In the past, computational models of language change and language emergence often assumed a population with a random or regular structure, and there has been little discussion of how network structures may affect the dynamics. In the second part of the paper, a series of simulation models of the diffusion of linguistic innovation are used to illustrate the importance of choosing realistic conditions of population structure for modeling language change. Four types of social networks are compared, which exhibit two categories of diffusion dynamics. While the question of which type of network is more appropriate for modeling remains open, we give some preliminary suggestions for choosing the type of social network for modeling.
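As a hedged illustration of the first direction, the sketch below builds a small word co-occurrence network and reports the statistics usually cited as small-world indicators. The construction procedure, window size and toy sentences are assumptions made here for illustration; the paper does not prescribe them.

```python
# Minimal sketch: a word co-occurrence network and its small-world-style
# statistics. Window size and example sentences are arbitrary choices.
import networkx as nx

def cooccurrence_graph(sentences, window=2):
    """Link words that appear within `window` positions of each other."""
    g = nx.Graph()
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for v in words[i + 1:i + window + 1]:
                if w != v:
                    g.add_edge(w, v)
    return g

sentences = ["the cat sat on the mat", "the dog chased the cat"]
g = cooccurrence_graph(sentences)
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
print("average clustering:", nx.average_clustering(g))
# average_shortest_path_length requires a connected graph, as here
print("average path length:", nx.average_shortest_path_length(g))
```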
|
Complex networks and human language
| 1,277
|
High dimensional, sparsely populated data spaces have been characterized in terms of ultrametric topology. This implies that there are natural, not necessarily unique, tree or hierarchy structures defined by the ultrametric topology. In this note we study the extent of local ultrametric topology in texts, with the aim of finding unique ``fingerprints'' for a text or corpus, discriminating between texts from different domains, and opening up the possibility of exploiting hierarchical structures in the data. We use coherent and meaningful collections of over 1000 texts, comprising over 1.3 million words.
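To make the triangle test behind "local ultrametricity" concrete, here is a minimal sketch under our own assumptions (random high-dimensional points standing in for texts, a fixed tolerance): a triple is counted as ultrametric when its two largest pairwise distances approximately coincide.

```python
# Sketch only: fraction of (approximately) ultrametric triangles among
# random high-dimensional points. Tolerance and data are assumptions.
import itertools
import numpy as np

def is_ultrametric(d1, d2, d3, tol=1e-2):
    a, b, c = sorted([d1, d2, d3])
    return abs(c - b) <= tol * c  # the two largest sides nearly coincide

rng = np.random.default_rng(0)
points = rng.random((50, 1000))  # 50 "texts" as points in a sparse space
dist = lambda i, j: float(np.linalg.norm(points[i] - points[j]))

ultra = total = 0
for i, j, k in itertools.combinations(range(len(points)), 3):
    ultra += is_ultrametric(dist(i, j), dist(i, k), dist(j, k))
    total += 1
print("fraction of ultrametric triangles:", ultra / total)
```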
|
A Note on Local Ultrametricity in Text
| 1,278
|
In the paper, the definition of clause suitable for an automated processing of a Ukrainian text is proposed. The Menzerath-Altmann law is verified on the sentence level and the parameters for the dependences of the clause length counted in words and syllables on the sentence length counted in clauses are calculated for "Perekhresni Stezhky" ("The Cross-Paths"), a novel by Ivan Franko.
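For readers unfamiliar with the law, the dependence typically takes the form y = a * x^b * exp(c * x), where x is the sentence length in clauses and y the mean clause length. The sketch below fits these parameters with SciPy on made-up data points; the actual parameter values for "Perekhresni Stezhky" are the ones reported in the paper.

```python
# Hedged sketch: fitting the Menzerath-Altmann law on invented data.
import numpy as np
from scipy.optimize import curve_fit

def menzerath_altmann(x, a, b, c):
    return a * x**b * np.exp(c * x)

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)  # sentence length in clauses
y = np.array([7.1, 6.2, 5.8, 5.5, 5.4, 5.3])   # mean clause length in words
(a, b, c), _ = curve_fit(menzerath_altmann, x, y, p0=(7.0, -0.1, 0.0))
print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}")
```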
|
Menzerath-Altmann Law for Syntactic Structures in Ukrainian
| 1,279
|
In numerous domains in cognitive science it is often useful to have a source for randomly generated corpora. These corpora may serve as a foundation for artificial stimuli in a learning experiment (e.g., Ellefson & Christiansen, 2000), or as input into computational models (e.g., Christiansen & Dale, 2001). The following compact and general C program interprets a phrase-structure grammar specified in a text file. It follows parameters set at a Unix or Unix-based command-line and generates a corpus of random sentences from that grammar.
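The original program is written in C and reads its grammar from a text file; as a hedged illustration of the same idea, here is a compact Python analogue with the grammar inlined (the rule format and symbol names are our assumptions, not the program's actual file format):

```python
# Sketch: generate random sentences from a phrase-structure grammar.
# A symbol absent from GRAMMAR is treated as a terminal word.
import random

GRAMMAR = {
    "S":  ["NP VP"],
    "NP": ["det N", "det adj N"],
    "VP": ["V NP", "V"],
    "det": ["the", "a"],
    "adj": ["small", "old"],
    "N":  ["cat", "dog"],
    "V":  ["sees", "chases"],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:  # terminal: emit the word itself
        return symbol
    expansion = random.choice(GRAMMAR[symbol])
    return " ".join(generate(s) for s in expansion.split())

for _ in range(3):
    print(generate())
```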
|
Random Sentences from a Generalized Phrase-Structure Grammar Interpreter
| 1,280
|
This paper includes a reflection on the role of networks in the study of English language acquisition, as well as a collection of practical criteria for annotating free-speech corpora of child utterances. At the theoretical level, the main claim of this paper is that syntactic networks should be interpreted as the outcome of the use of the syntactic machinery. Thus, the intrinsic features of such machinery are not accessible directly from (known) network properties. Rather, what one can see are the global patterns of its use and, thus, a global view of the power and organization of the underlying grammar. Turning to more practical issues, the paper examines how to build a net from the projection of syntactic relations. Recall that, as opposed to adult grammars, early child language does not have a well-defined notion of structure. To overcome this difficulty, we develop a set of systematic criteria assuming constituency hierarchy and a grammar based on lexico-thematic relations. In the end, what we obtain is a well-defined corpus annotation that enables us i) to perform statistics on the size of structures and ii) to build a network from syntactic relations over which we can perform the standard measures of complexity. We also provide a detailed example.
|
Network statistics on early English Syntax: Structural criteria
| 1,281
|
We show that a general model of lexical information conforms to an abstract model that reflects the hierarchy of information found in a typical dictionary entry. We show that this model can be mapped into a well-formed XML document, and how the XSL transformation language can be used to implement a semantics defined over the abstract model to enable extraction and manipulation of the information in any format.
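As an illustration of the hierarchy the abstract refers to, the sketch below encodes one hypothetical dictionary entry as XML and extracts its senses with ElementTree. The element names are invented here; the paper defines its own abstract model and uses XSL transformations over it.

```python
# Sketch only: a dictionary entry as hierarchical XML, queried in Python.
import xml.etree.ElementTree as ET

entry_xml = """
<entry headword="bank">
  <sense n="1"><pos>noun</pos><def>a financial institution</def></sense>
  <sense n="2"><pos>noun</pos><def>the side of a river</def></sense>
</entry>
"""
entry = ET.fromstring(entry_xml)
for sense in entry.findall("sense"):
    print(entry.get("headword"), sense.get("n"), sense.findtext("def"))
```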
|
A Formal Model of Dictionary Structure and Content
| 1,282
|
This paper describes experiments on learning Dutch phonotactic rules using Inductive Logic Programming, a machine learning discipline based on inductive logical operators. Two different ways of approaching the problem are experimented with, and compared against each other as well as with related work on the task. The results show a direct correspondence between the quality and informedness of the background knowledge and the constructed theory, demonstrating the ability of ILP to take good advantage of the prior domain knowledge available. Further research is outlined.
|
Learning Phonotactics Using ILP
| 1,283
|
We propose a range of deep lexical acquisition methods which make use of morphological, syntactic and ontological language resources to model word similarity and bootstrap from a seed lexicon. The different methods are deployed in learning lexical items for a precision grammar, and shown to each have strengths and weaknesses over different word classes. A particular focus of this paper is the relative accessibility of different language resource types, and predicted ``bang for the buck'' associated with each in deep lexical acquisition applications.
|
Bootstrapping Deep Lexical Resources: Resources for Courses
| 1,284
|
The task of finding a criterion that distinguishes a text from an arbitrary set of words is relevant in itself, for instance for the development of internet-content indexing tools or for separating signal from noise in communication channels. Zipf's law is currently considered the most reliable criterion of this kind [3]. At any rate, conventional stochastic word sets do not obey this law. The present paper deals with one possible criterion based on determining the degree of data compression.
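A minimal sketch of such a compression-based criterion, under our own assumptions (zlib's compression ratio as the measure, word shuffling as the way to destroy the autocorrelations that make text compressible):

```python
# Sketch: coherent text should compress better than a random permutation
# of the same words. Threshold and sample text are illustrative only.
import random
import zlib

def compression_ratio(s: str) -> float:
    raw = s.encode("utf-8")
    return len(zlib.compress(raw)) / len(raw)

text = ("the cat sat on the mat and looked at the dog "
        "while the dog sat on the mat and looked at the cat ") * 20
words = text.split()
random.shuffle(words)
shuffled = " ".join(words)

print("original text: ", round(compression_ratio(text), 3))
print("shuffled words:", round(compression_ratio(shuffled), 3))  # expected higher
```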
|
On the role of autocorrelations in texts
| 1,285
|
In the task of information retrieval, the term relevance is taken to mean the formal conformity of a document returned by the retrieval system to the user's information query. As a rule, the documents found by the retrieval system should be presented to the user in a certain order. Therefore, retrieval, perceived as a selection of documents formally satisfying the user's query, should be supplemented with a procedure for processing the relevant set. It is natural to introduce a quantitative measure of document conformity to the query, i.e. a relevance measure. Since no single rule exists for determining the relevance measure, we consider two of them which are, in our opinion, the simplest. The proposed approach does not impose any restrictions and can be applied to other relevance measures.
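Purely as an illustration of the idea of ordering a relevant set, here are two deliberately simple relevance measures; these are generic textbook measures, not necessarily the two the paper studies.

```python
# Sketch: two toy relevance measures for ranking retrieved documents.
def overlap_relevance(query: str, doc: str) -> float:
    q, d = set(query.split()), doc.split()
    return sum(w in q for w in d) / len(d)  # share of doc terms that match

def coverage_relevance(query: str, doc: str) -> float:
    q, d = set(query.split()), set(doc.split())
    return len(q & d) / len(q)              # share of query terms covered

query = "internet news flows"
docs = ["fractal nature of news flows",
        "news message flows on the internet"]
for doc in docs:
    print(doc, overlap_relevance(query, doc), coverage_relevance(query, doc))
```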
|
On the fractal nature of mutual relevance sequences in the Internet news
message flows
| 1,286
|
We discuss the use of model building for temporal representations. We chose Polish to illustrate our discussion because it has an interesting aspectual system, but the points we wish to make are not language specific. Rather, our goal is to develop theoretical and computational tools for temporal model building tasks in computational semantics. To this end, we present a first-order theory of time and events which is rich enough to capture interesting semantic distinctions, and an algorithm which takes minimal models for first-order theories and systematically attempts to ``perturb'' their temporal component to provide non-minimal, but semantically significant, models.
|
Generating models for temporal representations
| 1,287
|
The aim of this paper is to show how we can handle the Recognising Textual Entailment (RTE) task by using Description Logics (DLs). To do this, we propose a representation of natural language semantics in DLs inspired by existing representations in first-order logic. But our most significant contribution is the definition of two novel inference tasks: A-Box saturation and subgraph detection which are crucial for our approach to RTE.
|
Using Description Logics for Recognising Textual Entailment
| 1,288
|
Despite its importance, the task of summarizing evolving events has received little attention from researchers in the field of multi-document summarization. In a previous paper (Afantenos et al. 2007) we presented a methodology for the automatic summarization of documents, emitted by multiple sources, which describe the evolution of an event. At the heart of this methodology lies the identification of similarities and differences between the various documents, along two axes: the synchronic and the diachronic. This is achieved through the notion of Synchronic and Diachronic Relations. Those relations connect the messages found in the documents, resulting in a graph which we call a grid. Although the creation of the grid completes the Document Planning phase of a typical NLG architecture, it can be the case that the number of messages contained in a grid is very large, exceeding the required compression rate. In this paper we provide some initial thoughts on a probabilistic model which can be applied at the Content Determination stage and which tries to alleviate this problem.
|
Some Reflections on the Task of Content Determination in the Context of
Multi-Document Summarization of Evolving Events
| 1,289
|
In this paper we present an automated method for classifying the origin of non-native speakers. The origin of a non-native speaker can be identified by a human listener based on the detection of pronunciations typical of each nationality. We therefore suppose the existence of phoneme sequences that allow the classification of the origin of non-native speakers. Our new method is based on the extraction of discriminative sequences of phonemes from a non-native English speech database. These sequences are used to construct a probabilistic classifier for the speakers' origin. The existence of discriminative phone sequences in non-native speech is a significant result of this work. The system we developed achieved a correct classification rate of 96.3%, a significant error reduction compared to the other techniques tested.
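A hedged sketch of the idea: score each candidate origin by how often its discriminative phoneme n-grams occur in the utterance. The inventories, the trigram order and the scoring rule below are invented for illustration; the paper derives its sequences from a non-native speech database and uses a probabilistic classifier.

```python
# Sketch: origin classification by counting discriminative phoneme n-grams.
from collections import Counter

def ngrams(phones, n=3):
    return [tuple(phones[i:i + n]) for i in range(len(phones) - n + 1)]

# Hypothetical discriminative trigrams per origin (normally these would
# be learned from an annotated non-native speech corpus).
DISCRIMINATIVE = {
    "FR": {("z", "@", "t"), ("i", "z", "@")},
    "IT": {("t", "o", "s"), ("s", "p", "e")},
}

def classify(phones):
    grams = Counter(ngrams(phones))
    scores = {origin: sum(grams[g] for g in seqs)
              for origin, seqs in DISCRIMINATIVE.items()}
    return max(scores, key=scores.get)

print(classify(["i", "z", "@", "t", "o"]))  # -> "FR" on this toy input
```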
|
Discriminative Phoneme Sequences Extraction for Non-Native Speaker's
Origin Classification
| 1,290
|
In this paper, we present several adaptation methods for non-native speech recognition. We have tested pronunciation modelling, MLLR and MAP non-native pronunciation adaptation, and HMM model retraining on the HIWIRE foreign-accented English speech database. The ``phonetic confusion'' scheme we have developed consists of associating with each spoken phone several sequences of confused phones. In our experiments, we used different combinations of acoustic models representing the canonical and the foreign pronunciations: spoken and native models, and models adapted to the non-native accent with MAP and MLLR. The joint use of pronunciation modelling and acoustic adaptation led to further improvements in recognition accuracy. The best combination of the above-mentioned techniques resulted in a relative word error reduction ranging from 46% to 71%.
|
Combined Acoustic and Pronunciation Modelling for Non-Native Speech
Recognition
| 1,291
|
In this article, we present an approach for non-native automatic speech recognition (ASR). We propose two methods to adapt existing ASR systems to non-native accents. The first method is based on the modification of acoustic models through the integration of acoustic models from the speaker's mother tongue: the phonemes of the target language are pronounced in a manner similar to those of the speaker's native language, so we propose to combine the models of confused phonemes in order for the ASR system to recognize both concurrent pronunciations. The second method is a refinement of pronunciation error detection through the introduction of graphemic constraints. Indeed, non-native speakers may rely on the written form of words when uttering them, so pronunciation errors may depend on the characters composing the words. The average error rate reduction we observed is 22.5% (relative) in sentence error rate and 34.5% (relative) in word error rate.
|
Amélioration des Performances des Systèmes Automatiques de
Reconnaissance de la Parole pour la Parole Non Native
| 1,292
|
This paper explores several extensions of proof nets for the Lambek calculus in order to handle the different connectives of display logic in a natural way. The new proof net calculus handles some recent additions to the Lambek vocabulary such as Galois connections and Grishin interactions. It concludes with an exploration of the generative capacity of the Lambek-Grishin calculus, presenting an embedding of lexicalized tree adjoining grammars into the Lambek-Grishin calculus.
|
Proof nets for display logic
| 1,293
|
This article describes an exclusively resource-based method of morphological annotation of written Korean text. Korean is an agglutinative language. Our annotator is designed to process text before the operation of a syntactic parser. In its present state, it annotates one-stem words only. The output is a graph of morphemes annotated with accurate linguistic information. The granularity of the tagset is 3 to 5 times higher than that of usual tagsets. A comparison with a reference annotated corpus showed that it achieves 89% recall without any corpus training. The language resources used by the system are lexicons of stems, transducers of suffixes and transducers for the generation of allomorphs. All can be easily updated, which allows users to control the evolution of the performance of the system. It has been claimed that morphological annotation of Korean text could only be performed by a morphological analysis module accessing a lexicon of morphemes. We show that it can also be performed directly with a lexicon of words and without applying morphological rules at annotation time, which speeds up annotation to 1,210 words/s. The lexicon of words is obtained from the maintainable language resources through a fully automated compilation process.
|
Morphological annotation of Korean with Directly Maintainable Resources
| 1,294
|
International standards for lexicon formats are in preparation. To a certain extent, the proposed formats converge with prior results of standardization projects. However, their adequacy for (i) lexicon management and (ii) lexicon-driven applications has been little debated in the past, nor is it debated as part of the present standardization effort. We examine these issues. IGM has developed XML formats compatible with the emerging international standards, and we report experimental results on large-coverage lexica.
|
Lexicon management and standard formats
| 1,295
|
We describe a resource-based method of morphological annotation of written Korean text. Korean is an agglutinative language. The output of our system is a graph of morphemes annotated with accurate linguistic information. The language resources used by the system can be easily updated, which allows users to control the evolution of the performances of the system. We show that morphological annotation of Korean text can be performed directly with a lexicon of words and without morphological rules.
|
A resource-based Korean morphological annotation system
| 1,296
|
Shifting to a lexicalized grammar reduces the number of parsing errors and improves application results. However, such an operation affects a syntactic parser in all its aspects. One of our research objectives is to design a realistic model for grammar lexicalization. We carried out experiments for which we used a grammar with a very simple content and formalism, and a very informative syntactic lexicon, the lexicon-grammar of French elaborated by the LADL. Lexicalization was performed by applying the parameterized-graph approach. Our results tend to show that most information in the lexicon-grammar can be transferred into a grammar and exploited successfully for the syntactic parsing of sentences.
|
Graphes paramétrés et outils de lexicalisation
| 1,297
|
Existing syntactic grammars of natural languages, even with a far from complete coverage, are complex objects. Assessments of the quality of parts of such grammars are useful for the validation of their construction. We evaluated the quality of a grammar of French determiners that takes the form of a recursive transition network. The result of the application of this local grammar gives deeper syntactic information than chunking or information available in treebanks. We performed the evaluation by comparison with a corpus independently annotated with information on determiners. We obtained 86% precision and 92% recall on text not tagged for parts of speech.
|
Evaluation of a Grammar of French Determiners
| 1,298
|
We discuss the characteristics and behaviour of two parallel classes of verbs in two Romance languages, French and Portuguese. Examples of these verbs are Port. abater [gado] and Fr. abattre [bétail], both meaning "slaughter [cattle]". In both languages, the definition of the class of verbs includes several features:
- They have only one essential complement, which is a direct object.
- The nominal distribution of the complement is very limited, i.e., few nouns can be selected as head nouns of the complement. However, this selection is not restricted to a single noun, as would be the case for verbal idioms such as Fr. monter la garde "mount guard".
- We excluded from the class constructions which are reductions of more complex constructions, e.g. Port. afinar [instrumento] com "tune [instrument] with".
|
Very strict selectional restrictions
| 1,299