_id stringlengths 4 10 | text stringlengths 0 18.4k | title stringlengths 0 8.56k |
|---|---|---|
d15817724 | In this paper we present a model to transform one grammatical formalism into another. The model is applicable only under restrictive conditions. However, it is fairly useful for many purposes: parsing evaluation, research on methods for combining different parsing outputs to reach better parsing performance, and building larger syntactically annotated corpora for data-driven approaches. The model has been tested on a case study: the translation of the Turin Tree Bank Grammar into the Shallow Grammar of the CHAOS Italian parser. | A Dependency-based Algorithm for Grammar Conversion |
d6894250 | In this paper, we present a novel distortion model for phrase-based statistical machine translation. Unlike previous phrase distortion models, whose role is simply to penalize non-monotonic alignments [1, 2], the new model assigns the probability of the relative position between two source language phrases aligned to two adjacent target language phrases. The phrase translation probabilities and phrase distortion probabilities are calculated from the N-best phrase alignment of the training bilingual sentences. To obtain the N-best phrase alignment, we devised a novel phrase alignment algorithm based on word translation probabilities and N-best search. Experiments show that the phrase distortion model and phrase translation model improve the BLEU and NIST scores over the baseline method. | NUT-NTT Statistical Machine Translation System for IWSLT 2005 |
d12601574 | Left-to-right (LR) decoding (Watanabe et al., 2006) is a promising decoding algorithm for hierarchical phrase-based translation (Hiero) that visits input spans in arbitrary order while producing the output translation in left-to-right order. This leads to far fewer language model calls, but while LR decoding is more efficient than CKY decoding, it is unable to capture some hierarchical phrase alignments reachable with CKY decoding and suffers from lower translation quality as a result. This paper introduces two improvements to LR decoding that make it comparable in translation quality to CKY-based Hiero. | Two Improvements to Left-to-Right Decoding for Hierarchical Phrase-based Machine Translation |
d17831015 | We present the NewSoMe (News and Social Media) Corpus, a set of subcorpora with annotations on opinion expressions across genres (news reports, blogs, product reviews and tweets) and covering multiple languages (English, Spanish, Catalan and Portuguese). NewSoMe is the result of an effort to increase the opinion corpus resources available in languages other than English, and to build a unifying annotation framework for analyzing opinion in different genres, including controlled text, such as news reports, as well as different types of user generated content (UGC). Given the broad design of the resource, most of the annotation effort was carried out using crowdsourcing platforms: Amazon Mechanical Turk and CrowdFlower. This created an excellent opportunity to research the feasibility of crowdsourcing methods for annotating large amounts of text in different languages. | The NewSoMe Corpus: A Unifying Opinion Annotation Framework across Genres and in Multiple Languages |
d2869400 | For NLP applications that require some sort of semantic interpretation, it would be helpful to know which expressions exhibit an idiomatic meaning and which expressions exhibit a literal meaning. We investigate whether automatic word alignment in existing parallel corpora facilitates the classification of candidate expressions along a continuum ranging from literal and transparent expressions to idiomatic and opaque expressions. Our method relies on two criteria: (i) meaning predictability, measured as semantic entropy, and (ii) the overlap between the meaning of an expression and the meaning of its component words. We approximate this overlap as the proportion of default alignments. We obtain a significant improvement over the baseline with both measures. | Identifying idiomatic expressions using automatic word-alignment |
d9947240 | The present study examines English patent documents extracted from LexisNexis. We compiled a reference corpus of independent claim texts and focus specifically on their collocation features. The findings suggest that the functional development of the independent claim involves verb-noun collocation and semantic prosody: verb-noun collocations function as semantic triggers affected by semantic prosody. In particular, clausal nominalization ([13]) is observed in verbal clauses. Based on discourse thematic referentiality ([2]), the independent claim shows how clause-specific units construct the patent setting. The result is significant because discourse thematic referentiality addresses how lexical units build up modern patent language, providing empirical evidence for the overall characterization of the independent claim. Moreover, the rhetorical structure and lexical meaning of the independent claim can be derived from components of clausal types as they occur collocationally, referentially and dependently. Mutual information is attainable with the help of the selectional collocation features that specific clausal types represent in natural language processing of modern patent language. We suggest the development of the independent claim as a primer for Patent English. | Collocation Features of Independent Claim in U.S. Patent Documents: Information Retrieval from LexisNexis |
d18487464 | The research described in this work focuses on identifying key components for the task of irony detection. By analyzing a set of customer reviews considered ironic in both social and mass media, we try to find hints about how to deal with this task from a computational point of view. Our objective is to gather a set of discriminating elements to represent irony, in particular the kind of irony expressed in such reviews. To this end, we built a freely available data set of ironic reviews collected from Amazon. These reviews were posted on the basis of an online viral effect, i.e. content whose effect triggers a chain reaction among people. The findings were assessed employing three classifiers. The results show interesting hints regarding the patterns and, especially, regarding the implications for sentiment analysis. | Mining Subjective Knowledge from Customer Reviews: A Specific Case of Irony Detection |
d3193864 | In this paper, we explore a conceptual resource for Chinese nominal phrases, which allows multi-dependency and distinction between dependency and the corresponding exact relation. We also provide an ILP-based method to learn mapping rules from training data, and use the rules to analyze new nominal phrases. | Coling 2008: Companion volume - Posters and Demonstrations |
d16058519 | In this paper we present a system for automatic Arabic text diacritization using three levels of analysis granularity in a layered back-off manner. We build and exploit diacritized language models (LMs) for each of three different levels of granularity: surface form, morphological segmentation into prefix/stem/suffix, and character level. For each of the passes, we use Viterbi search to pick the most probable diacritization per word in the input. We start with the surface form LM, followed by the morphological level, and finally we leverage the character level LM. Our system outperforms all of the published systems evaluated against the same training and test data. It achieves a 10.87% WER for complete full diacritization, including lexical and syntactic diacritization, and a 3.0% WER for lexical diacritization, ignoring syntactic diacritization. | A Layered Language Model based Hybrid Approach to Automatic Full Diacritization of Arabic |
d226262366 | ||
d43371876 | ||
d219302978 | ||
d226239135 | ||
d9425866 | The Linguistic Data Consortium (LDC) creates a variety of linguistic resources - data, annotations, tools, standards and best practices - for many sponsored projects. The programming staff at LDC has created the tools and technical infrastructures that support the data creation efforts for these projects, covering all aspects of data creation: data scouting, data collection, data selection, annotation, search, data tracking and workflow management. This paper introduces a number of samples of the LDC programming staff's work, with particular focus on the recent additions and updates to the suite of software tools developed by LDC. Tools introduced include the GScout Web | Annotation Tool Development for Large-Scale Corpus Creation Projects at the Linguistic Data Consortium |
d17904150 | In this contribution we present a new methodology to compile large language resources for domain-specific taxonomy learning. We describe the stages necessary to deal with the rich morphology of an agglutinative language, i.e. Korean, and present a second-order machine learning algorithm to unveil term similarity from a given raw text corpus. The language resource compilation described is part of a fully automatic top-down approach to constructing taxonomies, without involving the human effort that is usually required. | Compiling large language resources using lexical similarity metrics for domain taxonomy learning |
d16033896 | Empirical experience and observations have shown us that when powerful and highly tunable classifiers such as maximum entropy classifiers, boosting and SVMs are applied to language processing tasks, it is possible to achieve high accuracies, but eventually their performances all tend to plateau out at around the same point. To further improve performance, various error correction mechanisms have been developed, but in practice, most of them cannot be relied on to predictably improve performance on unseen data; indeed, depending upon the test set, they are as likely to degrade accuracy as to improve it. This problem is especially severe if the base classifier has already been finely tuned. In recent work, we introduced N-fold Templated Piped Correction, or NTPC ("nitpick"), an intriguing error corrector that is designed to work in these extreme operating conditions. Despite its simplicity, it consistently and robustly improves the accuracy of existing highly accurate base models. This paper investigates some of the more surprising claims made by NTPC, and presents experiments supporting an Occam's Razor argument that more complex models are damaging or unnecessary in practice. | Why Nitpicking Works: Evidence for Occam's Razor in Error Correctors |
d210722212 | ||
d1693756 | Implemented methods for proper name recognition rely on large gazetteers of common proper nouns and a set of heuristic rules (e.g. Mr. as an indicator of a PERSON entity type). Though the performance of current PN recognizers is very high (over 90%), it is important to note that this problem is by no means a "solved problem". Existing systems perform extremely well on newswire corpora by virtue of the availability of large gazetteers and rule bases designed for specific tasks (e.g. recognition of Organization and Person entity types as specified in recent Message Understanding Conferences, MUC). However, large gazetteers are not available for most languages and applications other than newswire texts and, in any case, proper nouns are an open class. In this paper we describe a context-based method to assign an entity type to unknown proper names (PNs). Like many others, our system relies on a gazetteer and a set of context-dependent heuristics to classify proper nouns. However, due to the unavailability of large gazetteers in Italian, over 20% of detected PNs cannot be semantically tagged. The algorithm that we propose assigns an entity type to an unknown PN based on the analysis of syntactically and semantically similar contexts already seen in the application corpus. The performance of the algorithm is evaluated not only in terms of precision, following the tradition of MUC conferences, but also in terms of Information Gain, an information-theoretic measure that takes into account the complexity of the classification task. | Automatic Semantic Tagging of Unknown Proper Names |
d6791168 | In this paper, we explore statistical machine translation (SMT) approaches to automatic text simplification (ATS) for Spanish. First, we compare the performances of the standard phrase-based (PB) and hierarchical (HIERO) SMT models in this specific task. In both cases, we build two models, one using the TS corpus with "light" simplifications and the other using the TS corpus with "heavy" simplifications. Next, we compare the two best systems with the state-of-the-art text simplification system for Spanish (Simplext). Our results, based on an extensive human evaluation, show that the SMT-based systems perform equally as well as, or better than, Simplext, despite the very small datasets used for training and tuning. | Automatic Text Simplification for Spanish: Comparative Evaluation of Various Simplification Strategies |
d53629190 | This paper presents methodologies for interoperable annotation of events and event relations across different domains, based on notions proposed in prior work. In addition to the interoperability, our annotation scheme supports a wide coverage of events and event relations. We employ the methodologies to annotate events and event relations on Simple Wikipedia articles in 10 different domains. Our analysis demonstrates that the methodologies can allow us to annotate events and event relations in a principled manner against the wide variety of domains. Despite our relatively wide and flexible annotation of events, we achieve high inter-annotator agreement on event annotation. As for event relations, we obtain reasonable inter-annotator agreement. We also provide an analysis of issues on annotation of events and event relations that could lead to annotators' disagreement. | Interoperable Annotation of Events and Event Relations across Domains |
d6784154 | Quotation and opinion extraction, discourse and factuality have all partly addressed the annotation and identification of Attribution Relations. However, disjoint efforts have provided a partial and partly inaccurate picture of attribution and generated small or incomplete resources, thus limiting the applicability of machine learning approaches. This paper presents PARC 3.0, a large corpus fully annotated with attribution relations (ARs). The annotation scheme was tested with an inter-annotator agreement study showing satisfactory results for the identification of ARs and high agreement on the selection of the text spans corresponding to its constitutive elements: source, cue and content. The corpus, which comprises around 20k ARs, was used to investigate the range of structures that can express attribution. The results show a complex and varied relation of which the literature has addressed only a portion. PARC 3.0 is available for research use and can be used in a range of different studies to analyse attribution and validate assumptions as well as to develop supervised attribution extraction models. | PARC 3.0: A Corpus of Attribution Relations |
d8606739 | With the explosive growth of the Internet, more and more domain-specific environments appear, such as forums, blogs, and MOOCs. Domain-specific words appear in these areas and always play a critical role in domain-specific NLP tasks. This paper aims at extracting Chinese domain-specific new words automatically. The extraction of domain-specific new words has two parts, covering both new words in the domain and the especially important words. In this work, we propose a joint statistical model to perform these two tasks simultaneously. Compared to traditional new word detection models, our model doesn't need handcrafted features, which are labor-intensive. Experimental results demonstrate that our joint model achieves a better performance compared with the state-of-the-art methods. | Domain-Specific New Words Detection in Chinese |
d2872933 | This paper describes our work in developing a bilingual speech recognition system using two SpeechDat databases. The bilingual aspect of this work is of particular importance in the Galician region of Spain, where the two languages Galician and Spanish coexist and one of them, Galician, is a minority language. Based on a global Spanish-Galician phoneme set, we built a bilingual speech recognition system which can handle both languages: Spanish and Galician. The recognizer makes use of context-dependent acoustic models based on continuous density hidden Markov models. The system has been evaluated on an isolated-word large-vocabulary task. The tests show that the Spanish system exhibits better performance than the Galician system due to its better training. The bilingual system provides performance equivalent to that achieved by the language-specific systems. | Acoustic Modeling and Training of a Bilingual ASR System when a Minority Language is Involved |
d6706150 | This paper proposes an annotating scheme that encodes honorifics (respectful words). Honorifics are used extensively in Japanese, reflecting the social relationship (e.g. social ranks and age) of the referents. This referential information is vital for resolving zero pronouns and improving machine translation outputs. Annotating honorifics is a complex task that involves identifying a predicate with honorifics, assigning ranks to referents of the predicate, calibrating the ranks, and connecting referents with their predicates. | Annotating Honorifics Denoting Social Ranking of Referents |
d9971269 | The goal of the DECODA project is to reduce the development cost of Speech Analytics systems by reducing the need for manual annotation. This project aims to propose robust speech data mining tools in the framework of call-center monitoring and evaluation, by means of weakly supervised methods. The applicative framework of the project is the call-center of the RATP (Paris public transport authority). This project tackles two very important open issues in the development of speech mining methods from spontaneous speech recorded in call-centers: robustness (how to extract relevant information from very noisy and spontaneous speech messages) and weak supervision (how to reduce the annotation effort needed to train and adapt recognition and classification models). This paper describes the DECODA corpus collected at the RATP during the project. We present the different annotation levels performed on the corpus, the methods used to obtain them, as well as some evaluation of the quality of the annotations produced. | DECODA: a call-center human-human spoken conversation corpus |
d10678830 | In this work, we investigate the role of morphology on the performance of semantic similarity for morphologically rich languages, such as German and Greek. The challenge in processing languages with richer morphology than English, lies in reducing estimation error while addressing the semantic distortion introduced by a stemmer or a lemmatiser. For this purpose, we propose a methodology for selective stemming, based on a semantic distortion metric. The proposed algorithm is tested on the task of similarity estimation between words using two types of corpus-based similarity metrics: co-occurrence-based and context-based. The performance on morphologically rich languages is boosted by stemming with the context-based metric, unlike English, where the best results are obtained by the co-occurrence-based metric. A key finding is that the estimation error reduction is different when a word is used as a feature, rather than when it is used as a target word. | Word Semantic Similarity for Morphologically Rich Languages |
d11885145 | In this paper, we present a two-stage approach to acquire Japanese unknown morphemes from text with full POS tags assigned to them. We first acquire unknown morphemes making only a morphology-level distinction, and then apply semantic classification to the acquired nouns. One advantage of this approach is that, at the second stage, we can exploit syntactic clues in addition to morphological ones, because as a result of the first-stage acquisition, we can rely on automatic parsing. Japanese semantic classification poses an interesting challenge: proper nouns need to be distinguished from common nouns. This is because Japanese has no orthographic distinction between common and proper nouns and no apparent morphosyntactic distinction between them. We explore lexico-syntactic clues that are extracted from automatically parsed text and investigate their effects. | Semantic Classification of Automatically Acquired Nouns using Lexico-Syntactic Clues |
d195899759 | ||
d236460171 | ||
d236477889 | Capturing the salient information from an input article has been a long-standing challenge for summarization. On Wikipedia, most of the wiki pages about people contain a factual table that lists the basic properties of the people. Illuminatingly, a factual table can be regarded as a natural summary of the key information in the corresponding article. Thus, in this paper we propose the task of table-guided abstractive biography summarization, which utilizes factual tables to capture important information and then generate a summary of a biography. We first introduce the TaGS (Table-Guided Summarization) dataset, the first large-scale biography summarization dataset with tables. Next, we report some statistics about this dataset to validate its quality. We also benchmark several commonly used summarization methods on TaGS and hope this will inspire more exciting methods. | |
d28542925 | Learner corpora are receiving special attention as an invaluable source of educational feedback and are expected to improve teaching materials and methodology. However, they include various types of incorrect sentences. Error type classification is an important task for learner corpora which clarifies for learners why a certain sentence is classified as incorrect, in order to help them not to repeat errors. To address this issue, we defined a set of error type criteria and conducted automatic classification of errors into error types in the sentences from the NAIST Goyo Corpus, achieving an accuracy of 77.6%. We also tried inter-corpus evaluation of our system on the Lang-8 corpus of learner Japanese and achieved an accuracy of 42.3%. To assess the accuracy, we also investigated classification by human judgement and compared the differences in classification between the machine and the human. | Towards Automatic Error Type Classification of Japanese Language Learners' Writing |
d4958096 | In this paper we present ongoing work to produce an expressive TTS reader that can be used both in text and dialogue applications. The system has been previously used to read (English) poetry and it has now been extended to apply to short stories. The text is fully analyzed both at the phonetic and phonological level, and at the syntactic and semantic level. The core of the system is the Prosodic Manager, which takes as input discourse structures and relations and uses this information to modify parameters for the TTS accordingly. The text is transformed into a poem-like structure, where each line corresponds to a Breath Group that is semantically and syntactically consistent. Stanzas correspond to paragraph boundaries. Analogical parameters are related to ToBI theoretical indices, but their number is doubled. | Semantics and Discourse Processing for Expressive TTS |
d18881132 | We report on the joint GE/NYU natural language information retrieval project as related to the Tipster Phase 2 research conducted initially at NYU and subsequently at GE R&D Center and NYU. The evaluation results discussed here were obtained in connection with the 3rd and 4th Text Retrieval Conferences (TREC-3 and TREC-4). The main thrust of this project is to use natural language processing techniques to enhance the effectiveness of full-text document retrieval. During the course of the four TREC conferences, we have built a prototype IR system designed around a statistical full-text indexing and search backbone provided by the NIST's Prise engine. The original Prise has been modified to allow handling of multi-word phrases, differential term weighting schemes, automatic query expansion, index partitioning and rank merging, as well as dealing with complex documents. Natural language processing is used to preprocess the documents in order to extract content-carrying terms, discover inter-term dependencies and build a conceptual hierarchy specific to the database domain, and process user's natural language requests into effective search queries. | NATURAL LANGUAGE INFORMATION RETRIEVAL: TIPSTER-2 FINAL REPORT |
d8220020 | By combining information extraction and record linkage techniques, we have created a repository of references to attorneys, judges, and expert witnesses across a broad range of text sources. These text sources include news, caselaw, law reviews, Medline abstracts, and legal briefs among others. We briefly describe our cross document co-reference resolution algorithm and discuss applications these resolved references enable. Among these applications is one that shows summaries of relationships chains between individuals based on their document co-occurrence and cross document co-references. | Cross Document Co-Reference Resolution Applications for People in the Legal Domain |
d7773534 | This paper describes various types of semantic ellipsis and underspecification in natural language, and the ways in which the meaning of semantically elided elements is reconstructed in the Ontological Semantics (OntoSem) text processing environment. The description covers phenomena whose treatment in OntoSem has reached various levels of advancement: fully implemented, partially implemented, and described algorithmically outside of implementation. We present these research results at this point -prior to full implementation and extensive evaluation -for two reasons: first, new descriptive material is being reported; second, some subclasses of the phenomena in question will require a truly long-term effort whose results are best reported in installments. | OntoSem Methods for Processing Semantic Ellipsis |
d18764304 | An important trend in recent works on lexical semantics has been the development of learning methods capable of extracting semantic information from text corpora. The majority of these methods are based on the distributional hypothesis of meaning and acquire semantic information by identifying distributional patterns in texts. In this article, we present a distributional analysis method for extracting nominalization relations from monolingual corpora. The acquisition method makes use of distributional and morphological information to select nominalization candidates. We explain how the learning is performed on a dependency annotated corpus and describe the nominalization results. Furthermore, we show how these results served to enrich an existing lexical resource, the WOLF (Wordnet Libre du Français). We present the techniques that we developed in order to integrate the new information into WOLF, based on both its structure and content. Finally, we evaluate the validity of the automatically obtained information and the correctness of its integration into the semantic resource. The method proved to be useful for boosting the coverage of WOLF and presents the advantage of filling verbal synsets, which are particularly difficult to handle due to the high level of verbal polysemy. | Boosting the Coverage of a Semantic Lexicon by Automatically Extracted Event Nominalizations |
d52804217 | Personal MT (PMT) is a new concept in dialogue-based MT (DBMT), which we are currently studying and prototyping in the LIDIA project. Ideally, a PMT system should run on PCs and be usable by everybody. To get his/her text translated into one or several languages, the writer would accept to cooperate with the system in order to standardize and clarify his/her document. There are many interesting aspects in the design of such a system. The paper briefly presents some of them (hypertext, distributed architecture, guided language, hybrid transfer/interlingua), then goes on to study in more detail the structure of the dialogue with the writer and the place of speech synthesis [1]. | Towards Personal MT: general design, dialogue structure, potential role of speech |
d16571535 | The input of the network is the key problem for Chinese word sense disambiguation using a neural network. This paper presents an input model for a neural network that calculates the mutual information between contextual words and the ambiguous word using a statistical method, taking contextual words up to a certain number on either side of the ambiguous word according to (-M, +N). The experiment adopts a three-layer BP neural network model and shows how the size of the training set and the values of M and N affect the performance of the model. The experimental objects are six pseudowords with three word senses each, constructed according to certain principles. The tested accuracy of our approach reaches 90.31% on a closed corpus, and 89.62% on an open corpus. The experiment proves that the neural network model performs well on word sense disambiguation. | Combining Neural Networks and Statistics for Chinese Word sense disambiguation |
d14511084 | In this paper, we propose a fully automatic system for the acquisition of hypernym/hyponym relations from a large corpus of Turkish. The method relies on both lexico-syntactic patterns and semantic similarity. Once the model has extracted the seeds by using patterns, it applies similarity-based expansion in order to increase recall. For the expansion, several scoring functions within a bootstrapping algorithm are applied and compared. We show that a model based on a particular lexico-syntactic pattern for Turkish can successfully retrieve many hypernym/hyponym relations with high precision. We further demonstrate that the model can statistically expand the hyponym list to go beyond the limitations of lexico-syntactic patterns and achieve better recall. During the expansion phase, the hypernym/hyponym pairs are automatically and incrementally extracted depending on their statistics by employing various association measures and graph-based scoring. In brief, the fully automatic model mines only a large corpus and produces is-a relations with promising precision and recall. To achieve this goal, several methods and approaches were designed, implemented, compared and evaluated. | Automatic Extraction of Turkish Hypernym-Hyponym Pairs From Large Corpus |
d967582 | UPV-PRHLT participated in the System Combination task of the Fifth Workshop on Statistical Machine Translation (WMT 2010). For each translation direction, all the submitted systems were combined into a consensus translation. These consensus translations always improve on the translation quality of the best individual system. | The UPV-PRHLT Combination System for WMT 2010 |
d220047255 | ||
d1242188 | We address the issues of transliteration between Indian languages and English, especially for named entities. We use an EM algorithm to learn the alignment between the languages. We find that there are a lot of ambiguities in the rules mapping the characters in the source language to the corresponding characters in the target language. Some of these ambiguities can be handled by capturing context through learning multi-character-based alignments and using character n-gram models. We observed that a word in the source script may actually have originated from different languages. Instead of learning one model for the language pair, we propose using multiple models and a classifier to decide which model to use. A contribution of this work is that the models and classifiers are learned in a completely unsupervised manner. Using our system we were able to obtain quite accurate transliteration models. | Learning Multi Character Alignment Rules and Classification of training data for Transliteration |
d13138861 | This paper describes baseline systems for Finnish-English and English-Finnish machine translation using standard phrase-based and factored models including morphological features. We experiment with compound splitting and morphological segmentation and study the effect of adding noisy out-of-domain data to the parallel and the monolingual training data. Our results stress the importance of training data and demonstrate the effectiveness of morphological pre-processing of Finnish. | Morphological Segmentation and OPUS for Finnish-English Machine Translation
d336840 | In this paper we explore the identification of negated molecular events (e.g. protein binding, gene expressions, regulation, etc.) in biomedical research abstracts. We construe the problem as a classification task and apply a machine learning (ML) approach that uses lexical, syntactic, and semantic features associated with sentences that represent events. Lexical features include negation cues, whereas syntactic features are engineered from constituency parse trees and the command relation between constituents. Semantic features include event type and participants. We also consider a rule-based approach that uses only the command relation. On a test dataset, the ML approach showed significantly better results (51% F-measure) compared to the command-based rules (35-42% F-measure). Training a separate classifier for each event class proved to be useful, as the micro-averaged F-score improved to 63% (with 88% precision), demonstrating the potential of task-specific ML approaches to negation detection. | Using SVMs with the Command Relation Features to Identify Negated Events in Biomedical Literature
d24472464 | To be able to use existing natural language processing tools for analysing historical text, an important preprocessing step is spelling normalisation, converting the original spelling to present-day spelling, before applying tools such as taggers and parsers. In this paper, we compare a probabilistic, language-independent approach to spelling normalisation based on statistical machine translation (SMT) techniques to a rule-based system combining dictionary lookup with rules and non-probabilistic weights. The rule-based system reaches the best accuracy, up to 94% precision at 74% recall, while the SMT system shows improvements for each tested time period. | Comparing Rule-based and SMT-based Spelling Normalisation for English Historical Texts
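A minimal sketch of the rule-based side described above (dictionary lookup first, then weighted character-rewrite rules); all dictionary entries, rules, and weights here are invented for illustration and are not the system's actual resources:

```python
# Hypothetical rule-based normaliser: exact dictionary lookup wins; otherwise
# apply the highest-weighted non-probabilistic rewrite rule that fires.
DICTIONARY = {"loue": "love", "haue": "have"}
RULES = [("vv", "w", 1.0), ("ie", "y", 0.8), ("u", "v", 0.5)]  # (old, new, weight)

def normalise(word):
    if word in DICTIONARY:                       # exact lookup wins
        return DICTIONARY[word]
    for old, new, _w in sorted(RULES, key=lambda r: -r[2]):
        if old in word:                          # first applicable rule by weight
            return word.replace(old, new)
    return word                                  # unchanged if nothing applies

print(normalise("haue"), normalise("vvord"), normalise("king"))
```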
d22971176 | 張隆勳, 王逸如, 陳信宏, Department of Communication Engineering, National Chiao Tung University. Abstract: In this paper, a locally recorded Mandarin broadcast news corpus, MATBN, is used to build a baseline speech recognition system for evaluating Mandarin speech recognition performance in the broadcast news environment. The acoustic models of the recognizer consist of 100 final-dependent initial models and 40 final models; acoustic models are also built for particles and paralinguistic phenomena. For language modeling, a 60,000-word Mandarin lexicon and its bigram model are used; in addition, the simplest form of prosodic information, an inter-syllable silence duration model, is incorporated to improve recognizer performance and the accuracy of word and sentence boundaries. Finally, word recognition rates of 86.9%, 76.4% and 48.5% are obtained for the three speaker conditions in the Mandarin broadcast news corpus: anchors, field reporters and interviewees, respectively. 1. Introduction: In 1995, four renowned speech recognition research institutions (BBN, CMU, Dragon and IBM) began to take part in building a speech corpus for what was then a pioneering speech recognition evaluation; the corpus is called Hub-4, and the evaluation aimed at automatic broadcast news transcription [1]. Much material has since been added to the Hub-4 corpus; in fact, Hub-4 already contains Mandarin broadcast news data, recorded from broadcast news programs of the mainland Chinese central station and a Los Angeles Chinese-language station. The results of the 1999 DARPA speech recognition evaluation show that major research institutions worldwide have made significant progress in automatic broadcast news transcription, not only in speech recognition but also in segmentation, information extraction and topic detection. For English broadcast news recognition, in the F0 condition of the DARPA Broadcast News (Hub-4) Evaluation [2], where training and test data contain only speech without environmental noise, background music or foreign-accented speakers, a word error rate (WER) of 7.8% has been achieved; in the F1 condition, which adds spontaneous broadcast news speech, i.e., data containing disfluencies, a WER of 14.4% has been reached [2]. For Mandarin broadcast news recognition, Dragon reported in 1998 a WER of 36% and a character error rate (CER) of 25% [3]. In Taiwan, starting in 2001, five academic institutions (National Taiwan University, Academia Sinica, National Tsing Hua University, National Cheng Kung University and National Chiao Tung University) launched a three-year Mandarin broadcast speech collection project funded by the National Science Council. (We thank Dr. 王新民 of Academia Sinica for assistance with the MATBN corpus annotation and Prof. 陳柏琳 of National Taiwan Normal University for providing the lexicon.) Part of this project collected the Mandarin broadcast news corpus (MATBN, Mandarin Across Taiwan - Broadcast News) [4,5]; over the three-year project, 198 hours of Mandarin broadcast news speech were collected and transcribed. This corpus, MATBN, is now being transferred from the National Science Council to the linguistics association. 2. The Mandarin Broadcast News Corpus (MATBN) | Evaluation of Mandarin Broadcast News Transcription System
d235097287 | ||
d18444659 | The automatic grading of oral language tests has been the subject of much research in recent years. Several obstacles lie in the way of achieving this goal. Recent work suggests a testing technique called elicited imitation (EI) that can serve to accurately approximate global oral proficiency. This testing methodology, however, does not incorporate some fundamental aspects of language, such as fluency. Other work has suggested another testing technique, simulated speech (SS), as a supplement or an alternative to EI that can provide automated fluency metrics. In this work, we investigate a combination of fluency features extracted from SS tests and EI test scores as a means to more accurately predict oral language proficiency. Using machine learning and statistical modeling, we identify which features automatically extracted from SS tests best predicted hand-scored SS test results, and demonstrate the benefit of adding EI scores to these models. Results indicate that the combination of EI and fluency features do indeed more effectively predict hand-scored SS test scores. We finally discuss implications of this work for future automated oral testing scenarios. | Combining elicited imitation and fluency features for oral proficiency measurement |
d199379867 | ||
d209892156 | ||
d5739885 | The paper presents the system developed by the SentimentalITsts team for participation in SemEval-2016 Task 4, subtasks A, B and C. The developed system uses off-the-shelf solutions for the development of a quick sentiment analyzer for tweets. However, the lack of any syntactic or semantic information resulted in performances lower than those of other teams. | SentimentalITsts at SemEval-2016 Task 4: building a Twitter sentiment analyzer in your backyard
d5700290 | This paper presents suggested semantic representations for different types of referring expressions in the format of Minimal Recursion Semantics and sketches syntactic analyses which can create them compositionally. We explore cross-linguistic harmonization of these representations, to promote interoperability and reusability of linguistic analyses. We follow Borthen and Haugereid (2005) in positing COG-ST ('cognitive status') as a feature on the syntax-semantics interface to handle phenomena associated with definiteness. Our proposal helps to unify the treatments of definiteness markers, demonstratives, overt pronouns and null anaphora across languages. In languages with articles, they contribute an existential quantifier and the appropriate value for COG-ST. In other languages, the COG-ST value is determined by an affix. The contribution of demonstrative determiners is decomposed into a COG-ST value, a quantifier, and proximity information, each of which can be contributed by a different kind of grammatical construction in a given language. Along with COG-ST, we posit a feature that distinguishes between pronouns (and null anaphora) that are sensitive to the identity of the referent of their antecedent and those that are sensitive to its type. | Semantic Representations of Syntactically Marked Discourse Status in Crosslinguistic Perspective
d8101711 | We examine various string distance measures for suitability in modeling dialect distance, especially its perception. We find measures superior which do not normalize for word length, but which are sensitive to order. We likewise find evidence for the superiority of measures which incorporate a sensitivity to phonological context, realized in the form of n-grams, although we cannot identify which form of context (bigram, trigram, etc.) is best. However, we find no clear benefit in using gradual as opposed to binary segmental difference when calculating sequence distances. | Evaluation of String Distance Algorithms for Dialectology
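As a point of reference, the kind of order-sensitive, unnormalised sequence distance discussed above can be illustrated with plain Levenshtein distance (binary segmental difference, no normalisation by word length); this is a generic sketch, not the paper's exact measure:

```python
def levenshtein(a, b):
    """Classic edit distance: order-sensitive, not normalised by word length."""
    prev = list(range(len(b) + 1))            # distance from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                             # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution (binary)
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"), levenshtein("melk", "molk"))
```

A gradual-difference variant would replace the binary `(ca != cb)` cost with a graded phonetic segment distance.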
d52100723 | We propose a new method to detect when users express the intent to leave a service, also known as churn. While previous work focuses solely on social media, we show that this intent can be detected in chatbot conversations. As companies increasingly rely on chatbots, they need an overview of potentially churny users. To this end, we crowdsource and publish a dataset of churn intent expressions in chatbot interactions in German and English. We show that classifiers trained on social media data can detect the same intent in the context of chatbots. We introduce a classification architecture that outperforms existing work on churn intent detection in social media. Moreover, we show that, using bilingual word embeddings, a system trained on combined English and German data outperforms monolingual approaches. As the only existing dataset is in English, we crowdsource and publish a novel dataset of German tweets. We thus underline the universal aspect of the problem, as examples of churn intent in English help us identify churn in German tweets and chatbot conversations. | Churn Intent Detection in Multilingual Chatbot Conversations and Social Media
d13979944 | This position paper argues for an interactive approach to text understanding. The proposed model extends an existing semantics-based text authoring system by using the input text as a source of information to assist the user in re-authoring its content. The approach permits a reliable deep semantic analysis by combining automatic information extraction with a minimal amount of human intervention. | Towards Interactive Text Understanding |
d6670517 | ||
d7204869 | We propose an automatic machine translation (MT) evaluation metric that calculates a similarity score (based on precision and recall) of a pair of sentences. Unlike most metrics, we compute a similarity score between items across the two sentences. We then find a maximum weight matching between the items such that each item in one sentence is mapped to at most one item in the other sentence. This general framework allows us to use arbitrary similarity functions between items, and to incorporate different information in our comparison, such as n-grams, dependency relations, etc. When evaluated on data from the ACL-07 MT workshop, our proposed metric achieves higher correlation with human judgements than all 11 automatic MT evaluation metrics that were evaluated during the workshop. | MAXSIM: A Maximum Similarity Metric for Machine Translation Evaluation |
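The matching step in the row above can be illustrated with a tiny brute-force maximum-weight bipartite matching over an assumed item-similarity matrix; a real implementation would use an efficient algorithm such as Kuhn-Munkres, and the matrix values here are invented:

```python
from itertools import permutations

def max_weight_matching(sim):
    """Brute-force maximum-weight bipartite matching for a small similarity
    matrix sim[i][j] (rows = items of sentence 1, columns = items of
    sentence 2; assumes len(sim) <= len(sim[0])). Each item is mapped to
    at most one item on the other side."""
    n, m = len(sim), len(sim[0])
    best, best_pairs = float("-inf"), []
    for perm in permutations(range(m), n):       # injective row-to-column maps
        score = sum(sim[i][j] for i, j in enumerate(perm))
        if score > best:
            best, best_pairs = score, list(enumerate(perm))
    return best, best_pairs

sim = [[0.9, 0.1],     # e.g. n-gram or dependency similarities between items
       [0.4, 0.8]]
print(max_weight_matching(sim))
```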
d219304585 | ||
d5391048 | Abstract: This paper treats automatic, probabilistic tagging. First, residual, untagged output from the lexical analyser SWETWOL is described and discussed. A method of tagging residual output is proposed and implemented: the left-stripping method. This algorithm, employed by the module ENDTAG, recursively strips a word of its leftmost letter, and looks up the remaining 'ending' in a dictionary. If the ending is found, ENDTAG tags it according to the information found in the dictionary. If the ending is not found in the dictionary, a match is searched for in ending lexica containing statistical information about word classes associated with the ending and the relative frequency of each word class. If a match is found in the ending lexica, the word is given graded tagging according to the statistical information in the ending lexica. If no match is found, the ending is stripped of what is now its leftmost letter and is recursively searched for in the dictionary and ending lexica (in that order). The ending lexica containing the statistical information employed in this paper are obtained from a reversed version of Nusvensk Frekvensordbok (Allén 1970), and contain endings of one to seven letters. Success rates for ENDTAG as a stand-alone module are presented. Introduction: One problem with automatic tagging and lexical analysis is that they are never (as yet) 100% accurate. Varying tagging algorithms, using different methods, arrive at success rates in the area of 94-99%. After machine analysis there remains an untagged residue, and the complete output may, somewhat roughly, be divided into three subgroups: (1) a group of unambiguously tagged words; (2) a group of homographs given alternative tags; (3) a residual group lacking tags. This paper is an abbreviated version of my diploma paper in computational linguistics with the same title, presented in April 1993 at the department of linguistics, computational linguistics, Stockholm University. 
| Koskenniemi 1983a,b; Pitkanen 1992 |
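The left-stripping algorithm described in the row above can be sketched as follows; the dictionary and ending-lexicon entries are invented toy data, not the resources derived from Nusvensk Frekvensordbok:

```python
# Toy left-stripping tagger (ENDTAG-style): recursively strip the leftmost
# letter and look the remaining "ending" up, first in a dictionary of known
# endings, then in statistical ending lexica with graded word-class weights.
ENDING_DICT = {"ning": ["NOUN"]}                      # deterministic endings
ENDING_STATS = {"ande": {"VERB": 0.7, "ADJ": 0.3}}    # graded endings

def endtag(word):
    if not word:
        return None                       # nothing matched at any length
    if word in ENDING_DICT:
        return ENDING_DICT[word]          # unambiguous tag(s)
    if word in ENDING_STATS:
        return ENDING_STATS[word]         # graded tagging
    return endtag(word[1:])               # strip leftmost letter, recurse

print(endtag("tidning"), endtag("springande"), endtag("xyz"))
```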
d248780163 | Using technology for analysis of human emotion is a relatively nascent research area. There are several types of data where emotion recognition can be employed, such as text, images, audio and video. In this paper, the focus is on emotion recognition in text data. Emotion recognition in text can be performed from both written comments and from conversations. In this paper, the dataset used for emotion recognition is a list of comments. While extensive research is being performed in this area, the language of the text plays a very important role. In this work, the focus is on the Dravidian language of Tamil. The language and its script demand an extensive pre-processing. The paper contributes to this by adapting various preprocessing methods to the Dravidian language of Tamil. A CNN method has been adopted for the task at hand. The proposed method has achieved a comparable result. | JudithJeyafreedaAndrew@TamilNLP-ACL2022:CNN for Emotion Analysis in Tamil
d9979623 | In this study we investigate how we can learn both: (a) syntactic classes that capture the range of predicate argument structures (PASs) of a word and the syntactic alternations it participates in, but ignore large semantic differences in the component words; and (b) syntactico-semantic classes that capture PAS and alternation properties, but are also semantically coherent (à la Levin classes). We focus on Indonesian as our case study, a language that is spoken by more than 165 million speakers, but is nonetheless relatively under-resourced in terms of NLP. In particular, we focus on the syntactic variation that arises with the affixing of the Indonesian suffix -kan, which varies according to the kind of stem it attaches to. | Unsupervised Word Class Induction for Under-resourced Languages: A Case Study on Indonesian
d27804845 | The performance of a speech recognition system is often degraded due to the mismatch between the environments of development and application. One of the major sources that gives rise to this mismatch is additive noise. The approaches for handling the problem of additive noise can be divided into three classes: speech enhancement, robust speech feature extraction, and compensation of speech models. In this thesis, we focus on the second class, robust speech feature extraction. Approaches to robust speech feature extraction are often combined with voice activity detection in order to estimate the noise characteristics. A voice activity detector (VAD) is used to discriminate the speech and noise-only portions within an utterance. This thesis | 端點偵測技術在強健語音參數擷取之研究 Study of the Voice Activity Detection Techniques for Robust Speech Feature Extraction
d131761661 | Many applications (not necessarily only from computational linguistics) involving record- or graph-like structures would benefit from a framework which would allow one to efficiently test a single structure φ under various operations against a compact representation of a set of similar structures. Besides a Boolean answer, we would also like to see those structures stored in the index which are entailed by the operation. In our case, we are especially interested in operations that implement feature structure subsumption and unifiability. The urgent need for such a kind of framework is related to our work on the approximation of (P)CFGs from unification-based grammars. We not only define the mathematical apparatus for this in terms of finite-state automata, but also come up with an efficient implementation mostly along the theoretical basis, together with measurements in which we compare our implementation against a discrimination tree index. | An Efficient Typed Feature Structure Index: Theory and Implementation
d52013306 | We present bot#1337: a dialog system developed for the 1st NIPS Conversational Intelligence Challenge 2017 (ConvAI). The aim of the competition was to implement a bot capable of conversing with humans based on a given passage of text. To enable conversation, we implemented a set of skills for our bot, including chit-chat, topic detection, text summarization, question answering and question generation. The system has been trained in a supervised setting using a dialogue manager to select an appropriate skill for generating a response. The latter allows a developer to focus on the skill implementation rather than the finite state machine based dialog manager. The proposed system bot#1337 won the competition with an average dialogue quality score of 2.78 out of 5 given by human evaluators. Source code and trained models for the bot#1337 are available on GitHub. | NIPS Conversational Intelligence Challenge 2017 Winner System: Skill-based Conversational Agent with Supervised Dialog Manager
d219307090 | ||
d219308815 | ||
d237155071 | ||
d21696731 | Present-day empirical research in computational or theoretical linguistics has at its disposal an enormous wealth in the form of richly annotated and diverse corpus resources. Especially the points of contact between modalities are areas of exciting new research. However, progress in those areas in particular suffers from poor coverage in terms of visualization or query systems. Many limitations for such tools stem from the non-uniform representations of very diverse resources and the lack of standards that address this problem from the perspective of processing or querying. In this paper we present our framework for modeling arbitrary multi-modal corpus resources in a unified form for processing tools. It serves as a middleware system and combines the expressiveness of general graph-based models with a rich metadata schema to preserve linguistic specificity. By separating data structures and their linguistic interpretations, it assists tools on top of it so that they can in turn allow their users to more efficiently exploit corpus resources. | A Lightweight Modeling Middleware for Corpus Processing |
d14929695 | In this paper, we discuss the integration of metaphor information into the RDF/OWL representation of EuroWordNet. First, the lexical database WordNet and its variants are presented. After a brief description of the Hamburg Metaphor Database, examples of its conversion into the RDF/OWL representation of EuroWordNet are discussed. The metaphor information is added to the general EuroWordNet data and the new resulting RDF/OWL structure is shown in LexiRes, a visualization tool developed and adapted for handling structures of ontological and lexical databases. We show how LexiRes can be used to further edit the newly added metaphor information, and explain some problems with this new type of information on the basis of examples. | Integrating Metaphor Information into RDF/OWL EuroWordNet |
d7489937 | Semantic representation of text is key to text understanding and reasoning. In this paper, we present Polaris, Lymba's semantic parser. Polaris is a supervised semantic parser that, given text, extracts semantic relations. It extracts relations from a wide variety of lexico-syntactic patterns, including verb-argument structures, noun compounds and others. The output can be provided in several formats: XML, RDF triples, logic forms or plain text, facilitating interoperability with other tools. Polaris is implemented using eight separate modules. Each module is explained and a detailed example of processing using a sample sentence is provided. Overall results using a benchmark are discussed. Per-module performance, including errors made and pruned by each module, is also analyzed. | Polaris: Lymba's Semantic Parser
d7058681 | In this paper we present the SemEval-2014 Task 2 on spoken dialogue grammar induction. The task is to classify a lexical fragment to the appropriate semantic category (grammar rule) in order to construct a grammar for spoken dialogue systems. We describe four subtasks covering two languages, English and Greek, and three speech application domains, travel reservation, tourism and finance. The classification results are compared against the ground truth. Weighted and unweighted precision, recall and f-measure are reported. Three sites participated in the task with five systems, employing a variety of features and in some cases using external resources for training. The submissions manage to significantly beat the baseline, achieving an f-measure of 0.69 in comparison to 0.56 for the baseline, averaged across all subtasks. | SemEval-2014 Task 2: Grammar Induction for Spoken Dialogue Systems
d60837759 | We present a simple approach for Asian language text classification without word segmentation, based on statistical n-gram language modeling. In particular, we examine Chinese and Japanese text classification. With character n-gram models, our approach avoids word segmentation. However, unlike traditional ad hoc n-gram models, the statistical language modeling based approach has a strong information-theoretic basis and avoids an explicit feature selection procedure, which potentially loses a significant amount of useful information. We systematically study the key factors in language modeling and their influence on classification. Experiments on Chinese TREC and Japanese NTCIR topic detection show that the simple approach can achieve better performance compared to traditional approaches while avoiding word segmentation, which demonstrates its superiority in Asian language text classification. | Text Classification in Asian Languages without Word Segmentation
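A toy version of the segmentation-free idea above: classify by which class's character n-gram language model assigns the text the highest probability. Here character bigrams with add-one smoothing on invented English strings stand in for the real system's Chinese/Japanese models and more careful smoothing:

```python
from collections import Counter
import math

def train_bigram(texts):
    """Character-bigram counts for one class (no word segmentation needed)."""
    counts = Counter()
    for t in texts:
        counts.update(zip(t, t[1:]))
    return counts

def logprob(model, text, vocab_size=100):
    """Add-one-smoothed log probability of the text's character bigrams."""
    total = sum(model.values())
    return sum(math.log((model[bg] + 1) / (total + vocab_size))
               for bg in zip(text, text[1:]))

def classify(text, models):
    # pick the class whose character LM assigns the highest probability
    return max(models, key=lambda c: logprob(models[c], text))

models = {"greeting": train_bigram(["hello there", "hello friend"]),
          "farewell": train_bigram(["goodbye now", "bye bye"])}
print(classify("hello", models))
```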
d919225 | Preservation of an endangered language is an important and difficult task. The preservation project should include documentation, archiving and development of shared resources for the endangered language. In addition, the project will consider how to revitalize this endangered language among the younger generation. In this paper, we propose an integrated framework that will connect the three different tasks: language archiving, language processing and creating learning materials. We are using this framework to document one Taiwanese aboriginal language: Yami. The proposed framework should be an effective tool for documenting other endangered languages in Asia. | An Integrated Framework for Archiving, Processing and Developing Learning Materials for an Endangered Aboriginal Language in Taiwan
d14199519 | We report applications of language technology to analyzing historical documents in the Database for the Study of Modern Chinese Thoughts and Literature (DSMCTL). We studied two historical issues with the reported techniques: the conceptualization of "huaren" (華人, Chinese people) and the attempt to institute constitutional monarchy in the late Qing dynasty. We also discuss research challenges for supporting sophisticated issues using our experience with DSMCTL, the Database of Government Officials of the Republic of China, and the Dream of the Red Chamber. Advanced techniques and tools for lexical, syntactic, semantic, and pragmatic processing of language information, along with more thorough data collection, are needed to strengthen the collaboration between historians and computer scientists. Whether we can extend the applications of current NLP techniques to historical Chinese text and in the humanistic context (e.g., Xiang & Unsworth, 2006; Hsiang, 2011a; Hsiang, 2011b; Yu, 2012) is a challenge. Word senses and grammar have changed over time, and people have assigned different meanings to the same symbols, phrases, and word patterns. We explored the applications of NLP techniques to support the study of historical issues, based on the textual material from three data sources: the Database for the Study of Modern Chinese Thoughts and Literature (DSMCTL) [1], the Database of Government Officials of the Republic of China (DGOROC) [2], and the Dream of the Red Chamber (DRC) [3]. DSMCTL is a very large database that contains more than 120 million Chinese characters about Chinese history between 1830 and 1930. DGOROC includes government announcements starting from 1912 to the present. DRC is a famous Chinese novel that was composed in the Qing dynasty. These data sources offer great chances for researchers to study Chinese history and literature, and, due to the huge amount of content, computing technology is expected to provide meaningful help. In this paper, we report how we employed NLP techniques to support historical studies. Chinese text did not contain punctuation until modern days, so we had to face not only the well-known Chinese segmentation problem but also the problem of missing sentence boundaries. In recent attempts, we applied the PAT tree method (Chien, 1999) to extract frequent Chinese strings from the corpora, and we discovered that the distribution over the frequencies of these strings conforms to Zipf's law (Zipf, 1949). We investigated the issue of how the Qing government attempted to convert from an imperial monarchy to a constitutional monarchy between 1905 and 1911, using the emperor's memorials (奏摺, zou4 zhe2) [4] about the preparation of constitutional monarchy [5]. To this end, we selected the keywords from the frequent strings with human inspection, and we applied [1] 中國近現代思想及文學史專業數據庫 (zhong1 guo2 jin4 xian4 dai4 si1 xiang3 ji2 wen2 xue2 shi3 zhuan1 ye4 shu4 ju4 ku4): http://dsmctl.nccu.edu.tw/d_about_e.html, a joint research project between the National Chengchi University (Taiwan) and the Chinese University of Hong Kong (Hong Kong), led by Guantao Jin and Qingfeng Liu. [2] 中華民國政府官職資料庫 (zhong1 hua2 min2 guo2 zheng4 fu3 guan1 zhi2 zi1 liao4 ku4): http://gpost.ssic.nccu.edu.tw/. The development of this database was led by Jyi-Shane Liu of the National Chengchi University. [3] 紅樓夢 (hong1 lou2 meng4): http://en.wikipedia.org/wiki/Dream_of_the_Red_Chamber, a very famous Chinese novel that was composed in the eighteenth century. [4] Most Chinese words are followed by their Hanyu pinyin and tone the first time they appear in this paper. [5] 清末籌備立憲檔案史料 (qing1 mo4 chou2 bei4 li4 xian4 dang3 an4 shi3 liao4). | Some Chances and Challenges in Applying Language Technologies to Historical Studies in Chinese
d236486317 | ||
d15389969 | This paper proposes a method to extract constructions in a formal and mathematically rigorous way using the technique of formal concept analysis (FCA). Looking at lemmas of core components of constructions as objects and semantic frames of the construction instances as attributes, in terms of FCA, a complete lattice that represents the network structure of constructions can be obtained. We conducted a preliminary experiment extracting the network of sub-patterns of the English ditransitive construction using a relatively small, semantically tagged corpus. The result displays the potential capability of this method, both for verifying and substantiating previous theoretical works on construction grammar and for enabling the application of such works to more practical enterprises in the field of natural language processing. | Extraction of English Ditransitive Constructions Using Formal Concept Analysis
d36756123 | RÉSUMÉ : L'objectif de cette étude est d'observer la configuration labiale des voyelles /i, y, a/ à partir de mesures prises sur les contours interne et externe des lèvres. Les variations de configuration labiale en fonction des voyelles, des locuteurs et de la position prosodique de la voyelle sont aussi bien capturées par les contours internes et externes pour les mesures d'aire et le facteur K2 (forme du contour), alors que les distances verticale et horizontale dépendent du contour étudié. Les variations entre locuteurs s'observent davantage sur le contour externe comme attendu et les variations induites par la position prosodique sont reflétées avec une plus grande précision sur le contour interne. ABSTRACT: Variations of labial configuration of vowels /i, y, a/: effect of prosodic position and speaker. Variations in the labial configuration of the French vowels /i, y, a/ are observed on measurements derived from the external and internal contours of the lips. Articulatory variations according to vowel type, speaker and prosodic position of the vowel are equally captured by the internal and external contours for area measures and the K2 factor (shape), but not for vertical and horizontal distances. Inter-speaker differences are best captured by measurements on the external contour, as expected, while prosodically induced variations are reflected with more precision on the internal contour. | Variations de la configuration labiale des voyelles /i, y, a/ : effets de la position prosodique et du locuteur
d219306955 | ||
d729163 | Word sense induction is an unsupervised task to find and characterize different senses of polysemous words. This work investigates two unsupervised approaches that focus on using distributional word statistics to cluster the contextual information of the target words using two different algorithms involving latent Dirichlet allocation and spectral clustering. Using a large corpus for achieving this task, we quantitatively analyze our clusters on the SemEval-2010 dataset and also perform a qualitative analysis of our induced senses. Our results indicate that our methods successfully characterized the senses of the target words and were also able to find unconventional senses for those words. Related Work: Much of the work on word sense induction has been quite recent, following the SemEval tasks on WSI in 2007 (Agirre and Soroa, 2007) and 2010, but the task was recognized much earlier and various semi-supervised and unsupervised efforts were directed towards the problem. Yarowsky (1995) proposed a semi-supervised bootstrapping approach. | Unsupervised Word Sense Induction using Distributional Statistics
d9867933 | This paper describes an extremely lexicalized probabilistic model for fast and accurate HPSG parsing. In this model, the probabilities of parse trees are defined with only the probabilities of selecting lexical entries. The proposed model is very simple, and experiments revealed that the implemented parser runs around four times faster than the previous model and that the proposed model has a high accuracy comparable to that of the previous model for probabilistic HPSG, which is defined over phrase structures. We also developed a hybrid of our probabilistic model and the conventional phrase-structure-based model. The hybrid model is not only significantly faster but also significantly more accurate by two points of precision and recall compared to the previous model. | Extremely Lexicalized Models for Accurate and Fast HPSG Parsing
d18404957 | In this paper we propose a corpus-based approach to anaphora resolution combining a machine learning method and statistical information. First, a decision tree trained on an annotated corpus determines the coreference relation of a given anaphor and antecedent candidates and is utilized as a filter in order to reduce the number of potential candidates. In the second step, preference selection is achieved by taking into account the frequency information of coreferential and non-referential pairs tagged in the training corpus as well as distance features within the current discourse. Preliminary experiments concerning the resolution of Japanese pronouns in spoken-language dialogs result in a success rate of 80.6%. | Corpus-Based Anaphora Resolution Towards Antecedent Preference |
d5249899 | The primary objective of this paper is to describe an experiment designed to investigate the semantic relationships between the three basic components of a prepositional construct: the governor, the preposition, and the complement. Because of the preliminary nature of the experiment, only simple data processing equipment, such as the keypunch and the sorter, was used. The implementation of this approach on a larger scale, however, would necessitate the use of more sophisticated hardware. The described procedure uses Russian prepositions because, while working on this problem, the author was a research staff member of the Russian-English mechanical translation group at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York. While the described procedure presents a tentative approach, which does not offer a solution to the semantic ambiguities within prepositional constructs in Russian, it does suggest a method for examining each basic component of a given construct in relation to other constructs containing different types of prepositions. The data used in the model was collected mainly from the Soviet Academy of Sciences Grammar and, to some extent, from the Soviet Academy of Sciences Dictionary. | 
d12929928 | Convolution kernels, such as sequence and tree kernels, are advantageous for both the concept and accuracy of many natural language processing (NLP) tasks. Experiments have, however, shown that the over-fitting problem often arises when these kernels are used in NLP tasks. This paper discusses this issue of convolution kernels, and then proposes a new approach based on statistical feature selection that avoids this issue. To enable the proposed method to be executed efficiently, it is embedded into an original kernel calculation process by using sub-structure mining algorithms. Experiments are undertaken on real NLP tasks to confirm the problem with a conventional method and to compare its performance with that of the proposed method. | Convolution Kernels with Feature Selection for Natural Language Processing Tasks |
d37296498 | Lexical simplification consists of replacing words or phrases with simpler equivalents. In this article, we present three models for lexical simplification, based on different criteria that make one word simpler to read and understand than another. We tested different context sizes around the word under study: no context, with a model based on term frequencies in a simplified English corpus; a few words of context, using n-gram probabilities derived from web data; and extended context, with a model based on co-occurrence frequencies. ABSTRACT: Studying frequency-based approaches to process lexical simplification. Lexical simplification aims at replacing words or phrases by simpler equivalents. In this paper, we present three models for lexical simplification, focusing on the criteria that make one word simpler to read and understand than another. We tested different contexts of the considered word: no context, with a model based on word frequencies in a simplified English corpus; a few words of context, with n-gram probabilities on Web data; and an extended context, with a model based on co-occurrence frequencies. KEYWORDS: lexical simplification, lexical frequency, language model. | Frequency-based Approaches to Lexical Simplification
d39939044 | Concordancers are an accepted and valuable part of the tool set of linguists and lexicographers. They allow the user to see the context of use of a word or phrase in a corpus. A large enough corpus, such as the Corpus Of Contemporary American English, provides the data needed to enumerate all common uses or meanings. | Toward A Semantic Concordancer |
d199553531 | ||
d39354903 | ||
d237418569 | ||
d218974555 | ||
d14136647 | Data-driven grammatical function tag assignment has been studied for English using the Penn-II Treebank data. In this paper we address the question of whether such methods can be applied successfully to other languages and treebank resources. In addition to tag assignment accuracy and f-scores we also present results of a task-based evaluation. We use three machine-learning methods to assign Cast3LB function tags to sentences parsed with Bikel's parser trained on the Cast3LB treebank. The best performing method, SVM, achieves an f-score of 86.87% on gold-standard trees and 66.67% on parser output, a statistically significant improvement of 6.74% over the baseline. In a task-based evaluation we generate LFG functional-structures from the function-tag-enriched trees. On this task we achieve an f-score of 75.67%, a statistically significant 3.4% improvement over the baseline. | Using Machine-Learning to Assign Function Labels to Parser Output for Spanish
d219182009 | ||
d235599135 | ||
d14087839 | In statistical machine translation (SMT), syntax-based pre-ordering of the source language is an effective method for dealing with language pairs where there are great differences in their respective word orders. This paper introduces a novel pre-ordering approach based on dependency parsing for Chinese-English SMT. We present a set of dependency-based preordering rules which improved the BLEU score by 1.61 on the NIST 2006 evaluation data. We also investigate the accuracy of the rule set by conducting human evaluations. | Dependency-based Pre-ordering for Chinese-English Machine Translation |
d5223711 | This paper presents a novel approach to automatic captioning of geo-tagged images by summarizing multiple web documents that contain information related to an image's location. The summarizer is biased by dependency pattern models towards sentences which contain features typically provided for different scene types such as those of churches, bridges, etc. Our results show that summaries biased by dependency pattern models lead to significantly higher ROUGE scores than both n-gram language models reported in previous work and also Wikipedia baseline summaries. Summaries generated using dependency patterns are also more readable than those generated without dependency patterns. | Generating image descriptions using dependency relational patterns
d7746159 | The paper presents two experiments of unsupervised classification of Italian noun phrases. The goal of the experiments is to identify the most prominent contextual properties that allow for a functional classification of noun phrases. For this purpose, a Self-Organizing Map is trained with syntactically-annotated contexts containing noun phrases. The contexts are defined by means of a set of features representing morpho-syntactic properties of both nouns and their wider contexts. Two types of experiments have been run: one based on noun types and the other based on noun tokens. The results of the type simulation show that when frequency is the most prominent classification factor, the network isolates idiomatic or fixed phrases. The results of the token simulation experiment, instead, show that, of the 36 attributes represented in the original input matrix, only a few are prominent in the re-organization of the map. In particular, key features in the emergent macro-classification are the type of determiner and the grammatical number of the noun. An additional but no less interesting result is an organization into semantic/pragmatic micro-classes. In conclusion, our results confirm the relative prominence of determiner type and grammatical number in the task of noun (phrase) categorization. | Learning properties of Noun Phrases: from data to functions
d243864621 | Eye-tracking psycholinguistic studies have suggested that context-word semantic coherence and predictability influence language processing during the reading activity. In this study, we investigated the correlation between the cosine similarities computed with word embedding models (both static and contextualized) and eye-tracking data from two naturalistic reading corpora. We also studied the correlations of surprisal scores computed with three state-of-the-art language models. Our results show strong correlation for the scores computed with BERT and GloVe, suggesting that similarity can play an important role in modeling reading times. | Looking for a Role for Word Embeddings in Eye-Tracking Features Prediction: Does Semantic Similarity Help?
d10576049 | A forced derivation tree (FDT) of a sentence pair {f, e} denotes a derivation tree that can translate f into its accurate target translation e. In this paper, we present an approach that leverages structured knowledge contained in FDTs to train component models for statistical machine translation (SMT) systems. We first describe how to generate different FDTs for each sentence pair in the training corpus, and then present how to infer the optimal FDTs based on their derivation and alignment qualities. As the first step in this line of research, we verify the effectiveness of our approach in a BTG-based phrasal system, and propose four FDT-based component models. Experiments are carried out on large scale English-to-Japanese and Chinese-to-English translation tasks, and significant improvements are reported on both translation quality and alignment quality. | Forced Derivation Tree based Model Training to Statistical Machine Translation
d227231555 | ||
d232021657 | ||
d21712068 | We introduce a new general framework for sign recognition from monocular video using limited quantities of annotated data. The novelty of the hybrid framework we describe here is that we exploit state-of-the-art learning methods while also incorporating features based on what we know about the linguistic composition of lexical signs. In particular, we analyze hand shape, orientation, location, and motion trajectories, and then use CRFs to combine this linguistically significant information for purposes of sign recognition. Our robust modeling and recognition of these sub-components of sign production allow an efficient parameterization of the sign recognition problem as compared with purely data-driven methods. This parameterization enables a scalable and extendable time-series learning approach that advances the state of the art in sign recognition, as shown by the results reported here for recognition of isolated, citation-form, lexical signs from American Sign Language (ASL). | Linguistically-driven Framework for Computationally Efficient and Scalable Sign Recognition