d8222337
Recent work on evaluation of spoken dialogue systems indicates that better algorithms are needed for the presentation of complex information in speech. Current dialogue systems often rely on presenting sets of options and their attributes sequentially. This places a large memory burden on users, who have to remember complex trade-offs between multiple options and their attributes. To address these problems we build on previous work using multiattribute decision theory to devise speech-planning algorithms that present user-tailored summaries, comparisons and recommendations that allow users to focus on critical differences between options and their attributes. We discuss the differences between speech and text planning that result from the particular demands of the speech situation.
Speech-Plans: Generating Evaluative Responses in Spoken Dialogue
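The recommendation strategy described above rests on additive multiattribute utility: each option receives a weighted sum of its attribute values, and the response focuses on the attributes that drive the differences. A minimal stdlib-only sketch of that idea; the restaurant options, attribute names, and user weights are illustrative stand-ins, not data from the paper:

```python
def utility(attrs, weights):
    # Additive multiattribute utility: weighted sum of normalized (0..1) values.
    return sum(weights[a] * attrs[a] for a in weights)

def recommend(options, weights):
    # Rank options by utility, then report the attribute giving the winner
    # its largest weighted advantage over the runner-up.
    ranked = sorted(options, key=lambda o: utility(options[o], weights), reverse=True)
    best, runner_up = ranked[0], ranked[1]
    diffs = {a: weights[a] * (options[best][a] - options[runner_up][a])
             for a in weights}
    return best, max(diffs, key=diffs.get)

# Illustrative options with attribute values already normalized to 0..1.
restaurants = {
    "Sonia's": {"food": 0.9, "price": 0.4, "distance": 0.7},
    "Casa Blanca": {"food": 0.6, "price": 0.9, "distance": 0.5},
}
user_weights = {"food": 0.6, "price": 0.2, "distance": 0.2}
print(recommend(restaurants, user_weights))  # -> ("Sonia's", "food")
```

A user-tailored recommendation would then mention "Sonia's" and foreground food quality, the attribute that most distinguishes it for this user.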
d19635656
Aiming at facilitating the research on quality estimation (QE) and automatic post-editing (APE) of machine translation (MT) outputs, especially for those among Asian languages, we have created new datasets for Japanese to English, Chinese, and Korean translations. As the source text, actual utterances in Japanese were extracted from the log data of our speech translation service. MT outputs were then given by phrase-based statistical MT systems. Finally, human evaluators were employed to grade the quality of MT outputs and to post-edit them. This paper describes the characteristics of the created datasets and reports on our benchmarking experiments on word-level QE, sentence-level QE, and APE conducted using the created datasets.
Japanese to English/Chinese/Korean Datasets for Translation Quality Estimation and Automatic Post-Editing
d250391095
This paper presents our submission to Task 5 (Multimedia Automatic Misogyny Identification) of the SemEval 2022 competition. The purpose of the task is to identify given memes as misogynistic or not and further label the type of misogyny involved. In this paper, we present our approach based on language processing tools. We embed meme texts using GloVe embeddings and classify misogyny using a BERT model. Our model obtains an F1-score of 66.24% and 63.5% on misogyny classification and misogyny-type labeling, respectively.
IITR CodeBusters at SemEval-2022 Task 5: Misogyny Identification using Transformers
d10202648
d21732305
With the emergence of new technologies, the surgical working environment becomes increasingly complex and comprises many medical devices which have to be monitored and controlled. With the aim of improving productivity and reducing the workload for the operating staff, we have developed an Intelligent Digital Assistant for Clinical Operating Rooms (IDACO) which allows the surgeon to control the operating room using natural spoken language. As speech is the modality used by the surgeon to communicate with their staff, using it to control the technical devices does not pose an additional mental burden. Therefore, we claim that the surgical environment presents a potential field of application for Spoken Dialogue Systems. In this work, we present the design and implementation of IDACO as well as its evaluation in an experimental set-up by specialists in the field of minimally invasive surgery. Our expert evaluation yields promising results and allows us to conclude that clinical operating rooms are indeed an expedient area of application for Spoken Dialogue Systems.
Expert Evaluation of a Spoken Dialogue System in a Clinical Operating Room
d1430616
This paper describes the motivation and design of the Corpus Encoding Standard (CES) (Ide, et al.,
Encoding Linguistic Corpora
d11606908
This paper introduces the Aix Map Task corpus, a corpus of audio and video recordings of task-oriented dialogues. It was modelled after the original HCRC Map Task corpus. Lexical material was designed for the analysis of speech and prosody, as described in Astésano et al. (2007). The design of the lexical material, the protocol and some basic quantitative features of the existing corpus are presented. The corpus was collected under two communicative conditions, one audio-only condition and one face-to-face condition. The recordings took place in a studio and a sound-attenuated booth respectively, with head-set microphones (and in the face-to-face condition with two video cameras). The recordings have been segmented into Inter-Pausal Units and transcribed using transcription conventions containing actual productions and canonical forms of what was said. It is made publicly available online.
Aix Map Task corpus: The French multimodal corpus of task-oriented dialogue
d6495498
Remember: no matter where you go, there you are. The eight years from 1988 to 1996 saw the introduction and soon widespread prevalence of probabilistic generative models in NLP. Probabilities were the answer to learning, robustness and disambiguation, and we were all Bayesians, if commonly in a fairly shallow way. The eight years from 1996 to 2004 saw the rise to preeminence of discriminative models. Soon we were all either using SVMs or (in a few cases like myself) arguing that other discriminative techniques were equally as good: the sources of insight were margins and loss functions. What might the next eight years hold? There will doubtless be many more variants of SVMs deployed, but it seems much less likely to me that major progress will come from new learning methods. NLP pretty much already uses what is known, and commonly the difference between one kernel or prior and another is small indeed. If we are waiting for better two-class classifiers to push the performance of NLP systems into new realms, then we may be waiting a very long time. What other opportunities are there? One answer is to rely on more data, and this answer has been rather fashionable lately. Indeed, it has been known for a while now that "There's no data like more data". One cannot argue with the efficacy of this solution if you are dealing with surface-visible properties of a language with ample online text, and dealing with a standard problem over a stationary data set. Or if you have so much money that you can compensate for lacks from any of those directions. But I do not think this approach will work for most of us. Something that has almost snuck up upon the field is that with modern discriminative approaches and the corresponding widely available software, anyone with modest training can deploy state-of-the-art classification methods. What then determines the better systems? The features that they use.
As a result, we need more linguists back in the field (albeit ones with training in empirical, quantitative methods, who are still in short supply, especially in North America). This viewpoint is still somewhat unfashionable, but I think it will increasingly be seen to be correct. If you look through the results of recent competitive evaluations, such as the various CoNLL Shared Task evaluations, many of the groups are using similar or the same machine learning methods. The often substantial differences between the systems are mainly in the features employed. In the context of language, doing "feature engineering" is otherwise known as doing linguistics. A distinctive aspect of language processing problems is that the space of interesting and useful features that one could extract is usually effectively unbounded. All one needs is enough linguistic insight and time to build those features (and enough data to estimate them effectively). A second direction of the field is a renewed interest in the deeper problems of NLP: semantics, pragmatic interpretation, and discourse. For both this issue and the previous one, issues of representation become central. At deeper levels of processing, there is less agreement on representations, and less understanding of what are effective representations for language learning. Much of our recent work in NLP has shown the importance and effectiveness of good representations for both unsupervised and supervised natural language learning problems. Working with good representations will be even more important for deeper NLP problems, and will see a revival of rich linguistic representations like in the 1980s. Finally, a third direction (and perhaps the most productive area for new types of machine learning research) is to build systems that work effectively from less data.
Whether trying to build a text classifier that can classify email into a folder based on only two examples, porting your work to a different Arabic dialect, or wanting to incorporate context into parsing and semantic interpretation, the challenge is how to build systems that learn from just a little data. This is also the cognitive science challenge of tackling the phenomenon of one-shot learning, and it requires some different thinking from that of relying on large hand-labeled data sets.
Language Learning: Beyond Thunderdome
d261341825
Recent months have witnessed significant progress in the field of large language models (LLMs). Represented by ChatGPT and GPT-4, LLMs perform well in various natural language processing tasks and have been applied to many downstream applications to facilitate people's lives. However, there still exist safety and ethical concerns. Specifically, LLMs suffer from social bias, robustness problems, and poisoning issues, all of which may induce LLMs to produce harmful content. We propose this tutorial as a gentle introduction to the safety and ethical issues of LLMs.
Safety and Ethical Concerns of Large Language Models
d260063137
In this paper we investigate the application of active learning to semantic role labeling (SRL) using Bayesian Active Learning by Disagreement (BALD). Our new predicate-focused selection method quickly improves efficiency on three different specialised domain corpora. This is encouraging news for researchers wanting to port SRL to domain-specific applications. Interestingly, with the large and diverse OntoNotes corpus, the sentence selection approach, which collects a larger number of predicates and takes more time to annotate, fares better than the predicate approach. In this paper, we analyze in detail both the selections made by our two selection methods for the various domains and the differences between these corpora.
Leveraging Active Learning to Minimise SRL Annotation Across Corpora
d2372055
We propose a method of acquiring attribute words for a wide range of objects from Japanese Web documents. The method is a simple unsupervised method that utilizes the statistics of words, lexico-syntactic patterns, and HTML tags. To evaluate the attribute words, we also establish criteria and a procedure based on question-answerability about the candidate word.
Automatic Discovery of Attribute Words from Web Documents
d9937118
A method is described for handling the ambiguity and vagueness that is often found in quantifications, the semantically complex relations between nominal and verbal constituents. In natural language certain aspects of quantification are often left open; it is argued that the analysis of quantification in a model-theoretic framework should use semantic representations in which this may also be done. This paper shows a form for such a representation and how "ambiguous" representations are used in an elegant and efficient procedure for semantic analysis, incorporated in the TENDUM dialogue system.
THE RESOLUTION OF QUANTIFICATIONAL AMBIGUITY IN THE TENDUM SYSTEM
d241583289
Twitter is commonly used for civil unrest detection and forecasting tasks, but there is a lack of work in evaluating how civil unrest manifests on Twitter across countries and events. We present two in-depth case studies for two specific large-scale events, one in a country with high (English) Twitter usage (Johannesburg riots in South Africa) and one in a country with low Twitter usage (Burayu massacre protests in Ethiopia). We show that while there is event signal during the events, there is little signal leading up to the events. In addition to the case studies, we train n-gram-based models on a larger set of Twitter civil unrest data across time, events, and countries and use machine learning explainability tools (SHAP) to identify important features. The models were able to find words indicative of civil unrest that generalized across countries. The 42 countries span Africa, the Middle East, and Southeast Asia, and the events occurred between 2014 and 2019.
Study of Manifestation of Civil Unrest on Twitter
d232021620
This paper examines the structure of logical representations of natural-language utterances. By logical representation we mean a semantic representation that includes a treatment of quantifier scope. We show that such a representation fundamentally combines two underlying substructures, a "predicative" structure and a logical hierarchic structure, and that distinguishing the two allows, for example, for an elegant treatment of underspecification. We propose a polarized grammar that directly handles the structure of logical representations (without going through a linear language with variables), as well as a grammar for the semantics-syntax interface.
Structure of Logical Representations, Polarization and Underspecification
d232021871
Keywords: SMS, phonetisation, speech synthesis. This article presents a study whose goal is to improve the grapheme-to-phoneme component of an SMS-to-speech system. The three types of problems tackled in the study are: rebus writing (digits and letters used for their phonetic value), consonant-skeleton abbreviations, and agglutinations (determiners or pronouns merged with the next word). Our approach is based on the analysis of an SMS corpus, from which we extracted lists of forms to enhance the system's lexicons, and developed new rules for the internal grammars. Our modifications result in a substantial improvement of the system, although, of course, there remain many other categories of problems to address. This study was carried out under a research contract with France Télécom Division R&D.
A Study of Some Phonetisation Problems in a Speech Synthesis System for SMS
d8289107
This paper introduces the KyotoEBMT Example-Based Machine Translation framework. Our system uses a tree-to-tree approach, employing syntactic dependency analysis for both source and target languages in an attempt to preserve non-local structure. The effectiveness of our system is maximized with online example matching and a flexible decoder. Evaluation demonstrates BLEU scores competitive with state-of-the-art SMT baselines. The system implementation is available as open-source.
KyotoEBMT System Description for the 1st Workshop on Asian Translation
d17161141
Machine transliteration is an automatic method to generate characters or words in one alphabetical system for the corresponding characters in another alphabetical system. There has been increasing interest in machine transliteration as an aid to machine translation and information retrieval. Three machine transliteration models have been proposed: the "grapheme-based model", the "phoneme-based model", and the "hybrid model". However, few works try to make use of the correspondence between source grapheme and phoneme, although this correspondence plays an important role in machine transliteration. Furthermore, few works handle source grapheme and phoneme dynamically. In this paper, we propose a new transliteration model based on an ensemble of grapheme and phoneme. Our model makes use of the correspondence and dynamically uses source grapheme and phoneme. Our method outperforms previous work by about 15-23% in English-to-Korean transliteration and about 15-43% in English-to-Japanese transliteration.
An Ensemble of Grapheme and Phoneme for Machine Transliteration
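The ensemble idea above, combining a grapheme-based and a phoneme-based model's scores for each candidate transliteration, can be sketched as simple linear interpolation. This is a hedged illustration, not the paper's actual combination method; the candidate strings, scores, and the weight `alpha` are invented:

```python
def ensemble_best(grapheme_scores, phoneme_scores, alpha=0.6):
    # Interpolate the two models' scores; a candidate proposed by only
    # one model simply contributes 0 from the other side.
    candidates = set(grapheme_scores) | set(phoneme_scores)
    combined = {c: alpha * grapheme_scores.get(c, 0.0)
                   + (1 - alpha) * phoneme_scores.get(c, 0.0)
                for c in candidates}
    return max(combined, key=combined.get)

# Hypothetical candidate transliterations of one source word, with
# normalized scores from each model.
g_scores = {"deita": 0.5, "data": 0.3}
p_scores = {"deita": 0.4, "deta": 0.4}
print(ensemble_best(g_scores, p_scores))  # -> deita
```

A candidate supported by both models ("deita") beats candidates that only one model favors, which is the intuition behind ensembling the two information sources.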
d20196501
SOME ICONOCLASTIC ASSERTIONS. Considering the problems we have in communicating with other humans using natural language, it is not clear that we want to recreate these problems in dealing with the computer. While there is some evidence that natural language is useful in communications among humans, there is also considerable evidence that it is neither perfect nor ideal. Natural language is wordy (redundant) and imprecise. Most human groups who have a need to communicate quickly and accurately tend to develop a rather well specified subset of natural language that is highly coded and precise in nature. Pilots and police are good examples of this. Even working groups within a field or discipline tend over time to develop a jargon that minimizes the effort of communication and clarifies shared precise meanings.
NATURAL LANGUAGE AND COMPUTER INTERFACE DESIGN
d248780059
Language should be accommodating of equality and diversity as a fundamental aspect of communication. The language of internet users has a big impact on peer users all over the world. On virtual platforms such as Facebook, Twitter, and YouTube, people express their opinions in different languages. People respect others' accomplishments, pray for their well-being, and cheer them on when they fail. Such motivational remarks are hope speech remarks. Simultaneously, a group of users encourages discrimination against women, people of color, people with disabilities, and other minorities based on gender, race, sexual orientation, and other factors. To recognize hope speech from YouTube comments, the current study offers an ensemble approach that combines a support vector machine, logistic regression, and random forest classifiers. Extensive testing was carried out to discover the best features for the aforementioned classifiers. In the support vector machine and logistic regression classifiers, char-level TF-IDF features were used, whereas in the random forest classifier, word-level features were used. The proposed ensemble model performed significantly well among English, Spanish, Tamil, Malayalam, and Kannada YouTube comments.
SOA_NLP@LT-EDI-ACL2022: An Ensemble Model for Hope Speech Detection from YouTube Comments
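The final label in an ensemble like the one above is typically obtained by majority vote over the individual classifiers' predictions. A stdlib-only sketch of that voting step; the per-classifier predictions and label names below are invented for illustration:

```python
from collections import Counter

def majority_vote(per_classifier_predictions):
    # Each inner list holds one classifier's label per comment;
    # zip aligns the classifiers' votes comment by comment.
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*per_classifier_predictions)]

# Hypothetical predictions from three classifiers over three comments.
svm_pred = ["hope", "not_hope", "hope"]
lr_pred = ["hope", "hope", "not_hope"]
rf_pred = ["not_hope", "hope", "hope"]
print(majority_vote([svm_pred, lr_pred, rf_pred]))  # -> ['hope', 'hope', 'hope']
```

Each comment's final label is whichever class at least two of the three classifiers agree on.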
d7138302
Lexicon definition is one of the main bottlenecks in the development of new applications in the field of Information Extraction from text. Generic resources (e.g., lexical databases) are promising for reducing the cost of specific lexica definition, but they introduce lexical ambiguity. This paper proposes a methodology for building application-specific lexica by using WordNet. Lexical ambiguity is kept under control by marking synsets in WordNet with field labels taken from the Dewey Decimal Classification.
The Development of Lexical Resources for Information Extraction from Text Combining WordNet and Dewey Decimal Classification*
d2980513
An efficient decoding algorithm is a crucial element of any statistical machine translation system. Some researchers have noted certain similarities between SMT decoding and the famous Traveling Salesman Problem; in particular, Knight (1999) has shown that any TSP instance can be mapped to a sub-case of a word-based SMT model, demonstrating NP-hardness of the decoding task. In this paper, we focus on the reverse mapping, showing that any phrase-based SMT decoding problem can be directly reformulated as a TSP. The transformation is very natural, deepens our understanding of the decoding problem, and allows direct use of any of the powerful existing TSP solvers for SMT decoding. We test our approach on three datasets, and compare a TSP-based decoder to the popular beam-search algorithm. In all cases, our method provides competitive or better performance. * This work was conducted during an internship at XRCE.
Phrase-Based Statistical Machine Translation as a Traveling Salesman Problem
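The reduction described above turns phrase ordering into a tour: nodes are the source phrases to translate (plus a start node), edge costs stand in for translation and language-model costs of placing one phrase after another, and the cheapest tour gives the decoding order. A toy brute-force TSP sketch of that framing; the cost table is invented, and the paper's point is that a real system can hand such an instance to a dedicated TSP solver instead:

```python
from itertools import permutations

def cheapest_tour(nodes, cost, start="<s>"):
    # Exhaustive TSP over a tiny instance: try every ordering of the
    # remaining nodes and sum the pairwise transition costs.
    rest = [n for n in nodes if n != start]
    best_tour, best_cost = None, float("inf")
    for perm in permutations(rest):
        tour = (start,) + perm
        c = sum(cost[tour[i], tour[i + 1]] for i in range(len(tour) - 1))
        if c < best_cost:
            best_tour, best_cost = tour, c
    return best_tour, best_cost

# Hypothetical transition costs (think negative log-probabilities) between
# three source phrases A, B, C and a start symbol.
costs = {("<s>", "A"): 1.0, ("<s>", "B"): 2.0, ("<s>", "C"): 3.0,
         ("A", "B"): 1.0, ("A", "C"): 2.5, ("B", "A"): 2.0,
         ("B", "C"): 1.0, ("C", "A"): 1.0, ("C", "B"): 2.0}
print(cheapest_tour(["<s>", "A", "B", "C"], costs))
```

Brute force is exponential, which is exactly why the reformulation is useful: decades of engineering in TSP solvers become directly applicable to decoding.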
d243865386
Difficult samples of the minority class in imbalanced text classification are usually hard to classify, as they are embedded in a semantic region overlapping with the majority class. In this paper, we propose a Mutual Information constrained Semantically Oversampling framework (MISO) that can generate anchor instances to help the backbone network determine the re-embedding position of a non-overlapping representation for each difficult sample. MISO consists of (1) a semantic fusion module that learns entangled semantics among difficult and majority samples with an adaptive multi-head attention mechanism, (2) a mutual information loss that forces our model to learn new representations of entangled semantics in the non-overlapping region of the minority class, and (3) a coupled adversarial encoder-decoder that fine-tunes disentangled semantic representations to retain their correlations with the minority class, and then uses these disentangled semantic representations to generate anchor instances for each difficult sample. Experiments on a variety of imbalanced text classification tasks demonstrate that anchor instances help classifiers achieve significant improvements over strong baselines.
Difficult Samples Re-embedding via Mutual Information Constrained Semantically Oversampling
d428945
Non-Sentential Utterances (NSUs) are short utterances that do not have the form of a full sentence but nevertheless convey a complete sentential meaning in the context of a conversation. NSUs are frequently used to ask follow-up questions during interactions with question answering (QA) systems, resulting in incorrect answers being presented to their users. Most of the current methods for resolving such NSUs have adopted rule- or grammar-based approaches and have limited applicability.
A Statistical Approach for Non-Sentential Utterance Resolution for Interactive QA System
d225735979
d235716168
d3108329
In this paper, we propose an estimation method of user satisfaction for a spoken dialog system using an N-gram-based dialog history model. We have collected a large amount of spoken dialog data accompanied by usability evaluation scores given by users in real environments. The database was collected in a field test in which naive users used a client-server music retrieval system with a spoken dialog interface on their own PCs. An N-gram model is trained from the sequences that consist of users' dialog acts and/or the system's dialog acts for each of six user satisfaction levels: from 1 to 5 and φ (task not completed). Then, the satisfaction level is estimated based on the N-gram likelihood. Experiments were conducted on the large real data and the results show that our proposed method achieved good classification performance; the classification accuracy was 94.7% in the experiment on classifying dialogs into those with task completion and those without. Even when the classifier detected all of the task-incomplete dialogs correctly, our proposed method achieved a false-detection rate of only 6%.
Estimation method of user satisfaction using N-gram-based dialog history model for spoken dialog system
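The classification rule described above, training one N-gram model over dialog-act sequences per satisfaction level and then picking the level whose model assigns a new dialog the highest likelihood, can be sketched with bigrams and add-one smoothing. The dialog-act names and training sequences below are invented for illustration, not taken from the paper's data:

```python
import math
from collections import defaultdict

def train_bigrams(sequences):
    # Count dialog-act bigrams, padding each sequence with start/end markers.
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(["<s>"] + seq, seq + ["</s>"]):
            counts[a][b] += 1
    return counts

def log_likelihood(counts, seq, vocab=10):
    # Add-one smoothed bigram log-likelihood of a dialog-act sequence.
    ll = 0.0
    for a, b in zip(["<s>"] + seq, seq + ["</s>"]):
        total = sum(counts[a].values())
        ll += math.log((counts[a][b] + 1) / (total + vocab))
    return ll

def classify(models, seq):
    # Pick the satisfaction level whose model best explains the sequence.
    return max(models, key=lambda level: log_likelihood(models[level], seq))

models = {
    "satisfied": train_bigrams([["request", "inform", "confirm"],
                                ["request", "inform", "thanks"]]),
    "unsatisfied": train_bigrams([["request", "reject", "request"],
                                  ["request", "reject", "bye"]]),
}
print(classify(models, ["request", "inform", "confirm"]))  # -> satisfied
```

A dialog dominated by inform/confirm acts scores higher under the "satisfied" model than under one trained on rejection-heavy dialogs, mirroring the paper's likelihood-based level estimation.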
d5624972
This is a semantic pilot study which concentrates on how people in Taiwan process the temporal metaphors, the ego-moving metaphor and the time-moving metaphor. Motivated by the research of Gentner, Imai, and Boroditsky (2002), in which English native speakers comprehend ego-moving metaphors faster than time-moving metaphors, the present study attempts to reexamine whether the faster reaction to ego-moving metaphors is shared by both Chinese native speakers and EFL learners. To achieve this goal, 25 Chinese/English bilinguals were tested on 16 Chinese and 16 English test sentences. The recordings of their accuracy on each item serve as the database used for comparison with the study of Gentner, Imai, and Boroditsky (2002). The two findings presented here are: (1) when the subjects were tested in their native language, Chinese, they processed ego-moving metaphors better; (2) when tested in the foreign language, English, they comprehended time-moving metaphors much better.
Time-moving Metaphors and Ego-moving Metaphors: Which Is Better Comprehended by Taiwanese? *
d19709706
d8418674
Statistical Machine Translation is a modern success: Given a source language sentence, SMT finds the most probable target language sentence, based on (1) properties of the source; (2) probabilistic source--target mappings at the level of words, phrases and/or sub-structures; and (3) properties of the target language.
Discourse for Machine Translation
d64935477
A large percentage of the world's population speaks a language of the Indian subcontinent, what we will call here Indic languages, comprising languages from both the Indo-European (e.g., Hindi, Bangla, Gujarati) and Dravidian (e.g., Tamil, Telugu, Malayalam) families, upwards of 1.5 billion people. A universal characteristic of Indic languages is their complex morphology, which, when combined with the general lack of sufficient quantities of high-quality parallel data, can make developing machine translation (MT) for these languages difficult. In this paper, we describe our efforts towards developing general-domain English-Bangla MT systems which are deployable to the Web. We initially developed and deployed SMT-based systems, but over time migrated to NMT-based systems. Our initial SMT-based systems had reasonably good BLEU scores; however, using NMT systems, we have gained significant improvement over SMT baselines. This is achieved using a number of ideas to boost the data store and counter data sparsity: crowd translation of intelligently selected monolingual data (throughput enhanced by an IME (Input Method Editor) designed specifically for QWERTY keyboard entry for Devanagari-scripted languages), back-translation, different regularization techniques, dataset augmentation and early stopping.
Training Deployable General Domain MT for a Low Resource Language Pair: English-Bangla
d227230864
d219306388
d219307293
d2165573
This paper explores several unsupervised approaches to automatic keyword extraction using meeting transcripts. In the TF-IDF (term frequency, inverse document frequency) weighting framework, we incorporated part-of-speech (POS) information, word clustering, and sentence salience score. We also evaluated a graph-based approach that measures the importance of a word based on its connection with other sentences or words. The system performance is evaluated in different ways, including comparison to human-annotated keywords using F-measure and a weighted score relative to the oracle system performance, as well as a novel alternative human evaluation. Our results have shown that the simple unsupervised TF-IDF approach performs reasonably well, and the additional information from POS and sentence score helps keyword extraction. However, the graph method is less effective for this domain. Experiments were also performed using speech recognition output, and we observed degradation and different patterns compared to human transcripts.
Unsupervised Approaches for Automatic Keyword Extraction Using Meeting Transcripts
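The TF-IDF baseline the abstract starts from is easy to state: a word's score in a transcript is its term frequency times the log inverse document frequency over the collection, and the top-scoring words are the keywords. A stdlib-only sketch on toy "transcripts"; the word lists are invented:

```python
import math
from collections import Counter

def tfidf_keywords(docs, target, k=2):
    # df[w]: number of documents containing w; tf counted over the target.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    tf = Counter(target)
    n = len(docs)
    scores = {w: (tf[w] / len(target)) * math.log(n / df[w]) for w in tf}
    return [w for w, _ in sorted(scores.items(),
                                 key=lambda item: item[1], reverse=True)[:k]]

transcripts = [["budget", "review", "meeting", "budget"],
               ["project", "deadline", "meeting"],
               ["lunch", "menu", "options"]]
print(tfidf_keywords(transcripts, transcripts[0]))  # -> ['budget', 'review']
```

Words frequent in the target transcript but rare across the collection ("budget") outrank words common everywhere ("meeting"), which is the intuition the paper's POS and salience extensions build on.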
d714162
We use the technique of SVM anchoring to demonstrate that lexical features extracted from a training corpus are not necessary to obtain state-of-the-art results on tasks such as Named Entity Recognition and Chunking. While standard models require as many as 100K distinct features, we derive models with as few as 1K features that perform as well or better on different domains. These robust reduced models indicate that the way rare lexical features contribute to classification in NLP is not fully understood. Contrastive error analysis (with and without lexical features) indicates that lexical features do contribute to resolving some semantic and complex syntactic ambiguities, but we find this contribution does not generalize outside the training corpus. As a general strategy, we believe lexical features should not be directly derived from a training corpus but instead carefully inferred and selected from other sources.
On the Role of Lexical Features in Sequence Labeling
d7027276
Real (i.e., working) machine translation may be presented both as the result of inevitable approximations over an ideal, theoretically motivated model based on the principle of semantic compositionality and as the result of a set of necessary refinements over a very rudimentary word-for-word substitutional system. This paper explores the pedagogical value of presenting real MT as being somewhere in the middle of these two extreme scenarios. I contend that it is possible to reshape the (either optimistic or pessimistic) expectations of students about real MT by showing students, on the one hand, the approximations, compromises, and sacrifices necessary to mechanize efficiently a linguistically accurate model, and, on the other hand, the large amount of work needed to improve a word-for-word model so that it produces reasonable translations. This prepares them to learn and appreciate the strategies used to tackle the problem of machine translation.
Explaining real MT to translators: between compositional semantics and word-for-word
d3050178
We describe a set of techniques for Arabic cross-document coreference resolution. We compare a baseline system of exact mention string-matching to ones that include local mention context information as well as information from an existing machine translation system. It turns out that the machine translation-based technique outperforms the baseline, but local entity context similarity does not. This helps to point the way for future cross-document coreference work in languages with few existing resources for the task.
Arabic Cross-Document Coreference Detection
d28809463
With Language Understanding Intelligent Service (LUIS), developers without machine learning expertise can quickly build and use language understanding models specific to their task. LUIS is entirely cloud-based: developers log into a website, enter a few example utterances and their labels, and then deploy a model to an HTTP endpoint. Utterances sent to the endpoint are logged and can be efficiently labeled using active learning. Visualizations help identify issues, which can be resolved by either adding more labels or by giving hints to the machine learner in the form of features. Altogether, a developer can create and deploy an initial language understanding model in minutes, and easily maintain it as usage of their application grows.
Fast and easy language understanding for dialog systems with Microsoft Language Understanding Intelligent Service (LUIS)
d15271310
The desirability of a syntactic parsing component in natural language understanding systems has been the subject of debate for the past several years. This paper describes an approach to automatic text processing which is entirely based on syntactic form. A program is described which processes one genre of discourse, that of newspaper reports. The program creates summaries of reports by relying on an expanded concept of text grounding: certain syntactic structures and tense/aspect pairs indicate the most important events in a news story. Supportive, background material is also highly coded syntactically. Certain types of information are routinely expressed with distinct syntactic forms. Where more than one episode occurs in a single report, a change of episode will also be marked syntactically in a reliable way.
THE USE OF SYNTACTIC CLUES IN DISCOURSE PROCESSING
d5584985
We introduce a word alignment framework that facilitates the incorporation of syntax encoded in bilingual dependency tree pairs. Our model consists of two sub-models: an anchor word alignment model which aims to find a set of high-precision anchor links and a syntax-enhanced word alignment model which focuses on aligning the remaining words relying on dependency information invoked by the acquired anchor links. We show that our syntax-enhanced word alignment approach leads to a 10.32% and 5.57% relative decrease in alignment error rate compared to a generative word alignment model and a syntax-proof discriminative word alignment model respectively. Furthermore, our approach is evaluated extrinsically using a phrase-based statistical machine translation system. The results show that SMT systems based on our word alignment approach tend to generate shorter outputs. Without length penalty, using our word alignments yields statistically significant improvement in Chinese-English machine translation in comparison with the baseline word alignment.
Improving Word Alignment Using Syntactic Dependencies
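The alignment error rate reported above is the standard AER of Och and Ney, computed from "sure" and "possible" gold links. A minimal sketch (not the authors' code):

```python
def alignment_error_rate(predicted, sure, possible):
    """AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|), where A is the predicted
    link set, S the 'sure' gold links, and P ⊇ S the 'possible' gold links.
    Links are (source_index, target_index) pairs; lower AER is better."""
    a, s, p = set(predicted), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))
```

A perfect alignment against the sure set yields AER 0; links that hit only the possible set are counted as partially correct rather than as errors.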
d236460107
d2852358
We present a new two-tier user simulation model for learning adaptive referring expression generation (REG) policies for spoken dialogue systems using reinforcement learning. Current user simulation models that are used for dialogue policy learning do not simulate users with different levels of domain expertise and are not responsive to referring expressions used by the system. The two-tier model displays these features, which are crucial to learning an adaptive REG policy. We also show that the two-tier model simulates real user behaviour more closely than other baseline models, using the dialogue similarity measure based on Kullback-Leibler divergence.
A Two-tier User Simulation Model for Reinforcement Learning of Adaptive Referring Expression Generation Policies
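The dialogue similarity measure mentioned above compares distributions of simulated and real user actions via Kullback-Leibler divergence. The paper's exact measure is not reproduced here; the sketch below shows plain smoothed KL divergence over action distributions, which such measures typically symmetrize and average over dialogue contexts:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) for two user-action distributions given as dicts
    mapping action label -> probability. A tiny epsilon guards against
    zero probabilities in Q."""
    actions = set(p) | set(q)
    return sum(
        p[a] * math.log((p[a] + eps) / (q.get(a, 0.0) + eps))
        for a in actions
        if p.get(a, 0.0) > 0.0
    )
```

KL divergence is zero when the two distributions match and grows as the simulated user's action distribution drifts from the real one.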
d5910996
One of the next big challenges in Automatic Speech Recognition (ASR) is the transcription of speech in meetings. This task is particularly problematic for current recognition technologies because, in most realistic meeting scenarios, the vocabularies are unconstrained, the speech is spontaneous and often overlapping, and the microphones are inconspicuously placed. To support the development of meeting recognition technologies by both the speech recognition and video extraction research communities, NIST is providing a development and evaluation infrastructure including: a multi-media corpus of audio and video from meetings collected at NIST using a variety of microphones and video cameras, new evaluation protocols, metrics, software, rich transcription conventions, sponsoring evaluations and workshops, facilitating multi-site data pooling, and helping bring the community together to focus on the technical challenges. To date, NIST has collected a pilot corpus of 15 hours of meetings in its specially-instrumented Meeting Data Collection Laboratory. The corpus includes digital recordings from close-talking mics, lapel mics, distantly-placed mics, 5 digitally-recorded camera views, and full speaker/word-level transcripts. This data is being used in the development and evaluation of speech technologies and by the video extraction community under the auspices of the ARDA Video Analysis and Content Exploitation (VACE) program.
The NIST Meeting Room Pilot Corpus
d24497195
The Cross-Language Evaluation Forum (CLEF) promotes research into the development of truly multilingual systems capable of retrieving relevant information from collections in many languages and in mixed media. The paper discusses some of the main results achieved in the first six years of activity.
The Impact of Evaluation on Multilingual Information Retrieval System Development
d16800883
This paper presents an algorithm to integrate different lexical resources, through which we hope to overcome the individual inadequacy of the resources, and thus obtain some enriched lexical semantic information for applications such as word sense disambiguation. We used WordNet as a mediator between a conventional dictionary and a thesaurus. Preliminary results support our hypothesised structural relationship, which enables the integration of the resources. These results also suggest that we can combine the resources to achieve an overall balanced degree of sense discrimination.
Bridging the Gap between Dictionary and Thesaurus
d51880115
In this paper, we present a survey covering the last 20 years of machine translation work in the Philippines. We detail the various approaches used and innovations applied. We also discuss the various mechanisms and support that keep the MT community thriving, as well as the challenges ahead.
A Survey of Machine Translation Work in the Philippines: From 1998 to 2018
d196166986
Link prediction and entailment graph induction are often treated as different problems. In this paper, we show that these two problems are actually complementary. We train a link prediction model on a knowledge graph of assertions extracted from raw text. We propose an entailment score that exploits the new facts discovered by the link prediction model, and then form entailment graphs between relations. We further use the learned entailments to predict improved link prediction scores. Our results show that the two tasks can benefit from each other. The new entailment score outperforms prior state-of-the-art results on a standard entailment dataset and the new link prediction scores show improvements over the raw link prediction scores.
Duality of Link Prediction and Entailment Graph Induction
d503103
In this paper we present an active approach to annotating an Italian corpus of conversational human-human and Wizard-of-Oz dialogues with lexical and semantic labels. This procedure consists in the use of a machine learner to assist human annotators in the labeling task. The computer-assisted process engages human annotators to check and correct the automatic annotation rather than starting the annotation from un-annotated data. The active learning procedure is combined with annotation error detection to control the reliability of the annotation. With the goal of converging as fast as possible to reliable automatic annotations while minimizing the human effort, we follow the active learning paradigm, which selects for annotation the most informative training examples required to achieve a better level of performance. We show that this procedure allows us to converge quickly on correct annotations and thus minimize the cost of human supervision.
Active Annotation in the LUNA Italian Corpus of Spontaneous Dialogues
d53634315
The Hebrew treebank (HTB), consisting of 6221 morpho-syntactically annotated newspaper sentences, has been the only resource for training and validating statistical parsers and taggers for Hebrew, for almost two decades now. During these decades, the HTB has gone through a trajectory of automatic and semi-automatic conversions, until arriving at its UDv2 form. In this work we manually validate the UDv2 version of the HTB, and, according to our findings, we apply scheme changes that bring the UD HTB to the same theoretical grounds as the rest of UD. Our experimental parsing results with UDv2New confirm that improving the coherence and internal consistency of the UD HTB indeed leads to improved parsing performance. At the same time, our analysis demonstrates that there is more to be done at the point of intersection of UD with other linguistic processing layers, in particular, at the points where UD interfaces external morphological and lexical resources.
The Hebrew Universal Dependency Treebank: Past, Present and Future
d13253132
Spelling correction has been studied for many decades and can be classified into two categories: (1) regular text spelling correction, and (2) query spelling correction. Although the two tasks share many common techniques, they have different concerns. This paper presents our work on the CLP-2014 bake-off. The task focuses on spelling checking of Chinese essays written by foreign learners. Compared to the online search query spelling checking task, more complicated techniques can be applied for better performance. Therefore, we proposed a unified framework for Chinese essay spelling correction based on extended HMM and ranker-based models, together with a rule-based model for further polishing. Our system showed better performance on the test dataset.
Extended HMM and Ranking models for Chinese Spelling Correction
d227231135
d219306041
d250390491
This paper describes an ongoing endeavor to construct Pavia Verbs Database (PaVeDa), an open-access typological resource that builds upon previous work on verb argument structure, and in particular the Valency Patterns Leipzig (ValPaL) project (Hartmann et al., 2013). The PaVeDa database features four major innovations as compared to the ValPaL database: (i) it includes data from ancient languages enabling diachronic research; (ii) it expands the language sample to language families that are not represented in the ValPaL; (iii) it is linked to external corpora that are used as sources of usage-based examples of stored patterns; (iv) it introduces a new cross-linguistic layer of annotation for valency patterns which allows for contrastive data visualization.
PaVeDa - Pavia Verbs Database: Challenges and Perspectives
d226283457
App developers often raise revenue by contracting with third party ad networks, which serve targeted ads to end-users. To this end, a free app may collect data about its users and share it with advertising companies for targeting purposes. Regulations such as the General Data Protection Regulation (GDPR) require transparency with respect to the recipients (or categories of recipients) of user data. These regulations call for app developers to have privacy policies that disclose those third party recipients of user data. Privacy policies provide users transparency into what data an app will access, collect, share, and retain. Given the size of app marketplaces, verifying compliance with such regulations is a tedious task. This paper aims to develop an automated approach to extract and categorize third party data recipients (i.e., entities) declared in privacy policies. We analyze 100 privacy policies associated with the most downloaded apps in the Google Play Store. We crowdsource the collection and annotation of app privacy policies to establish the ground truth with respect to third party entities. From this, we train various models to extract third party entities automatically. Our best model achieves an average F1 score of 66% when compared to crowdsourced annotations.
Identifying and Classifying Third-party Entities in Natural Language Privacy Policies
d226283593
LIBKGE is an open-source PyTorch-based library for training, hyperparameter optimization, and evaluation of knowledge graph embedding models for link prediction. The key goals of LIBKGE are to enable reproducible research, to provide a framework for comprehensive experimental studies, and to facilitate analyzing the contributions of individual components of training methods, model architectures, and evaluation methods. LIBKGE is highly configurable and every experiment can be fully reproduced with a single configuration file. Individual components are decoupled to the extent possible so that they can be mixed and matched with each other. Implementations in LIBKGE aim to be as efficient as possible without leaving the scope of Python/Numpy/PyTorch. A comprehensive logging mechanism and tooling facilitate in-depth analysis. LIBKGE provides implementations of common knowledge graph embedding models and training methods, and new ones can be easily added. A comparative study showed that LIBKGE reaches competitive to state-of-the-art performance for many models with a modest amount of automatic hyperparameter tuning.
LibKGE: A knowledge graph embedding library for reproducible research
d28784495
For accurate entity linking, we need to capture various information aspects of an entity, such as its description in a KB, contexts in which it is mentioned, and structured knowledge. Additionally, a linking system should work on texts from different domains without requiring domain-specific training data or hand-engineered features. In this work we present a neural, modular entity linking system that learns a unified dense representation for each entity using multiple sources of information, such as its description, contexts around its mentions, and its fine-grained types. We show that the resulting entity linking system is effective at combining these sources, and performs competitively, sometimes out-performing current state-of-the-art systems across datasets, without requiring any domain-specific training data or hand-engineered features. We also show that our model can effectively "embed" entities that are new to the KB, and is able to link its mentions accurately.
Entity Linking via Joint Encoding of Types, Descriptions, and Context
d236145105
This paper presents a dependency parser for French that compares favorably to the state of the art on most reference corpora. The parser relies on rich lexical representations from BERT and FASTTEXT. We notice that the lexical representations produced by FLAUBERT are essentially self-sufficient for performing the parsing task in an optimal way. Keywords: French dependency parsing, BERT, FastText.
French Dependency Parsing with Contextualized Embeddings
d427416
In order to create reusable and sustainable multimodal resources a transcription model for hand and arm gestures in conversation is needed. We argue that transcription systems so far developed for sign language transcription and psychological analysis are not suitable for the linguistic analysis of conversational gesture. Such a model must adhere to a strict form-function distinction and be both computationally explicit and compatible with descriptive notations such as feature structures in other areas of computational and descriptive linguistics. We describe the development and evaluation of a suitable formal model using a feature-based transcription system, concentrating as a first step on arm gestures within the context of the development of an annotated video resource and gesture lexicon.
CoGesT: a formal transcription system for conversational gesture
d232021565
d35338491
One of the most common lexical misuse problems in the second language context concerns near synonyms. Dictionaries and thesauri often overlook the nuances of near synonyms and make reference to near synonyms in providing definitions. The semantic differences and implications of near synonyms are not easily recognized and often fail to be acquired by L2 learners. This study addressed the distinctions of synonymous semantics in the context of second language learning and use. The purpose is to examine the effects of lexical collocation behaviors on identifying salient semantic features and revealing subtle differences between near synonyms. We conducted both analytical evaluation and empirical evaluation to verify that proper use of collocation information leads to learners' successful comprehension of lexical semantics. Both results suggest that the process of organizing and identifying salient semantic features is favorable for and accessible to a good portion of L2 learners, and thereby improves near-synonym distinction.
Effects of Collocation Information on Learning Lexical Semantics for Near Synonym Distinction
d16812606
Question classifiers are used within Question Answering to predict the expected answer type for a given question. This paper describes the first steps towards applying a similar methodology to identifying question classes in dialogue contexts, beginning with a study of questions drawn from the Enron email corpus. Human-annotated data is used as a gold standard for assessing the output from an existing, open-source question classifier (QA-SYS). Problem areas are identified and potential solutions discussed.
Question Classification for Email
d6186002
This paper presents a method for identifying non-English speech, with the aim of supporting an automated speech proficiency scoring system for non-native speakers. The method uses a popular technique from the language identification domain, a single phone recognizer followed by multiple language-dependent language models. This method determines the language of a speech sample based on the phonotactic differences among languages. The method is intended for use with non-native English speakers. Therefore, the method must be able to distinguish non-English responses from non-native speakers' English responses. This makes the task more challenging, as the frequent pronunciation errors of non-native speakers may weaken the phonetic and phonotactic distinction between English responses and non-English responses. In order to address this issue, the speaking rate measure was used to complement the language-identification-based features in the model. The accuracy of the method was 98%, and there was a 45% relative error reduction over a system based on the conventional language identification technique. The model using both feature sets furthermore demonstrated an improvement in accuracy for speakers at all English proficiency levels.
Non-English Response Detection Method for Automated Proficiency Scoring System
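The phonotactic technique described above scores the output of a single phone recognizer with one language model per language. Assuming the phone sequence has already been decoded, the scoring step can be sketched with add-alpha smoothed phone-bigram models (the toy training data below is invented; a real system would use recognizer phone symbols and, per the abstract, a speaking rate feature as well):

```python
import math
from collections import Counter

def train_bigram_lm(sequences, alpha=0.1):
    """Train an add-alpha smoothed bigram model over phone symbols and
    return a function scoring a sequence by its log-probability."""
    bigrams, unigrams, vocab = Counter(), Counter(), set()
    for seq in sequences:
        padded = ["<s>"] + list(seq) + ["</s>"]
        vocab.update(padded)
        for a, b in zip(padded, padded[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    v = len(vocab)

    def logprob(seq):
        padded = ["<s>"] + list(seq) + ["</s>"]
        return sum(
            math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * v))
            for a, b in zip(padded, padded[1:])
        )
    return logprob

def identify(phone_seq, lms):
    """Return the language whose model scores the phone sequence highest."""
    return max(lms, key=lambda lang: lms[lang](phone_seq))
```

With one trained model per language, classification reduces to an argmax over per-language log-probabilities of the decoded phone string.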
d13117296
We investigate the usefulness of syntactic knowledge in estimating the quality of English-French translations. We find that dependency and constituency tree kernels perform well but the error rate can be further reduced when these are combined with hand-crafted syntactic features. Both types of syntactic features provide information which is complementary to tried-and-tested non-syntactic features. We then compare source and target syntax and find that the use of parse trees of machine translated sentences does not affect the performance of quality estimation, nor does the intrinsic accuracy of the parser itself. However, the relatively flat structure of the French Treebank does appear to have an adverse effect, and this is significantly improved by simple transformations of the French trees. Finally, we provide further evidence of the usefulness of these transformations by applying them in a separate task: parser accuracy prediction.
Quality Estimation of English-French Machine Translation: A Detailed Study of the Role of Syntax
d219301840
Systems that attempt to understand natural human input make mistakes; even humans do. However, humans avoid misunderstandings by confirming doubtful input. Multimodal systems (those that combine simultaneous input from more than one modality, for example speech and gesture) have historically been designed so that they either request confirmation of speech, their primary modality, or do not request confirmation at all. Instead, we experimented with delaying confirmation until after the speech and gesture were combined into a complete multimodal command. In controlled experiments, subjects achieved more commands per minute at a lower error rate when the system delayed confirmation than when subjects confirmed only speech. In addition, this style of late confirmation meets the user's expectation that confirmed commands should be executable. KEYWORDS: multimodal, confirmation, uncertainty, disambiguation. "Mistakes are inevitable in dialog... In practice, conversation breaks down almost instantly in the absence of a facility to recognize and repair errors, ask clarification questions, give confirmation, and perform disambiguation [1]"
Confirmation in Multimodal Systems
d18911085
This paper describes our system submissions to task 7 in SemEval 2016, i.e., Determining Sentiment Intensity. We participated in the first two subtasks in English, which are to predict the sentiment intensity of a word or a phrase in the English Twitter and General English domains. To address this task, we present a supervised learning-to-rank system to predict the relevant scores, i.e., the strength associated with positive sentiment, for English words or phrases. Multiple linguistic and sentiment features are adopted, e.g., sentiment lexicons, sentiment word vectors, word vectors, linguistic features, etc. Officially released results showed that our systems ranked 1st among all submissions in English, which proves the effectiveness of the proposed method.
ECNU at SemEval-2016 Task 7: An Enhanced Supervised Learning Method for Lexicon Sentiment Intensity Ranking
d13606273
A NEW DESIGN OF PROLOG-BASED BOTTOM-UP PARSING SYSTEM WITH GOVERNMENT-BINDING THEORY
d8333055
We investigate in this paper the adequate unit of analysis for Arabic Mention Detection. We experiment with different segmentation schemes and various feature sets. Results show that when limited resources are available, models built on morphologically segmented data outperform other models by up to 4 F points. On the other hand, when more resources extracted from morphologically segmented data become available, models built with Arabic TreeBank-style segmentation yield better results. We also show additional improvement by combining different segmentation schemes.
Arabic Mention Detection: Toward Better Unit of Analysis
d254293626
Wikipedia is a common source of training data for Natural Language Processing (NLP) research, especially as a source for corpora in languages other than English. However, for many downstream NLP tasks, it is important to understand the degree to which these corpora reflect representative contributions of native speakers. In particular, many entries in a given language may be translated from other languages or produced through other automated mechanisms. Language models built using corpora like Wikipedia can embed history, culture, bias, stereotypes, politics, and more, but it is important to understand whose views are actually being represented. In this paper, we present a case study focusing specifically on differences among the Arabic Wikipedia editions (Modern Standard Arabic, Egyptian, and Moroccan). In particular, we document issues in the Egyptian Arabic Wikipedia with automatic creation/generation and translation of content pages from English without human supervision. These issues could substantially affect the performance and accuracy of Large Language Models (LLMs) trained from these corpora, producing models that lack the cultural richness and meaningful representation of native speakers. Fortunately, the metadata maintained by Wikipedia provides visibility into these issues, but unfortunately, this is not the case for all corpora used to train LLMs.
Learning From Arabic Corpora But Not Always From Arabic Speakers: A Case Study of the Arabic Wikipedia Editions
d15145361
Abstract: This paper attempts to detect and discriminate Chinese near-synonyms by combining natural language processing with linguistic analysis, drawing on corpora and electronic resources: a Chinese corpus, a bilingual corpus, the Word2Vec model, and E-HowNet. Using training data, we establish rules for detecting and discriminating near-synonyms, and we use test data to evaluate the correctness and feasibility of these rules. We propose automatically detecting near-synonym groups through the near-synonym system of the National Academy for Educational Research, and filtering out rich near-synonym groups using rules based on part-of-speech information, part-of-speech agreement, English translations, number of morphemes, and bilingual alignment. The near-synonym system and the manual discrimination rules established in this paper avoid the shortcomings of earlier work, which either relied on time- and labor-consuming manual compilation of near-synonyms or used only a Chinese corpus and therefore suffered from noisy data. At the same time, the near-synonyms selected here are objective and correct, and take the syntactic and pragmatic properties of the words into account. In the future, this work can be applied to the development of automatic Chinese near-synonym identification techniques and systems, as well as to near-synonym discrimination, dictionary compilation, Chinese language teaching, and writing. Keywords: near-synonyms, Chinese corpus, Word2Vec, bilingual corpus, part of speech. 1. Introduction and Purpose: To study near-synonyms, one must first clarify their scope and definition. Among earlier studies of Chinese lexical semantics, some argue that synonyms and near-synonyms should be distinguished, a position held especially by scholars in mainland China [1]; scholars disagree on whether substitutability is a necessary condition for synonymy [2][3][4]. By contrast, those taking a Western linguistic perspective mostly hold that true synonyms are very rare, and therefore use "near-synonym" to refer to groups of words with similar meanings; for example, Pinker [5] argues that homographs are common but synonyms are rare, and that in fact all words regarded as synonyms still differ in meaning; Taylor [6][7] notes that perfect synonyms should have identical meanings and be mutually substitutable in any context
Detection and Discrimination of Chinese Near-synonyms
d202541012
We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher ROUGE scores. We provide extensive comparisons with strong baseline methods, prior state-of-the-art work, as well as multiple variants of our approach including those using only transformers, only extractive techniques, and combinations of the two. We examine these models using four different summarization tasks and datasets: arXiv papers, PubMed papers, the Newsroom and BigPatent datasets. We find that transformer-based methods produce summaries with fewer n-gram copies, leading to n-gram copying statistics that are more similar to human generated abstracts. We include a human evaluation, finding that transformers are ranked highly for coherence and fluency, but purely extractive methods score higher for informativeness and relevance. We hope that these architectures and experiments may serve as strong points of comparison for future work. Abstract: We demonstrate that Transformer language models are extremely promising at summarizing long texts, and provide a new approach to deep summarization that can be used to generate more "abstractive" summaries. We show that our approach produces more abstractive summaries than state-of-the-art methods without a copy mechanism. We provide an application to text summarization of the arXiv and PubMed datasets, and show that our model outperforms other popular summarization techniques. We also discuss a simple neural extractive model based on pointer networks trained on documents and their salient sentences. We show that this model can be used to augment Transformer language models to generate better summarization results. Note: The abstract above was generated by one of the models presented in this paper, as a summary of this paper.
On Extractive and Abstractive Neural Document Summarization with Transformer Language Models
d207847595
Most of the recent work in cross-lingual word embeddings is severely Anglocentric. The vast majority of lexicon induction evaluation dictionaries are between English and another language, and the English embedding space is selected by default as the hub when learning in a multilingual setting. With this work, however, we challenge these practices. First, we show that the choice of hub language can significantly impact downstream lexicon induction and zero-shot POS tagging performance. Second, we both expand a standard English-centered evaluation dictionary collection to include all language pairs using triangulation, and create new dictionaries for under-represented languages. Evaluating established methods over all these language pairs sheds light on their suitability for aligning embeddings from distant languages and presents new challenges for the field. Finally, in our analysis we identify general guidelines for strong cross-lingual embedding baselines that extend to language pairs that do not include English.
Should All Cross-Lingual Embeddings Speak English?
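Triangulation, as used above to expand English-centered dictionaries to all language pairs, pivots two bilingual dictionaries through their shared English entries. A minimal sketch (real triangulated dictionaries are additionally filtered, for example by requiring agreement in both directions, to suppress noise from polysemous pivot words):

```python
def triangulate(src_to_en, en_to_tgt):
    """Induce a source->target dictionary via English pivot words.
    Both inputs map a word to a list of its translations."""
    induced = {}
    for src, en_words in src_to_en.items():
        targets = set()
        for en in en_words:
            # Every target translation of a shared English pivot
            # becomes a candidate translation of the source word.
            targets.update(en_to_tgt.get(en, ()))
        if targets:
            induced[src] = sorted(targets)
    return induced
```

Source words whose English translations never appear in the second dictionary simply produce no induced entry.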
d2447859
In this paper we argue that automatic term extraction is an inherently multifactor process and that term extraction models need to be based on multiple features, including the specific type of terminological resource under development. We propose to use three types of features for the extraction of two-word terms and show that all these types of features are useful for term extraction. The set of features includes new features such as features extracted from an existing domain-specific thesaurus and features based on Internet search results. We studied the set of features for term extraction in two different domains and showed that the combination of several types of features considerably enhances the quality of the term extraction procedure. We found that for developing term extraction models in a specific domain, it is important to take into account some properties of the domain.
Automatic Term Recognition Needs Multiple Evidence
d2444945
Part-of-Speech (POS) tagging is a key step in many NLP algorithms. However, tweets are difficult to POS tag because there are many phenomena that frequently appear in Twitter that are not as common, or are entirely absent, in other domains: tweets are short, are not always written maintaining formal grammar and proper spelling, and abbreviations are often used to overcome their restricted lengths. Arabic tweets also show a further range of linguistic phenomena such as usage of different dialects, romanised Arabic and borrowing foreign words. In this paper, we present an evaluation and a detailed error analysis of state-of-the-art POS taggers for Arabic when applied to Arabic tweets. The accuracy of standard Arabic taggers is typically excellent (96-97%) on Modern Standard Arabic (MSA) text; however, their accuracy declines to 49-65% on Arabic tweets. Further, we present our initial approach to improve the taggers' performance. By making improvements based on observed errors, we are able to reach 74% tagging accuracy.
Towards POS Tagging for Arabic Tweets
d237055467
d53637908
We present the first supervised approach to rhyme detection with Siamese Recurrent Networks (SRN) that offer near-perfect performance (97% accuracy) with a single model on rhyme pairs for German, English and French, allowing future large-scale analyses. SRNs learn a similarity metric on variable-length character sequences that can be used as a judgement on the distance of imperfect rhyme pairs and for binary classification. For training, we construct a diachronically balanced rhyme gold standard of New High German (NHG) poetry. For further testing, we sample a second collection of NHG poetry and a set of contemporary Hip-Hop lyrics, annotated for rhyme (i.e., identical pronunciation of word segments starting from the last accented vowel; Fabb, 1997) and assonance. We train several high-performing SRN models and evaluate them qualitatively on selected sonnets. Rhyming corpora: we describe the creation and annotation efforts of three corpora of lyric text: 1.) DTR, a diachronically balanced sample of poems from the German text archive, containing 1,948 poems, and 2.) ANTI-K.
Supervised Rhyme Detection with Siamese Recurrent Networks
d7136332
In conversational language, references to people (especially to the conversation participants, e.g., I, you, and we) are an essential part of many expressed meanings. In most conversational settings, however, many such expressions have numerous potential meanings, are frequently vague, and are highly dependent on social and situational context. This is a significant challenge to conversational language understanding systems -one which has seen little attention in annotation studies. In this paper, we present a method for annotating verbal reference to people in conversational speech, with a focus on reference to conversation participants. Our goal is to provide a resource that tackles the issues of vagueness, ambiguity, and contextual dependency in a nuanced yet reliable way, with the ultimate aim of supporting work on summarization and information extraction for conversation.
Annotating Participant Reference in English Spoken Conversation
d2115491
The production of accurate and complete multiple-document summaries is challenged by the complexity of judging the usefulness of information to the user. Our aim is to determine whether identifying sub-events in a news topic could help us capture essential information to produce better summaries. In our first experiment, we asked human judges to determine the relative utility of sentences as they related to the sub-events of a larger topic. We used this data to create summaries by three different methods, and we then compared these summaries with three automatically created summaries. In our second experiment, we show how the results of our first experiment can be applied to a cluster-based automatic summarization system. Through both experiments, we examine the use of inter-judge agreement and a relative utility metric that accounts for the complexity of determining sentence quality in relation to a topic.
Sub-event based multi-document summarization
d227231770
d14514522
We report on two recent medium-scale initiatives annotating present-day English corpora for animacy distinctions. We discuss the relevance of animacy for computational linguistics, specifically generation, the annotation categories used in the two studies, and the inter-annotator reliability for one of the studies.
Animacy Encoding in English: why and how
d202628643
Two of the more predominant technologies that professional translators have at their disposal for improving productivity are machine translation (MT) and computer-aided translation (CAT) tools based on translation memories (TM). When translators use MT, they can use automatic post-editing (APE) systems to automate part of the post-editing work and get further productivity gains. When they use TM-based CAT tools, productivity may improve if they rely on fuzzy-match repair (FMR) methods. In this paper we combine FMR and APE: first an FMR proposal is produced from the translation unit proposed by the TM, then this proposal is further improved by an APE system specially tuned for this purpose. Experiments conducted on the translation of English texts into German show that, with the two combined technologies, the quality of the translations improves by up to 23% compared to a pure MT system. The improvement over a pure FMR system is 16%, showing the effectiveness of our joint solution.
Improving Translations by Combining Fuzzy-Match Repair with Automatic Post-Editing
d14066271
Synset- and semantic-relation-based lexical knowledge bases, such as WordNet, have been well studied and constructed in English and other European languages (EuroWordNet). The Chinese wordnet (CWN) has been launched by Academia Sinica based on a similar paradigm. The synset in which each word sense is located in CWN is manually labeled; however, the lexical semantic relations among synsets are not fully constructed yet. In this paper, we propose a lexical-pattern-based algorithm which can automatically discover the semantic relations among verbs, especially the troponymy relation. There are many ways that the structure of a language can indicate the meaning of lexical items. For Chinese verbs, we identify two sets of lexical syntactic patterns denoting the concept of the hypernymy-troponymy relation. We describe a method for discovering these syntactic patterns and automatically extracting the target verbs and their corresponding hypernyms. Our system achieves satisfactory results, and we believe it will shed light on the task of automatic acquisition of Chinese lexical semantic relations and on ontology learning as well.
Automatic labeling of troponymy for Chinese verbs
d5086644
We propose in this paper a method for quantifying sentence grammaticality. The approach, based on Property Grammars, a constraint-based syntactic formalism, makes it possible to evaluate a grammaticality index for any kind of sentence, including ill-formed ones. We compare, on a sample of sentences, the grammaticality indices obtained from the PG formalism and the acceptability judgements measured by means of a psycholinguistic analysis. The results show that the derived grammaticality index is a fairly good tracer of acceptability scores.
Acceptability Prediction by Means of Grammaticality Quantification
d17070041
d39471533
d3260676
Developing Punjabi Morphology, Corpus and Lexicon: PACLIC24
d17622848
It is an ongoing debate whether categorical systems created by experts are an appropriate way to help users find useful resources on the internet. However, for the much more restricted domain of language documentation, such a category system might still prove reasonable if not indispensable. This article gives an overview of the particular IMDI category set and presents a rough evaluation of its practical use at the Max-Planck-Institute Nijmegen. The IMDI categories discussed are: content (genre, subgenre, task, modalities, subject), communication context (interactivity, planning type, involvement, social context, event, channel), and content language (description, language ID, language name).
Comparison of Resource Discovery Methods
d1322255
The use of well-nested linear context-free rewriting systems has been empirically motivated for modeling of the syntax of languages with discontinuous constituents or relatively free word order. We present a chart-based parsing algorithm that asymptotically improves the known running time upper bound for this class of rewriting systems. Our result is obtained through a linear space construction of a binary normal form for the grammar at hand.
Efficient Parsing of Well-Nested Linear Context-Free Rewriting Systems
d6764370
Recent work has shown how a parallel corpus can be leveraged to build a syntactic parser for a target language by projecting automatic source parses onto the target sentence using word alignments. The projected target dependency parses are not always fully connected, and so are not always useful for training traditional dependency parsers. In this paper, we present a greedy non-directional parsing algorithm which does not need a fully connected parse and can learn from partial parses by utilizing the available structural and syntactic information in them. Our parser achieved statistically significant improvements over a baseline system that trains only on fully connected parses for Bulgarian, Spanish and Hindi. It also gave a significant improvement over previously reported results for Bulgarian and set a benchmark for Hindi.
Partial Parsing from Bitext Projections
d8863860
This paper reports translation results for the "Exploiting Parallel Texts for Statistical Machine Translation" shared task (HLT-NAACL Workshop on Parallel Texts 2006). We have studied different techniques to improve the standard phrase-based translation system. Mainly, we introduce two reordering approaches and add morphological information. Section 3 presents the shared task results; and, finally, in Section 4, we conclude.
TALP Phrase-based statistical translation system for European language pairs
d220284611
This work investigates the most basic units that underlie contextualized word embeddings such as BERT: the so-called word-pieces. In Morphologically-Rich Languages (MRLs), which exhibit morphological fusion and non-concatenative morphology, the different units of meaning within a word may be fused and intertwined, and cannot be separated linearly. Therefore, when using word-pieces in MRLs, we must consider that: (1) a linear segmentation into sub-word units might not capture the full morphological complexity of words; and (2) representations that leave morphological knowledge on sub-word units inaccessible might negatively affect performance. Here we empirically examine the capacity of word-pieces to capture morphology by investigating the task of multi-tagging in Hebrew, as a proxy to evaluating the underlying segmentation. Our results show that, while models trained to predict multi-tags for complete words outperform models tuned to predict the distinct tags of WPs, we can improve the WP tag prediction by purposefully constraining the word-pieces to reflect their internal functions. We conjecture that this is due to the naïve linear tokenization of words into word-pieces, and suggest that linguistically-informed word-piece schemes, which make morphological knowledge explicit, might boost performance for MRLs.
Getting the ##life out of living: How Adequate Are Word-Pieces for Modelling Complex Morphology?
d2067012
We made use of parallel texts to gather training and test examples for the English lexical sample task. Two tracks were organized for our task. The first track used examples gathered from an LDC corpus, while the second track used examples gathered from a Web corpus. In this paper, we describe the process of gathering examples from the parallel corpora, the differences with similar tasks in previous SENSEVAL evaluations, and present the results of participating systems.
SemEval-2007 Task 11: English Lexical Sample Task via English-Chinese Parallel Text
d226283888
This paper tackles the task of named entity recognition (NER) applied to digitized historical texts obtained from processing digital images of newspapers using optical character recognition (OCR) techniques. We argue that the main challenge for this task is that the OCR process leads to misspellings and linguistic errors in the output text. Moreover, historical variations can be present in aged documents, which can impact the performance of the NER process. We conduct a comparative evaluation on two historical datasets in German and French against previous state-of-the-art models, and we propose a model based on a hierarchical stack of Transformers to approach the NER task for historical data. Our findings show that the proposed model clearly improves the results on both historical datasets, and does not degrade the results for modern datasets.
Alleviating Digitization Errors in Named Entity Recognition for Historical Documents
d235097523
d218973767
d14074691
We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain, and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question/answer pairs using EM, and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any dataset-specific engineering.
Probabilistic Models for Learning a Semantic Parser Lexicon
d5372617
In this paper, we briefly describe two enhancements of the cross-pair similarity model for learning textual entailment rules: 1) the typed anchors and 2) a faster computation of the similarity. We will report and comment on the preliminary experiments and on the submission results.
Shallow Semantics in Fast Textual Entailment Rule Learners
d17874149
Qualia Structures have many applications within computational linguistics, but currently there are no corresponding lexical resources such as WordNet or FrameNet. This paper presents an approach to automatically learning qualia structures for nominals from the World Wide Web, and thus opens the possibility to explore the impact of qualia structures for natural language processing at a larger scale. Furthermore, our approach can also be used to support a lexicographer in the task of manually creating a lexicon of qualia structures. The approach is based on the idea of matching certain lexico-syntactic patterns conveying a certain semantic relation on the World Wide Web using standard search engines. We evaluate our approach qualitatively by comparing our automatically learned qualia structures with the ones from the literature, but also quantitatively by presenting the results of a human evaluation.
Automatically Learning Qualia Structures from the Web
d775002
In this paper we present some of the algorithm improvements that have been made to Dragon's continuous speech recognition and training programs, improvements that have more than halved our error rate on the Resource Management task since the last SLS meeting in February 1991. We also report the "dry run" results that we have obtained on the 5000-word speaker-dependent Wall Street Journal recognition task, and outline our overall research strategy and plans for the future. In our system, a set of output distributions, known as the set of PELs (phonetic elements), is associated with each phoneme. The HMM for a PIC (phoneme-in-context) is represented as a linear sequence of states, each having an output distribution chosen from the set of PELs for the given phoneme, and a (double exponential) duration distribution.
Large Vocabulary Recognition of Wall Street Journal Sentences at Dragon Systems
d17015488
Lexical-semantic resources are fundamental building blocks in natural language processing (NLP). Frequently, they fail to cover the informal vocabulary of web users as represented in user-generated content. This paper aims at exploring folksonomies as a novel source of lexical-semantic information. It analyzes two prototypical examples of folksonomies, namely BibSonomy and Delicious, and utilizes NLP and word sense induction techniques to turn the folksonomies into word sense-disambiguated networks representing the vocabulary and the word senses found in folksonomies. The main contribution of the paper is an in-depth analysis of the resulting resources, which can be combined with conventional wordnets to achieve broad coverage of user-generated content.
A Study of Sense-disambiguated Networks Induced from Folksonomies
d41607
In this paper, we report the adaptation of a named entity recognition (NER) system to the biomedical domain in order to participate in the "Shared Task Bio-Entity Recognition". The system is originally developed for German NER that shares characteristics with the biomedical task. To facilitate adaptability, the system is knowledge-poor and utilizes unlabeled data. Investigating the adaptability of the single components and the enhancements necessary, we get insights into the task of bioentity recognition.
Adapting an NER-System for German to the Biomedical Domain