Columns: _id (string, 4 to 10 chars), text (string, 0 to 18.4k chars), title (string, 0 to 8.56k chars)
d218973899
d5562790
This paper describes a dialogue act tagging scheme developed for the purpose of providing finer-grained quantitative dialogue metrics for comparing and evaluating DARPA COMMUNICATOR spoken dialogue systems. We show that these dialogue act metrics can be used to quantify the amount of effort spent in a dialogue maintaining the channel of communication, or establishing the frame for communication, as opposed to actually carrying out the travel planning task that the system is designed to support. We show that the use of these metrics results in a 7% improvement in the fit of models of user satisfaction. We suggest that dialogue act metrics can ultimately support more focused qualitative analysis of the role of various dialogue strategy parameters, e.g. initiative, across dialogue systems, thus clarifying what development paths might be feasible for enhancing user satisfaction in future versions of these systems. Example speech act: PRESENT-INFO ("You are logged in as a guest user of A T and T Communicator.")
DATE: A Dialogue Act Tagging Scheme for Evaluation of Spoken Dialogue Systems
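An illustrative sketch of the kind of analysis the entry above reports: fit a user-satisfaction regression with and without dialogue-act-derived metrics (for instance, the share of a dialogue spent maintaining the channel or frame rather than doing the task) and compare model fit. The features, data, and figures below are invented for illustration; the paper's actual metrics and its 7% figure are not reproduced.

```python
"""Compare regression fit with and without dialogue-act metrics (toy data)."""
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
task_success = rng.random(n)                 # baseline-style predictors
dialogue_len = rng.random(n)
frame_overhead = rng.random(n)               # share of acts spent on channel/frame maintenance

# Synthetic satisfaction scores that partly depend on the overhead metric.
satisfaction = 3 + 2 * task_success - 1.5 * frame_overhead + rng.normal(0, 0.3, n)

baseline = np.column_stack([task_success, dialogue_len])
with_acts = np.column_stack([task_success, dialogue_len, frame_overhead])

r2_base = LinearRegression().fit(baseline, satisfaction).score(baseline, satisfaction)
r2_acts = LinearRegression().fit(with_acts, satisfaction).score(with_acts, satisfaction)
print(f"R^2 without dialogue-act metrics: {r2_base:.3f}, with: {r2_acts:.3f}")
```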
d8457271
We introduce factored language models (FLMs) and generalized parallel backoff (GPB). An FLM represents words as bundles of features (e.g., morphological classes, stems, data-driven clusters, etc.), and induces a probability model covering sequences of bundles rather than just words. GPB extends standard backoff to general conditional probability tables where variables might be heterogeneous types, where no obvious natural (temporal) backoff order exists, and where multiple dynamic backoff strategies are allowed. These methodologies were implemented during the JHU 2002 workshop as extensions to the SRI language modeling toolkit. This paper provides initial perplexity results on both CallHome Arabic and Penn Treebank Wall Street Journal articles. Notably, FLMs with GPB can produce bigrams with significantly lower perplexity, sometimes lower than highly-optimized baseline trigrams. In a multi-pass speech recognition context, where bigrams are used to create first-pass bigram lattices or N-best lists, these results are highly relevant.
Factored Language Models and Generalized Parallel Backoff
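A hedged sketch of the modeling idea in the entry above: a factored language model predicts a word's feature bundle from the bundles of preceding words, and when the full conditioning context is unseen, generalized parallel backoff drops different context variables in parallel and combines the resulting estimates. The exact discounting and combination functions of the paper are not reproduced; the case below shows just one combination strategy among those the framework allows.

```latex
% Each word is a bundle of K factors (surface form, stem, morph class, ...):
%   w_t \equiv (f_t^1, \dots, f_t^K)
% A factored model conditions a factor f on two parent factors f_1, f_2:
\[
  p_{\mathrm{GPB}}\bigl(f \mid f_1, f_2\bigr) =
  \begin{cases}
    d_{N(f,f_1,f_2)} \, \dfrac{N(f,f_1,f_2)}{N(f_1,f_2)}
      & \text{if } N(f,f_1,f_2) > \tau, \\[1.2ex]
    \alpha(f_1,f_2)\, g\bigl(p(f \mid f_1),\, p(f \mid f_2)\bigr)
      & \text{otherwise,}
  \end{cases}
\]
% where N(.) are counts, d is a discount, \tau a count threshold, \alpha the
% backoff normalizer, and g combines the parallel backoff paths (e.g. mean,
% weighted mean, or max), since neither parent is "naturally" dropped first
% when the factors are heterogeneous.
```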
d258890929
We will present our solution to replace the usage of publicly available machine translation (MT) services in companies where privacy and confidentiality are key. Our MT portal can translate across a variety of languages using neural machine translation, and supports an extensive number of file types. Corporations are using it to enable multilingual communication everywhere.
TransPerfect's Private Neural Machine Translation Portal
d5546656
This paper addresses the problem of eliminating unsatisfactory outputs from machine translation (MT) systems. The authors intend to eliminate unsatisfactory MT outputs by using confidence measures. Confidence measures for MT outputs include the rank-sum-based confidence measure (RSCM) for statistical machine translation (SMT) systems. RSCM can be applied to non-SMT systems but does not always work well on them. This paper proposes an alternative RSCM that adopts a mixture of the N-best lists from multiple MT systems instead of a single system's N-best list in the existing RSCM. In most cases, the proposed RSCM proved to work better than the existing RSCM on two non-SMT systems and to work as well as the existing RSCM on an SMT system.
Using a Mixture of N-Best Lists from Multiple MT Systems in Rank-Sum-Based Confidence Measure for MT Outputs
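The exact RSCM formula of the paper above is not reproduced here; the following is a simplified rank-sum-style score, under the assumption that the merged N-best list (hypotheses from several MT systems) is ordered best-first and that a word is more reliable the more high-ranked hypotheses contain it. It only illustrates the plumbing of a rank-based confidence measure.

```python
"""Simplified rank-sum-style confidence over a merged N-best list (illustrative only)."""
from typing import List

def rank_sum_confidence(word: str, nbest: List[str]) -> float:
    """Confidence in [0, 1]: sum of reversed ranks (N - rank + 1) of hypotheses
    containing `word`, normalized by the maximum attainable rank sum N*(N+1)/2."""
    n = len(nbest)
    if n == 0:
        return 0.0
    collected = sum(n - rank + 1
                    for rank, hyp in enumerate(nbest, start=1)
                    if word in hyp.split())
    return collected / (n * (n + 1) / 2)

def sentence_confidence(hypothesis: str, nbest: List[str]) -> float:
    """Average word confidence; thresholding this value could reject unsatisfactory outputs."""
    words = hypothesis.split()
    if not words:
        return 0.0
    return sum(rank_sum_confidence(w, nbest) for w in words) / len(words)

if __name__ == "__main__":
    # Merged N-best list from (hypothetically) two MT systems, best first.
    merged = [
        "the meeting starts at ten",
        "the meeting begins at ten",
        "a meeting starts at ten o'clock",
    ]
    print(sentence_confidence("the meeting starts at ten", merged))
```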
d445754
This paper addresses a data-driven surface realisation model based on a large-scale reversible grammar of German. We investigate the relationship between the surface realisation performance and the character of the input to generation, i.e. its degree of underspecification. We extend a syntactic surface realisation system, which can be trained to choose among word order variants, such that the candidate set includes active and passive variants. This allows us to study the interaction of voice and word order alternations in realistic German corpus data. We show that with an appropriately underspecified input, a linguistically informed realisation model trained to regenerate strings from the underlying semantic representation achieves 91.5% accuracy (over a baseline of 82.5%) in the prediction of the original voice.
Underspecifying and Predicting Voice for Surface Realisation Ranking
d12258794
We outline a methodological classification for evaluation approaches of software in general. This classification was initiated partly owing to involvement in a biennial European competition (the European Academic Software Award, EASA) which was held for over a decade. The evaluation grid used in EASA gradually became obsolete and inappropriate in recent years, and therefore needed to be revised. In order to do this, it was important to situate the competition in relation to other software evaluation procedures. A methodological perspective for the classification is adopted rather than a conceptual one, since a number of difficulties arise with the latter. We focus on three main questions: what to evaluate? how to evaluate? and who evaluates? The classification is therefore hybrid: it allows one to account for the most common evaluation approaches and is also an observatory. Two main approaches are differentiated: system and usage. We conclude that any evaluation always constructs its own object, and the objects to be evaluated only partially determine the evaluation which can be applied to them. Generally speaking, this allows one to begin apprehending what type of knowledge is objectified when one or another approach is chosen.
Classification procedures for software evaluation
d44855702
In recent years, the rapid growth of wireless communications has undoubtedly increased the need for speech recognition techniques. In wireless environments, the portability of a computationally powerful device can be realized by distributing data/information and computation resources over wireless networks. Portability can then evolve through personalization and humanization to meet people's needs. An innovative distributed speech recognition (DSR) [ETSI, 1998], [ETSI, 2000] platform, configurable DSR (C-DSR), is thus proposed here to enable various types of wireless devices to be remotely configured and to employ sophisticated recognizers on servers operated over wireless networks. For each recognition task, a configuration file, which contains information regarding types of services, types of mobile devices, speaker profiles and recognition environments, is sent from the client side with each speech utterance. Through configurability, the capabilities of configuration, personalization and humanization can be easily achieved by allowing users and advanced users to be involved in the design of unique speech interaction functions of wireless devices.
An Innovative Distributed Speech Recognition Platform for Portable, Personalized and Humanized Wireless Devices
d231643687
d15631550
In this paper, we introduce the Lefff, a freely available, accurate and large-coverage morphological and syntactic lexicon for French, used in many NLP tools such as large-coverage parsers. We first describe Alexina, the lexical framework in which the Lefff is developed as well as the linguistic notions and formalisms it is based on. Next, we describe the various sources of lexical data we used for building the Lefff, in particular semi-automatic lexical development techniques and conversion and merging of existing resources. Finally, we illustrate the coverage and precision of the resource by comparing it with other resources and by assessing its impact in various NLP tools.
The Lefff, a freely available and large-coverage morphological and syntactic lexicon for French
d17815197
We present a multi-threaded Interaction Manager (IM) that is used to track different dimensions of user-system conversations that are required to interleave with each other in a coherent and timely manner. This is explained in the context of a spoken dialogue system for pedestrian navigation and city question-answering, with information push about nearby or visible points-of-interest (PoI).
Multi-threaded Interaction Management for Dynamic Spatial Applications
d21696490
We present the results of the effort of enriching the pre-existing resource LICO, a Lexicon of Italian COnnectives retrieved from lexicographic sources (Feltracco et al., 2016), with real corpus data for connectives marking contrast relations in text. The motivation behind our effort is that connectives can only be interpreted when they appear in context, that is, in a relation between the two fragments of text that constitute the two arguments of the relation. In this perspective, adding corpus examples annotated with connectives and arguments for the relation allows us to both extend the resource and validate the lexicon. In order to retrieve good corpus examples, we take advantage of the existing Contrast-Ita Bank (Feltracco et al., 2017), a corpus of news annotated with explicit and implicit discourse contrast relations for Italian according to the annotation scheme proposed in the Penn Discourse Tree Bank (PDTB) guidelines (Prasad et al., 2007). We also use an extended (non contrast-annotated) version of the same corpus and documents from Wikipedia. The resulting resource represents a valuable tool for both linguistic analyses of discourse relations and the training of a classifier for NLP applications.
Enriching a Lexicon of Discourse Connectives with Corpus-based Data
d26582300
There are several native languages in Peru, most of which are agglutinative. These languages are transmitted from generation to generation mainly in oral form, causing different forms of writing across different communities. For this reason, there are recent efforts to standardize the spelling in written texts, and it would be beneficial to support these tasks with an automatic tool such as a spell-checker. Accordingly, this spelling corrector is being developed based on two steps: an automatic rule-based syllabification method and a character-level graph to detect the degree of error in a misspelled word. The experiments were carried out on Shipibo-konibo, a highly agglutinative Amazonian language, and the results obtained on a dataset built for the purpose are promising.
Spell-Checking based on Syllabification and Character-level Graphs for a Peruvian Agglutinative Language
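An illustrative sketch of the two-step idea in the entry above, with invented example data: a naive rule-based syllabifier, and a character-level distance used as the "degree of error" of a misspelled word against lexicon entries. The actual Shipibo-konibo syllabification rules and the paper's character-graph scoring are not reproduced here.

```python
"""Toy two-step spell-checking sketch: syllabification + character-level error degree."""

def syllabify(word: str, vowels: str = "aeiou") -> list[str]:
    """Very naive CV syllabifier: close a syllable after each vowel group."""
    syllables, current = [], ""
    for ch in word:
        current += ch
        if ch in vowels:
            syllables.append(current)
            current = ""
    if current:                        # trailing consonants attach to the last syllable
        if syllables:
            syllables[-1] += current
        else:
            syllables.append(current)
    return syllables

def edit_distance(a: str, b: str) -> int:
    """Character-level Levenshtein distance, used as the 'degree of error'."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def suggest(word: str, lexicon: list[str], max_errors: int = 2) -> list[str]:
    """Rank lexicon entries by degree of error and keep the close ones."""
    scored = sorted((edit_distance(word, w), w) for w in lexicon)
    return [w for d, w in scored if d <= max_errors]

if __name__ == "__main__":
    lexicon = ["jakon", "bake", "noa"]          # toy, invented entries
    print(syllabify("jakon"), suggest("jakun", lexicon))
```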
d17895526
We apply Support Vector Machines to differentiate between 11 native languages in the 2013 Native Language Identification Shared Task. We expand a set of common language identification features to include cognate interference and spelling mistakes. Our best results are obtained with a classifier which includes both the cognate and the misspelling features, as well as word unigrams, word bigrams, character bigrams, and syntax production rules.
Cognate and Misspelling Features for Natural Language Identification
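A minimal sketch of an SVM classifier in the spirit of the entry above: word unigrams/bigrams, character bigrams, and a simple "misspelling count" feature. The cognate-interference features and syntax production rules of the paper are not reproduced, and the tiny inline word list and data are invented.

```python
"""SVM native-language-identification sketch with n-gram and misspelling-count features."""
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

ENGLISH_WORDS = {"the", "weather", "is", "very", "nice", "today", "i", "like", "it"}

class MisspellingCount(BaseEstimator, TransformerMixin):
    """Counts tokens not found in a (toy) English word list."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([[sum(tok not in ENGLISH_WORDS
                              for tok in text.lower().split())] for text in X])

features = FeatureUnion([
    ("word_ngrams", CountVectorizer(ngram_range=(1, 2))),
    ("char_bigrams", CountVectorizer(analyzer="char", ngram_range=(2, 2))),
    ("misspellings", MisspellingCount()),
])

clf = Pipeline([("features", features), ("svm", LinearSVC())])

texts = ["the wether is very nice today", "i like it very much today"]
labels = ["L1_A", "L1_B"]        # invented native-language labels
clf.fit(texts, labels)
print(clf.predict(["the wether is nice"]))
```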
d11966594
We present an integrated approach to speech and natural language processing which uses a single parser to create training for a statistical speech recognition component and for interpreting recognized text. On the speech recognition side, our innovation is the use of a statistical model combining N-gram and context-free grammars. On the natural language side, our innovation is the integration of parsing and semantic interpretation to build references for only targeted phrase types. In both components, a semantic grammar and partial parsing facilitate robust processing of the targeted portions of a domain. This integrated approach introduces as much linguistic structure and prior statistical information as is available while maintaining a robust full-coverage statistical language model for recognition. In addition, our approach facilitates both the direct detection of linguistic constituents within the speech recognition algorithms and the creation of semantic interpretations of the recognized phrases.
INTEGRATED TECHNIQUES FOR PHRASE EXTRACTION FROM SPEECH
d26571395
We investigate parts of the mathematical foundations of stemmatology, the science of reconstructing the copying history of manuscripts. After Joseph Bédier in 1928 became suspicious about the large number of root bifurcations he found in reconstructed stemmata, Paul Maas replied in 1937 with a mathematical argument that the proportion of root-bifurcating stemmata among all possible stemmata is so large that one should not become suspicious when finding them abundant. While Maas' argument was based on one example with a tradition of three surviving manuscripts, we show in this paper that for the whole class of trees corresponding to Maasian reconstructed stemmata, and likewise for the class of trees corresponding to complete historical manuscript genealogies, root bifurcations are a priori the most expectable root degree type. We do this by providing a combinatorial formula for the numbers of possible so-called Greg trees according to their root degree (Flight, 1990). Additionally, for complete historical manuscript trees (regardless of loss), which coincide mathematically with rooted labeled trees, we provide formulas for root degrees and derive the asymptotic degree distribution. We find that root bifurcations are extremely numerous in both kinds of trees. Therefore, while previous studies have shown that root bifurcations are expectable for true stemmata, we extend this finding to all three philologically relevant types of trees discussed to date.
How Many Stemmata with Root Degree k?
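The paper's own Greg-tree counts and conclusions are not reproduced below. As a small, verifiable illustration of the kind of counting the entry above describes, the script uses only the classical Prüfer-sequence result for rooted labeled trees (the abstract's model of complete historical manuscript genealogies): the number of labeled trees on n nodes in which a fixed root has degree k is C(n-2, k-1) * (n-1)^(n-1-k).

```python
"""Tabulate root-degree counts for rooted labeled trees (classical result only)."""
from math import comb

def trees_with_root_degree(n: int, k: int) -> int:
    """Labeled trees on n nodes whose designated root has degree k."""
    if not 1 <= k <= n - 1:
        return 0
    return comb(n - 2, k - 1) * (n - 1) ** (n - 1 - k)

def root_degree_distribution(n: int) -> dict[int, float]:
    """Share of rooted labeled trees on n nodes per root degree (sums to 1)."""
    total = n ** (n - 2)                 # Cayley's formula: all labeled trees on n nodes
    return {k: trees_with_root_degree(n, k) / total for k in range(1, n)}

if __name__ == "__main__":
    for n in (5, 10, 20):
        dist = root_degree_distribution(n)
        # Print the shares of small root degrees; low degrees carry most of the mass.
        print(n, {k: round(dist[k], 3) for k in (1, 2, 3, 4)})
```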
d6214377
The problem of providing effective computer support for clinical coding has been the target of many research efforts. A recently introduced approach, based on statistical data on co-occurrences of words in clinical notes and assigned diagnosis codes, is here developed further and improved upon. The ability of the word space model to detect and appropriately handle the function of negations is demonstrated to be important in accurately correlating words with diagnosis codes, although the data on which the model is trained needs to be sufficiently large. Moreover, weighting can be performed in various ways, for instance by giving additional weight to 'clinically significant' words or by filtering code candidates based on structured patient records data. The results demonstrate the usefulness of both weighting techniques, particularly the latter, yielding 27% exact matches for a general model (across clinic types), and 43% and 82% for two domain-specific models (ear-nose-throat and rheumatology clinics).
Exploiting Structured Data, Negation Detection and SNOMED CT Terms in a Random Indexing Approach to Clinical Coding
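A toy sketch of the random-indexing idea described above: each diagnosis code gets a sparse random index vector, every word seen in notes assigned that code accumulates the code's index vector (with extra weight for "clinically significant" words), and a new note is coded by summing its word vectors and ranking codes by cosine similarity. Dimensions, data and the weighting are invented for illustration; negation handling and SNOMED CT filtering are omitted.

```python
"""Toy random-indexing sketch for clinical coding."""
import numpy as np

rng = np.random.default_rng(0)
DIM, NONZERO = 512, 8

def index_vector() -> np.ndarray:
    """Sparse ternary random vector with a few +1/-1 entries."""
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=NONZERO, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

# Toy training data: (note, assigned ICD-style code); entirely invented.
notes = [("ear pain and fever", "H66"), ("joint pain and swelling", "M06"),
         ("fever and cough", "J06")]
code_index = {c: index_vector() for _, c in notes}
word_ctx: dict[str, np.ndarray] = {}

SIGNIFICANT = {"ear", "joint", "cough"}          # extra-weighted words
for note, code in notes:
    for w in note.split():
        weight = 2.0 if w in SIGNIFICANT else 1.0
        word_ctx.setdefault(w, np.zeros(DIM))
        word_ctx[w] += weight * code_index[code]

def rank_codes(note: str):
    """Rank candidate codes for a new note by cosine similarity."""
    vec = sum((word_ctx.get(w, np.zeros(DIM)) for w in note.split()), np.zeros(DIM))
    def cos(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na and nb else 0.0
    return sorted(((cos(vec, v), c) for c, v in code_index.items()), reverse=True)

print(rank_codes("ear pain"))
```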
d11160504
In statistical machine translation, correspondences between the words in the source and the target language are learned from parallel corpora, and often little or no linguistic knowledge is used to structure the underlying models. In particular, existing statistical systems for machine translation often treat different inflected forms of the same lemma as if they were independent of one another. The bilingual training data can be better exploited by explicitly taking into account the interdependencies of related inflected forms. We propose the construction of hierarchical lexicon models on the basis of equivalence classes of words. In addition, we introduce sentence-level restructuring transformations which aim at the assimilation of word order in related sentences. We have systematically investigated the amount of bilingual training data required to maintain an acceptable quality of machine translation. The combination of the suggested methods for improving translation quality in frameworks with scarce resources has been successfully tested: we were able to reduce the amount of bilingual training data to less than 10% of the original corpus, while losing only 1.6% in translation quality. The improvement of the translation results is demonstrated on two German-English corpora taken from the Verbmobil task and the Nespole! task. Apart from the improved coverage, the proposed lexicon models enable the disambiguation of ambiguous word forms by means of annotation with morpho-syntactic tags. Overview: The article is organized as follows. After briefly reviewing the basic concepts of the statistical approach to machine translation, we discuss the state of the art and related work as regards the incorporation of morphological and syntactic information into systems for natural language processing. Section 2 describes the information provided by morpho-syntactic analysis and introduces a suitable representation of the analyzed corpus. Section 3 suggests solutions for two specific aspects of structural difference, namely question inversion and separated verb prefixes. Section 4 is dedicated to hierarchical lexicon models. These models are able to infer translations of word forms from the translations of other word forms of the same lemma. Furthermore, they use morpho-syntactic information to resolve categorial ambiguity. In Section 5, we describe how disambiguation between different readings and their corresponding translations can be performed when no context is available, as is typically the case for conventional electronic dictionaries. Section 6 provides an overview of our procedure for training model parameters for statistical machine translation with scarce resources. Experimental results are reported in Section 7. Section 8 concludes the presentation with a discussion of the achievements of this work.
Statistical Machine Translation with Scarce Resources Using Morpho-syntactic Information
d248779953
The performance of deep learning models in NLP and other fields of machine learning has led to a rise in their popularity, and so the need for explanations of these models becomes paramount. Attention has been seen as a solution to increase performance, while providing some explanations. However, a debate has started to cast doubt on the explanatory power of attention in neural networks. Although the debate has created a vast literature thanks to contributions from various areas, the lack of communication is becoming more and more tangible. In this paper, we provide a clear overview of the insights on the debate by critically confronting works from these different areas. This holistic vision can be of great interest for future works in all the communities concerned by this debate. We sum up the main challenges spotted in these areas, and we conclude by discussing the most promising future avenues on attention as an explanation.
Is Attention Explanation? An Introduction to the Debate
d221692460
We introduce a framework in which production-rule based computational cognitive modeling and Reinforcement Learning can systematically interact and inform each other. We focus on linguistic applications because the sophisticated rule-based cognitive models needed to capture linguistic behavioral data promise to provide a stringent test suite for RL algorithms, connecting RL algorithms to both accuracy and reaction-time experimental data. Thus, we open a path towards assembling an experimentally rigorous and cognitively realistic benchmark for RL algorithms. We extend our previous work on lexical decision tasks and tabular RL algorithms (Brasoveanu and Dotlačil, 2020b) with a discussion of neural-network based approaches, and a discussion of how parsing can be formalized as an RL problem.
Production-based Cognitive Models as a Test Suite for Reinforcement Learning Algorithms
d233029470
賽德克語構詞結構之自動解析 Analyzing the Morphological Structures in Seediq Words
d219304211
d226283983
Does neural machine translation yield translations that are congenial with common sense? In this paper, we present a test suite to evaluate the commonsense reasoning capability of neural machine translation. The test suite consists of three test sets, covering lexical and contextless/contextual syntactic ambiguity that requires commonsense knowledge to resolve. We manually create 1,200 triples, each of which contains a source sentence and two contrastive translations, involving 7 different common sense types. Language models pretrained on large-scale corpora, such as BERT and GPT-2, achieve a commonsense reasoning accuracy of lower than 72% on target translations of this test suite. We conduct extensive experiments on the test suite to evaluate commonsense reasoning in neural machine translation and investigate factors that have an impact on this capability. Our experiments and analyses demonstrate that neural machine translation performs poorly on commonsense reasoning of the three ambiguity types in terms of both reasoning accuracy (60.1%) and reasoning consistency (31%). We will release our test suite as a machine translation commonsense reasoning testbed to promote future work in this direction.
The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation
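A sketch of how a contrastive test suite like the one above can be scored, assuming the MT or language model exposes a `score(source, candidate)` function (for example, a length-normalized log-probability). Reasoning accuracy is the fraction of triples in which the correct translation outscores its contrastive variant. The data and the stand-in scorer below are placeholders, not the paper's.

```python
"""Score a contrastive commonsense test suite (accuracy only, toy data)."""
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]   # (source, correct_translation, contrastive_translation)

def reasoning_accuracy(triples: List[Triple],
                       score: Callable[[str, str], float]) -> float:
    """Fraction of triples where the correct translation outscores the contrastive one."""
    if not triples:
        return 0.0
    hits = sum(score(src, good) > score(src, bad) for src, good, bad in triples)
    return hits / len(triples)

if __name__ == "__main__":
    toy = [("src-1", "correct translation", "contrastive translation")]
    # Trivial stand-in scorer; a real one would query the translation model.
    print(reasoning_accuracy(toy, lambda src, cand: float(cand.startswith("correct"))))
```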
d9835029
When individuals lose the ability to produce their own speech, due to degenerative diseases such as motor neurone disease (MND) or Parkinson's, they lose not only a functional means of communication but also a display of their individual and group identity. In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, using a process known as voice banking. But, for some patients, the speech deterioration frequently coincides with or quickly follows diagnosis. Using HMM-based speech synthesis, it is now possible to build personalized synthetic voices with minimal data recordings and even disordered speech. The power of this approach is that it is possible to use the patient's recordings to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified in order to compensate for the disordered characteristics found in the patient's speech. The University of Edinburgh has initiated a project for voice banking and reconstruction based on this speech synthesis technology. At the current stage of the project, more than fifteen patients with MND have already been recorded and five of them have been delivered a reconstructed voice. In this paper, we present an overview of the project as well as subjective assessments of the reconstructed voices and feedback from patients and their families.
Towards Personalized Synthesized Voices for Individuals with Vocal Disabilities: Voice Banking and Reconstruction
d2163836
The acquisition of Belief verbs lags behind the acquisition of Desire verbs in children. Some psycholinguistic theories attribute this lag to conceptual differences between the two classes, while others suggest that syntactic differences are responsible. Through computational experiments, we show that a probabilistic verb learning model exhibits the pattern of acquisition, even though there is no difference in the model in the difficulty of the semantic or syntactic properties of Belief vs. Desire verbs. Our results point to the distributional properties of various verb classes as a potentially important, and heretofore unexplored, factor in the observed developmental lag of Belief verbs.
Acquisition of Desires before Beliefs: A Computational Investigation
d5957384
We evaluate the performance of a morphological analyser for Inuktitut across a medium-sized corpus, where it produces a useful analysis for two out of every three types. We then compare its segmentation to that of simpler approaches to morphology, and use these as a pre-processing step to a word alignment task. Our observations show that the richer approaches provide little benefit as compared to simply finding the head, which is more in line with the particularities of the task.
Evaluating a Morphological Analyser of Inuktitut
d7709942
Automatic syntactic analysis of a corpus requires detailed lexical and morphological information that cannot always be harvested from traditional dictionaries. In building the INESS Norwegian treebank, it is often the case that necessary lexical information is missing in the morphology or lexicon. The approach used to build the treebank is incremental parsebanking; a corpus is parsed with an existing grammar, and the analyses are efficiently disambiguated by annotators. When the intended analysis is unavailable after parsing, the reason is often that necessary information is not available in the lexicon. INESS has therefore implemented a text preprocessing interface where annotators can enter unrecognized words before parsing. This may concern words that are unknown to the morphology and/or lexicon, and also words that are known, but for which important information is missing. When this information is added, either during text preprocessing or during disambiguation, the result is that after reparsing the intended analysis can be chosen and stored in the treebank. The lexical information added to the lexicon in this way may be of great interest both to lexicographers and to other language technology efforts, and the enriched lexical resource being developed will be made available at the end of the project.
The Interplay Between Lexical and Syntactic Resources in Incremental Parsebanking
d227231405
d6206064
Argo is a web-based NLP and text mining workbench with a convenient graphical user interface for designing and executing processing workflows of various complexity. The workbench is intended for specialists and nontechnical audiences alike, and provides an ever-expanding library of analytics compliant with the Unstructured Information Management Architecture, a widely adopted interoperability framework. We explore the flexibility of this framework by demonstrating workflows involving three processing components capable of performing self-contained machine learning-based tagging. The three components are responsible for the three distinct tasks of 1) generating observations or features, 2) training a statistical model based on the generated features, and 3) tagging unlabelled data with the model. The learning and tagging components are based on an implementation of conditional random fields (CRF), whereas the feature generation component is an analytic capable of extending basic token information to a comprehensive set of features. Users define the features of their choice directly from Argo's graphical interface, without resorting to programming (a commonly used approach to feature engineering). The experimental results on two tagging tasks, chunking and named entity recognition, showed that a tagger with a generic set of features built in Argo is capable of competing with task-specific solutions.
Building trainable taggers in a web-based, UIMA-supported NLP workbench
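Argo itself is a UIMA-based web workbench, but the three-component design described above (feature generation, CRF training, tagging) can be illustrated in Python. The sketch below assumes the sklearn-crfsuite package is installed (`pip install sklearn-crfsuite`); the features are a small generic set, not Argo's configurable feature catalogue, and the training data are invented.

```python
"""Feature generation -> CRF training -> tagging, in the spirit of the three Argo components."""
import sklearn_crfsuite

def token_features(sent, i):
    """Component 1: expand basic token information into a feature dict."""
    w = sent[i]
    feats = {"lower": w.lower(), "is_title": w.istitle(),
             "is_digit": w.isdigit(), "suffix3": w[-3:]}
    feats["prev"] = sent[i - 1].lower() if i > 0 else "<BOS>"
    feats["next"] = sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>"
    return feats

def featurize(sentences):
    return [[token_features(s, i) for i in range(len(s))] for s in sentences]

# Toy NER-style training data (invented).
train_sents = [["Aspirin", "was", "given", "in", "London"],
               ["Paracetamol", "reduced", "fever"]]
train_tags = [["B-DRUG", "O", "O", "O", "B-LOC"],
              ["B-DRUG", "O", "O"]]

# Component 2: train a CRF model on the generated features.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(featurize(train_sents), train_tags)

# Component 3: tag unlabelled data with the trained model.
print(crf.predict(featurize([["Ibuprofen", "was", "given", "in", "Paris"]])))
```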
d2660018
There are conflicting views in the literature as to the role of listener-adaptive processes in language production in general and articulatory reduction in particular. We present two novel pieces of corpus evidence that corroborate the hypothesis that non-lexical variation of durations is related to the speed of retrieval of stored motor code chunks and durational reduction is the result of facilitatory priming.
On the durational reduction of repeated mentions: recency and speaker effects
d18957362
This paper describes the evaluation methodology used to evaluate the TC-STAR speech-to-speech translation (SST) system and the results from the third year of the project. It follows on from the results presented for the first end-to-end evaluation of the project. In this paper, we experiment with the methodology and the protocol during the second end-to-end evaluation, by comparing outputs from the TC-STAR system with those of interpreters from the European Parliament. For this purpose, we test different criteria of evaluation and types of questions within a comprehension test. The results reveal that interpreters do not translate all the information (as opposed to the automatic system), but the quality of SST is still far from that of human translation. The experimental comprehension test used provides new information for studying the quality of automatic systems, but without settling the issue of which protocol is best. This depends on what the evaluator wants to know about the SST: either to have a subjective end-user evaluation or a more objective one.
An Experimental Methodology for an End-to-End Evaluation in Speech-to-Speech Translation
d2983741
This demonstration will motivate some of the significant properties of the Galaxy Communicator Software Infrastructure and show how they support the goals of the DARPA Communicator program.
Exploring Speech-Enabled Dialogue with the Galaxy Communicator Infrastructure
d10560403
Coxhead's (2000) Academic Word List (AWL) has been frequently used in EAP classrooms and re-examined in light of various domain-specific corpora. Although well-received, the AWL has been criticized for ignoring the fact that words tend to show irregular distributions and be used in different ways across disciplines (Hyland and Tse, 2007). One such difference concerns collocations. Academic words (e.g. analyze) often co-occur with different words across domains and carry different meanings. What EAP students need is a "discipline-based lexical repertoire" (p. 235). Inspired by Hyland and Tse, we develop an online corpus-based tool, TechCollo, which is meant for EAP students to explore collocations in one domain or compare collocations across disciplines. It runs on textual data from six specialized corpora and utilizes frequency, traditional mutual information, and normalized MI (Wible et al., 2004) as measures to decide whether co-occurring word pairs constitute collocations. In this article we describe the currently released version of TechCollo and how to use it in EAP studies. Additionally, we discuss a pilot study in which we used TechCollo to investigate whether the AWL words take different collocates in different domain-specific corpora. This pilot broadly confirmed Hyland and Tse's findings and demonstrates that many AWL words show uneven distributions and collocational differences across domains.
A Corpus-Based Tool for Exploring Domain-Specific Collocations in English
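A sketch of the association measures mentioned in the entry above: pointwise mutual information and a normalized PMI variant (PMI divided by -log of the joint probability). Whether this normalization matches the normalized MI of Wible et al. (2004) is not claimed here; the counts are toy values standing in for counts from a domain-specific corpus.

```python
"""PMI and normalized PMI for deciding whether a word pair is a collocation (toy counts)."""
from math import log2

def pmi(pair_count: int, w1_count: int, w2_count: int, total: int) -> float:
    """Pointwise mutual information from raw counts."""
    p_xy = pair_count / total
    p_x, p_y = w1_count / total, w2_count / total
    return log2(p_xy / (p_x * p_y))

def npmi(pair_count: int, w1_count: int, w2_count: int, total: int) -> float:
    """PMI normalized by -log2 of the joint probability (one common variant)."""
    p_xy = pair_count / total
    return pmi(pair_count, w1_count, w2_count, total) / -log2(p_xy)

if __name__ == "__main__":
    # e.g. the pair "analyze data" in an (invented) engineering corpus of 1M bigrams:
    print(pmi(150, 2000, 9000, 1_000_000), npmi(150, 2000, 9000, 1_000_000))
    # The same node word may prefer different collocates in another domain,
    # which is exactly the comparison TechCollo is meant to support.
```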
d5220140
Mining opinion targets is a fundamental and important task for opinion mining from online reviews. To this end, there are usually two kinds of methods: syntax-based and alignment-based methods. Syntax-based methods usually exploit syntactic patterns to extract opinion targets, but are prone to parsing errors when dealing with informal online texts. In contrast, alignment-based methods use a word alignment model to fulfill this task, which avoids parsing errors because no parsing is required. However, no research has focused on which kind of method performs better given a certain amount of reviews. To fill this gap, this paper empirically studies how the performance of these two kinds of methods varies when changing the size, domain and language of the corpus. We further combine syntactic patterns with the alignment model by using a partially supervised framework and investigate whether this combination is useful. In our experiments, we verify that the combination is effective on corpora of small and medium size.
Syntactic Patterns versus Word Alignment: Extracting Opinion Targets from Online Reviews
d248780043
Cross-domain sentiment analysis has achieved promising results with the help of pre-trained language models. With the emergence of GPT-3, prompt tuning has been widely explored to enable better semantic modeling in many natural language processing tasks. However, directly using a fixed predefined template for cross-domain research cannot model the different distributions of the [MASK] token in different domains, thus underusing the prompt tuning technique. In this paper, we propose a novel Adversarial Soft Prompt Tuning method (AdSPT) to better model cross-domain sentiment analysis. On the one hand, AdSPT adopts separate soft prompts instead of hard templates to learn different vectors for different domains, thus alleviating the domain discrepancy of the [MASK] token in the masked language modeling task. On the other hand, AdSPT uses a novel domain adversarial training strategy to learn domain-invariant representations between each source domain and the target domain. Experiments on a publicly available sentiment analysis dataset show that our model achieves new state-of-the-art results for both single-source domain adaptation and multi-source domain adaptation.
Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis
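A schematic PyTorch sketch of the two ingredients named above: learnable soft prompt vectors prepended to the input embeddings in place of a hard template, and a domain classifier trained through a gradient-reversal layer so the shared representation becomes domain-invariant. This is a generic illustration, not the AdSPT architecture; the tiny GRU encoder, dimensions, and data are invented.

```python
"""Generic soft prompt + adversarial (gradient reversal) domain training sketch."""
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated (scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class SoftPromptModel(nn.Module):
    def __init__(self, vocab=1000, dim=64, prompt_len=5, n_domains=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)  # learnable soft prompt
        self.encoder = nn.GRU(dim, dim, batch_first=True)                # stand-in encoder
        self.sentiment_head = nn.Linear(dim, 2)
        self.domain_head = nn.Linear(dim, n_domains)

    def forward(self, token_ids, lam=1.0):
        emb = self.embed(token_ids)                                      # (B, T, D)
        prompt = self.prompt.unsqueeze(0).expand(emb.size(0), -1, -1)    # (B, P, D)
        _, h = self.encoder(torch.cat([prompt, emb], dim=1))
        h = h[-1]                                                        # (B, D)
        sentiment = self.sentiment_head(h)
        domain = self.domain_head(GradReverse.apply(h, lam))             # adversarial branch
        return sentiment, domain

model = SoftPromptModel()
tokens = torch.randint(0, 1000, (4, 12))
sent_logits, dom_logits = model(tokens)
print(sent_logits.shape, dom_logits.shape)
```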
d7255624
An important part of question answering is ensuring a candidate answer is plausible as a response. We present a flexible approach based on discriminative preference ranking to determine which of a set of candidate answers are appropriate. Discriminative methods provide superior performance while at the same time allow the flexibility of adding new and diverse features. Experimental results on a set of focused What ...? and Which ...? questions show that our learned preference ranking methods perform better than alternative solutions to the task of answer typing. A gain of almost 0.2 in MRR for both the first appropriate and first correct answers is observed along with an increase in precision over the entire range of recall.
Flexible Answer Typing with Discriminative Preference Ranking
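A minimal sketch of discriminative preference ranking as described in the entry above: candidate answers are turned into feature vectors, preferred vs. dispreferred candidates are paired, a linear classifier is trained on feature-vector differences, and at test time candidates are ranked by the learned weight vector. The features and data are invented placeholders, not the paper's.

```python
"""Pairwise preference ranking with a linear SVM (toy features)."""
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_examples(groups):
    """groups: list of (preferred_vectors, dispreferred_vectors) per question."""
    X, y = [], []
    for good, bad in groups:
        for g in good:
            for b in bad:
                X.append(g - b); y.append(1)
                X.append(b - g); y.append(0)
    return np.array(X), np.array(y)

# Two toy questions, 3 features per candidate (e.g. type match, frequency, overlap).
groups = [
    ([np.array([1.0, 0.4, 0.7])], [np.array([0.0, 0.9, 0.1]), np.array([0.0, 0.2, 0.3])]),
    ([np.array([1.0, 0.1, 0.5])], [np.array([0.0, 0.8, 0.6])]),
]
X, y = pairwise_examples(groups)
ranker = LinearSVC().fit(X, y)

candidates = np.array([[0.0, 0.7, 0.2], [1.0, 0.3, 0.6]])
scores = candidates @ ranker.coef_.ravel()          # rank by learned weights
print(scores.argsort()[::-1])                        # best candidate first
```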
d15505354
The definition of lexical semantic similarity measures has been the subject of much work for many years. In this article, we focus more specifically on distributional semantic similarity measures. Although several evaluations of this kind of measure have already been carried out to determine whether they actually capture semantic relatedness, it is still difficult to determine whether a measure that performs well in an evaluation framework can be applied more widely with the same success. In the work we present here, we first select a semantic similarity measure by testing a large set of such measures against the WordNet-based Synonymy Test, an extended TOEFL test proposed in (Freitag et al., 2005), and we show that its accuracy is comparable to the accuracy of the best state-of-the-art measures while it has less demanding requirements. Then, we apply this measure to extracting synonyms automatically from a corpus and we evaluate the relevance of this process against two reference resources, WordNet and the Moby thesaurus. Finally, we compare our results in detail to those of (Curran and Moens, 2002).
Testing semantic similarity measures for extracting synonyms from a corpus
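A sketch of the two evaluation settings in the entry above: a TOEFL-style synonymy item answered by cosine similarity between distributional (co-occurrence) vectors, and synonym extraction by ranking a word's nearest neighbours. The vectors below are tiny invented co-occurrence counts; the paper's actual similarity measure is not reproduced.

```python
"""Distributional similarity: TOEFL-style items and nearest-neighbour synonym extraction."""
import numpy as np

# Toy co-occurrence vectors over 4 context words (invented counts).
vectors = {
    "quick":  np.array([9.0, 1.0, 4.0, 0.0]),
    "fast":   np.array([8.0, 2.0, 5.0, 0.0]),
    "slow":   np.array([1.0, 7.0, 0.0, 3.0]),
    "purple": np.array([0.0, 1.0, 0.0, 9.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer_synonymy_item(target: str, choices: list[str]) -> str:
    """Pick the choice most similar to the target (TOEFL/WBST-style item)."""
    return max(choices, key=lambda c: cosine(vectors[target], vectors[c]))

def nearest_neighbours(target: str, k: int = 2) -> list[str]:
    """Rank the rest of the vocabulary by similarity (synonym extraction)."""
    others = [w for w in vectors if w != target]
    return sorted(others, key=lambda w: cosine(vectors[target], vectors[w]),
                  reverse=True)[:k]

print(answer_synonymy_item("quick", ["fast", "slow", "purple"]))
print(nearest_neighbours("quick"))
```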
d220445356
An EMA Study of Lower Jaw Movements during Moroccan Arabic Consonants
d220060205
d239020526
MT adaptation from TMs in ModernMT
d5892397
The Counselor Project at the University of Massachusetts
d216914289
Extracting temporal relations between events and time expressions has many applications, such as constructing event timelines and time-related question answering. It is a challenging problem which requires syntactic and semantic information at sentence or discourse levels, which may be captured by deep language models such as BERT (Devlin et al., 2019). In this paper, we develop several variants of a BERT-based temporal dependency parser, and show that BERT significantly improves temporal dependency parsing (Zhang and Xue, 2018a). Source code and trained models will be made available at github.com.
Exploring Contextualized Neural Language Models for Temporal Dependency Parsing
d21689265
eRulemaking is a means for government agencies to directly reach citizens to solicit their opinions and experiences regarding newly proposed rules. The effort, however, is partly hampered by citizens' comments that lack reasoning and evidence, which are largely ignored since government agencies are unable to evaluate their validity and strength. We present the Cornell eRulemaking Corpus (CDCP), an argument mining corpus annotated with argumentative structure information capturing the evaluability of arguments. The corpus consists of 731 user comments on the Consumer Debt Collection Practices (CDCP) rule by the Consumer Financial Protection Bureau (CFPB); the resulting dataset contains 4931 elementary unit and 1221 support relation annotations. It is a resource for building argument mining systems that can not only extract arguments from unstructured text, but also identify what additional information is necessary for readers to understand and evaluate a given argument. Immediate applications include providing real-time feedback to commenters, specifying which types of support for which propositions can be added to construct better-formed arguments.
A Corpus of eRulemaking User Comments for Measuring Evaluability of Arguments
d201085
Identifying Word Correspondences in Parallel Texts
d21718123
It has been shown that in text-based communication, such as SMS and messenger applications, misinterpretation of a partner's emotions is quite common. In order to tackle this problem, we propose a new multilabel corpus named the Emotional Movie Transcript Corpus (EMTC). Unlike most of the existing emotion corpora that are collected from Twitter and use hashtag labels, our corpus includes conversations from movies with more than 2.1 million utterances, which are partly annotated by ourselves and independent annotators. Intuitively, conversations from movies are closer to real-life settings and emotionally richer. We believe that a corpus like EMTC will greatly benefit the development and evaluation of emotion analysis systems and improve their ability to express and interpret emotions in text-based communication.
EMTC: Multilabel Corpus in Movie Domain for Emotion Analysis in Conversational Text
d234345291
d7549275
Speech technology applications, such as speech recognition, speech synthesis, and speech dialog systems, often require corpora based on highly customized specifications. Existing corpora available to the community, such as TIMIT and other corpora distributed by LDC and ELDA, do not always meet the requirements of such applications. In such cases, the developers need to create their own corpora. The creation of a highly customized speech corpus, however, could be a very expensive and time-consuming task, especially for small organizations. It requires multidisciplinary expertise in linguistics, management and engineering as it involves subtasks such as the corpus design, human subject recruitment, recording, quality assurance, and in some cases, segmentation, transcription and annotation. This paper describes LDC's recent involvement in the creation of a low-cost yet highly-customized speech corpus for a commercial organization under a novel data creation and licensing model, which benefits both the particular data requester and the general linguistic data user community.
Low-cost Customized Speech Corpus Creation for Speech Technology Applications
d213884007
d9225825
Software requirements are commonly written in natural language, making them prone to ambiguity, incompleteness and inconsistency. By converting requirements to formal semantic representations, emerging problems can be detected at an early stage of the development process, thus reducing the number of ensuing errors and the development costs. In this paper, we treat the mapping from requirements to formal representations as a semantic parsing task. We describe a novel data set for this task that involves two contributions: first, we establish an ontology for formally representing requirements; and second, we introduce an iterative annotation scheme, in which formal representations are derived through step-wise refinements.
Software Requirements: A new Domain for Semantic Parsers
d18427149
We present an operational framework allowing one to express a large-scale Tree Adjoining Grammar (TAG) by using higher-level operational constraints on tree descriptions. These constraints, first meant to guarantee the well-formedness of the grammatical units, may also be viewed as a way to put model-theoretic syntax to work through an efficient offline grammatical compilation process. Our strategy preserves TAG formal properties, hence ensuring reasonable processing efficiency.
A constraint driven metagrammar
d9035655
In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications.
YAMAMA: Yet Another Multi-Dialect Arabic Morphological Analyzer
d8774560
This paper looks at transcribed data of patient-doctor consultations in an examination setting. The doctors are internationally qualified and enrolled in a bridging course in preparation for their Australian Medical Council examination. In this study, we attempt to ascertain whether there are measurable linguistic features of the consultations, and to investigate whether there is any relevant information about the communicative styles of the qualifying doctors that may predict satisfactory or non-satisfactory examination outcomes. We take a discourse analysis approach, where the core unit of analysis is a 'turn'. We approach this problem as a binary classification task and employ data mining methods to see whether applying them to richly annotated dialogues can produce a system with adequate predictive capacity.
Applying Discourse Analysis and Data Mining Methods to Spoken OSCE Assessments
d958094
An interactive Multilingual Access Gateway (iMAG) dedicated to a web site S (iMAG-S) is a good tool to make S accessible in many languages immediately and without editorial responsibility. Visitors of S, as well as paid or unpaid post-editors and moderators, contribute to the continuous and incremental improvement of the most important textual segments, and eventually of all of them. Pre-translations are produced by one or more free MT systems. Continuous use since 2008 on many web sites and for several access languages shows that a quality comparable to that of a first draft by junior professional translators is obtained in about 40% of the (human) time, sometimes less. There are two interesting side effects obtainable without any added cost: iMAGs can be used to produce high-quality parallel corpora and to set up a permanent task-based evaluation of multiple MT systems. We will demonstrate (1) multilingual access to a web site, with online post-editing of MT results "à la Google", (2) post-editing in "advanced mode", using SECTra_w as a back-end, enabling online comparison of MT systems, (3) task-oriented built-in evaluation (post-editing time), and (4) application to a large web site to get a trilingual parallel corpus where each segment has a reliability level and a quality score. KEYWORDS: Online post-editing, interactive multilingual access gateway, free MT evaluation
Demo of iMAG possibilities: MT--postediting, translation quality evaluation, parallel corpus production
d17310394
We have developed an online interface for running all the current state-of-the-art algorithms for WSD. This is motivated by the fact that exhaustive comparison of a new Word Sense Disambiguation (WSD) algorithm with existing state-of-the-art algorithms is a tedious task. This impediment is due to one of the following reasons: (1) the source code of the earlier approach is not available and there is a considerable overhead in implementing it or (2) the source code/binary is available but there is some overhead in using it due to system requirements, portability issues, customization issues and software dependencies. A simple tool which has no overhead for the user and has minimal system requirements would greatly benefit the researchers. Our system currently supports 3 languages, viz., English, Hindi and Marathi, and requires only a web-browser to run. To demonstrate the usability of our system, we compare the performance of current state-of-the-art algorithms on 3 publicly available datasets.
I Can Sense It: a comprehensive online system for WSD
d396930
For social media analysts or social scientists interested in better understanding an audience or demographic cohort, being able to group social media content by demographic characteristics is a useful mechanism to organise data. Social roles are one particular demographic characteristic, which includes work, recreational, community and familial roles. In our work, we look at the task of detecting social roles from English Twitter profiles. We create a new annotated dataset for this task. The dataset includes approximately 1,000 Twitter profiles annotated with social roles. We also describe a machine learning approach for detecting social roles from Twitter profiles, which can act as a strong baseline for this dataset. Finally, we release a set of word clusters obtained in an unsupervised manner from Twitter profiles. These clusters may be useful for other natural language processing tasks in social media.
Detecting Social Roles in Twitter
d21703463
The paper provides a cognitively motivated method for evaluating the inflectional complexity of a language, based on a sample of "raw" inflected word forms processed and learned by a recurrent self-organising neural network with fixed parameter setting. Training items contain no information about either morphological content or structure. This makes the proposed method independent of both meta-linguistic issues (e.g. format and expressive power of descriptive rules, manual or automated segmentation of input forms, number of inflectional classes etc.) and language-specific typological aspects (e.g. word-based, stem-based or template-based morphology). Results are illustrated by contrasting Arabic, English, German, Greek, Italian and Spanish.
Evaluating Inflectional Complexity Crosslinguistically: a Processing Perspective
d18270214
This paper briefly sketches new work in progress (i) developing task-based scenarios where human-robot teams collaboratively explore real-world environments in which the robot is immersed but the humans are not, (ii) extracting and constructing "multi-modal interval corpora" from dialog, video, and LIDAR messages that were recorded in ROS bagfiles during task sessions, and (iii) testing automated methods to identify, track, and align co-referent content both within and across modalities in these interval corpora. The pre-pilot study and its corpora provide a unique, empirical starting point for our longer-term research objective: characterizing the balance of explicitly shared and tacitly assumed information exchanged during effective teamwork.
d15388570
This paper describes a method to automatically create and maintain gazetteers for Named Entity Recognition (NER). This method extracts the necessary information from linguistic resources. Our approach is based on the analysis of on-line encyclopedia entries by using a noun hierarchy and optionally a PoS tagger. An important motivation is to reach a high level of language independence. This restricts the techniques that can be used but makes the method useful for languages with few resources. The evaluation carried out proves that this approach can be successfully used to build NER gazetteers for the location (F 78%) and person (F 68%) categories.
A proposal to automatically build and maintain gazetteers for Named Entity Recognition by using Wikipedia
d1221886
This paper describes a rule-based semantic parser that relies on a frame dataset (FrameNet) and a semantic network (WordNet) to identify semantic relations between words in open text, as well as shallow semantic features associated with concepts in the text. Parsing semantic structures allows semantic units and constituents to be accessed and processed in a more meaningful way than syntactic parsing, moving the automation of understanding natural language text to a higher level. Here, the category (cat) is defined as adjective, the type is descriptive, and the degree is base form. We also record the attr feature, which is derived from the attribute relation in WordNet, and links a descriptive adjective to the attribute (noun) it modifies, such as slow speed.
Open Text Semantic Parsing Using FrameNet and WordNet
d259376848
This paper describes the system of the ABCD team for three main tasks in SemEval-2023 Task 12: AfriSenti-SemEval for Low-resource African Languages using Twitter Dataset. We focus on exploring the performance of ensemble architectures based on the soft voting technique and different pre-trained transformer-based language models. The experimental results show that our system has achieved competitive performance in some tracks in Task A: Monolingual Sentiment Analysis, where we rank Top 3, Top 2, and Top 4 for the Hausa, Igbo and Moroccan languages. Besides, our model achieved competitive results and ranked 14th place in the Task B (multilingual) setting, and 14th and 8th place in Track 17 and Track 18 of the Task C (zero-shot) setting.
ABCD Team at SemEval-2023 Task 12: An Ensemble Transformer-based System for African Sentiment Analysis
d218974150
This paper introduces a new CLARIN Knowledge Centre, the K-Centre for Atypical Communication Expertise (ACE for short), which has been established at the Centre for Language and Speech Technology (CLST) at Radboud University. Atypical communication is an umbrella term used here to denote language use by second language learners, people with language disorders or those suffering from language disabilities, but also more broadly by bilinguals and users of sign languages. It involves multiple modalities (text, speech, sign, gesture) and encompasses different developmental stages. ACE closely collaborates with The Language Archive (TLA) at the Max Planck Institute for Psycholinguistics in order to safeguard GDPR-compliant data storage and access. We explain the mission of ACE and show its potential on a number of showcases and a use case. ACE will offer the following services through its website: information and guidelines about consent (forms), about hosting corpora and datasets containing atypical communication, and about where to find corpora and datasets containing atypical communication, as well as a helpdesk/consultancy for questions on these topics.
The CLARIN Knowledge Centre for Atypical Communication Expertise
d219310068
d14691060
Parser disambiguation with precision grammars generally takes place via statistical ranking of the parse yield of the grammar using a supervised parse selection model. In the standard process, the parse selection model is trained over a hand-disambiguated treebank, meaning that without a significant investment of effort to produce the treebank, parse selection is not possible. Furthermore, as treebanking is generally streamlined with parse selection models, creating the initial treebank without a model requires more resources than subsequent treebanks. In this work, we show that, by taking advantage of the constrained nature of these HPSG grammars, we can learn a discriminative parse selection model from raw text in a purely unsupervised fashion. This allows us to bootstrap the treebanking process and provide better parsers faster, and with less resources.
Unsupervised Parse Selection for HPSG
d14327001
We present the first freely available large German dataset for Textual Entailment (TE). Our dataset builds on posts from German online forums concerned with computer problems and models the task of identifying relevant posts for user queries (i.e., descriptions of their computer problems) through TE. We use a sequence of crowdsourcing tasks to create realistic problem descriptions through summarisation and paraphrasing of forum posts. The dataset is represented in RTE-5 Search task style and consists of 172 positive and over 2800 negative pairs. We analyse the properties of the created dataset and evaluate its difficulty by applying two TE algorithms and comparing the results with results on the English RTE-5 Search task. The results show that our dataset is roughly comparable to the RTE-5 data in terms of both difficulty and balancing of positive and negative entailment pairs. Our approach to create task-specific TE datasets can be transferred to other domains and languages.
A Search Task Dataset for German Textual Entailment
d7684835
As part of the STATEMENT MAP project, we are constructing a Japanese corpus annotated with the semantic relations bridging facts and opinions that are necessary for online information credibility evaluation. In this paper, we identify the semantic relations essential to this task and discuss how to efficiently collect valid examples from Web documents by splitting complex sentences into fundamental units of meaning called "statements" and annotating relations at the statement level. We present a statement annotation scheme and examine its reliability by annotating around 1,500 pairs of statements. We are preparing the corpus for release this winter.
Annotating Semantic Relations Combining Facts and Opinions
d210722256
d17522253
This paper tests speech recognition using prosody-dependent allophone models. The log likelihoods of various prosodically labeled phonemes are calculated using Baum-Welch re-estimation. These log likelihoods are then compared to the log likelihoods of non-prosodically labeled phonemes. Based on the comparison of these log likelihoods, it can be concluded that modeling all prosodic information directly in the vowel model leads to improvement in the model. Consonants, on the other hand, split naturally into three categories: strengthened, lengthened and neutral.
The Importance of Prosodic Factors in Phoneme Modeling with Applications to Speech Recognition
d424313
We introduce a supervised approach for extracting bio-molecular events by using linguistic features that represent the contexts of the candidate event triggers and participants. We use Support Vector Machines as our learning algorithm and train separate models for event types that are described with a single theme participant, multiple theme participants, or a theme and a cause participant. We perform experiments with linear kernel and edit-distance based kernel and report our results on the BioNLP'09 Shared Task test data set.
Supervised Classification for Extracting Biomedical Events
d220046446
We propose approaches to Quality Estimation (QE) for Machine Translation that explore both text and visual modalities for Multimodal QE. We compare various multimodality integration and fusion strategies. For both sentence-level and document-level predictions, we show that state-of-the-art neural and feature-based QE frameworks obtain better results when using the additional modality.
Multimodal Quality Estimation for Machine Translation
d220048080
The goal of knowledge graph embedding (KGE) is to learn low-dimensional vector representations for entities and relations based on the observed triples. Conventional shallow models are limited in their expressiveness. ConvE (Dettmers et al., 2018) takes advantage of CNNs and improves the expressive power with parameter-efficient operators by increasing the interactions between head and relation embeddings. However, there is no structural information in the embedding space of ConvE, and the performance is still limited by the number of interactions. The recent KBGAT (Nathani et al., 2019) provides another way to learn embeddings by adaptively utilizing structural information. In this paper, we take the benefits of ConvE and KBGAT together and propose a Relation-aware Inception network with joint local-global structural information for knowledge graph Embedding (ReInceptionE). Specifically, we first explore the Inception network to learn query embeddings, which aims to further increase the interactions between head and relation embeddings. Then, we propose to use a relation-aware attention mechanism to enrich the query embedding with local neighborhood and global entity information. Experimental results on both the WN18RR and FB15k-237 datasets demonstrate that ReInceptionE achieves competitive performance compared with state-of-the-art methods.
ReInceptionE: Relation-Aware Inception Network with Joint Local-Global Structural Information for Knowledge Graph Embedding
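A minimal PyTorch sketch of the Inception-style query-encoding idea described above, with assumptions: the embedding dimension, channel counts, branch kernel sizes, and the final projection are illustrative choices, not the published ReInceptionE architecture, and the relation-aware attention part is omitted.

```python
# Minimal PyTorch sketch in the spirit of an Inception-style query encoder for
# KGE (not the published ReInceptionE architecture): the head and relation
# embeddings are stacked into a 2-channel "image" and passed through parallel
# convolutions with different receptive fields to increase their interactions.
import torch
import torch.nn as nn

class InceptionQuery(nn.Module):
    def __init__(self, dim: int = 200, channels: int = 8):
        super().__init__()
        # Parallel branches with different kernel sizes (1x1, 3x3, 5x5).
        self.branch1 = nn.Conv2d(2, channels, kernel_size=1)
        self.branch3 = nn.Conv2d(2, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(2, channels, kernel_size=5, padding=2)
        self.proj = nn.Linear(3 * channels * dim, dim)

    def forward(self, head: torch.Tensor, rel: torch.Tensor) -> torch.Tensor:
        # head, rel: (batch, dim) -> stack into a (batch, 2, 1, dim) "image".
        x = torch.stack([head, rel], dim=1).unsqueeze(2)
        feats = torch.cat([torch.relu(b(x)) for b in (self.branch1, self.branch3, self.branch5)], dim=1)
        # The resulting query embedding would then be scored against entity embeddings.
        return self.proj(feats.flatten(start_dim=1))

query = InceptionQuery()(torch.randn(4, 200), torch.randn(4, 200))
print(query.shape)  # torch.Size([4, 200])
```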
d220048457
The key to effortless end-user programming is natural language. We examine how to teach intelligent systems new functions, expressed in natural language. As a first step, we collected 3168 samples of teaching efforts in plain English. Then we built fuSE, a novel system that translates English function descriptions into code. Our approach is three-tiered and each task is evaluated separately. We first classify whether an intent to teach new functionality is present in the utterance (accuracy: 97.7% using BERT). Then we analyze the linguistic structure and construct a semantic model (accuracy: 97.6% using a BiLSTM). Finally, we synthesize the signature of the method, map the intermediate steps (instructions in the method body) to API calls and inject control structures (F1: 67.0% with information retrieval and knowledge-based methods). In an end-to-end evaluation on an unseen dataset, fuSE synthesized 84.6% of the method signatures and 79.2% of the API calls correctly.
Programming in Natural Language with fuSE: Synthesizing Methods from Spoken Utterances Using Deep Natural Language Understanding
d6087019
Conflicts in online epistemic communities can be a blocking factor when producing knowledge. We present a study on the automatic detection of conflict in discussions between Wikipedia contributors, based on surface cues such as the subjectivity and connotation of utterances, and evaluate two decision rules: a local rule, derived from a dialectical model, that exploits the linear structure of the discussion together with subjectivity and connotation marks, and a global rule that relies only on thread length and subjectivity marks, without connotation marks. We show that the two rules produce similar results, but that the simplicity of the global rule makes it the preferred approach for detecting conflicts. KEYWORDS: Wikipedia, conflict, syntax, semantics, interaction.
Détection de conflits dans les communautés épistémiques en ligne
d6183518
We report in this paper a way of doing Word Sense Disambiguation (WSD) that has its origin in multilingual MT and that is cognizant of the fact that parallel corpora, wordnets and sense-annotated corpora are scarce resources. With respect to these resources, languages show different levels of readiness; however, a more resource-fortunate language can help a less resource-fortunate one. Our WSD method can be applied to a language even when no sense-tagged corpus for that language is available. This is achieved by projecting wordnet and corpus parameters from another language to the language in question. The approach is centered around a novel synset-based multilingual dictionary and the empirical observation that, within a domain, the distribution of senses remains more or less invariant across languages. The effectiveness of our approach is verified by doing parameter projection and then running two different WSD algorithms. Accuracy values of approximately 75% (F1-score) for three languages in two different domains establish that, within a domain, it is possible to circumvent the scarcity of resources by projecting parameters like sense distributions, corpus co-occurrences, conceptual distance, etc. from one language to another.
Projecting Parameters for Multilingual Word Sense Disambiguation
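A minimal, deliberately simplified sketch of the sense-scoring idea described above, with assumptions: the sense inventory, projected priors, co-occurrence strengths, and the exact combination of prior and co-occurrence evidence are hypothetical toy values, much simpler than the paper's actual scoring function.

```python
# Minimal, simplified sketch of WSD with projected parameters (toy data, not the
# paper's formula): sense priors and co-occurrence evidence estimated in a
# resource-rich language are projected via aligned synsets to the target
# language, then reused to pick the best sense in context.

# Hypothetical projected sense prior for the target-language word "bank" in a domain.
projected_prior = {"bank.financial": 0.6, "bank.river": 0.4}

# Hypothetical projected co-occurrence strengths between senses and context words.
projected_cooc = {
    ("bank.financial", "loan"): 3.0, ("bank.financial", "water"): 0.2,
    ("bank.river", "loan"): 0.1,     ("bank.river", "water"): 2.5,
}

def disambiguate(word_senses, context_words):
    def score(sense):
        cooc = sum(projected_cooc.get((sense, w), 0.0) for w in context_words)
        return projected_prior[sense] * (1.0 + cooc)
    return max(word_senses, key=score)

print(disambiguate(["bank.financial", "bank.river"], ["water", "fish"]))  # bank.river
```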
d6077317
This paper describes a partial taxonomy of control structures for actions in procedural texts. On the basis of the taxonomy, we examine natural language expressions for control structures in Japanese procedural texts and present PT (Procedural Text) -chart which represents the structure of a procedural text.
CONTROL STRUCTURES FOR ACTIONS IN PROCEDURAL TEXTS AND PT-CHART
d18064933
As part of a project to develop a Japanese-English machine translation system for technical texts within a limited domain, we conducted a study to investigate the roles that sublanguage techniques (Harris, 1968) and operator-argument grammar (Harris, 1982) would play in the analysis and transfer stages of the system. The data consisted of fifty sentences from the Japanese and English versions of the FOCUS Query Language Primer, which were decomposed into elementary sentence patterns. A total of 187 pattern instances were found for Japanese and 191 for English. When the elements of these elementary sentences were classified and compared with their counterparts in the other language, we identified 43 word classes in Japanese and 43 corresponding English word classes. These word classes formed 32 sublanguage patterns in each language, 29 of which corresponded to patterns in the other language. This paper examines in detail these correspondences as well as the mismatches between sublanguage patterns in Japanese and English. The high level of agreement found between sublanguage categories and patterns in Japanese and English suggests that these categories and patterns can facilitate analysis and transfer. Moreover, the use of operator-argument grammar, which incorporates operator trees as an intermediate representation, substantially reduces the amount of structural transfer needed in the system. A pilot implementation is underway.
A COMPARATIVE STUDY OF JAPANESE AND ENGLISH SUBLANGUAGE PATTERNS
d8162001
We identify and validate from a large corpus constraints from conjunctions on the positive or negative semantic orientation of the conjoined adjectives. A log-linear regression model uses these constraints to predict whether conjoined adjectives are of same or different orientations, achieving 82% accuracy in this task when each conjunction is considered independently. Combining the constraints across many adjectives, a clustering algorithm separates the adjectives into groups of different orientations, and finally, adjectives are labeled positive or negative. Evaluations on real data and simulation experiments indicate high levels of performance: classification precision is more than 90% for adjectives that occur in a modest number of conjunctions in the corpus.
Predicting the Semantic Orientation of Adjectives
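A minimal sketch of the two-stage idea described above, under stated assumptions: the conjunction features, the toy adjective graph, and the use of spectral clustering as the grouping step are illustrative substitutes for the paper's actual log-linear features and clustering algorithm.

```python
# Minimal sketch in the spirit of the two-stage approach (not the paper's exact
# features or clustering method): (1) a log-linear classifier predicts whether
# two conjoined adjectives share orientation; (2) adjectives are split into two
# orientation groups from the predicted same/different links. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import SpectralClustering

# One row per observed conjunction: [is_and, is_but, same_morphological_root]
X = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [1, 0, 1], [0, 1, 0]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = same orientation, 0 = different

clf = LogisticRegression().fit(X, y)
print("P(same | 'but' conjunction):", round(clf.predict_proba([[0, 1, 0]])[0, 1], 2))

# Toy graph over four adjectives with edge weights = predicted P(same orientation).
adjectives = ["helpful", "corrupt", "fair", "brutal"]
p_same = np.array([[1.0, 0.1, 0.9, 0.2],
                   [0.1, 1.0, 0.2, 0.8],
                   [0.9, 0.2, 1.0, 0.1],
                   [0.2, 0.8, 0.1, 1.0]])
groups = SpectralClustering(n_clusters=2, affinity="precomputed", random_state=0).fit_predict(p_same)
for adj, g in zip(adjectives, groups):
    print(adj, "-> group", g)   # one group would then be labeled positive, the other negative
```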
d8535026
A system for object-oriented dialogue in Swedish
d258463952
This quantitative study analyzed the levels of and the relationship between the second language (L2) grit and intrinsic reading motivation of pre-service teachers majoring in English language education (N=128) and elementary education (N=108) from two universities in Central Mindanao, Philippines. Using a quantitative correlational research design and a cross-sectional survey method, the randomly selected respondents answered the L2 Grit scale and Intrinsic Reading Motivation Scale which both had good internal consistency. The results from the descriptive statistics showed that both groups had high levels of L2 grit and intrinsic reading motivation. Moreover, these variables had a significant positive correlation based on the Pearson product-moment correlation analyses. This means that when the level of students' grit in learning the second language increases, their motivation to read English texts also increases, and vice-versa. Such also indicates that strengthening the intrinsic reading motivation of the learners will most likely encourage the development of their L2 grit. As a non-cognitive concept, grit assists students in accomplishing the long-term goals they have. Hence, pedagogical implications and recommendations for future study are presented.
The Relationship between L2 Grit and Intrinsic Reading Motivation of Filipino Pre-service Teachers in Central Mindanao
d12348021
Answering questions that ask about temporal information involves several forms of inference. In order to develop question answering capabilities that benefit from temporal inference, we believe that a large corpus of questions and answers that are discovered based on temporal information should be available. This paper describes our methodology for creating AnswerTime-Bank, a large corpus of questions and answers on which Question Answering systems can operate using complex temporal inference.
An Answer Bank for Temporal Inference
d256461015
This paper describes our submission to the WMT2022 shared metrics task. Our unsupervised metric estimates translation quality at the chunk level and the sentence level. Source and target sentence chunks are retrieved by using a multilingual chunker. Chunk-level similarity is computed by leveraging BERT contextual word embeddings, and sentence similarity scores are calculated by leveraging sentence embeddings of Language-Agnostic BERT models. The final quality estimation score is obtained by mean pooling the chunk-level and sentence-level similarity scores. This paper outlines our experiments and also reports the correlation with human judgements for the en-de, en-ru and zh-en language pairs of the WMT17, WMT18 and WMT19 test sets. Our submission will be made available at https://github.com/AnanyaCoder/WMT22Submission_REUSE
REUSE: REference-free UnSupervised quality Estimation Metric
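A minimal sketch of the pooled-similarity computation described above, with assumptions: the sentence-transformers package and its LaBSE checkpoint are used for both levels, the "chunks" are hand-picked strings standing in for the multilingual chunker, and chunk similarity is computed with sentence embeddings rather than BERT contextual word embeddings as in the submission.

```python
# Minimal sketch of the score computation (not the actual submission): embed the
# source and MT output for a sentence-level similarity, embed hand-picked
# "chunks" as a stand-in for the chunker + contextual word embeddings, then
# mean-pool the two levels. Assumes the sentence-transformers LaBSE checkpoint.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

src = "Der Vertrag wurde gestern unterzeichnet."
hyp = "The contract was signed yesterday."
src_chunks = ["Der Vertrag", "wurde gestern unterzeichnet"]   # hypothetical chunking
hyp_chunks = ["The contract", "was signed yesterday"]

sent_sim = util.cos_sim(model.encode(src), model.encode(hyp)).item()
chunk_sims = [util.cos_sim(model.encode(a), model.encode(b)).item()
              for a, b in zip(src_chunks, hyp_chunks)]
chunk_sim = sum(chunk_sims) / len(chunk_sims)

quality = (sent_sim + chunk_sim) / 2.0   # mean pooling of the two levels
print(round(quality, 3))
```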
d256461250
This paper describes the SPECTRANS submission for the WMT 2022 biomedical shared task. We present the results of our experiments using the training corpora and the JoeyNMT (Kreutzer et al., 2019) and SYSTRAN Pure Neural Server/ Advanced Model Studio toolkits for the language directions English to French and French to English. We compare the predictions of the different toolkits. We also use JoeyNMT to fine-tune the model with a selection of texts from WMT, Khresmoi and UFAL data sets. We report our results and assess the respective merits of the different translated texts.
The SPECTRANS System Description for the WMT22 Biomedical Task
d171835271
Reformulations participate in the structuring of discourse, especially in dialogues, and also contribute to its dynamics. Reformulation is a significant act that pursues precise objectives. The purpose of our work is to automatically predict the reason for which a speaker performs a reformulation. We use a classification of eleven pragmatic functions inspired by existing work and by the data analyzed. The reference data are built through manual, consensual annotation of spontaneous reformulations introduced by three markers (c'est-à-dire, je veux dire, disons). The data come from a spoken corpus and a corpus of discussions on health forums. We exploit supervised categorization algorithms and a set of descriptors (syntactic, formal, semantic and discursive) to predict the reformulation categories. The distribution of utterances and sentences across categories is not homogeneous. The experiments are positioned at two levels: general and specific. Our results indicate that it is easier to predict the types of functions at the general level (average F-measure around 0.80) than at the level of individual categories (average F-measure around 0.40). We also study the influence of various parameters. KEYWORDS: reformulation, machine learning, paraphrase, classification, pragmatic function.
Prédiction automatique de fonctions pragmatiques dans les reformulations
d14903169
This paper presents an adaptable online Multilingual Discourse Processing System (MultiDPS), composed of four natural language processing tools: a named entity recognizer, an anaphora resolver, a clause splitter and a discourse parser. This NLP meta-system allows any user to run it on the web or via web services and, if necessary, to build their own processing chain by incorporating knowledge or resources for each tool for the desired language. We give a brief description of each independent module and present a case study in which the system is adapted to five different languages to create a multilingual summarization system.
MultiDPS -A multilingual Discourse Processing System
d218974534
d220058855
Because open-domain dialogues allow diverse responses, basic reference-based metrics such as BLEU do not work well unless we prepare a massive reference set of high-quality responses for input utterances. To reduce this burden, a human-aided, uncertainty-aware metric, ∆BLEU, has been proposed; it embeds human judgment on the quality of reference outputs into the computation of multiple-reference BLEU. In this study, we instead propose a fully automatic, uncertainty-aware evaluation method for open-domain dialogue systems, υBLEU. This method first collects diverse reference responses from massive dialogue data and then annotates their quality judgments by using a neural network trained on automatically collected training data. Experimental results on massive Twitter data confirmed that υBLEU is comparable to ∆BLEU in terms of its correlation with human judgment and that the state-of-the-art automatic evaluation method, RUBER, is improved by integrating υBLEU.
υBLEU: Uncertainty-Aware Automatic Evaluation Method for Open-Domain Dialogue Systems
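A minimal sketch of the ∆BLEU-style weighting that υBLEU automates, under stated assumptions: the reference quality weights are made-up numbers standing in for the neural scorer's predictions, and the metric is reduced to unigram and bigram precisions without clipping or brevity penalty, so it is an illustration rather than full BLEU.

```python
# Minimal sketch of uncertainty-weighted multi-reference matching (simplified,
# not full BLEU): each retrieved reference carries a quality weight (in υBLEU,
# predicted by a neural scorer; here invented), and a hypothesis n-gram is
# credited with the best weight among references containing it.
from collections import Counter
import math

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def weighted_ngram_precision(hyp, refs, n):
    hyp_counts = ngrams(hyp, n)
    total = sum(hyp_counts.values()) or 1
    credit = 0.0
    for gram, count in hyp_counts.items():
        best = max((w for ref, w in refs if gram in ngrams(ref, n)), default=0.0)
        credit += count * best
    return credit / total

hypothesis = "that sounds like a great idea".split()
references = [("that is a great idea".split(), 0.9),    # (reference, quality weight)
              ("sounds good to me".split(), 0.6),
              ("i completely disagree".split(), 0.1)]

score = math.exp(sum(math.log(max(weighted_ngram_precision(hypothesis, references, n), 1e-9))
                     for n in (1, 2)) / 2)
print(round(score, 3))
```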
d4940347
In this paper we define two intermediate models of textual entailment, which correspond to lexical and lexical-syntactic levels of representation. We manually annotated a sample from the RTE dataset according to each model, compared the outcome for the two models, and explored how well they approximate the notion of entailment. We show that the lexical-syntactic model outperforms the lexical model, mainly due to a much lower rate of false positives, but both models fail to achieve high recall. Our analysis also shows that paraphrases stand out as a dominant contributor to the entailment task. We suggest that our models and annotation methods can serve as an evaluation scheme for entailment at these levels.
Definition and Analysis of Intermediate Entailment Levels
d236477710
d191744541
d218974405
d40620337
In this paper, we focus on errors in the textual content of XML documents. We propose an approach to reduce the impact of these errors on Information Retrieval (IR) systems. These systems rely on indexes associating each document with the terms it contains, and index quality is negatively affected by misspellings, which for instance cause poorly indexed documents to be wrongly considered irrelevant (or relevant) to certain queries. To deal with this problem, we propose to include an error-correction mechanism during the indexing phase of documents. We implemented this spelling-aware information retrieval approach in a prototype, which is being evaluated on the INEX evaluation campaign document collection. KEYWORDS: information retrieval, misspellings, error correction, XML.
Une approche de recherche d'information structurée fondée sur la correction d'erreurs à l'indexation des documents
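A minimal sketch of indexing-time error correction in the spirit of the approach above, with assumptions: a tiny illustrative lexicon, a similarity cutoff, difflib-based fuzzy matching, and a plain in-memory inverted index, none of which correspond to the prototype evaluated at INEX.

```python
# Minimal sketch of indexing-time error correction (not the INEX prototype):
# before a token enters the inverted index, it is replaced by the closest
# lexicon entry if that entry is similar enough. Lexicon, cutoff and index
# layout are illustrative only.
from collections import defaultdict
import difflib

LEXICON = ["information", "retrieval", "structured", "document", "index"]

def correct(token: str, cutoff: float = 0.8) -> str:
    match = difflib.get_close_matches(token, LEXICON, n=1, cutoff=cutoff)
    return match[0] if match else token

def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[correct(token)].add(doc_id)   # corrected form is what gets indexed
    return index

docs = {1: "structered infromation retrival", 2: "document index structure"}
index = build_index(docs)
print(sorted(index["retrieval"]), sorted(index["information"]))
```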
d5144216
This paper describes the NICT statistical machine translation (SMT) system used for the WMT 2009 Shared Task (WMT09) evaluation. We participated in the Spanish-English translation task. The focus of this year's participation was to investigate model adaptation and transliteration techniques in order to improve the translation quality of the baseline phrasebased SMT system.
NICT@WMT09: Model Adaptation and Transliteration for Spanish-English SMT
d250390575
Misogynistic memes are rampant on social media, and often convey their messages using multimodal signals (e.g., images paired with derogatory text or captions). However, to date very few multimodal systems have been leveraged for the detection of misogynistic memes. Recently, researchers have turned to contrastive learning solutions for a variety of problems. Most notably, OpenAI's CLIP model has served as an innovative solution for a variety of multimodal tasks. In this work, we experiment with contrastive learning to address the detection of misogynistic memes within the context of SemEval-2022 Task 5. Although our model does not achieve top results, these experiments provide important exploratory findings for this task. We conduct a detailed error analysis, revealing promising clues and offering a foundation for follow-up work.
UIC-NLP at SemEval-2022 Task 5: Exploring Contrastive Learning for Multimodal Detection of Misogynistic Memes
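A minimal sketch of one straightforward way to use CLIP for this task, under stated assumptions: the openai/clip-vit-base-patch32 checkpoint, frozen CLIP encoders, and a separately trained downstream classifier are my illustrative choices, not the team's contrastive-learning system.

```python
# Minimal sketch (not the team's system): obtain frozen CLIP image and text
# embeddings for a meme and its overlaid caption, concatenate them, and feed a
# small classifier trained separately on the shared-task labels.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def meme_features(image_path: str, caption: str) -> torch.Tensor:
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    return torch.cat([img_emb, txt_emb], dim=-1).squeeze(0)   # joint (image, text) feature vector

# feats = meme_features("meme.jpg", "overlaid caption text")   # hypothetical file
# A scikit-learn LogisticRegression (or similar) would then be fit on such
# feature vectors with the misogyny labels from the shared task.
```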
d221097366
d11296398
The normal practice of selecting relevant documents for training routing queries is to either use all relevants or the 'best n' of them after a (retrieval) ranking operation with respect to each query. Using all relevants can introduce noise and ambiguities in training because documents can be long with many irrelevant portions. Using only the 'best n' risks leaving out documents that do not resemble a query. Based on a method of segmenting documents into more uniform size subdocuments, a better approach is to use the top ranked subdocument of every relevant. An alternative selection strategy is based on document properties without ranking. We found experimentally that short relevant documents are the quality items for training. Beginning portions of longer relevants are also useful. Using both types provides a strategy that is effective and efficient.
Learning from Relevant Documents in Large Scale Routing Retrieval
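A minimal sketch of the "top-ranked subdocument per relevant document" selection described above, with assumptions: naive fixed-size word windows stand in for the paper's segmentation method, and TF-IDF cosine similarity stands in for its retrieval ranking.

```python
# Minimal sketch of the selection strategy (TF-IDF cosine as a stand-in ranking
# model, naive fixed-size segmentation): from every relevant document, keep only
# its top-ranked subdocument for training the routing query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def segment(doc: str, size: int = 20):
    words = doc.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)] or [doc]

query = "computer network security intrusion detection"
relevant_docs = [
    "the annual report covers revenue growth and also a section on network "
    "security incidents and intrusion detection systems deployed last year",
    "intrusion detection for computer networks using anomaly based methods",
]

selected = []
for doc in relevant_docs:
    subdocs = segment(doc)
    tfidf = TfidfVectorizer().fit(subdocs + [query])
    sims = cosine_similarity(tfidf.transform([query]), tfidf.transform(subdocs))[0]
    selected.append(subdocs[sims.argmax()])   # best subdocument of this relevant document

print(selected)
```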
d3518648
The Wall Street Journal corpora provided for the Workshop on Cross-Framework and Cross-Domain Parser Evaluation Shared Task are investigated in order to see how the structures that are difficult for an annotator of dependency structure are encoded in the different schemes. Non-trivial differences among the schemes are found. The paper also investigates the possibility of merging the information encoded in the different corpora.
Toward an Underspecifiable Corpus Annotation Scheme
d2433417
Recent work on Conditional Random Fields (CRFs) has demonstrated the need for regularisation to counter the tendency of these models to overfit. The standard approach to regularising CRFs involves a prior distribution over the model parameters, typically requiring search over a hyperparameter space. In this paper we address the overfitting problem from a different perspective, by factoring the CRF distribution into a weighted product of individual "expert" CRF distributions. We call this model a logarithmic opinion pool (LOP) of CRFs (LOP-CRFs). We apply the LOP-CRF to two sequencing tasks. Our results show that unregularised expert CRFs with an unregularised CRF under a LOP can outperform the unregularised CRF, and attain a performance level close to the regularised CRF. LOP-CRFs therefore provide a viable alternative to CRF regularisation without the need for hyperparameter search.
Logarithmic Opinion Pools for Conditional Random Fields
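A minimal numeric sketch of a logarithmic opinion pool as described above, with assumptions: the expert distributions and weights are toy values over three labels at a single position, whereas the paper pools full CRF sequence distributions; the pooled distribution is the renormalized, weighted geometric mean p_LOP(y|x) ∝ ∏_a p_a(y|x)^{w_a} with the weights summing to 1.

```python
# Minimal numeric sketch of a logarithmic opinion pool: a renormalized, weighted
# geometric mean of expert distributions. Toy distributions over three labels at
# one position; the paper pools whole CRF sequence distributions instead.
import numpy as np

experts = np.array([[0.7, 0.2, 0.1],    # expert CRF 1
                    [0.5, 0.3, 0.2],    # expert CRF 2
                    [0.2, 0.6, 0.2]])   # expert CRF 3
weights = np.array([0.5, 0.3, 0.2])     # per-expert weights, summing to 1

log_pool = weights @ np.log(experts)                          # weighted sum of log-probabilities
pooled = np.exp(log_pool - np.logaddexp.reduce(log_pool))     # renormalize in log space
print(pooled.round(3), pooled.sum())
```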
d61863536
Recent developments in statistical machine translation (SMT), e.g., the availability of efficient implementations of integrated open-source toolkits like Moses, have made it possible to build a prototype system with decent translation quality for any language pair in a few days or even hours. This is so in theory. In practice, doing so requires having a large set of parallel sentence-aligned bilingual texts (a bi-text) for that language pair, which is often unavailable. Large high-quality bi-texts are rare; except for Arabic, Chinese, and some official languages of the European Union (EU), most of the 6,500+ world languages remain resourcepoor from an SMT viewpoint. This number is even more striking if we consider language pairs instead of individual languages, e.g., while Arabic and Chinese are among the most resource-rich languages for SMT, the Arabic-Chinese language pair is quite resource-poor. Moreover, even resourcerich language pairs could be poor in bi-texts for a specific domain, e.g., biomedical text, conversa-
Reusing Parallel Corpora between Related Languages (invited talk)
d3913472
Toward a Rational Model of Discourse Comprehension
d251402038
In this paper, we present a new corpus of clickbait articles annotated by university students along with a corresponding shared task: clickbait articles use a headline or teaser that hides information from the reader to make them curious to open the article. We therefore propose to construct approaches that can automatically extract the relevant information from such an article, which we call clickbait resolving. We show why solving this task might be relevant for end users, and why clickbait can probably not be defeated with clickbait detection alone. Additionally, we argue that this task, although similar to question answering and some automatic summarization approaches, needs to be tackled with specialized models. We analyze the performance of some basic approaches on this task and show that models fine-tuned on our data can outperform general question answering models, while providing a systematic approach to evaluate the results. We hope that the data set and the task will help in giving users tools to counter clickbait in the future.
Know Better -A Clickbait Resolving Challenge
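A minimal sketch of the question-answering framing the paper compares against, under stated assumptions: the clickbait headline is used as the question and the article body as the context, and the deepset/roberta-base-squad2 checkpoint is an illustrative off-the-shelf choice, not the authors' fine-tuned model; the headline and article are invented.

```python
# Minimal sketch of clickbait resolving as extractive QA (an off-the-shelf
# baseline framing, not the authors' fine-tuned models): ask the headline as a
# question against the article body and return the extracted span.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

headline = "You won't believe what this common food does to your sleep"
article = ("A new study of 400 adults found that eating cheese within an hour "
           "of going to bed had no measurable effect on sleep quality, "
           "contradicting a widely shared myth.")

result = qa(question=headline, context=article)
print(result["answer"])   # the span proposed as the withheld information
```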
d14694982
To answer the question "What are the duties of a medical doctor?", one would require knowledge about verb-based relations. A lot of effort has been invested in developing relation learners; however, to our knowledge there is no repository (or system) that can return all verb relations for a given term. This paper describes an automated procedure which can learn and produce such information with minimal effort. To evaluate the performance of our verb harvesting procedure, we conducted two types of evaluations: (1) in the human-based evaluation we found that the accuracy of the described algorithm is .95 at rank 100; (2) in the comparative study with an existing relation learner and knowledge bases we found that our approach yields 12 times more verb relations.
Learning Verbs on the Fly