_id | text | title |
|---|---|---|
d13945560 | This paper describes the results of our experiments in building speaker-adaptive recognizers for talkers with spastic dysarthria. We study two modifications: (a) MAP adaptation of speaker-independent systems trained on normal speech, and (b) using a transition probability matrix that is a linear interpolation between fully ergodic and (exclusively) left-to-right structures, for both speaker-dependent and speaker-adapted systems (see the interpolation sketch below the table). The experiments indicate that (1) for speaker-dependent systems, left-to-right HMMs have a lower word error rate than transition-interpolated HMMs, (2) adapting all parameters other than transition probabilities results in the highest recognition accuracy compared to adapting any subset of these parameters or adapting all parameters including transition probabilities, (3) performing both transition interpolation and adaptation gives a higher word error rate than performing adaptation alone, and (4) dysarthria severity is not a sufficient indicator of the relative performance of speaker-dependent and speaker-adapted systems. | State-Transition Interpolation and MAP Adaptation for HMM-based Dysarthric Speech Recognition |
d61514482 | This paper describes an XML version of the original Princeton WordNet which keeps all of the original information but organizes it in a format more convenient for browsing and program access. These XML files were used to generate a set of HTML files that allow the synsets to be explored with an ordinary web browser. A Java demonstration application illustrates how easily the XML-encoded information can be accessed by other NLP applications. | 21ème Traitement Automatique des Langues Naturelles |
d202775978 | Sentence matching is a key issue in natural language inference and paraphrase identification. Despite recent progress on multi-layered neural networks with cross-sentence attention, one sentence attends to the intermediate representations of another sentence, which are propagated from preceding layers and are therefore uncertain and unstable for matching, particularly at the risk of error propagation. In this paper, we present an original semantics-oriented attention and deep fusion network (OSOA-DFN) for sentence matching. Unlike existing models, each attention layer of OSOA-DFN is oriented to the original semantic representation of the other sentence, which captures the relevant information from a fixed matching target. The multiple attention layers allow one sentence to repeatedly read the important information of the other sentence for better matching. We additionally design deep fusion to propagate the attention information at each matching layer. Finally, we introduce a self-attention mechanism to capture global context and enhance the attention-aware representation within each sentence. Experimental results on three sentence matching benchmark datasets, SNLI, SciTail and Quora, show that OSOA-DFN models sentence matching more precisely. | Original Semantics-Oriented Attention and Deep Fusion Network for Sentence Matching |
d202778086 | Based on massive amounts of data, recent pretrained contextual representation models have made significant strides in advancing a number of different English NLP tasks. However, for other languages, relevant training data may be lacking, while state-of-the-art deep learning methods are known to be data-hungry. In this paper, we present an elegantly simple, robust self-learning framework for including unlabeled non-English samples in the fine-tuning process of pretrained multilingual representation models. We leverage a multilingual model's own predictions on unlabeled non-English data to obtain additional information that can be used during further fine-tuning. Compared with the original multilingual models and other cross-lingual classification models, we observe significant gains in effectiveness on document and sentiment classification for a range of diverse languages. | A Robust Self-Learning Framework for Cross-Lingual Text Classification |
d18135256 | The present paper outlines the Vergina speech database, which was developed in support of research and development of corpus-based unit selection and statistical parametric speech synthesis systems for the Modern Greek language. In the following, we describe the design, development and implementation of the recording campaign, as well as the annotation of the database. Specifically, a text corpus of approximately 5 million words, collected from newspaper articles, periodicals, and paragraphs of literature, was processed in order to select the utterances needed for producing the speech database and to achieve reasonable phonetic coverage. The broad coverage and contents of the selected utterances, drawn from different domains and writing styles, make this database appropriate for various application domains. The database, recorded in an audio studio, consists of approximately 3,000 phonetically balanced Modern Greek utterances corresponding to approximately four hours of speech. Annotation of the Vergina speech database was performed using task-specific tools based on a hidden Markov model (HMM) segmentation method, followed by manual inspection and correction. | Vergina: A Modern Greek Speech Database for Speech Synthesis |
d42491242 | We discuss the basic ideas behind a Japanese-to-American Sign Language translation system that assists Japanese Deaf people in communicating. Our discussion covers two main points. The first describes the necessity of a sign language translation system. Since there is no "universal sign language" or real "international sign language," Deaf people who want to talk to people whose mother tongue is different from their own must learn at least three languages: their mother sign language, their mother spoken language as an intermediate language, and the sign language in which they want to communicate. The second describes the use of computers, especially the WWW, which is very popular today. As the use of computers becomes widespread, it is increasingly convenient to study through computer software or Internet facilities. Our translation system provides Deaf people with an easy means of access using their mother spoken language. It also provides a way for people who are going to learn American Sign Language to look up new vocabulary. We are further planning to examine how our system could be used to educate and assist Deaf people. | On the Web Communication Assist Aide based on the Bilingual Sign Language Dictionary |
d13283263 | We present an annotation study on a representative dataset of literal and idiomatic uses of infinitive-verb compounds in German newspaper and journal texts. Infinitive-verb compounds form a challenge for writers of German, because spelling regulations are different for literal and idiomatic uses. Through the participation of expert lexicographers we were able to obtain a high-quality corpus resource which is offered as a testbed for automatic idiomaticity detection and coarse-grained word-sense disambiguation. We trained a classifier on the corpus which was able to distinguish literal and idiomatic uses with an accuracy of 85%. | A Corpus of Literal and Idiomatic Uses of German Infinitive-Verb Compounds |
d14943035 | This paper describes our contribution to the SemEval-2015 Task 11 on sentiment analysis of figurative language in Twitter. We considered two approaches, classification and regression, to provide fine-grained sentiment scores for a set of tweets that are rich in sarcasm, irony and metaphor. To this end, we combined a variety of standard lexical and syntactic features with specific features for capturing figurative content. All experiments were done using supervised learning with LIBSVM. For both runs, our system ranked fourth among fifteen submissions. | LT3: Sentiment Analysis of Figurative Tweets: piece of cake #NotReally |
d227231652 | ||
d19384866 | ||
d5508859 | VerbNet (VN) is a major large-scale English verb lexicon. Mapping verb instances to their VN classes has been proven useful for several NLP tasks. However, verbs are polysemous with respect to their VN classes. We introduce a novel supervised learning model for mapping verb instances to VN classes, using rich syntactic features and class membership constraints. We evaluate the algorithm in both in-domain and corpus adaptation scenarios. In both cases, we use the manually tagged Semlink WSJ corpus as training data. For in-domain (testing on Semlink WSJ data), we achieve 95.9% accuracy, 35.1% error reduction (ER) over a strong baseline. For adaptation, we test on the GENIA corpus and achieve 72.4% accuracy with 10.7% ER. This is the first large-scale experimentation with automatic algorithms for this task. | A Supervised Algorithm for Verb Disambiguation into VerbNet Classes |
d42843788 | The lack of demographic information available when conducting passive analysis of social media content can make it difficult to compare results to traditional survey results. We present DEMOGRAPHER, a tool that predicts gender from names, using name lists and a classifier with simple character-level features. By relying only on a name, our tool can make predictions even without extensive user-authored content. We compare DEMOGRAPHER to other available tools and discuss differences in performance. In particular, we show that DEMOGRAPHER performs well on Twitter data, making it useful for simple and rapid social media demographic inference. | Demographer: Extremely Simple Name Demographics |
d15276369 | This paper describes a variety of nonparametric Bayesian models of word segmentation based on Adaptor Grammars that model different aspects of the input and incorporate different kinds of prior knowledge, and applies them to the Bantu language Sesotho. While we find overall word segmentation accuracies lower than these models achieve on English, we also find some interesting differences in which factors contribute to better word segmentation. Specifically, we found little improvement to word segmentation accuracy when we modeled contextual dependencies, while modeling morphological structure did improve segmentation accuracy. | Unsupervised word segmentation for Sesotho using Adaptor Grammars |
d7164861 | This paper describes the derivation of distributional semantic representations for open-class words relative to a concept inventory, and of concepts relative to open-class words, through grammatical relations extracted from Wikipedia articles. The concept inventory comes from WikiNet, a large-scale concept network derived from Wikipedia. The distinctive feature of these representations is their relation to a concept network, through which we can compute selectional preferences of open-class words relative to general concepts. The resource thus derived provides a meaning representation that complements the relational representation captured in the concept network. It covers English open-class words, but the concept base is language independent. The resource can be extended to other languages with the use of language-specific dependency parsers. Good results in metonymy resolution show the resource's potential use for NLP applications. | Concept-based Selectional Preferences and Distributional Representations from Wikipedia Articles |
d259376625 | Online Gender-Based Violence (GBV), such as misogynistic abuse, is an increasingly prevalent problem that technological approaches have struggled to address. Through the lens of the GBV framework, which is rooted in social science and policy, we systematically review 63 available resources for automated identification of such language. We find the datasets are limited in a number of important ways, such as their lack of theoretical grounding and stakeholder input, static nature, and focus on certain media platforms. Based on this review, we recommend development of future resources rooted in sociological expertise and centering stakeholder voices, namely GBV experts and people with lived experience of GBV. | Resources for Automated Identification of Online Gender-Based Violence: A Systematic Review |
d259376499 | Lay summarisation aims at generating a summary for a non-expert audience which allows them to keep up to date with the latest research in a specific field. Despite the significant advancements made in the field of text summarisation, lay summarisation remains relatively under-explored. We present a comprehensive set of experiments and analyses to investigate the effectiveness of existing pre-trained language models in generating lay summaries, focusing on the impact of two factors: model size and training data. When evaluating our models in the BioLaySumm Shared Task, our submission ranked second for the relevance criterion and third overall among 21 competing teams. | CSIRO Data61 Team at BioLaySumm Task 1: Lay Summarisation of Biomedical Research Articles Using Generative Models |
d236486219 | Supervised models can achieve very high accuracy for fine-grained text classification. In practice, however, training data may be abundant for some types but scarce or even nonexistent for others. We propose a hybrid architecture that uses as much labeled data as available for fine-tuning classification models, while also allowing for types with little (few-shot) or no (zero-shot) labeled data. In particular, we pair a supervised text classification model with a Natural Language Inference (NLI) reranking model. The NLI reranker uses a textual representation of target types that allows it to score the strength with which a type is implied by a text, without requiring training data for the types. Experiments show that the NLI model is very sensitive to the choice of textual representation, but can be effective for classifying unseen types. It can also improve classification accuracy for the known types of an already highly accurate supervised model. | IBM MNLP IE at CASE 2021 Task 2: NLI Reranking for Zero-Shot Text Classification |
d4644465 | This paper presents a corpus of annotated motion events and their event structure. We consider motion events triggered by a set of motion evoking words and contemplate both literal and figurative interpretations of them. Figurative motion events are extracted into the same event structure but are marked as figurative in the corpus. To represent the event structure of motion, we use the FrameNet annotation standard, which encodes motion in over 70 frames. In order to acquire a diverse set of texts that are different from FrameNet's, we crawled blog and news feeds for five different domains: sports, newswire, finance, military, and gossip. We then annotated these documents with an automatic FrameNet parser. Its output was manually corrected to account for missing and incorrect frames as well as missing and incorrect frame elements. The corpus, UTD-MOTIONEVENT, may act as a resource for semantic parsing, detection of figurative language, spatial reasoning, and other tasks. | A Linguistic Resource for Semantic Parsing of Motion Events |
d259833800 | Sentence embeddings induced with various transformer architectures encode much semantic and syntactic information in a distributed manner in a one-dimensional array. We investigate whether specific grammatical information can be accessed in these distributed representations. Using data from a task developed to test rule-like generalizations, our experiments on detecting subject-verb agreement yield several promising results. First, we show that while the usual sentence representations encoded as one-dimensional arrays do not easily support extraction of rule-like regularities, a two-dimensional reshaping of these vectors allows various learning architectures to access such information (see the reshaping sketch below the table). Next, we show that various architectures can detect patterns in these two-dimensional reshaped sentence embeddings and successfully learn a model based on smaller amounts of simpler training data, which performs well on more complex test data. This indicates that current sentence embeddings contain information that is regularly distributed, and which can be captured when the embeddings are reshaped into higher-dimensional arrays. Our results cast light on the representations produced by language models and help move towards developing few-shot learning approaches. | Grammatical information in BERT sentence embeddings as two-dimensional arrays |
d199379798 | ||
d7189212 | This paper describes general requirements for evaluating and documenting NLP tools, with a focus on morphological analysers and the design of a Gold Standard. It is argued that any evaluation must be measurable and that documentation thereof must be made accessible for any user of the tool. The documentation must be of a kind that enables the user to compare different tools offering the same service; hence the descriptions must contain measurable values. A Gold Standard is a vital part of any measurable evaluation process; therefore, the corpus-based design of a Gold Standard, its creation and the problems that occur are reported upon here. Our project concentrates on SMOR, a morphological analyser for German that is to be offered as a web service. We not only utilize this analyser for designing the Gold Standard, but also evaluate the tool itself at the same time. Note that the project is ongoing; therefore, we cannot yet present final results. | Design and application of a Gold Standard for morphological analysis: SMOR as an example of morphological evaluation |
d15295006 | | Single-Classifier Memory-Based Phrase Chunking |
d9270830 | Multimodal interfaces require effective parsing and understanding of utterances whose content is distributed across multiple input modes. Johnston 1998 presents an approach in which strategies for multimodal integration are stated declaratively using a unification-based grammar that is used by a multidimensional chart parser to compose inputs. This approach is highly expressive and supports a broad class of interfaces, but offers only limited potential for mutual compensation among the input modes, is subject to significant concerns in terms of computational complexity, and complicates selection among alternative multimodal interpretations of the input. In this paper, we present an alternative approach in which multimodal parsing and understanding are achieved using a weighted finite-state device which takes speech and gesture streams as inputs and outputs their joint interpretation. This approach is significantly more efficient, enables tight coupling of multimodal understanding with speech recognition, and provides a general probabilistic framework for multimodal ambiguity resolution. | Finite-state Multimodal Parsing and Understanding |
d7666863 | Clustering is an optimization procedure that partitions a set of elements to optimize some criteria, based on a fixed distance metric defined between the elements. Clustering approaches have been widely applied in natural language processing and it has been shown repeatedly that their success depends on defining a good distance metric, one that is appropriate for the task and the clustering algorithm used. This paper develops a framework in which clustering is viewed as a learning task, and proposes a way to train a distance metric that is appropriate for the chosen clustering algorithm in the context of the given task. Experiments in the context of the entity identification problem exhibit significant performance improvements over state-of-the-art clustering approaches developed for this problem. | Discriminative Training of Clustering Functions: Theory and Experiments with Entity Identification |
d25670345 | Recent advances in distributional semantics, combined with the availability of large-scale diachronic corpora, offer new research avenues for the Digital Humanities. JESEME, the Jena Semantic Explorer, helps a non-technical audience investigate questions of diachronic semantics. JESEME runs as a website with query options and interactive visualizations of results, as well as a REST API for access to the underlying diachronic data sets. | Exploring Diachronic Lexical Semantics with JESEME |
d245855871 | This paper describes LISN's submissions to two shared tasks at WMT'21. For the biomedical translation task, we have developed resource-heavy systems for the English-French language pair, using both out-of-domain and in-domain corpora. The target genre for this task (scientific abstracts) corresponds to texts that often have a standardized structure. Our systems attempt to take this structure into account using a hierarchical system of sentence-level tags. Translation systems were also prepared for the News task for the French-German language pair. The challenge was to perform unsupervised adaptation to the target domain (financial news). For this, we explored the potential of retrieval-based strategies, where sentences that are similar to test instances are used to prime the decoder. | LISN @ WMT 2021 |
d195741695 | Neural network models have shown promise in the temporal relation extraction task. In this paper, we present an attention-based neural network model to extract containment relations within sentences from clinical narratives. The attention mechanism used on top of a GRU model outperforms the existing state-of-the-art neural network models on the THYME corpus in intra-sentence temporal relation extraction. | Attention Neural Model for Temporal Relation Extraction |
d195064611 | Personality, an essential foundation of human behavior, is difficult to identify and classify from texts because of the scarcity of explicit textual clues. Several works have attempted personality identification by employing well-known lexicons like WordNet, SentiWordNet, SenticNet, etc. However, a lexicon solely devoted to identifying different types of personality is rare. Thus, in the present article, we discuss methodologies to develop a personality lexicon from the Essay dataset, a personality corpus based on the Big Five model. We used a frequency-based N-gram approach to extract the unique words as well as phrases with respect to each of the Big Five personality classes. In addition to the words, we added another feature, corpus-based probability of occurrence, to the lexicon. Finally, we evaluated our lexicon on a small YouTube personality dataset and found satisfactory coverage. In addition, we developed a LIWC-based classification framework employing several machine learning algorithms, followed by feature selection using information gain and correlation techniques. SVM and Logistic Regression achieved the maximum accuracies of 78.52% and 62.26% with reduced feature sets of size 15 and 10, selected by information gain and correlation attribute evaluation, respectively. | Developing Lexicon and Classifier for Personality Identification in Texts |
d219308170 | This paper surveys three research directions in parsing. First, we look at methods for both automatically generating a set of diverse parsers and combining the outputs of different parsers into a single parse. Next, we discuss a parsing method known as transformation-based parsing. This method, though less accurate than the best current corpus-derived parsers, is able to parse quite accurately while learning only a small set of easily understood rules, as opposed to the many-megabyte parameter files learned by other techniques. Finally, we review a recent study exploring how people and machines compare at the task of creating a program to automatically annotate noun phrases. | Automatic Grammar Induction: Combining, Reducing and Doing Nothing |
d5162801 | Scalable discriminative training methods are now broadly available for estimating phrase-based, feature-rich translation models. However, the sparse feature sets typically appearing in research evaluations are less attractive than standard dense features such as language and translation model probabilities: they often overfit, do not generalize, or require complex and slow feature extractors. This paper introduces extended features, which are more specific than dense features yet more general than lexicalized sparse features. Large-scale experiments show that extended features yield robust BLEU gains for both Arabic-English (+1.05) and Chinese-English (+0.67) relative to a strong feature-rich baseline. We also specialize the feature set to specific data domains, identify an objective function that is less prone to overfitting, and release fast, scalable, and language-independent tools for implementing the features. | An Empirical Comparison of Features and Tuning for Phrase-based Machine Translation |
d2591891 | A common computational goal is to encapsulate the modeling of a target phenomenon within a unified and comprehensive "engine" which addresses a broad range of the required processing tasks. This goal is followed in common modeling of the morphological and syntactic levels of natural language, where most processing tasks are encapsulated within morphological analyzers and syntactic parsers. In this talk I suggest that computational modeling of the semantic level should also focus on encapsulating the various processing tasks within a unified module (engine). The input/output specification of such an engine (its API) can be based on the textual entailment paradigm, which will be described in brief and suggested as an attractive framework for applied semantic inference. The talk will illustrate an initial proposal for the engine's API, designed to be embedded within prominent language processing applications. Finally, I will sketch the entailment formalism and efficient inference algorithm developed at Bar-Ilan University, which illustrates a principled transformational (rather than interpretational) approach towards developing a comprehensive semantic engine. | Invited Talk: It's time for a semantic engine |
d795534 | Previous algorithms for the generation of referring expressions have been developed specifically for this purpose. Here we introduce an alternative approach based on a fully generic aggregation method also motivated for other generation tasks. We argue that the alternative contributes to a more integrated and uniform approach to content determination in the context of complete noun phrase generation. | Using aggregation for selecting content when generating referring expressions |
d219306083 | ||
d259376471 | Clickbait creates a nuisance in the online experience by luring users towards poor content in order to generate ad revenue. With natural language processing models, we can save users time and reduce the need to follow clickbait links. Task 5 at SemEval-2023 focused on precisely this problem and was broken into two steps: identifying the clickbait spoiler type and then generating the spoiler itself. Our approach involves fine-tuned text classification and question-answering models. Our classification model is able to determine the type of clickbait with 65.3% accuracy. The question-answering model produced the exact clickbait spoiler around 42.5% of the time. Efforts toward solving this task may help save users' time and quickly give insight into what the clickbait article is actually about. | Mr-wallace at SemEval-2023 Task 5: Novel Clickbait Spoiling Algorithm Using Natural Language Processing |
d259376558 | This paper analyzes winning solutions from the Feedback Prize competition series hosted from 2021-2022. The competitions sought to improve Assisted Writing Feedback Tools (AWFTs) by crowdsourcing Large Language Model (LLM) solutions for evaluating student writing. The winning LLM-based solutions are freely available for incorporation into educational applications, but the models need to be assessed for performance and other factors. This study reports the performance accuracy of Feedback Prize-winning models based on demographic factors such as student race/ethnicity, economic disadvantage, and English Language Learner status. Two competitions are analyzed. The first, which focused on identifying discourse elements, demonstrated minimal bias based on students' demographic factors. However, the second competition, which aimed to predict discourse effectiveness, exhibited moderate bias. | |
d259376605 | The Turkish particle dA is a focus-associated enclitic, and it can act as a discourse connective conveying multiple senses, such as additive, contrastive, and causal. Like many other linguistic expressions, it is subject to usage ambiguity and creates a challenge for natural language processing tasks. For the first time, we annotate the discourse and non-discourse connective occurrences of dA in Turkish following the PDTB principles. Using a minimal set of linguistic features, we develop binary classifiers to distinguish its discourse connective usage from its other usages. We show that despite its ability to cliticize to any syntactic type, its variable position in the sentence and its wide argument span, its discourse/non-discourse connective usage can be annotated reliably and its discourse usage can be disambiguated by exploiting local cues. | Annotating and Disambiguating the Discourse Usage of the Enclitic dA in Turkish |
d15518970 | In this paper we address the task of transferring FrameNet annotations from an English corpus to an aligned Italian corpus. Experiments were carried out on an English-Italian bitext extracted from the Europarl corpus and on a set of selected sentences from the English FrameNet corpus that have been manually translated into Italian. Our research activity is aimed at answering the following three questions: (1) What is the best annotation transfer algorithm for the English-Italian couple? (2) What kind of parallel corpus is best suited to the annotation transfer task? (3) How should the annotation transfer be evaluated, given the final aim of the transfer? Keywords: frame semantics, cross-language annotation transfer, automatic development of lexical resources. | Three issues in cross-language frame information transfer |
d885411 | Recent work classifying citations in scientific literature has shown that it is possible to improve classification results with extensive feature engineering. While this result confirms that citation classification is feasible, there are two drawbacks to this approach: (i) it requires a large annotated corpus for supervised classification, which in the case of scientific literature is quite expensive; and (ii) feature engineering that is too specific to one area of scientific literature may not be portable to other domains, even within scientific literature. In this paper we address these two drawbacks. First, we frame citation classification as a domain adaptation task and leverage the abundant labeled data available in other domains. Then, to avoid over-engineering specific citation features for a particular scientific domain, we explore a deep learning neural network approach that has been shown to generalize well across domains using unigram and bigram features. We achieve better citation classification results with this cross-domain approach than with in-domain classification. | Improving Citation Polarity Classification with Product Reviews |
d21698241 | In this paper, we present our work to build Universal Dependencies (UD) resources for Japanese. The UD Japanese resources are built by automatic conversion from several treebanks. The word delimitation, POS tags, and syntactic relations of the existing treebanks are ported to the UD annotation scheme. We discuss the issues of the UD scheme found while porting the Japanese language. | Universal Dependencies Version 2 for Japanese |
d237010907 | ||
d16627556 | We develop a supervised ranking model to rerank candidates generated from an SMT-based grammatical error correction (GEC) system. A range of novel features with respect to GEC are investigated and implemented in our reranker. We train a rank preference SVM model and demonstrate that this outperforms both Minimum Bayes-Risk and Multi-Engine Machine Translation based re-ranking for the GEC task. Our best system yields a significant improvement in I-measure when testing on the publicly available FCE test set (from 2.87% to 9.78%). It also achieves an F0.5 score of 38.08% on the CoNLL-2014 shared task test set, which is higher than the best original result. The oracle score (upper bound) for the re-ranker achieves over 40% I-measure performance, demonstrating that there is considerable room for improvement in the re-ranking component developed here, such as incorporating features able to capture long-distance dependencies. | Candidate re-ranking for SMT-based grammatical error correction |
d21702397 | In this paper, we present Tilde MT, a custom machine translation (MT) platform that provides linguistic data storage (parallel and monolingual corpora, multilingual term collections), data cleaning and normalisation, statistical and neural machine translation system training and hosting functionality, as well as wide integration capabilities (a machine user API and popular computer-assisted translation tool plugins). We provide details for the most important features of the platform, and elaborate on typical MT system training workflows for client-specific MT solution development. | Tilde MT Platform for Developing Client Specific MT Solutions |
d251384583 | This paper identifies novel characteristics necessary to successfully represent, search, and modify natural language information shared simultaneously across multiple modalities such as text, speech, image, video, etc. We propose a multi-tiered system that implements these characteristics centered around a declarative configuration. The system facilitates easy incremental extension by allowing the creation of composable workflows of loosely coupled components, or plugins. This will allow simple initial systems to be extended to accommodate rich representations while providing mechanisms for maintaining high data integrity. Key to this is leveraging established tools and technologies. We demonstrate using a small example. | GRAIL-A Generalized Representation and Aggregation of Information Layers |
d11989788 | Bayesian approaches have been shown to reduce the amount of overfitting that occurs when running the EM algorithm, by placing prior probabilities on the model parameters. We apply one such Bayesian technique, variational Bayes, to the IBM models of word alignment for statistical machine translation (the variational M-step update is sketched below the table). We show that using variational Bayes improves the performance of the widely used GIZA++ software, as well as improving the overall performance of the Moses machine translation system in terms of BLEU score. | Improving the IBM Alignment Models Using Variational Bayes |
d17063549 | We introduce a new approach to representing and manipulating various types of non-singular concepts in natural language discourse. The representation we describe is based on a partially ordered structure of levels in which objects of the same relative singularity are assigned to the same level. Our choice of the representation has been motivated by the following main concerns: (1) the representation should systematically distinguish between those language terms that are used to refer to objects of different singularity, that is, those classified within different but related levels of the model; (2) the representation should capture certain types of inter-sentential dependencies in discourse, most notably anaphoric-type cohesive links; (3) finally, the representation should serve as a basis for defining a formal semantics of discourse paragraphs that would allow for capturing the exact truth conditions of sentences involving non-singular terms, and for computing inter-level inferences. In this paper we discuss (1) and (2) only; (3) is currently under investigation and will be the topic of a forthcoming article. We believe that our approach promotes computational feasibility, because we avoid the identification of general terms, like "temperature," "water," etc., with intensions, that is, functions over possible worlds (the Fregean notion of sense). Unfortunately, the concept of intension does not capture all aspects of non-singularity; in our theory, the concept of non-singularity has a local (often subjective) character. | NON-SINGULAR CONCEPTS IN NATURAL LANGUAGE DISCOURSE |
d220445617 | ||
d45608574 | Domain adaptation of a language model aims at re-estimating word sequence probabilities in order to better match the peculiarities of a given broad topic of interest. To achieve this task, a common strategy consists in retrieving adaptation texts from the Internet based on a given domain-representative seed text. In this paper, we study the influence of the choice of this seed text on the adaptation process and on the performance of adapted language models in automatic speech recognition. More precisely, the goal of this original study is to analyze the differences between supervised adaptation, in which the seed text is manually generated, and unsupervised adaptation, where the seed text is an automatic transcript. Experiments carried out on videos from a real-world use case mainly show that differences vary according to adaptation scenarios and that the unsupervised approach is globally convincing, especially given its low cost. Keywords: language model, domain adaptation, supervision, Web data. | Impact du degré de supervision sur l'adaptation à un domaine d'un modèle de langage à partir du Web |
d28503960 | ||
d15652746 | We propose a method of probabilistic natural language generation that observes both a syntactic structure and an input of situational content. We employed Monte Carlo Tree Search for this nontrivial search problem, using context-free grammar rules as search operators and evaluating numerous putative generations on these two aspects with logistic regression and an n-gram language model. Through several experiments, we confirmed that our method can effectively generate sentences with various words and phrasings. | Human-like Natural Language Generation Using Monte Carlo Tree Search |
d5421301 | Syntactic parsing requires a fine balance between expressivity and complexity, so that naturally occurring structures can be accurately parsed without compromising efficiency. In dependency-based parsing, several constraints have been proposed that restrict the class of permissible structures, such as projectivity, planarity, multi-planarity, well-nestedness, gap degree, and edge degree. While projectivity is generally taken to be too restrictive for natural language syntax, it is not clear which of the other proposals strikes the best balance between expressivity and complexity. In this paper, we review and compare the different constraints theoretically, and provide an experimental evaluation using data from two treebanks, investigating how large a proportion of the structures found in the treebanks are permitted under different constraints.The results indicate that a combination of the well-nestedness constraint and a parametric constraint on discontinuity gives a very good fit with the linguistic data. | Mildly Non-Projective Dependency Structures |
d21691751 | Mental health and well-being are growing issues in western civilizations. At the same time, psychotherapy and further education in psychotherapy are highly demanding occupations, resulting in a severe gap in patient-centered care. The question which arises from recent developments in natural language processing (NLP) and speech recognition is how these technologies could be employed to support therapists in their work and allow for better treatment of patients. Most research in NLP focuses on analysing the language of patients with various psychological conditions, but only few examples exist that analyse the therapist's behavior and the interaction between therapist and patient. We present ongoing work in collecting, preparing and analysing data from psychotherapy sessions together with expert annotations on various qualitative dimensions of these sessions, such as feedback and cooperation. Our aim is to use this data in a classification task which gives insight into what constitutes good feedback or cooperation in therapy sessions, and to employ this information to support psychotherapists in improving the quality of the care they offer. | Preparing Data from Psychotherapy for Natural Language Processing |
d12991847 | Knowledge discovery aims at bringing out coherent groups of entities. It usually relies on clustering; the challenge is then to define a relevant notion of similarity between the entities. In this paper, we propose to divert Conditional Random Fields (CRF), which have shown their interest in supervised labeling tasks, in order to calculate indirectly the similarities among text sequences. Our approach consists in generating artificial labeling problems on the data to be processed to reveal regularities in the labeling of the entities. We describe how this framework can be implemented and experiment with it on two information extraction tasks. The results demonstrate the usefulness of this unsupervised approach, which opens many avenues for defining similarities for complex representations of sequential data. Keywords: knowledge discovery, CRF, clustering, unsupervised learning, information extraction. | Découverte de connaissances dans les séquences par CRF non-supervisés |
d6802997 | In natural language processing, many methods have been proposed to solve ambiguity problems. In this paper, we propose a technique to combine an interactive disambiguation method and an automatic one for ambiguous words. The characteristic of our method is that the accuracy of the interactive disambiguation is taken into account. The method solves the following two problems that arise when combining these disambiguation methods: (1) when should the interactive disambiguation be executed? (2) which ambiguous word should be disambiguated when more than one ambiguous word exists in a sentence? Our method defines the condition for executing the interaction with users and the order of disambiguation based on a strategy that maximizes the accuracy of the result, considering the accuracy of both the interactive and the automatic disambiguation. Using this method, user interaction can be controlled while maintaining the accuracy of the results. | Combination of an Automatic and an Interactive Disambiguation Method |
d59826498 | This paper describes a study on cooccurrence-based criteria for automatic word sense disambiguation. We use a decision-list algorithm which selects the best disambiguating cue in the target context. The algorithm is tested on 60 words equally distributed among three parts of speech (noun, adjective and verb) with a fine sense granularity. We present the results obtained by each criterion evaluated in an independent way, and we discuss the characteristics which differentiate the three parts of speech studied. The study uses a French sense-tagged corpus developed in the SyntSem project. | Etude des critères de désambiguïsation sémantique automatique : résultats sur les cooccurrences |
d6852199 | Named Entity Recognition is a relatively well-understood NLP task, with many publicly available training resources and software for English. Other languages tend to be underserved in this area. For German, CoNLL-2003 provides training data, but there are no publicly available, ready-to-use tools. We fill this gap and develop a German NER system with state-of-the-art performance. In addition to CoNLL 2003 labeled training data, we use two additional resources: (i) 32 million words of unlabeled text and (ii) infobox labels in German Wikipedia articles. We extract informative features of word-types from those resources and train a supervised model on the labeled training data. This approach allows us to deal better with word-types unseen in the training data and achieve state-of-the-art performance on German with little engineering effort. | A Named Entity Labeler for German: exploiting Wikipedia and distributional clusters |
d1860666 | We extracted book reviews written on a Web site specialized in picture books, and found that those reviews reflect infants' behavioral expressions as well as their parents' reading activities in detail. Analysis of the reviews reveals that the infants' reactions described in the reviews are consistent with the findings of developmental psychology concerning infants' behaviors. In order to examine how the stimuli of picture books induce varieties of infants' reactions, this paper proposes a method to detect an infant's developmental reactions in reviews on picture books and shows the effectiveness of the proposed method through experimental evaluation. | Detecting an Infant's Developmental Reactions in Reviews on Picture Books |
d219307907 | ||
d9476097 | This paper introduces a web-based visualization framework for graph-based distributional semantic models. The visualization supports a wide range of data structures, including term similarities, similarities of contexts, support for multiword expressions, sense clusters for terms, and sense labels. In contrast to other browsers of semantic resources, our visualization accepts input sentences, which are subsequently processed in language-independent or language-dependent ways to compute term-context representations. Our web demonstrator currently contains models for multiple languages, based on different preprocessing such as dependency parsing and n-gram context representations. These models can be accessed from a database, the web interface and via a RESTful API. The latter facilitates the quick integration of such models in research prototypes. | JOBIMVIZ: A Web-based Visualization for Graph-based Distributional Semantic Models |
d235097320 | ||
d33563145 | Artificial neural networks are powerful statistical models that have been shown to provide excellent results in a number of domains. In the last few years, the computer vision and automatic speech recognition communities have been heavily influenced by these techniques. Applications to problems that involve natural language, such as machine translation or computational semantics, are becoming mainstream in NLP research. This tutorial aims to introduce the basic concepts and provide an intuitive understanding of neural networks, including the very popular field of deep learning. This should help researchers who are entering this field to quickly understand the major tricks of the trade. The tutorial begins with basic machine learning applied to natural language: n-gram and bag-of-words representations; logistic regression and support vector machines. | Using Neural Networks for Modeling and Representing Natural Languages |
d10321545 | We investigate the expression of opinions about human entities in user-generated content (UGC). A set of 2,800 online news comments (8,000 sentences) was manually annotated, following a rich annotation scheme designed for this purpose. We conclude that the challenge in performing opinion mining in such type of content is correctly identifying the positive opinions, because (i) they are much less frequent than negative opinions and (ii) they are particularly exposed to verbal irony. We also show that the recognition of human targets poses additional challenges on mining opinions from UGC, since they are frequently mentioned by pronouns, definite descriptions and nicknames. | Liars and Saviors in a Sentiment Annotated Corpus of Comments to Political debates |
d248780459 | Syntactic information has been proved to be useful for transformer-based pre-trained language models. Previous studies often rely on additional syntax-guided attention components to enhance the transformer, which require more parameters and additional syntactic parsing in downstream tasks. This increase in complexity severely limits the application of syntax-enhanced language models in a wide range of scenarios. In order to inject syntactic knowledge effectively and efficiently into pre-trained language models, we propose a novel syntax-guided contrastive learning method which does not change the transformer architecture. Based on constituency and dependency structures of syntax trees, we design phrase-guided and tree-guided contrastive objectives, and optimize them in the pre-training stage, so as to help the pre-trained language model to capture rich syntactic knowledge in its representations. Experimental results show that our contrastive method achieves consistent improvements in a variety of tasks, including grammatical error detection, entity tasks, structural probing and GLUE. Detailed analysis further verifies that the improvements come from the utilization of syntactic information, and the learned attention weights are more explainable in terms of linguistics. | Syntax-guided Contrastive Learning for Pre-trained Language Model |
d10077982 | For many years, statistical machine translation relied on generative models to provide bilingual word alignments. In 2005, several independent efforts showed that discriminative models could be used to enhance or replace the standard generative approach. Building on this work, we demonstrate substantial improvement in word-alignment accuracy, partly through improved training methods, but predominantly through selection of more and better features. Our best model produces the lowest alignment error rate yet reported on Canadian Hansards bilingual data. | Improved Discriminative Bilingual Word Alignment |
d227231265 | ||
d5501809 | Word Sense Disambiguation remains one of the most complex problems facing computational linguists to date. In this paper we present modifications to the state-of-the-art graph-based algorithm In-Degree. Our modifications entail augmenting the basic Lesk similarity measure with more relations based on the structure of WordNet, adding SemCor examples to the basic WordNet lexical resource, and, instead of using the LCH similarity measure for computing verb-verb similarity in the In-Degree algorithm, using JCN. We report results on three standard data sets using three different versions of WordNet. We report the highest-performing monolingual unsupervised results to date on the Senseval-2 all-words data set. Our system yields a performance of 62.7% using WordNet 1.7.1. | Improvements to Monolingual English Word Sense Disambiguation |
d219302035 | ||
d6821112 | | |
d218965598 | ||
d52340229 | Generating the English transliteration of a name written in a foreign script is an important and challenging step in multilingual knowledge acquisition and information extraction. Existing approaches to transliteration generation require a large (>5000) number of training examples. This difficulty contrasts with transliteration discovery, a somewhat easier task that involves picking a plausible transliteration from a given list. In this work, we present a bootstrapping algorithm that uses constrained discovery to improve generation, and can be used with as few as 500 training examples, which we show can be sourced from annotators in a matter of hours. This opens the task to languages for which large numbers of training examples are unavailable. We evaluate transliteration generation performance itself, as well as the improvement it brings to crosslingual candidate generation for entity linking, a typical downstream task. We present a comprehensive evaluation of our approach on nine languages, each written in a unique script. 1 | Bootstrapping Transliteration with Constrained Discovery for Low-Resource Languages |
d15645781 | Restrictive and repetitive behavior (RRB) is a core symptom of autism spectrum disorder (ASD) and is manifest in language. Based on this, we expect children with autism to talk about fewer topics, and more repeatedly, during their conversations. We thus hypothesize a higher semantic overlap ratio between dialogue turns in children with ASD compared to those with typical development (TD). Participants in this study include children ages 4-8, 44 with TD and 25 with ASD without language impairment. We apply several semantic similarity metrics to the children's dialogue turns in semi-structured conversations with examiners. We find that children with ASD have significantly more semantically overlapping turns than children with TD, across different turn intervals. These results support our hypothesis and could provide a convenient and robust ASD-specific behavioral marker. | Similarity Measures for Quantifying Restrictive and Repetitive Behavior in Conversations of Autistic Children |
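One plausible way to compute such a semantic overlap ratio is cosine similarity between vector representations of turns at a fixed interval. The sketch below uses TF-IDF vectors as a stand-in; the paper applies several similarity metrics, so this is only an illustration, not the authors' exact measure:

```python
# Average similarity between turns i and i+interval, using TF-IDF
# vectors as a simple stand-in for a semantic representation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_turn_overlap(turns, interval=1):
    X = TfidfVectorizer().fit_transform(turns)
    sims = [cosine_similarity(X[i], X[i + interval])[0, 0]
            for i in range(len(turns) - interval)]
    return sum(sims) / len(sims)

turns = ["I like trains", "trains are fast", "I like fast trains"]
print(mean_turn_overlap(turns, interval=1))
```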
d16149116 | Sentiment Analysis for Issues Monitoring Using Linguistic Resources | |
d250390687 | Most existing reading comprehension datasets focus on single-span answers, which can be extracted as a single contiguous span from a given text passage. Multi-span questions, i.e., questions whose answer is a series of multiple discontiguous spans in the text, are common in real life but are less studied. In this paper, we present MultiSpanQA 1 , a new dataset that focuses on questions with multi-span answers. Raw questions and contexts are extracted from the Natural Questions (Kwiatkowski et al., 2019) dataset. After multi-span re-annotation, MultiSpanQA consists of a total of over 6,000 multi-span questions in the basic version, and over 19,000 examples (including unanswerable questions and questions with single- and multi-span answers) in the expanded version. We introduce new metrics for the purposes of multi-span question answering evaluation, and establish several baselines using advanced models. Finally, we propose a new model which beats all baselines and achieves the state-of-the-art on our dataset. | MultiSpanQA: A Dataset for Multi-Span Question Answering |
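For intuition, an exact-match span F1 for a single multi-span question can be computed as below. This is a simplified stand-in: MultiSpanQA's official metrics also include partial-match variants not shown here.

```python
# Exact-match span F1 for one question: spans are compared as strings.
def multispan_f1(predicted, gold):
    pred, gold = set(predicted), set(gold)
    tp = len(pred & gold)                      # correctly predicted spans
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# tp=2, P=2/2, R=2/3  ->  F1 = 0.8
print(multispan_f1(["April 1861", "May 1865"],
                   ["April 1861", "May 1865", "1863"]))
```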
d10151424 | This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection. | MEAD -a platform for multidocument multilingual text summarization |
d61982581 | Semantic parsing maps a sentence in natural language into a structured meaning representation. Previous studies show that semantic parsing with synchronous context-free grammars (SCFGs) achieves favorable performance over most other alternatives. Motivated by the observation that the performance of semantic parsing with SCFGs is closely tied to the translation rules, this paper explores extending translation rules with high quality and increased coverage in three ways. First, we introduce structure-informed non-terminals, better guiding the parsing in favor of well-formed structures, instead of using an uninformed non-terminal in SCFGs. Second, we examine the difference between word alignments for semantic parsing and statistical machine translation (SMT) to better adapt word alignment in SMT to semantic parsing. Finally, we address the unknown word translation issue via synthetic translation rules. Evaluation on the standard GeoQuery benchmark dataset shows that our approach achieves state-of-the-art results across various languages, including English, German and Greek. | Improving Semantic Parsing with Enriched Synchronous Context-Free Grammar |
d5443893 | Optimal morphology (OM) is a finite state formalism that unifies concepts from Optimality Theory (OT, Prince & Smolensky, 1993) and Declarative Phonology (DP, Scobbie, Coleman & Bird, 1996) to describe morphophonological alternations in inflectional morphology. Candidate sets are formalized by inviolable lexical constraints which map abstract morpheme signatures to allomorphs. Phonology is implemented as violable rankable constraints selecting optimal candidates from these. Both types of constraints are realized by finite state transducers. Using phonological data from Albanian it is shown that given a finite state lexicalization of candidate outputs for word forms OM allows more natural analyses than unviolable finite state constraints do. Two possible evaluation strategies for OM grammars are considered: the global evaluation procedure from Ellison (1994) and a simple strategy of local constraint evaluation. While the OM-specific lexicalization of candidate sets allows straightforward generation and a simple method of morphological parsing even under global evaluation, local constraint evaluation is shown to be preferable empirically and to be formally more restrictive. The first point is illustrated by an account of directionality effects in some classical Mende data. A procedure is given that generates a finite state transducer simulating the effects of local constraint evaluation. Thus local as opposed to global evaluation (Frank & Satta, 1998) seems to guarantee the finite-stateness of the input-output-mapping. | Optimal Morphology |
d226283884 | ||
d226283506 | ||
d5083798 | In this paper we present a means of compensating for the semantic deficits of linguistically naive underlying application programs without compromising principled grammatical treatments in natural language generation. We present a method for building an interface from today's underlying application programs to the linguistic realization component Mumble-86. The goal of the paper is not to discuss how Mumble works, but to describe how one exploits its capabilities. We provide examples from current generation projects using Mumble as their linguistic component. | FROM WATER TO WINE: GENERATING NATURAL LANGUAGE TEXT FROM TODAY'S APPLICATIONS PROGRAMS 1 |
d11300818 | This paper presents the ITEM multilingual search engine. This search engine performs full lexical processing (morphological analysis, tagging and Word Sense Disambiguation) on documents and queries in order to provide language-neutral indexes for querying and retrieval. The indexing terms are the EuroWordNet/ITEM InterLingual Index records that link wordnets in 10 languages of the European Community (the search engine currently supports Spanish, English and Catalan). The goal of this application is to provide a way of comparing in context the behavior of different Natural Language Processing strategies for Cross-Language Information Retrieval (CLIR) and, in particular, different Word Sense Disambiguation strategies for query translation and conceptual indexing. | Evaluating wordnets in Cross-Language Information Retrieval: the ITEM search engine |
d198165675 | BLEU is the most well-known automatic evaluation technology for assessing the performance of machine translation systems. However, BLEU does not indicate which parts of an NMT system's output are good or bad. This paper describes an approach to the automatic evaluation of NMT systems by linguistic test points. This approach allows automatic evaluation of each linguistic test point not reflected in BLEU and provides intuitive insight into the strengths and flaws of NMT systems in handling various important linguistic phenomena. The automatic evaluation used 58 linguistic test points consisting of 630 sentences. We evaluated two bidirectional English/Korean NMT systems. The BLEU scores of the English-to-Korean NMT systems were 0.0898 and 0.2081, respectively, and their automatic evaluations by linguistic test points were 58.35% and 77.31%, respectively. The BLEU scores of the Korean-to-English NMT systems were 0.3939 and 0.4512, respectively, and their automatic evaluations by linguistic test points were 33.10% and 40.47%, respectively. This means that automatic evaluation by linguistic test points ranks systems consistently with BLEU. According to automatic evaluation by linguistic test points, both the English-to-Korean and the Korean-to-English NMT systems have strengths in polysemy translation, but have flaws in style translation and in translating sentences with complex syntactic structures. | Automatic Evaluation of English-to-Korean and Korean-to-English Neural Machine Translation Systems by Linguistic Test Points |
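The aggregate test-point score can be read as a simple pass rate over all test sentences, grouped by linguistic phenomenon. A minimal sketch, assuming one boolean pass/fail judgment per test sentence (the paper's exact aggregation may differ):

```python
# Overall pass rate across linguistic test points, in percent.
def test_point_score(results):
    """results: {test_point: [bool, ...]} per-sentence judgments."""
    passed = sum(sum(r) for r in results.values())
    total = sum(len(r) for r in results.values())
    return 100.0 * passed / total

results = {
    "polysemy":       [True, True, False],
    "complex syntax": [False, False, True],
}
print(f"{test_point_score(results):.2f}%")  # 50.00%
```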
d259370783 | Conceptual metaphors present a powerful cognitive vehicle to transfer knowledge structures from a source to a target domain. Prior neural approaches focus on detecting whether natural language sequences are metaphoric or literal. We believe that to truly probe metaphoric knowledge in pre-trained language models, their capability to detect this transfer should be investigated. To this end, this paper proposes to probe the ability of GPT-3 to detect metaphoric language and predict the metaphor's source domain without any pre-set domains. We experiment with different training sample configurations for fine-tuning and few-shot prompting on two distinct datasets. When provided 12 few-shot samples in the prompt, GPT-3 generates the correct source domain for a new sample with an accuracy of 65.15% in English and 34.65% in Spanish. GPT-3's most common error is a hallucinated source domain for which no indicator is present in the sentence. Other common errors include identifying a sequence as literal even though a metaphor is present and predicting the wrong source domain based on specific words in the sequence that are not metaphorically related to the target domain. | Does GPT-3 Grasp Metaphors? Identifying Metaphor Mappings with Generative Language Models |
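Few-shot prompting of the kind described is straightforward to assemble. The sketch below shows one hypothetical prompt layout; the example sentences, domain labels, and instruction wording are illustrative stand-ins, not taken from the paper's datasets or prompts:

```python
# Hypothetical prompt assembly for few-shot source-domain prediction.
few_shot = [
    ("Prices are climbing.", "VERTICAL MOVEMENT"),
    ("She attacked my argument.", "WAR"),
    # ... up to 12 samples in the paper's best-performing setting
]

def build_prompt(examples, query):
    lines = ["Identify the metaphor's source domain."]
    for sent, domain in examples:
        lines.append(f"Sentence: {sent}\nSource domain: {domain}")
    lines.append(f"Sentence: {query}\nSource domain:")
    return "\n\n".join(lines)

print(build_prompt(few_shot, "He shot down all of my ideas."))
```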
d690288 | We present a corpus-based study of the sequential ordering among premodifiers in noun phrases. This information is important for the fluency of generated text in practical applications. We propose and evaluate three approaches to identify sequential order among premodifiers: direct evidence, transitive closure, and clustering. Our implemented system can make over 94% of such ordering decisions correctly, as evaluated on a large, previously unseen test corpus. | Ordering Among Premodifiers |
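The first two approaches named in this abstract can be illustrated in a few lines: collect direct ordering evidence from observed premodifier pairs, then close it under transitivity to decide unseen pairs. A sketch under those assumptions; the paper's counting, tie-breaking, and clustering steps are omitted:

```python
# Direct evidence: observed premodifier orders (a, b) meaning a < b.
# Transitive closure then infers unseen orders (Warshall-style loop).
from itertools import product

observed = {("big", "red"), ("red", "wooden"), ("old", "big")}
mods = {m for pair in observed for m in pair}

precedes = set(observed)
changed = True
while changed:
    changed = False
    for a, b, c in product(mods, repeat=3):
        if (a, b) in precedes and (b, c) in precedes and (a, c) not in precedes:
            precedes.add((a, c))
            changed = True

print(("big", "wooden") in precedes)  # True, by transitivity
print(("old", "red") in precedes)     # True: old < big < red
```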
d724894 | In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods, based on greedy search and dynamic programming, that search for the optimal segmentation strategy. An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same automatic evaluation score. 1 | Optimizing Segmentation Strategies for Simultaneous Speech Translation |
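The greedy variant can be sketched as follows, assuming black-box `translate` and `score` functions (hypothetical stand-ins for the MT system and the automatic evaluation metric; the paper's exact procedure may differ). At each step it adds the single boundary that most improves the score:

```python
# Greedy search over segmentation boundaries for a word sequence.
def split_at(words, boundaries):
    cuts = [0, *sorted(boundaries), len(words)]
    return [words[i:j] for i, j in zip(cuts, cuts[1:])]

def greedy_segmentation(words, translate, score, ref, n_boundaries):
    boundaries = set()
    for _ in range(n_boundaries):
        best_b, best_s = None, float("-inf")
        for b in range(1, len(words)):          # candidate boundary positions
            if b in boundaries:
                continue
            segs = split_at(words, boundaries | {b})
            s = score(" ".join(translate(seg) for seg in segs), ref)
            if s > best_s:
                best_b, best_s = b, s
        if best_b is None:
            break
        boundaries.add(best_b)                  # keep the best new boundary
    return sorted(boundaries)
```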
d59637023 | Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104-6389. schabes/joshi@linc.cis.upenn.edu. Abstract: In this paper, we investigate the processing of so-called 'lexicalized' grammars. In 'lexicalized' grammars (Schabes, Abeille and Joshi, 1988), each elementary structure is systematically associated with a lexical 'head'. These structures specify extended domains of locality (as compared to CFGs) over which constraints can be stated. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the 'head'. There are no separate grammar rules. There are, of course, 'rules' which tell us how these structures are combined. A general two-pass parsing strategy for 'lexicalized' grammars follows naturally. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. We evaluate this strategy with respect to two characteristics. First, the amount of filtering on the entire grammar is evaluated: once the first pass is performed, the parser uses only a subset of the grammar. Second, we evaluate the use of non-local information: the structures selected during the first pass encode the morphological value (and therefore the position in the string) of their 'head'; this enables the parser to use non-local information to guide its search. We take Lexicalized Tree Adjoining Grammars as an instance of lexicalized grammar. We illustrate the organization of the grammar. Then we show how a general Earley-type TAG parser (Schabes and Joshi, 1988) can take advantage of lexicalization. Empirical data show that the filtering of the grammar and the non-local information provided by the two-pass strategy improve the performance of the parser. We explain how constraints over the elementary structures expressed by unification equations can be parsed by a simple extension of the Earley-type TAG parser. Lexicalization guarantees termination of the algorithm without special devices such as restrictors. *This work is partially supported by ARO grant DAA29-84-9-007, DARPA grant N0014-85-K0018, NSF grants MCS-82-191169 and DCR-84-10413. We have benefited from our discussions with Anne Abeille, Lauri Karttunen, Mitch Marcus and Stuart Shieber. We would also like to thank Ellen Hays. By lexicalization we mean that in each structure there is a lexical item that is realized. We do not mean simply adding feature structures (such as head) and unification equations to the rules of the formalism. | The Relevance of Lexicalization to Parsing* |
d208281781 | ||
d26816254 | We have witnessed the research progress of machine translation from phrase/syntax-based to semantics-based and from single sentence-based to discourse- and document-based. This talk presents our work on a word sense-based translation model for statistical machine translation, an instance of semantics-based SMT research at the word sense level. The sense in which a word is used determines the translation of the word. The talk begins with how to build a broad-coverage sense tagger based on a nonparametric Bayesian topic model that automatically learns sense clusters for words in the source language, and then focuses on the proposed word sense-based translation model, which enables the decoder to select appropriate translations for source words according to the inferred senses for these words using maximum entropy classifiers. The talk ends with experimental results and some conclusions. To the best of our knowledge, this is the first attempt to empirically verify the positive impact of lexical semantics (word sense) on translation quality. This is joint work with Deyi Xiong, Soochow University. | Invited Talk: Word Sense Induction for Machine Translation |
d226283902 | ||
d1642844 | In Combinatory Categorial Grammar (CCG) [Ste90, Ste91], semantic function-argument structures are compositionally produced through the course of a derivation. These structures identify, inter alia, which entities play the same roles in different events for expressions involving a wide range of coordinate constructs. This sameness-of-role (i.e., thematic) information is not identified, however, across cases of verbal diathesis. To handle these cases as well, the present paper demonstrates how to adapt the solution developed in Conceptual Semantics [Jac90, Jac91] to fit the CCG paradigm. The essence of the approach is to redefine the Linking Theory component of Conceptual Semantics in terms of CCG categories, so that derivations yield conceptual structures representing the desired thematic information; in this way no changes are required on the CCG side. While this redefinition is largely straightforward, an interesting problem arises in the case of Conceptual Semantics' Incorporated Argument Adjuncts. In examining these, the paper shows that they cannot be treated as adjuncts in the CCG sense without introducing new machinery, nor without compromising the independence of the two theories. For this reason, the paper instead adopts the more traditional approach of treating them as oblique arguments. | CONCEPTUAL STRUCTURES AND CCG: LINKING THEORY AND INCORPORATED ARGUMENT ADJUNCTS |
d53184582 | ||
d158406 | ||
d225086076 | The success of large-scale contextual language models has attracted great interest in probing what is encoded in their representations. In this work, we consider a new question: to what extent contextual representations of concrete nouns are aligned with corresponding visual representations? We design a probing model that evaluates how effective are text-only representations in distinguishing between matching and non-matching visual representations. Our findings show that language representations alone provide a strong signal for retrieving image patches from the correct object categories. Moreover, they are effective in retrieving specific instances of image patches; textual context plays an important role in this process. Visually grounded language models slightly outperform text-only language models in instance retrieval, but greatly under-perform humans. We hope our analyses inspire future research in understanding and improving the visual capabilities of language models. | Probing Contextual Language Models for Common Ground with Visual Representations |
d236459874 | Wet laboratory protocols (WLPs) are critical for conveying reproducible procedures in biological research. They are composed of instructions written in natural language describing the step-wise processing of materials by specific actions. This process flow description for reagents and materials synthesis in WLPs can be captured by material state transfer graphs (MSTGs), which encode global temporal and causal relationships between actions. Here, we propose methods to automatically generate a MSTG for a given protocol by extracting all action relationships across multiple sentences. We also note that previous corpora and methods focused primarily on local intra-sentence relationships between actions and entities and did not address two critical issues: (i) resolution of implicit arguments and (ii) establishing long-range dependencies across sentences. We propose a new model that incrementally learns latent structures and is better suited to resolving inter-sentence relations and implicit arguments. This model draws upon a new corpus, WLP-MSTG, which was created by extending annotations in the WLP corpora for inter-sentence relations and implicit arguments. Our model achieves an F1 score of 54.53% for temporal and causal relations in protocols from our corpus, which is a significant improvement over previous models (DyGIE++: 28.17%; SpERT: 27.81%). We make our annotated WLP-MSTG corpus available to the research community. 1 | Learning Latent Structures for Cross Action Phrase Relations in Wet Lab Protocols |
d5119303 | The paper presents a computational theory for resolving Japanese zero anaphora, based on the notion of the discourse segment. We see that the discourse segment reduces the domain of antecedents for zero anaphora and thus leads to their efficient resolution. We also make crucial use of functional notions such as the empathy hierarchy and the minimal semantics thesis to resolve reference for zero anaphora [Kuno, 1987]. Our approach differs from the Centering analysis [Walker et al., 1990] in that the resolution works by matching one empathy hierarchy against another, which makes it possible to deal with discourses with no explicit topic and those with cataphora [Halliday and Hasan, 1990]. The theory is formalized through the definite clause grammar (DCG) formalism [Pereira and Warren, 1980; Gazdar and Mellish, 1989; Longacre, 1979]. Finally, we show that graphology (i.e., quotation marks, spacing) has an important effect on the interpretation of zero anaphora in Japanese discourse. | Resolving Zero Anaphora in Japanese |
d14244622 | This paper describes our participation in SemEval-2016 Task 5 for Subtask 1, Slot 2. The challenge demands finding domain-specific target expressions at the sentence level that refer to reviewed entities. Target words are detected by using word vectors and their grammatical dependency relationships to classify each word in a sentence as target or non-target. A heuristic-based function then expands the classified target words to the whole target phrase. Our system achieved an F1 score of 56.816% on this task. | Know-Center at SemEval-2016 Task 5: Using Word Vectors with Typed Dependencies for Opinion Target Expression Extraction |
d733706 | This paper focuses on the influence of changing the text time frame on the performance of a named entity tagger. We followed a twofold approach to investigate this subject: on the one hand, we analyzed a corpus that spans 8 years, and, on the other hand, we assessed the performance of a name tagger trained and tested on that corpus. We created 8 samples from the corpus, each drawn from the articles for a particular year. In terms of corpus analysis, we calculated the corpus similarity and names shared between samples. To see the effect on tagger performance, we implemented a semi-supervised name tagger based on co-training; then, we trained and tested our tagger on those samples. We observed that corpus similarity, names shared between samples, and tagger performance all decay as the time gap between the samples increases. Furthermore, we observed that the corpus similarity and names shared correlate with the tagger F-measure. These results show that named entity recognition systems may become obsolete in a short period of time. | Is this NE tagger getting old? |
d5710677 | Spoken language understanding is a critical component of automated customer service applications. Creating effective SLU models is inherently a data driven process and requires considerable human intervention. We describe an interactive system for speech data mining. Using data visualization and interactive speech analysis, our system allows a User Experience (UE) expert to browse and understand data variability quickly. Supervised machine learning techniques are used to capture knowledge from the UE expert. This captured knowledge is used to build an initial SLU model, an annotation guide, and a training and testing system for the labelers. Our goal is to shorten the time to market by increasing the efficiency of the process and to improve the quality of the call types, the call routing, and the overall application. | Interactive Machine Learning Techniques for Improving SLU Models |
d7347732 | We present a novel method of comparable corpora construction. Unlike traditional methods, which rely heavily on linguistic features, our method only takes image similarity into consideration. We use an image-image search engine to obtain similar images, together with their captions in the source language and the target language. On that basis, we utilize the captions of similar images to construct sentence-level bilingual corpora. Experiments on 10,371 target captions show that our method achieves a precision of 0.85 in the top search results. (Example caption: UN Secretary-General Ban Ki-moon appoints the red bird from "Angry Birds" as Honorary Ambassador for Green.) | Image-Image Search for Comparable Corpora Construction |
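The core pairing step can be approximated with plain cosine similarity over image feature vectors; this is a stand-in for the image-image search engine used in the paper, and the function and argument names are illustrative:

```python
# Pair each target-language caption with the source-language caption
# whose image is most similar under cosine similarity.
import numpy as np

def pair_captions(src_feats, src_caps, tgt_feats, tgt_caps):
    s = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    t = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    best = (t @ s.T).argmax(axis=1)   # nearest source image per target
    return [(tgt_caps[i], src_caps[j]) for i, j in enumerate(best)]
```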
d207913023 | We describe a special type of deep contextualized word representation that is learned from distant supervision annotations and dedicated to named entity recognition. Our extensive experiments on 7 datasets show systematic gains across all domains over strong baselines, and demonstrate that our representation is complementary to previously proposed embeddings. We report new state-of-the-art results on CONLL and ONTONOTES datasets. | Contextualized Word Representations from Distant Supervision with and for NER |
d253802652 | Docket files, also known as plumitifs, are legal text documents describing judicial cases. They are present in most jurisdictions and are meant to provide a window on legal systems. They contain information about a judicial case, such as the parties' identities, the accusations' provisions, decisions, and pleas. However, this information is cryptic, using abbreviations and making references to the criminal code. In this paper, we explore the use of neural text generators to improve the legal accuracy of docket file verbalization for the accusations, decisions, and pleas sections. We introduce a legal accuracy evaluation scale used by jurists to manually assess the performance of three architectures with different levels of prior knowledge injection. We also study the correlation of our human evaluation methodology with automatic metrics. | Evaluating Legal Accuracy of Neural Generators on the Generation of Criminal Court Dockets Description |
d10050675 | While recent corpus annotation efforts cover a wide variety of semantic structures, work on temporal and causal relations is still in its early stages. Annotation efforts have typically considered either temporal relations or causal relations, but not both, and no corpora currently exist that allow the relation between temporals and causals to be examined empirically. We have annotated a corpus of 1000 event pairs for both temporal and causal relations, focusing on a relatively frequent construction in which the events are conjoined by the word and. Temporal relations were annotated using an extension of the BEFORE and AFTER scheme used in the TempEval competition, and causal relations were annotated using a scheme based on connective phrases like and as a result. The annotators achieved 81.2% agreement on temporal relations and 77.8% agreement on causal relations. Analysis of the resulting corpus revealed some interesting findings, for example, that over 30% of CAUSAL relations do not have an underlying BEFORE relation. The corpus was also explored using machine learning methods, and while model performance exceeded all baselines, the results suggested that simple grammatical cues may be insufficient for identifying the more difficult temporal and causal relations. | Building a Corpus of Temporal-Causal Structure |