_id | text | title |
|---|---|---|
d17244694 | The article describes LIMSI's submission to the first WMT'16 shared biomedical translation task, focusing on the sole English-French translation direction. Our main submission is the output of a MOSES-based statistical machine translation (SMT) system, rescored with Structured OUtput Layer (SOUL) neural network models. We also present an attempt to circumvent syntactic complexity: our proposal combines the outputs of PB-SMT systems trained either to translate entire source sentences or specific syntactic constructs extracted from those sentences. The approach is implemented using Confusion Network (CN) decoding. The quality of the combined output is comparable to the quality of our main system. | Shared Task Papers |
d11282692 | We present a new LR algorithm for tree-adjoining grammars. It is an alternative to an existing algorithm that is shown to be incorrect. Furthermore, the new algorithm is much simpler, being very close to traditional LR parsing for context-free grammars. The construction of derived trees and the computation of features also become straightforward. | An alternative LR algorithm for TAGs |
d232021940 | ||
d12577380 | We demonstrate one aspect of an affect-extraction system for use in intelligent conversational agents. This aspect performs a degree of affective interpretation of some types of metaphorical utterance. | Don't worry about metaphor: affect extraction for conversational agents |
d6071441 | Growing privacy and security concerns mean there is an increasing need for data to be anonymized before being publicly released. We present a module for anonymizing references implemented as part of the SQUAD tools for specifying and testing non-proprietary means of storing and marking-up data using universal (XML) standards and technologies. The tool is implemented on top of the GUITAR anaphoric resolver. | An Anaphora Resolution-Based Anonymization Module |
d7076450 | With microblogging platforms such as Twitter generating huge amounts of textual data every day, the possibilities of knowledge discovery through Twitter data become increasingly relevant. Similar to the public voting mechanism on websites such as the Internet Movie Database (IMDb) that aggregates movie ratings, Twitter content contains reflections of public opinion about movies. This study aims to explore the use of Twitter content as textual data for predictive text mining. In this study, a corpus of tweets was compiled to predict the rating scores of newly released movies on IMDb. Predictions were done with several different machine learning algorithms, exploring both regression and classification methods. In addition, this study explores the use of several different kinds of textual features in the machine learning tasks. Results show that prediction performance based on textual features derived from our corpus of tweets improved on the baseline for both regression and classification tasks. | Predicting Ratings for New Movie Releases from Twitter Content |
d225062698 | ||
d237099288 | ||
d88929 | Online discussion forums are a valuable means for users to resolve specific information needs, both interactively for the participants and statically for users who search/browse over historical thread data. However, the complex structure of forum threads can make it difficult for users to extract relevant information. The discourse structure of web forum threads, in the form of labelled dependency relationships between posts, has the potential to greatly improve information access over web forum archives. In this paper, we present the task of parsing user forum threads to determine the labelled dependencies between posts. Three methods, including a dependency parsing approach, are proposed to jointly classify the links (relationships) between posts and the dialogue act (type) of each link. The proposed methods significantly surpass an informed baseline. We also experiment with "in situ" classification of evolving threads, and establish that our best methods are able to perform equivalently well over partial threads as complete threads. | Predicting Thread Discourse Structure over Technical Web Forums |
d259376749 | Sexism is an injustice afflicting women and has become a common form of oppression in social media. In recent years, the automatic detection of sexist instances has been utilized to combat this oppression. Subtask A of SemEval-2023 Task 10, Explainable Detection of Online Sexism, aims to detect whether an English-language post is sexist. In this paper, we describe our system for the competition. The structure of the classification model is based on RoBERTa, and we further pre-train it on the domain corpus. For fine-tuning, we adopt Unsupervised Data Augmentation (UDA), a semi-supervised learning approach, to improve the robustness of the system. Specifically, we employ the Easy Data Augmentation (EDA) method as the noising operation for consistency training. We train multiple models based on different hyperparameter settings and adopt the majority voting method to predict the labels of test entries. Our proposed system achieves a Macro-F1 score of 0.8352 and a ranking of 41/84 on the leaderboard of Subtask A. | DUTIR at SemEval-2023 Task 10: Semi-supervised Learning for Sexism Detection in English |
d1585821 | We present a simple yet effective unsupervised domain adaptation method that can be generally applied for different NLP tasks. Our method uses unlabeled target domain instances to induce a set of instance similarity features. These features are then combined with the original features to represent labeled source domain instances. Using three NLP tasks, we show that our method consistently outperforms a few baselines, including SCL, an existing general unsupervised domain adaptation method widely used in NLP. More importantly, our method is very easy to implement and incurs much less computational cost than SCL. | A Hassle-Free Unsupervised Domain Adaptation Method Using Instance Similarity Features |
d14399727 | We address the issue of 'topic analysis': determining a text's topic structure, i.e., which topics are included in a text and how those topics change within it. We propose a novel approach to this issue, based on statistical modeling and learning. We represent topics by means of word clusters, and employ a finite mixture model to represent a word distribution within a text. Our experimental results indicate that our method significantly outperforms a method that combines existing techniques. | Topic Analysis Using a Finite Mixture Model |
d6553548 | This paper introduces ONYX, a sentence-level text analyzer that implements a number of innovative ideas in syntactic and semantic analysis. ONYX is being developed as part of a project that seeks to translate spoken dental examinations directly into chartable findings. ONYX integrates syntax and semantics to a high degree. It interprets sentences using a combination of probabilistic classifiers, graphical unification, and semantically annotated grammar rules. In this preliminary evaluation, ONYX shows inter-annotator agreement scores with humans of 86% for assigning semantic types to relevant words, 80% for inferring relevant concepts from words, and 76% for identifying relations between concepts. | ONYX: A System for the Semantic Analysis of Clinical Text |
d219302000 | ||
d207999578 | ||
d219302622 | The computational model proposed in this book is incompatible with any theory that assumes that a single, invariant interpretation is built as a result of comprehending a particular text. The book also challenges the claim that traditional formal theories of syntax, semantics, and reasoning can adequately implement text comprehension on a computer. Corriveau's computational model is called IDIOT, which stands for "Idiosyncratically-Directed Interpretation of Text." According to IDIOT, the interpretation of a text depends on the individual reader's knowledge and mental state at a particular point in time. Therefore, the interpretation is generated by a mechanism that is nondeterministic (i.e., it varies from person to person) and diachronic (i.e., it varies across time). The major assumptions of IDIOT are embraced by most contemporary models of comprehension in cognitive psychology and discourse processing (Britton and Graesser 1996; Weaver, Mannes, and Fletcher 1995). In particular, most psychological models assume that memory plays a critical role in constructing interpretations. Long-term memory is a vast repository of knowledge units that get activated during comprehension, in a limited-capacity working memory (and short-term memory). Text interpretation fluctuates among readers to the extent that readers have different knowledge units in long-term memory and different spans of working memory. According to IDIOT, the activation and processing of the knowledge units interact in parallel, much in the spirit of Minsky's Society of Mind (1986) and connectionist models. It takes a nontrivial amount of time for some knowledge units to be activated and to complete their processing steps; if the processing is not completed by some deadline, then a knowledge unit might not have any impact on the final interpretation of a text. Readers differ in their processing time parameters (such as learning rate, activation rate, and memory decay rate), which also results in fluctuations in interpretations among readers. Once again, these basic claims about memory and processing time are adopted by many of today's researchers who develop psychological models of reading and comprehension, so these researchers would applaud Corriveau's efforts in the field of computational linguistics. Corriveau uses IDIOT to simulate a broad spectrum of linguistic and discourse phenomena: constructing syntactic trees, resolving the referents of nouns and pronouns, determining the context-appropriate sense of an ambiguous lexical item, and | Book Review: Time-constrained Memory: A Reader-based Approach to Text Comprehension |
d8524905 | Economic globalization and the needs of the intelligence community have brought machine translation into the forefront. There are not enough skilled human translators to meet the growing demand for high quality translations or "good enough" translations that suffice only to enable understanding. Much research has been done in creating translation systems to aid human translators and to evaluate the output of these systems. Metrics for the latter have primarily focused on improving the overall quality of entire test sets but not on gauging the understanding of individual sentences or paragraphs. Therefore, we have focused on developing a theory of translation effectiveness by isolating a set of translation variables and measuring their effects on the comprehension of translations. In the following study, we focus on investigating how certain linguistic permutations, omissions, and insertions affect the understanding of translated texts. | Toward Determining the Comprehensibility of Machine Translations |
d12527684 | Example-based machine translation (EBMT) is a promising translation method for speech-to-speech translation (S2ST) because of its robustness. However, it has two problems in that the performance degrades when input sentences are long and when the style of the input sentences and that of the example corpus are different. This paper proposes example-based rough translation to overcome these two problems. The rough translation method relies on "meaning-equivalent sentences," which share the main meaning with an input sentence despite missing some unimportant information. This method facilitates retrieval of meaning-equivalent sentences for long input sentences. The retrieval of meaning-equivalent sentences is based on content words, modality, and tense. This method also provides robustness against the style differences between the input sentence and the example corpus. | Example-based Rough Translation for Speech-to-Speech Translation |
d9575291 | Existing algorithms for the Generation of Referring Expressions (GRE) aim at generating descriptions that allow a hearer to identify its intended referent uniquely; the length of the expression is also considered, usually as a secondary issue. We explore the possibility of making the trade-off between these two factors more explicit, via a general cost function which scores these two aspects separately. We sketch some more complex phenomena which might be amenable to this treatment. | The Clarity-Brevity Trade-off in Generating Referring Expressions |
d260063082 | Monolinguals make up a minority of the world's speakers, and yet most language technologies lag behind in handling linguistic behaviours produced by bilingual and multilingual speakers. A commonly observed phenomenon in such communities is code-mixing, which is prevalent on social media, and thus requires attention in NLP research. In this work, we look into the ability of pretrained language models to handle code-mixed data, with a focus on the impact of languages present in pretraining on the downstream performance of the model as measured on the task of sentiment analysis. Ultimately, we find that the pretraining language has little effect on performance when the model sees code-mixed data during downstream finetuning. We also evaluate the models on code-mixed data in a zero-shot setting, after task-specific finetuning on a monolingual dataset. We find that this brings out differences in model performance that can be attributed to the pretraining languages. We present a thorough analysis of these findings that also looks at model performance based on the composition of participating languages in the code-mixed datasets. | Transfer Learning for Code-Mixed Data: Do Pretraining Languages Matter? |
d1836838 | The SignSpeak project will be the first step to approach sign language recognition and translation at a scientific level already reached in similar research fields such as automatic speech recognition or statistical machine translation of spoken languages. Deaf communities revolve around sign languages as they are their natural means of communication. Although deaf, hard of hearing and hearing signers can communicate without problems amongst themselves, there is a serious challenge for the deaf community in trying to integrate into educational, social and work environments. The overall goal of SignSpeak is to develop a new vision-based technology for recognizing and translating continuous sign language to text. New knowledge about the nature of sign language structure from the perspective of machine recognition of continuous sign language will allow a subsequent breakthrough in the development of a new vision-based technology for continuous sign language recognition and translation. Existing and new publicly available corpora will be used to evaluate the research progress throughout the whole project. | The SignSpeak Project - Bridging the Gap Between Signers and Speakers |
d102353391 | Generalization and reliability of multilingual translation often highly depend on the amount of available parallel data for each language pair of interest. In this paper, we focus on zero-shot generalization, a challenging setup that tests models on translation directions they have not been optimized for at training time. To solve the problem, we (i) reformulate multilingual translation as probabilistic inference, (ii) define the notion of zero-shot consistency and show why standard training often results in models unsuitable for zero-shot tasks, and (iii) introduce a consistent agreement-based training method that encourages the model to produce equivalent translations of parallel sentences in auxiliary languages. We test our multilingual NMT models on multiple public zero-shot translation benchmarks (IWSLT17, UN corpus, Europarl) and show that agreement-based learning often results in 2-3 BLEU zero-shot improvement over strong baselines without any loss in performance on supervised translation directions. | Consistency by Agreement in Zero-shot Neural Machine Translation |
d15290453 | We report the results from a sentence-alignment experiment on Danish-Bulgarian and English-Bulgarian parallel texts applying a method based in part on linguistic motivations as implemented in the TCA2 aligner. Since the presence of cognates has a bearing on the alignment score of candidate sentences, we attempt to bridge the gap between source and target languages by transliteration of the Bulgarian text, written originally in Cyrillic. An improvement in F1-measure is achieved in both cases. | Linguistic Motivation in Automatic Sentence Alignment of Parallel Corpora: the Case of Danish |
d196172944 | It has been shown that implicit connectives can be exploited to improve the performance of the models for implicit discourse relation recognition (IDRR). An important property of the implicit connectives is that they can be accurately mapped into the discourse relations conveying their functions. In this work, we explore this property in a multi-task learning framework for IDRR in which the relations and the connectives are simultaneously predicted, and the mapping is leveraged to transfer knowledge between the two prediction tasks via the embeddings of relations and connectives. We propose several techniques to enable such knowledge transfer that yield the state-of-the-art performance for IDRR on several settings of the benchmark dataset (i.e., the Penn Discourse Treebank dataset). | Employing the Correspondence of Relations and Connectives to Identify Implicit Discourse Relations via Label Embeddings |
d6269055 | We discuss some of the practical issues that arise from decoding with general synchronous context-free grammars. We examine problems caused by unary rules and we also examine how virtual nonterminals resulting from binarization can best be handled. We also investigate adding more flexibility to synchronous context-free grammars by adding glue rules and phrases. | Issues Concerning Decoding with Synchronous Context-free Grammar |
d17154841 | This paper is concerned with automatic generation of all possible questions from a topic of interest. Specifically, we consider that each topic is associated with a body of texts containing useful information about the topic. Then, questions are generated by exploiting the named entity information and the predicate argument structures of the sentences present in the body of texts. The importance of the generated questions is measured using Latent Dirichlet Allocation by identifying the subtopics (which are closely related to the original topic) in the given body of texts and applying the Extended String Subsequence Kernel to calculate their similarity with the questions. We also propose the use of syntactic tree kernels for the automatic judgment of the syntactic correctness of the questions. The questions are ranked by considering both their importance (in the context of the given body of texts) and syntactic correctness. To the best of our knowledge, no previous study has accomplished this task in our setting. A series of experiments demonstrate that the proposed topic-to-question generation approach can significantly outperform the state-of-the-art results. A user's search task is usually not over (Chali, Joty, and Hasan 2009). The next step for the user is to look into the documents themselves and search for the precise piece of information they were looking for. This method is time-consuming, and a correct answer could easily be missed by either an incorrect query resulting in missing documents or by careless reading. This is why Question Answering (QA) has received immense attention from the information retrieval, information extraction, machine learning, and natural language processing communities in the last 15 years (Hirschman and Gaizauskas 2001; Strzalkowski and Harabagiu 2008; Kotov and Zhai 2010). The main goal of QA systems is to retrieve relevant answers to natural language questions from a collection of documents rather than using keyword matching techniques to extract documents. Automated QA research focuses on how to respond with exact answers to a wide variety of questions, including: factoid, list, definition, how, why, hypothetical, semantically constrained, and cross-lingual questions (Simmons 1965; | Towards Topic-to-Question Generation |
d236779169 | ||
d662341 | In this paper, we investigate the use of a multivariate Poisson model and feature weighting to learn a naive Bayes text classifier. Our new naive Bayes text classification model assumes that a document is generated by a multivariate Poisson model, while previous works consider a document as a vector of binary term features based on the presence or absence of each term. We also explore the use of feature weighting for naive Bayes text classification rather than feature selection, which is a quite costly process when a small number of new training documents are continuously provided. Experimental results on the two test collections indicate that our new model, with the proposed parameter estimation and the feature weighting technique, leads to substantial improvements compared to the unigram language model classifiers that are known to outperform the original pure naive Bayes text classifiers. | Poisson Naive Bayes for Text Classification with Feature Weighting |
d13973810 | A common site of language use is interactive dialogue between two people situated together in shared time and space. In this paper, we present a statistical model for understanding natural human language that works incrementally (i.e., does not wait until the end of an utterance to begin processing), and is grounded by linking semantic entities with objects in a shared space. We describe our model, show how a semantic meaning representation is grounded with properties of real-world objects, and further show that it can ground with embodied, interactive cues such as pointing gestures or eye gaze. | Situated Incremental Natural Language Understanding using a Multimodal, Linguistically-driven Update Model |
d18824033 | In this paper, we propose utilising eye gaze information for estimating parameters of a Japanese predicate argument structure (PAS) analysis model. We employ not only linguistic information in the text, but also the information of annotator eye gaze during the annotation process. We hypothesise that annotators' frequent looks at certain candidates imply those candidates' plausibility of being the argument of the predicate. Based on this hypothesis, we consider annotator eye gaze for estimating the model parameters of the PAS analysis. The evaluation experiment showed that introducing eye gaze information increased the accuracy of the PAS analysis by 0.05 compared with the conventional methods. | Parameter estimation of Japanese predicate argument structure analysis model using eye gaze information |
d1679655 | ||
d7982795 | We envisioned responsive generic hierarchical text summarization with summaries organized by topic and paragraph based on hierarchical structure topic models. But we had to be sure that topic models were stable for the sampled corpora. To that end we developed a methodology for aligning multiple hierarchical structure topic models run over the same corpus under similar conditions, calculating a representative centroid model, and reporting stability of the centroid model. We ran stability experiments for standard corpora and a development corpus of Global Warming articles. We found flat and hierarchical structures of two levels plus the root offer stable centroid models, but hierarchical structures of three levels plus the root didn't seem stable enough for use in hierarchical summarization. | Topic Model Stability for Hierarchical Summarization |
d15945816 | In automatic summarization, centrality is the notion that a summary should contain the core parts of the source text. Current systems use centrality, along with redundancy avoidance and some sentence compression, to produce mostly extractive summaries. In this paper, we investigate how summarization can advance past this paradigm towards robust abstraction by making greater use of the domain of the source text. We conduct a series of studies comparing human-written model summaries to system summaries at the semantic level of caseframes. We show that model summaries (1) are more abstractive and make use of more sentence aggregation, (2) do not contain as many topical caseframes as system summaries, and (3) cannot be reconstructed solely from the source text, but can be if texts from in-domain documents are added. These results suggest that substantial improvements are unlikely to result from better optimizing centrality-based criteria, but rather more domain knowledge is needed. | Towards Robust Abstractive Multi-Document Summarization: A Caseframe Analysis of Centrality and Domain |
d45897261 | Stylus is a translation software product for the Russian language. The paper describes the structure of the dictionary database and the features of the product that make it possible to adapt it to individual translation needs. | THE TOOLS FOR ADAPTING THE MACHINE TRANSLATION SYSTEM TO INDIVIDUAL NEEDS |
d227923674 | Event information is usually scattered across multiple sentences within a document. The local sentence-level event extractors often yield many noisy event role filler extractions in the absence of a broader view of the document-level context. Filtering spurious extractions and aggregating event information in a document remains a challenging problem. Following the observation that a document has several relevant event regions densely populated with event role fillers, we build graphs with candidate role filler extractions enriched by sentential embeddings as nodes, and use graph attention networks to identify event regions in a document and aggregate event information. We characterize edges between candidate extractions in a graph into rich vector representations to facilitate event region identification. The experimental results on two datasets of two languages show that our approach yields new state-of-the-art performance for the challenging event extraction task. | Reconstructing Event Regions for Event Extraction via Graph Attention Networks |
d12599837 | Cross Lingual Word Semantic (CLWS) similarity is defined as the task of finding the semantic similarity between two words across languages. Semantic similarity has been very popular for computing the similarity between two words in the same language. CLWS similarity will prove to be very effective in the areas of Cross Lingual Information Retrieval, Machine Translation, Cross Lingual Word Sense Disambiguation, etc. In this paper, we discuss a system developed to compute the CLWS similarity of words between two languages, where one language is treated as resource-rich and the other as resource-scarce. The system is developed using WordNet. The intuition behind the system is that two words are semantically similar if their senses are similar to each other. The system is tested for English and Hindi, achieving 60.5% precision@1 and 72.91% precision@3. | Let Sense Bags Do Talking: Cross Lingual Word Semantic Similarity for English and Hindi |
d21725202 | This paper discusses the development and application of a Constraint Grammar parser for the Plains Cree language. The focus of this parser is the identification of relationships between verbs and arguments. The rich morphology and non-configurational syntax of Plains Cree make it an excellent candidate for the application of a Constraint Grammar parser, which is comprised of sets of constraints with two aims: 1) the disambiguation of ambiguous word forms, and 2) the mapping of syntactic relationships between word forms on the basis of morphological features and sentential context. Syntactic modelling of verb and argument relationships in Plains Cree is demonstrated to be a straightforward process, though various semantic and pragmatic features should improve the current parser considerably. When applied to even a relatively small corpus of Plains Cree, the Constraint Grammar parser allows for the identification of common word order patterns and for relationships between word order and information structure to become apparent. | Building a Constraint Grammar Parser for Plains Cree Verbs and Arguments |
d18633250 | This paper discusses constraints on grammaticalization, a primarily diachronic process through which lexical elements take on grammatical functions. In particular, it will argue that two constraints on this process, namely Persistence and Layering, explain the different distributional patterns of time-relationship adverbs in Japanese, Korean, English and German. Furthermore, it will suggest that the distributional difference between Japanese and Korean time-relationship adverbs is not an isolated phenomenon but is a reflection of the overall semantic typological differences between the two languages in the sense of Hawkins (1986). | Grammaticalization and Semantic Typology: Time-relationship Adverbs in |
d8905391 | This paper proposes a dependency tree-based SRL system with proper pruning and extensive feature engineering. Official evaluation on the CoNLL 2008 shared task shows that our system achieves 76.19 in labeled macro F1 for the overall task, 84.56 in labeled attachment score for syntactic dependencies, and 67.12 in labeled F1 for semantic dependencies on the combined test set, using the standalone MaltParser. Besides, this paper also presents our unofficial system by 1) applying a new effective pruning algorithm; 2) including additional features; and 3) adopting a better dependency parser, MSTParser. Unofficial evaluation on the shared task shows that our system achieves 82.53 in labeled macro F1, 86.39 in labeled attachment score, and 78.64 in labeled F1, using MSTParser on the combined test set. This suggests that proper pruning and extensive feature engineering contribute much to dependency tree-based SRL. | Dependency Tree-based SRL with Proper Pruning and Extensive Feature Engineering |
d227230415 | ||
d227230283 | We present a novel approach to unsupervised information extraction by identifying and extracting relevant concept-value pairs from textual data. The system's building blocks are domain-agnostic, making it universally applicable. In this paper, we describe each component of the system and how it extracts relevant economic information from U.S. Federal Open Market Committee (FOMC) statements. Our methodology achieves an impressive 96% accuracy for identifying relevant information for a set of seven economic indicators: household spending, inflation, unemployment, economic activity, fixed investment, federal funds rate, and labor market. | Information Extraction from Federal Open Market Committee Statements |
d218974037 | ||
d9108161 | Previous stochastic approaches to sentence realization do not include a tree-based representation of syntax. While this may be adequate or even advantageous for some applications, other applications profit from using as much syntactic knowledge as is available, leaving to a stochastic model only those issues that are not determined by the grammar. In this paper, we present three results in the context of surface realization: a stochastic tree model derived from a parsed corpus outperforms a tree model derived from an unannotated corpus; exploiting a hand-crafted grammar in conjunction with a tree model outperforms a tree model without a grammar; and exploiting a tree model in conjunction with a linear language model outperforms just the tree model. | Using TAGs, a Tree Model, and a Language Model for Generation |
d122705191 | Several classification and routing methods were implemented and compared. The experiments used FBIS documents from four categories, and the measures used were the tf.idf and Cosine similarity measures, and a maximum likelihood estimate based on assuming a Multinomial Distribution for the various topics (populations). In addition, the SMART program was run with 'lnc.ltc' weighting and compared to the others. Decisions for both our classification scheme (documents are put into any number of disjoint categories) and our routing scheme (documents are assigned a 'score' and ranked relative to each category) are based on the highest probability for correct classification or routing. All of the techniques described here are fully automatic, and use a training set of relevant documents to produce lists of distinguishing terms and weights. All methods (ours and the ones we compared to) gave excellent results for the classification task, while the one based on the Multinomial Distribution produced the best results on the routing task. | A SIMPLE PROBABILISTIC APPROACH TO CLASSIFICATION AND ROUTING |
d243928250 | Adaptive Machine Translation purports to dynamically include user feedback to improve translation quality. In a post-editing scenario, user corrections of machine translation output are thus continuously incorporated into translation models, reducing or eliminating repetitive error editing and increasing the usefulness of automated translation. In neural machine translation, this goal may be achieved via online learning approaches, where network parameters are updated based on each new sample. This type of adaptation typically requires higher learning rates, which can affect the quality of the models over time. Alternatively, less aggressive online learning setups may preserve model stability, at the cost of reduced adaptation to user-generated corrections. In this work, we evaluate different online learning configurations over time, measuring their impact on user-generated samples, as well as separate in-domain and out-of-domain datasets. Results in two different domains indicate that mixed approaches combining online learning with periodic batch fine-tuning might be needed to balance the benefits of online learning with model stability. | Online Learning over Time in Adaptive Neural Machine Translation |
d219308167 | ||
d65246 | CLS Corporate Language Services AG recently began offering the rapid post-editing of raw machine translation output to meet the rising demand for this service among clients. What is meant by rapid post-editing is the rough correction of machine-translated texts with emphasis on speed and denotative accuracy. In the preliminary phase of the project, CLS conducted a test among four in-house translators. The objective was to gain practical experience, establish workflow requirements and set up efficient post-editing processes. Text samples were selected from several subject categories, and post-edited in English, German and French. The participants were given 10, 15 and 30 minutes per page to complete their tasks. This paper aims to present the results of the post-editing test at CLS Corporate Language Services AG, and to examine the conditions under which a rapid post-editing service is feasible in a commercial environment. | Testing "Prompt": The Development of a Rapid Post-Editing Service at CLS Corporate Language Services AG, Switzerland |
d17939520 | In this paper we propose a partial parsing model which achieves robust parsing with a large HPSG grammar. Constraint-based precision grammars, like the HPSG grammar we are using for the experiments reported in this paper, typically lack robustness, especially when applied to real world texts. To maximally recover the linguistic knowledge from an unsuccessful parse, a proper selection model must be used. Also, the efficiency challenges usually presented by the selection model must be answered. Building on the work reported in Zhang et al. (2007a), we further propose a new partial parsing model that splits the parsing process into two stages, both of which use the bottom-up chart-based parsing algorithm. The algorithm is implemented and a preliminary experiment shows promising results. | Robust Parsing with a Large HPSG Grammar |
d17983379 | This paper presents a novel implementation of Translog-II. Translog-II is a Windows-oriented program to record and study reading and writing processes on a computer. In our research, it is an instrument to acquire objective, digital data of human translation processes. Like its predecessors, Translog 2000 and Translog 2006, Translog-II consists of two main components: Translog-II Supervisor and Translog-II User, which are used to create a project file, to run text production experiments (a user reads, writes or translates a text) and to replay the session. Translog-II produces a log file which contains all user activity data of the reading, writing, or translation session, and which can be evaluated by external tools. While there is a large body of translation process research based on Translog, this paper gives an overview of the Translog-II functions and its data visualization options. | Translog-II: a Program for Recording User Activity Data for Empirical Reading and Writing Research |
d8045822 | In this paper, we present an algorithm for learning a generative model of natural language sentences together with their formal meaning representations with hierarchical structures. The model is applied to the task of mapping sentences to hierarchical representations of their underlying meaning. We introduce dynamic programming techniques for efficient training and decoding. In experiments, we demonstrate that the model, when coupled with a discriminative reranking technique, achieves state-of-the-art performance when tested on two publicly available corpora. The generative model degrades robustly when presented with instances that are different from those seen in training. This allows a notable improvement in recall compared to previous models. | A Generative Model for Parsing Natural Language to Meaning Representations |
d45079713 | We present the national project "Parlare italiano: osservatorio degli usi linguistici", funded by the Italian Ministry of Education, Scientific Research and University (PRIN 2004). Ten research groups from various Italian universities participate in the project. The project has four fundamental objectives: 1) to plan a national website that collects the most recent theoretical and applied results on spoken language; 2) to create an observatory of the linguistic usages of the Italian spoken language; 3) to delineate and implement standard and formalized methods and procedures for the study of spoken language; 4) to develop a training program for young researchers. The website will be accessible starting from November 2006. | An observatory on Spoken Italian linguistic resources and descriptive standards |
d17117576 | Comma placements in Chinese text are relatively arbitrary, although there are some syntactic guidelines for them. In this research, we attempt to improve the readability of text by optimizing comma placements through integration of linguistic features of text and gaze features of readers. We design a comma predictor for general Chinese text based on conditional random field models with linguistic features. After that, we build a rule-based filter for categorizing commas in text according to their contribution to readability, based on the analysis of gazes of people reading text with and without commas. The experimental results show that our predictor reproduces the comma distribution in the Penn Chinese Treebank with an F1-score of 78.41, and commas chosen by our filter smooth certain gaze behaviors. | Modeling Comma Placement in Chinese Text for Better Readability using Linguistic Features and Gaze Information |
d5655642 | GATE is a widely used open-source solution for text processing with a large user community. It contains components for several natural language processing tasks. However, temporal information extraction functionality within GATE has been rather limited so far, despite being a prerequisite for many application scenarios in the areas of natural language processing and information retrieval. This paper presents an integrated approach to temporal information processing. We take state-of-the-art tools in temporal expression and event recognition and bring them together to form an openly-available resource within the GATE infrastructure. GATE-Time provides annotation in the form of TimeML events and temporal expressions complying with this mature ISO standard for temporal semantic annotation of documents. Major advantages of GATE-Time are (i) that it relies on HeidelTime for temporal tagging, so that temporal expressions can be extracted and normalized in multiple languages and across different domains, (ii) it includes a modern, fast event recognition and classification tool, and (iii) that it can be combined with different linguistic pre-processing annotations, and is thus not bound to license restricted preprocessing components. | GATE-Time: Extraction of Temporal Expressions and Events |
d8155707 | This paper discusses an information extraction (IE) system, Textract, in natural language (NL) question answering (QA) and examines the role of IE in QA applications. It shows: (i) Named Entity tagging is an important component for QA, (ii) an NL shallow parser provides a structural basis for questions, and (iii) high-level domain independent IE can result in a QA breakthrough. | A Question Answering System Supported by Information Extraction |
d262356072 | As language-based interaction becomes more ubiquitous and is used in an ever larger variety of situations, the challenge for NLG systems is to not only convey a certain message correctly, but also to do so in a way that is appropriate to the situation and the user. From various studies, we know that humans adapt the way they formulate their utterances to their conversational partners and may also change the way they say things as a function of the situation that the conversational partner is in (e.g. while talking to someone who is driving a car). Approaches from psycholinguistics (using information-theoretic measures as well as other complexity metrics) provide a way to formulate and quantify the demands that a certain formulation places on a hearer. In this talk, I will briefly survey ways of assessing human cognitive load in realistic settings, present current models of information density at the content level, and discuss the extent to which these measures have been found to drive choice of formulation in humans. | Invited Speaker |
d32737003 | The majority of core techniques for solving many problems in the Community Question Answering (CQA) task rely on similarity computation. This work focuses on similarity between two sentences (or questions, in subtask B) based on word embeddings. We exploit word importance levels in sentences or questions as similarity features, for classification and ranking with machine learning. Using only two types of similarity metric, our proposed method shows results comparable to more complex systems. On the subtask B 2017 dataset, this method ranks 7th out of 13 participants; on the 2016 dataset it ranks 8th of 12, outperforming some complex systems. Further, this approach is worth exploring as a baseline and is extensible to many tasks in CQA and other textual-similarity-based systems. | UINSUSKA-TiTech at SemEval-2017 Task 3: Exploiting Word Importance Levels for Similarity Features for CQA |
d9946972 | Preparations for MUC-4 were made starting in October 1991, the call for participation was issued in December, and the system development phase was well underway by February 1992. A dry run of the evaluation was conducted in late March, final testing was done in late May, and the conference was held in mid-June. The program committee approved an ambitious plan for updating various aspects of the MUC-3 evaluation design for use for MUC-4. Changes to the task definition, corpus, measures of performance, and test protocols were made in order to provide: greater focus on the issue of spurious data generation; isolation of text filtering performance; better isolation of language analysis performance; assessment of system independence from the training data; assessment of system development progress since MUC-3; more consistent scoring; and means to make valid score comparisons among systems. The MUC-3 measures of performance implicitly encouraged participants to strive to develop their systems to achieve high recall at the expense of high overgeneration. A few changes were made to the template scoring software to make the generation of spurious data more apparent. One of these changes focuses attention on overgeneration at the slot level (generating more slot values than were expected), while the others focus attention on overgeneration at the template level (generating more templates than were expected). To address the spurious slot-value issue, an additional method of assessing penalties for missing and spurious data (called the "Matched/Spurious" method) was incorporated, completing the picture provided by the three that had been developed for MUC-3. To address the spurious template issue, a preliminary step in the alignment of response templates with key templates was implemented that requires that minimal "content-based mapping conditions" be met in order for alignment to occur. Response templates that fail to meet these minimal conditions Footnotes: New Mexico State University teamed with Brandeis University for MUC-4, and Carnegie Mellon University teamed with General Electric. The conference was hosted by PRC, Inc. at their conference center in McLean, VA. The MUC-4 program committee included B. Sundheim (NRaD), chair; N. Chinchor (SAIC); R. Grishman (NYU); J. Hobbs (SRI); D. Lewis (U Chicago); L. Rau (GE); C. Weir (Paramax). Readers unfamiliar with the usage of the terms "recall," "precision," and "overgeneration" as information extraction evaluation metrics should refer to [3]. | OVERVIEW OF THE FOURTH MESSAGE UNDERSTANDING EVALUATION AND CONFERENCE: INTRODUCTION |
d11404292 | We introduce a novel neural easy-first decoder that learns to solve sequence tagging tasks in a flexible order. In contrast to previous easy-first decoders, our models are end-to-end differentiable. The decoder iteratively updates a "sketch" of the predictions over the sequence. At its core is an attention mechanism that controls which parts of the input are strategically the best to process next. We present a new constrained softmax transformation that ensures the same cumulative attention to every word, and show how to efficiently evaluate and backpropagate over it. Our models compare favourably to BILSTM taggers on three sequence tagging tasks. (This research was partially carried out during an internship at Unbabel.) | Learning What's Easy: Fully Differentiable Neural Easy-First Taggers |
d7922149 | We consider the problem of predicting measurable responses to scientific articles based primarily on their text content. Specifically, we consider papers in two fields (economics and computational linguistics) and make predictions about downloads and within-community citations. Our approach is based on generalized linear models, allowing interpretability; a novel extension that captures first-order temporal effects is also presented. We demonstrate that text features significantly improve accuracy of predictions over metadata features like authors, topical categories, and publication venues. | Predicting a Scientific Community's Response to an Article |
d11617744 | This paper describes a computational model of human sentence processing based on the principles and parameters paradigm of current linguistic theory. The syntactic processing model posits four modules, recovering phrase structure, long-distance dependencies, coreference, and thematic structure. These four modules are implemented as meta-interpreters over their relevant components of the grammar, permitting variation in the deductive strategies employed by each module. These four interpreters are also 'coroutined' via the freeze directive of constraint logic programming to achieve incremental interpretation across the modules. | Multiple Interpreters in a Principle-Based Model of Sentence Processing |
d21725617 | In this paper, we present the first analysis of bottom-up manual semantic clustering of verbs in three languages, English, Polish and Croatian. Verb classes including syntactic and semantic information have been shown to support many NLP tasks by allowing abstraction from individual words and thereby alleviating data sparseness. The availability of such classifications is however still non-existent or limited in most languages. While a range of automatic verb classification approaches have been proposed, high-quality resources and gold standards are needed for evaluation and to improve the performance of NLP systems. We investigate whether semantic verb classes in three different languages can be reliably obtained from native speakers without linguistics training. The analysis of inter-annotator agreement shows an encouraging degree of overlap in the classifications produced for each language individually, as well as across all three languages. Comparative examination of the resultant classifications provides interesting insights into cross-linguistic semantic commonalities and patterns of ambiguity. | Acquiring Verb Classes Through Bottom-Up Semantic Verb Clustering |
d14810207 | This paper describes our approach to the development of a Proposition Bank, which involves the addition of semantic information to the Penn English Treebank. Our primary goal is the labeling of syntactic nodes with specific argument labels that preserve the similarity of roles such as the window in John broke the window and the window broke. After motivating the need for explicit predicate argument structure labels, we briefly discuss the theoretical considerations of predicate argument structure and the need to maintain consistency across syntactic alternations. The issues of consistency of argument structure across both polysemous and synonymous verbs are also discussed and we present our actual guidelines for these types of phenomena, along with numerous examples of tagged sentences and verb frames. Metaframes are introduced as a technique for handling similar frames among near-synonymous verbs. We conclude with a summary of the current status of the annotation process. | From TreeBank to PropBank |
d14133435 | The Inter-Lingual-Index (ILI) in the EuroWordNet architecture is an initially unstructured fund of concepts which functions as the link between the various language wordnets. The ILI concepts originate from WordNet 1.5, and have been restructured on the basis of aspects of the internal structure of WordNet, links between WordNet and other resources, and multilingual mapping between the wordnets. This leads to a differentiation of the status of ILI concepts, a reduction of the WordNet polysemy, and a greater connectivity between the wordnets. The restructured ILI represents the first step towards a standardized set of word meanings, is a working platform for further development and testing, and can be put to use in NLP tasks such as (multilingual) information retrieval. | Towards a Universal Index of Meaning |
d257258183 | In the area of Arabic Natural Language Processing, most of the undertaken research and achievements have mainly involved Modern Standard Arabic (MSA). The various Arabic dialects (AD) are still considered to be among under-resourced languages. It is only in the last decade that these dialects began to arouse the interest of NLP researchers, especially given their increasing use on the social web. In this work, we focus on the Tunisian dialect (TD), and propose to provide a state of the art on the automatic processing of this dialect. A review of the works carried out to date and a detailed inventory of the various NLP tools and language resources available for the TD are presented and discussed. KEYWORDS: Tunisian dialect, language resources, natural language processing. Note: among the works covering a group of dialects including Tunisian, without specifying the data sizes or the performance obtained specifically for this dialect, we can cite (Suwaileh et al., 2016; Eldesouki et al., 2017). | Un état de l'art du traitement automatique du dialecte tunisien |
d219305462 | | The Language of Time: A Reader |
d5141260 | In this synchronic study, I shall adopt a corpus-based approach to investigate the semantic change of V-diao in Mandarin. Semantically, V-diao constructions fall into three categories: A) Physical disappearance from its original position, with the V slot filled by physical verbs, such as tao-diao "escape," diu-diao "throw away," and so on. B) Disappearance from a certain conceptual domain, rather than from the physical space, with the V slot filled by less physically perceivable verbs, such as jie-diao "quit," wang-diao "forget," and the like. C) The third category of V-diao involves the speaker's subjective, always negative, attitude toward the result. Examples include: lan-diao "rot," ruan-diao "soften," huang-diao "yellow," and so forth. It is claimed in this paper that the polysemy between types A and B is motivated by metaphorical transfer [Sweetser, 1990; Bybee, Perkins and Pagliuca, 1994; Heine, Claudi and Hunnemeyer, 1991]. Based roughly on Huang and Chang [1996], I demonstrate that a cognitive restriction on selection of the verb will cause further repetitive occurrence of negative verbs in the V slot. Finally, I shall claim that pragmatic strengthening [Hopper and Traugott, 1993; Bybee, Perkins and Pagliuca, 1994] contributes to the emergence of unfavourable meaning in Type C. Hopefully, this research can serve as a valid argument for the interaction of language use and grammar, and the conceptual basis of human language. | Metaphorical Transfer and Pragmatic Strengthening: On the Development of V-diao in Mandarin |
d62853828 | In this paper we present an evaluation of overlap-based measures of similarity for sentences in the same language. The measures include syntactic and semantic information, and to that end they incorporate grammatical relations and flat logical forms. A full parser is required to build the above information. Separate extrinsic evaluations within the context of question answering have been made with two different parsers to test the impact of the parser and the overlap measures. | Towards Semantic-Based Overlap Measures for Question Answering |
d12492108 | We describe a multilingual open source CALL game, CALL-SLT, which reuses speech translation technology developed using the Regulus platform to create an automatic conversation partner that allows intermediate-level language students to improve their fluency. We contrast CALL-SLT with Wang's and Seneff's translation game systems, in particular focussing on three issues. First, we argue that the grammar-based recognition architecture offered by Regulus is more suitable for this type of application; second, that it is preferable to prompt the student in language-neutral form, rather than in the L1; and third, that we can profitably record successful interactions by native speakers and store them to be reused as online help for students. The current system, which will be demoed at the conference, supports four L2s (English, French, Japanese and Swedish) and two L1s (English and French). We conclude by describing an evaluation exercise, where a version of CALL-SLT configured for English L2 and French L1 was used by several hundred high school students. About half of the subjects reported positive impressions of the system. | A Multilingual CALL Game Based on Speech Translation |
d51918581 | One important problem in task-based conversations is that of effectively updating the belief estimates of user-mentioned slot-value pairs. Given a user utterance, the intent of a slot-value pair is captured using dialog acts (DAs) expressed in that utterance. However, in certain cases, DAs fail to capture the actual update intent of the user. In this paper, we describe such cases and propose a new type of semantic class for user intents. This new type, Update Intents (UIs), is directly related to the type of update a user intends to perform for a slot-value pair. We define five types of UIs, which are independent of the domain of the conversation. We build a multi-class classification model using LSTMs to identify the type of UI in user utterances in the Restaurant and Shopping domains. Experimental results show that our models achieve strong classification performance in terms of F-1 score. | Identifying Domain Independent Update Intents in Task Based Dialogs |
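The LSTM-based multi-class classifier this row describes can be pictured with a minimal PyTorch sketch, given purely as an illustration: the architecture details (embedding size, hidden size, a single LSTM layer) and all names are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an LSTM classifier over five update-intent types.
import torch
import torch.nn as nn

class UpdateIntentClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, n_intents=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_intents)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) padded utterances
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.lstm(embedded)
        return self.out(h_n[-1])          # logits over the five UI types

model = UpdateIntentClassifier(vocab_size=10_000)
logits = model(torch.randint(1, 10_000, (8, 20)))   # batch of 8 utterances
print(logits.shape)                                  # torch.Size([8, 5])
```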
d53081209 | Emotion recognition in conversations is crucial for building empathetic machines. Current work in this domain does not explicitly consider the inter-personal influences that thrive in the emotional dynamics of dialogues. To this end, we propose the Interactive COnversational memory Network (ICON), a multimodal emotion detection framework that extracts multimodal features from conversational videos and hierarchically models the self- and inter-speaker emotional influences into global memories. Such memories generate contextual summaries which aid in predicting the emotional orientation of utterance-videos. Our model outperforms state-of-the-art networks on multiple classification and regression tasks in two benchmark datasets. | ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection |
d249204495 | This paper presents the project proposed by the DeBiasByUs team resulting from the Artificially Correct Hackathon. We briefly explain the hackathon challenge on 'Database and detection of gender bias in A.I. translations', highlight the importance of gender bias in Machine Translation (MT), and describe our solution, the current status of the project, and our future plans. | DeBiasByUs: Raising Awareness and Creating a Database of MT Bias |
d14110841 | We propose a supervised lexical substitution system that does not use separate classifiers per word and is therefore applicable to any word in the vocabulary. Instead of learning word-specific substitution patterns, a global model for lexical substitution is trained on delexicalized (i.e., non-lexical) features, which allows us to exploit the power of supervised methods while being able to generalize beyond target words in the training set. This way, our approach remains technically straightforward, and provides better performance and similar coverage in comparison to unsupervised approaches. Using features from lexical resources, as well as a variety of features computed from large corpora (n-gram counts, distributional similarity) and a ranking method based on the posterior probabilities obtained from a Maximum Entropy classifier, we improve over the state of the art in the LexSub Best-Precision metric and the Generalized Average Precision measure. The robustness of our approach is demonstrated by evaluating it successfully on two different datasets. | Supervised All-Words Lexical Substitution using Delexicalized Features |
d9028020 | This paper describes Dublin City University's (DCU) submission to the WMT 2014 Medical Summary task. We report our results on the test data set in the French to English translation direction. We also report statistics collected from the corpora used to train our translation system. We conducted our experiments on the Moses 1.0 phrase-based translation system framework. We performed a variety of experiments on translation models, reordering models, the operation sequence model and the language model. We also experimented with data selection and with removing the length constraint for phrase-pair extraction. | Experiments in Medical Translation Shared Task at WMT 2014 |
d201626593 | In this study, we describe our methods to automatically classify Twitter posts conveying events of adverse drug reaction (ADR). Based on our previous experience in tackling the ADR classification task, we empirically applied the vote-based undersampling ensemble (VUE) approach along with a linear support vector machine (SVM) to develop our classifiers as part of our participation in the ACL 2019 Social Media Mining for Health Applications (SMM4H) shared task 1. The best-performing model on the test set was trained on a merged corpus consisting of the datasets released by SMM4H 2017 and 2019. Using VUE, the corpus was randomly under-sampled with a 2:1 ratio between the negative and positive classes to create an ensemble, using the linear kernel, trained with features including bag-of-words, domain knowledge, negation and word embeddings. The best-performing model achieved an F-measure of 0.551, which is about 5% higher than the average F-score of the 16 teams. | |
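A minimal scikit-learn sketch of the vote-based undersampling ensemble idea described above; the member count, the bag-of-words-only feature set, and all function names are assumptions rather than the authors' code.

```python
# Hypothetical sketch of a vote-based undersampling ensemble (VUE)
# with linear SVMs; labels are assumed to be 0 (negative) / 1 (positive).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import CountVectorizer

def train_vue(texts, labels, n_members=11, neg_pos_ratio=2, seed=0):
    """Each ensemble member sees all positives plus a random
    2:1 undersample of the negatives."""
    rng = np.random.default_rng(seed)
    vec = CountVectorizer()                      # bag-of-words features
    X = vec.fit_transform(texts)
    y = np.asarray(labels)
    pos_idx = np.where(y == 1)[0]
    neg_idx = np.where(y == 0)[0]
    members = []
    for _ in range(n_members):
        sub_neg = rng.choice(neg_idx, size=neg_pos_ratio * len(pos_idx),
                             replace=False)
        idx = np.concatenate([pos_idx, sub_neg])
        clf = LinearSVC()
        clf.fit(X[idx], y[idx])
        members.append(clf)
    return vec, members

def predict_vue(vec, members, texts):
    """Majority vote over the ensemble members."""
    X = vec.transform(texts)
    votes = np.stack([m.predict(X) for m in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```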
d33185611 | The field of language testing has long led the way in integrative, performance-based assessment. However, the use of technology in language testing has often meant limiting assessment options. We believe computer-mediated language assessment can enrich opportunities for language learners to demonstrate what they are able to do with their second language. In this paper, we describe the rationale and operation of the Computerized Oral Proficiency Instrument (COPI), a multimedia, computer-administered oral proficiency test. While at present speech performances on the COPI are evaluated by trained raters using a national standard, the COPI affords an excellent opportunity to investigate the use of Natural Language Processing for computer-assisted evaluation. | Multimedia Computer Technology and Performance-Based Language Testing: A Demonstration of the Computerized Oral Proficiency Instrument (COPI) |
d5732270 | This paper describes a simple approach of statistical language modelling for bilingual lexicon acquisition from Amharic-English parallel corpora. The goal is to induce a seed translation lexicon from sentence-aligned corpora. The seed translation lexicon contains matches of Amharic lexemes to weakly inflected English words. Purely statistical measures of term distribution are used as the basis for finding correlations between terms. An authentic scoring scheme is codified based on distributional properties of words. For low-frequency terms, a two-step procedure is used: first a rough alignment, and then an automatic filtering to sift the output and improve the precision. Given the disparity of the languages and the small size of the corpora used, the results demonstrate the viability of the approach. | Data-driven Amharic-English Bilingual Lexicon Acquisition |
d8154011 | We describe a constraint-based morphological disambiguation system in which individual constraint rules vote on matching morphological parses, followed by its implementation using finite state transducers. Voting constraint rules have a number of desirable properties: the outcome of the disambiguation is independent of the order of application of the local contextual constraint rules. Thus the rule developer is relieved from worrying about conflicting rule sequencing. The approach can also combine statistically and manually obtained constraints, and incorporate negative constraints that rule out certain patterns. The transducer implementation has a number of desirable properties compared to other finite state tagging and light parsing approaches implemented with automata intersection. The most important of these is that since constraints do not remove parses, there is no risk of an overzealous constraint "killing a sentence" by removing all parses of a token during intersection. After a description of our approach, we present preliminary results from tagging the Wall Street Journal Corpus with this approach. With about 400 statistically derived constraints and about 570 manual constraints, we can attain an accuracy of 97.82% on the training corpus and 97.29% on the test corpus. We then describe a finite state implementation of our approach and discuss various related issues. | Implementing Voting Constraints with Finite State Transducers |
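The key property named in this row, namely that rules add votes to matching parses instead of deleting them, so rule order cannot matter and no token can lose all its parses, can be shown with a toy sketch. The rule representation and the example parse strings below are invented; the paper's actual rules are contextual and compiled into finite state transducers.

```python
# Toy illustration of constraint voting over morphological parses.
def disambiguate(parses, rules):
    """parses: list of candidate analyses for one token (strings here).
    rules: list of (predicate, vote) pairs; predicate(parse) -> bool."""
    scores = {p: 0 for p in parses}
    for predicate, vote in rules:        # order of application is irrelevant
        for p in parses:
            if predicate(p):
                scores[p] += vote        # negative votes penalise patterns
    # No parse is ever removed, so even an over-zealous negative rule
    # cannot "kill" the token: the best-scoring parse always survives.
    return max(parses, key=lambda p: scores[p])

rules = [
    (lambda p: p.endswith("+Noun"), 2),       # prefer the nominal reading
    (lambda p: p.endswith("+Verb+Imp"), -1),  # penalise the imperative reading
]
print(disambiguate(["kitap+Noun", "kitap+Verb+Imp"], rules))  # kitap+Noun
```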
d9424829 | Recent theories of focusing and reference rely crucially on discourse structure to constrain the availability of discourse entities for reference, but deriving the structure of an arbitrary discourse has proved to be a significant problem. A useful level of problem reduction may be achieved by analyzing discourse in which the structure is explicit, rather than implicit. In this paper we consider a genre of explicitly-structured discourse: the Trouble and Failure Report (TFR), whose structure is both explicit and constant across discourses. We present the results of an analysis of a corpus of 331 TFRs, with particular attention to discourse segmentation and focusing. We then describe how the Trouble and Failure Report was automated in a prototype data collection and information retrieval application, using the PUNDIT natural-language processing system. | ANALYZING EXPLICITLY-STRUCTURED DISCOURSE IN A LIMITED DOMAIN: TROUBLE AND FAILURE REPORTS |
d8478554 | | A Multi-Level Account of Cleft Constructions in Discourse |
d399489 | This article describes a real (non-synthetic) active-learning experiment to obtain supersense annotations for Danish. We compare two instance selection strategies, namely lowest prediction confidence (MAX) and sampling from the confidence distribution (SAMPLE). We evaluate their performance during the annotation process, across domains for the final resulting system, as well as against in-domain adjudicated data. The SAMPLE strategy yields competitive models that are more robust than the overly length-biased selection criterion of MAX. | Active learning for sense annotation |
d233305448 | ||
d21721737 | The paper describes an automatic Twitter sentiment lexicon creator and a lexicon-based sentiment analysis system. The lexicon creator is based on a Pointwise Mutual Information approach, utilizing 6.25 million automatically labeled tweets and 103 million unlabeled, with the created lexicon consisting of about 3 000 entries. In a comparison experiment, this lexicon beat a manually annotated lexicon. A sentiment analysis system utilizing the created lexicon, and handling both negation and intensification, produces results almost on par with sophisticated machine learning-based systems, while significantly outperforming those in terms of run-time. | Utilizing Large Twitter Corpora to Create Sentiment Lexica |
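A minimal sketch of a PMI-based lexicon builder of the kind this row describes, assuming binary pos/neg tweet labels; the smoothing constant, frequency threshold, and function names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical PMI-based sentiment lexicon builder.
import math
from collections import Counter

def build_lexicon(tweets, labels, min_count=10):
    """tweets: list of token lists; labels: 'pos' or 'neg' per tweet.
    Score(w) = PMI(w, pos) - PMI(w, neg)."""
    word = Counter()
    word_label = Counter()
    label = Counter(labels)
    for toks, lab in zip(tweets, labels):
        for w in set(toks):             # document frequency, not raw counts
            word[w] += 1
            word_label[(w, lab)] += 1
    n = len(tweets)
    lexicon = {}
    for w, c in word.items():
        if c < min_count:               # drop rare, unreliable words
            continue
        pmi = {}
        for lab in ("pos", "neg"):
            p_joint = (word_label[(w, lab)] + 0.5) / n   # add-0.5 smoothing
            p_indep = (c / n) * (label[lab] / n)
            pmi[lab] = math.log2(p_joint / p_indep)
        lexicon[w] = pmi["pos"] - pmi["neg"]
    return lexicon
```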
d51879885 | Technological advancements in the World Wide Web, and in social networks in particular, coupled with an increase in social media usage, have led to a positive correlation between the expression of suicidal ideation on websites such as Twitter and cases of suicide. This paper proposes a novel supervised approach for detecting suicidal ideation in content on Twitter. A set of features is proposed for training both linear and ensemble classifiers over a dataset of manually annotated tweets. The performance of the proposed methodology is compared against four baselines that utilize varying approaches to validate its utility. The results are finally summarized by reflecting on the effect of including each of the proposed features on suicidal ideation detection. | A Computational Approach to Feature Extraction for Identification of Suicidal Ideation in Tweets |
d11199915 | In this paper we present recent work contributing to the transformation of the initial PolNet, a Polish wordnet developed at the Adam Mickiewicz University, into a Lexicon Grammar of Polish. We focus on granularity issues that occurred at the stage of including verb-noun collocations as well as information related to language registers. | Recent Advances in Development of a Lexicon-Grammar of Polish: PolNet 3.0 |
d207988685 | ||
d31716358 | The work presented in this paper explores the use of Indonesian transliteration to support English pronunciation practice. It is mainly aimed at Indonesian speakers who have no or minimal English language skills. The approach implemented combines a rule-based and a statistical method. The rules of English-Phone-to-Indonesian-Grapheme mapping are implemented with a Finite State Transducer (FST), followed by a statistical method, a grapheme-based trigram language model. The Indonesian transliteration generated was used as a means to support the learners, whose speech was then recorded. The speech recordings were evaluated by 19 participants: 8 English native speakers and 11 non-native speakers. The results show that the transliteration contributes positively to the improvement of their English pronunciation. | English to Indonesian Transliteration to Support English Pronunciation Practice |
d11558985 | The growing popularity of multimedia documents requires language technologies to approach automatic language analysis and generation from yet another perspective: that of its use in multimodal communication. In this paper, we present a support tool for COSMOROE, a theoretical framework for modelling multimedia dialectics. The tool is a text-based search interface that facilitates the exploration of a corpus of audiovisual files, annotated with the COSMOROE relations. | A text-based search interface for Multimedia Dialectics |
d9956630 | In the Danish CLARIN-DK infrastructure, chaining language technology (LT) tools into a workflow is easy even for a non-expert user, because she only needs to specify the input and the desired output of the workflow. With this information and the registered input and output profiles of the available tools, the CLARIN-DK workflow management system (WMS) computes combinations of tools that will give the desired result. This advanced functionality was originally not envisaged, but came within reach by writing the WMS partly in Java and partly in a programming language for symbolic computation, Bracmat. Handling LT tool profiles, including the computation of workflows, is easier with Bracmat's language constructs for tree pattern matching and tree construction than with the language constructs offered by mainstream programming languages. | Implementation of a Workflow Management System for Non-Expert Users |
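The workflow computation this row describes, finding a chain of tools whose registered input/output profiles connect the user's input to the desired output, amounts to a small graph search. The toy Python illustration below uses invented tool names and single-format profiles; the real CLARIN-DK WMS matches much richer profiles and is written in Java and Bracmat.

```python
# Toy illustration of profile-based workflow computation via BFS.
from collections import deque

TOOLS = {
    "tokenizer": ("plain_text", "tokens"),     # (input profile, output profile)
    "tagger": ("tokens", "pos_tags"),
    "lemmatizer": ("pos_tags", "lemmas"),
}

def find_workflow(source, target):
    """Breadth-first search over tool profiles; returns a tool chain."""
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        fmt, chain = queue.popleft()
        if fmt == target:
            return chain
        for name, (inp, out) in TOOLS.items():
            if inp == fmt and out not in seen:
                seen.add(out)
                queue.append((out, chain + [name]))
    return None

print(find_workflow("plain_text", "lemmas"))
# ['tokenizer', 'tagger', 'lemmatizer']
```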
d16751193 | This paper describes the system submitted by the IIIT-H team for the CogALex-2014 shared task on multiword association. The task involves generating a ranked list of responses to a set of stimulus words. The two-stage approach combines the strength of neural network based word embeddings and frequency based association measures. The system achieves an accuracy of 34.9% over the test set. | A Two-Stage Approach for Computing Associative Responses to a Set of Stimulus Words |
d15621723 | Jussi Karlgren, Björn Gambäck and Christer Samuelsson, Stockholm. Abstract: The paper describes an experiment on a set of translated sentences obtained from a large group of informants. We discuss the question of transfer equivalence, noting that several target-language translations of a given source-language sentence will be more or less equivalent. Different equivalence classes should form clusters in the set of translated sentences. The main topic of the paper is to examine how these clusters can be found: we consider, and discard as inappropriate, several different methods of examining the sentence set, including traditional syntactic analysis, finding the most likely translation with statistical methods, and simple string distance measures. | Clustering Sentences - Making Sense of Synonymous Sentences |
d15433628 | The most critical issue in generating and recognizing paraphrases is the development of wide-coverage paraphrase knowledge. Previous work on paraphrase acquisition has collected lexicalized pairs of expressions; however, the results do not ensure full coverage of the various paraphrase phenomena. This paper focuses on productive paraphrases realized by general transformation patterns, and addresses the issues in generating instances of phrasal paraphrases with those patterns. Our probabilistic model computes how likely two phrases are to be correct paraphrases. The model consists of two components: (i) a structured N-gram language model that ensures grammaticality and (ii) a distributional similarity measure for estimating semantic equivalence and substitutability. | A Probabilistic Model for Measuring Grammaticality and Similarity of Automatically Generated Paraphrases of Predicate Phrases |
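The two-component model named in this row can be sketched schematically: one score for grammaticality from an n-gram language model and one for substitutability from distributional similarity. The simple linear interpolation below and all names are assumptions; the paper's actual probabilistic model is more elaborate.

```python
# Schematic paraphrase scoring: grammaticality x substitutability.
import math

def cosine(u, v):
    """Cosine similarity between two context count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def paraphrase_score(lm_logprob, vec_src, vec_tgt, alpha=0.5):
    """Interpolate LM grammaticality with distributional similarity.
    lm_logprob: log-probability of the candidate phrase in context,
    from any n-gram LM; vec_src / vec_tgt: context vectors of the
    original and generated phrases."""
    grammaticality = math.exp(lm_logprob)      # back to probability space
    similarity = cosine(vec_src, vec_tgt)
    return alpha * grammaticality + (1 - alpha) * similarity

print(paraphrase_score(-2.3, [1, 0, 2], [1, 1, 2]))  # toy inputs
```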
d226307746 | ||
d12203752 | Evaluation is critical in offering feedback on progress to both developers and potential consumers of NLG technology. However, evaluation has thus far not been as well-established in NLG as it has become in NLU. This panel will discuss evaluation methods and resources. It is aimed at building a better understanding of NLG evaluation methods, and hopefully arriving at steps to facilitate future evaluations. | Appendix II: Discussion Panel on Evaluation Research in Generation (Moderator: Inderjeet Mani) |
d218977353 | ||
d14932490 | Electronic Medical Records (EMRs) encode an extraordinary amount of medical knowledge. Collecting and interpreting this knowledge, however, belies a significant level of clinical understanding. Automatically capturing the clinical information is crucial for performing comparative effectiveness research. In this paper, we present a data-driven approach to model semantic dependencies between medical concepts, qualified by the beliefs of physicians. The dependencies, captured in a patient cohort graph of clinical pictures and therapies is further refined into a probabilistic graphical model which enables efficient inference of patient-centered treatment or test recommendations (based on probabilities). To perform inference on the graphical model, we describe a technique of smoothing the conditional likelihood of medical concepts by their semantically-similar belief values. The experimental results, as compared against clinical guidelines are very promising. | Clinical Data-Driven Probabilistic Graph Processing |
d9926816 | Profile inference of SNS users is valuable for marketing, targeted advertisement, and opinion polls. Several studies examining profile inference have been reported to date. Although information of various types is included in SNS, most such studies only use text information. It is expected that incorporating information of other types into text classifiers can provide more accurate profile inference. As described in this paper, we propose a combined method of text processing and image processing to improve gender inference accuracy. By applying a simple formula to combine the two results derived from a text processor and an image processor, a significant increase in accuracy was confirmed. | Twitter User Gender Inference Using Combined Analysis of Text and Image Processing |
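The row does not spell out the "simple formula"; one plausible reading, shown here purely as an assumption, is a weighted combination of the two classifiers' posterior probabilities in log-odds space.

```python
# Hypothetical combination of text and image gender posteriors.
import math

def combined_gender_score(p_male_text, p_male_image, w=0.5):
    """Weighted combination of two posteriors for 'male' in log-odds
    space; w is a hypothetical tuning parameter."""
    eps = 1e-9                       # guard against log(0)
    lo_text = math.log((p_male_text + eps) / (1 - p_male_text + eps))
    lo_image = math.log((p_male_image + eps) / (1 - p_male_image + eps))
    combined = w * lo_text + (1 - w) * lo_image
    return 1 / (1 + math.exp(-combined))   # map back to a probability

print(round(combined_gender_score(0.8, 0.4), 3))  # ~0.62
```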
d16312729 | We present a small set of attachment heuristics for postnominal PPs occurring in full-text articles related to enzymes. A detailed analysis of the results suggests their utility for extraction of relations expressed by nominalizations (often with several attached PPs). The system achieves 82% accuracy on a manually annotated test corpus of over 3000 PPs from varied biomedical texts. | Postnominal Prepositional Phrase Attachment in Proteomics |
d237099286 | ||
d6217013 | Since its first implementation in 1995, the shallow NLG system TG/2 has been used as a component in many NLG applications that range from very shallow template systems to in-depth realization engines. TG/2 has continuously been refined, the Java brother implementation XtraGen has become available, and the grammar development environment eGram today allows for designing grammars on a more abstract level. Besides, a better understanding of the usability of shallow systems like TG/2 has emerged. The time has come to summarize these developments and look forward to new frontiers. | Ten Years After: An Update on TG/2 (and Friends) |
d199379748 |