| _id | text | title |
|---|---|---|
d14606450 | While it is widely recognized that streams of social media messages contain valuable information, such as important trends in users' interest in consumer products and markets, uncovering such trends is problematic due to the extreme volume of messages in such media. In the case of Twitter messages, following the interest in all known products all the time is technically infeasible. Information extraction (IE) narrows the topics to search for. In this paper, we present experiments on using deeper NLP-based processing of product-related events mentioned in news streams to restrict the volume of tweets that need to be considered, making the problem more tractable. Our goal is to analyze whether such a combined approach can help reveal correlations and how they may be captured. | Combined analysis of news and Twitter messages |
d799726 | This paper presents Engkoo, a system for exploring and learning language. It is built primarily by mining translation knowledge from billions of web pages, using the Internet to catch language in motion. Currently Engkoo is built for Chinese users who are learning English; however, the technology itself is language independent and can be extended in the future. At a system level, Engkoo is an application platform that supports a multitude of NLP technologies such as cross-language retrieval, alignment, sentence classification, and statistical machine translation. The data set that supports this system is primarily built by mining a massive set of bilingual terms and sentences from across the web. Specifically, web pages that contain both Chinese and English are discovered and analyzed for parallelism, then extracted and formulated into clear term definitions and sample sentences. This approach allows us to build perhaps the world's largest lexicon linking Chinese and English together, at the same time covering the most up-to-date terms as captured by the net. | Engkoo: Mining the Web for Language Learning |
d252624584 | We present an automatic verb classifier system that identifies inflectional classes in Abui (AVC-abz), a Papuan language of the Timor-Alor-Pantar family. The system combines manually annotated language data (the learning set) with the output of a morphological precision grammar (corpus data). The morphological precision grammar is trained on a fully glossed smaller corpus and applied to a larger corpus. Using the k-means algorithm, the system clusters inflectional classes discovered in the learning set. In the second step, a Naive Bayes algorithm assigns the verbs found in the corpus data to the best-fitting cluster. AVC-abz serves to advance and refine the grammatical analysis of Abui as well as to monitor corpus coverage and its gradual improvement. | Automatic Verb Classifier for Abui (AVC-abz) |
d225062494 | We propose a simple method to align multilingual contextual embeddings as a post-pretraining step for improved cross-lingual transferability of the pretrained language models. Using parallel data, our method aligns embeddings on the word level through the recently proposed Translation Language Modeling objective as well as on the sentence level via contrastive learning and random input shuffling. We also perform sentence-level code-switching with English when finetuning on downstream tasks. On XNLI, our best model (initialized from mBERT) improves over mBERT by 4.7% in the zero-shot setting and achieves comparable results to XLM for translate-train while using less than 18% of the same parallel data and 31% fewer model parameters. On MLQA, our model outperforms XLM-R Base, which has 57% more parameters than ours. | Multilingual BERT Post-Pretraining Alignment |
d253762051 | We present BiomedCurator, a web application that extracts structured data from scientific articles in PubMed and ClinicalTrials.gov. BiomedCurator uses state-of-the-art natural language processing techniques to fill the fields pre-selected by domain experts in the relevant biomedical area. The BiomedCurator web application includes: a text generation based model for relation extraction, entity detection and recognition, a text classification model for extracting several fields, information retrieval from an external knowledge base to retrieve IDs, and a pattern-based extraction approach that can extract several fields using regular expressions over the PubMed and ClinicalTrials.gov articles. Evaluation results show that the different approaches of the BiomedCurator web application system are effective for automatic data curation in the biomedical domain. | BiomedCurator: Data Curation for Biomedical Literature |
d227231259 | We revisit the problem of extracting dependency structures from the derivation structures of Combinatory Categorial Grammar (CCG). Previous approaches are often restricted to a narrow subset of CCG or support only one flavor of dependency tree. Our approach is more general and easily configurable, so that multiple styles of dependency tree can be obtained. In an initial case study, we show promising results for converting English, German, Italian, and Dutch CCG derivations from the Parallel Meaning Bank into (unlabeled) UD-style dependency trees. | Configurable Dependency Tree Extraction from CCG Derivations |
d219303231 | The rapid growth in IT over the last two decades has led to a growth in the amount of information available online. Social media has emerged as a new style of sharing information: a continuously, instantly updated source of information. In this position paper, we propose a framework for Information Extraction (IE) from unstructured user-generated content on social media. The framework proposes solutions to overcome the IE challenges in this domain, such as the short context, the noisy sparse content, and the uncertain content. To overcome the challenges facing IE from social media, state-of-the-art approaches need to be adapted to suit the nature of social media posts. The key components and aspects of our proposed framework are noisy text filtering, named entity extraction, named entity disambiguation, feedback loops, and uncertainty handling. | Information Extraction for Social Media |
d208030974 | Conventional word embeddings represent words with fixed vectors, which are usually trained based on co-occurrence patterns among words. In doing so, however, the power of such representations is limited, since the same word may function differently under different syntactic relations. To address this limitation, one solution is to incorporate relational dependencies of different words into their embeddings. Therefore, in this paper, we propose a multiplex word embedding model, which can be easily extended according to various relations among words. As a result, each word has a center embedding to represent its overall semantics, and several relational embeddings to represent its relational dependencies. Compared to existing models, our model can effectively distinguish words with respect to different relations without introducing unnecessary sparseness. Moreover, to accommodate various relations, we use a small dimension for relational embeddings, and our model is able to maintain their effectiveness. Experiments on selectional preference acquisition and word similarity demonstrate the effectiveness of the proposed model, and a further study of scalability also shows that our embeddings need only 1/20 of the original embedding size to achieve better performance. | Multiplex Word Embeddings for Selectional Preference Acquisition |
d233281686 | This paper describes FBK's submission to the end-to-end speech translation (ST) task at IWSLT 2019. The task consists of the "direct" translation (i.e., without an intermediate discrete representation) of English speech data derived from TED Talks or lectures into German texts. Our participation had a twofold goal: i) testing our latest models, and ii) evaluating the contribution of different data augmentation techniques to model training. On the model side, we deployed our recently proposed S-Transformer with logarithmic distance penalty, an ST-oriented adaptation of the Transformer architecture widely used in machine translation (MT). On the training side, we focused on data augmentation techniques recently proposed for ST and automatic speech recognition (ASR). In particular, we exploited augmented data in different ways and at different stages of the process. We first trained an end-to-end ASR system and used the weights of its encoder to initialize the decoder of our ST model (transfer learning). Then, we used an English-German MT system trained on large data to translate the English side of the English-French training set into German, and used this newly created data as additional training material. Finally, we trained our models using SpecAugment, an augmentation technique that randomly masks portions of the spectrograms in order to make them different at every training epoch. Our synthetic corpus and SpecAugment resulted in an improvement of 5 BLEU points over our baseline model on the test set of MuST-C En-De, reaching a score of 22.3 with a single end-to-end system. | Data Augmentation for End-to-End Speech Translation: FBK@IWSLT '19 |
d326944 | Neural machine translation (NMT), a new approach to machine translation, has achieved promising results comparable to those of traditional approaches such as statistical machine translation (SMT). Despite its recent success, NMT cannot handle a larger vocabulary because the training complexity and decoding complexity proportionally increase with the number of target words. This problem becomes even more serious when translating patent documents, which contain many technical terms that are observed infrequently. In this paper, we propose to select phrases that contain out-of-vocabulary words using the statistical approach of branching entropy. This allows the proposed NMT system to be applied to a translation task of any language pair without any language-specific knowledge about technical term identification. The selected phrases are then replaced with tokens during training and post-translated by the phrase translation table of SMT. Evaluation on Japanese-to-Chinese, Chinese-to-Japanese, Japanese-to-English and English-to-Japanese patent sentence translation proved the effectiveness of phrases selected with branching entropy, where the proposed NMT model achieves a substantial improvement over a baseline NMT model without our proposed technique. Moreover, the number of under-translation errors made by the baseline NMT model is roughly halved by the proposed NMT model. | Neural Machine Translation Model with a Large Vocabulary Selected by Branching Entropy |
d198321348 | We aim at developing a coreference resolution system that is as general as possible, handling pronominal anaphora as well as direct coreference. In this article, we describe the different processing steps of the system we developed: (i) annotation of lexical and syntactic features using the Macaon system; (ii) mention detection with a supervised sequence classifier trained on the ANCOR corpus; (iii) semantic tagging of mentions using two resources: the DEM and the LVF; (iv) coreference prediction with a rule-based system. The system is evaluated on the ANCOR corpus. Keywords: referential expression, mention, automatic coreference resolution. | End-to-end coreference resolution for French |
d236478008 | Developing effective distributed representations of source code is fundamental yet challenging for many software engineering tasks such as code clone detection, code search, and code translation and transformation. However, current code embedding approaches, which represent the semantics and syntax of code in a mixed way, are less interpretable, and the resulting embeddings cannot be easily generalized across programming languages. In this paper, we propose a disentangled code representation learning approach to separate the semantics from the syntax of source code under a multi-programming-language setting, obtaining better interpretability and generalizability. Specifically, we design three losses dedicated to the characteristics of source code to enforce the disentanglement effectively. We conduct comprehensive experiments on a real-world dataset composed of programming exercises implemented by multiple solutions that are semantically identical but grammatically distinct. The experimental results validate the superiority of our proposed disentangled code representation, compared to several baselines, across three types of downstream tasks, i.e., code clone detection, code translation, and code-to-code search. | Disentangled Code Representation Learning for Multiple Programming Languages |
d248118908 | Dialogue State Tracking (DST), a crucial component of task-oriented dialogue (ToD) systems, keeps track of all important information pertaining to dialogue history: filling slots with the most probable values throughout the conversation. Existing methods generally rely on a predefined set of values and struggle to generalise to previously unseen slots in new domains. To overcome these challenges, we propose a domain-agnostic extractive question answering (QA) approach with shared weights across domains. To disentangle the complex domain information in ToDs, we train our DST with a novel domain filtering strategy by excluding out-of-domain question samples. With an independent classifier that predicts the presence of multiple domains given the context, our model tackles DST by extracting spans in active domains. Empirical results demonstrate that our model can efficiently leverage domain-agnostic QA datasets by two-stage fine-tuning while being both domain-scalable and open-vocabulary in DST. It shows strong transferability by achieving zero-shot domain-adaptation results on MultiWOZ 2.1 with an average JGA of 36.7%. It further achieves cross-lingual transfer with state-of-the-art zero-shot results, 66.2% JGA from English to German and 75.7% JGA from English to Italian on WOZ 2.0. | XQA-DST: Multi-Domain and Multi-Lingual Dialogue State Tracking |
d235097402 | Semantic representation that supports the choice of an appropriate connective between pairs of clauses inherently addresses discourse coherence, which is important for tasks such as narrative understanding, argumentation, and discourse parsing. We propose a novel clause embedding method that applies graph learning to a data structure we refer to as a dependency-anchor graph. The dependency-anchor graph incorporates two kinds of syntactic information, constituency structure and dependency relations, to highlight the subject and verb phrase relation. This enhances coherence-related aspects of the representation. We design a neural model to learn a semantic representation for clauses from graph convolution over latent representations of the subject and verb phrase. We evaluate our method on two new datasets: a subset of a large corpus where the source texts are published novels, and a new dataset collected from students' essays. The results demonstrate a significant improvement over tree-based models, confirming the importance of emphasizing the subject and verb phrase. The performance gap between the two datasets illustrates the challenges of analyzing students' written text, plus a potential evaluation task for coherence modeling and an application for suggesting revisions to students. | Learning Clause Representation from Dependency-Anchor Graph for Connective Prediction |
d226283739 | Deep learning research has been largely accelerated by the development of huge datasets such as ImageNet. The general trend has been to create big datasets to make a deep neural network learn. A huge amount of resources is being spent on creating these big datasets, developing models, training them, and iterating this process to dominate leaderboards. We argue that the trend of creating bigger datasets needs to be revised by better leveraging the power of pre-trained language models. Since the language models have already been pretrained with huge amounts of data and have basic linguistic knowledge, there is no need to create big datasets to learn a task. Instead, we need to create a dataset that is sufficient for the model to learn various task-specific terminologies, such as 'Entailment', 'Neutral', and 'Contradiction' for NLI. As evidence, we show that RoBERTa is able to achieve near-equal performance on ~2% of the SNLI data. We also observe competitive zero-shot generalization on several OOD datasets. In this paper, we propose a baseline algorithm to find the optimal dataset for learning a task. | Do We Need to Create Big Datasets to Learn a Task? |
d867659 | This paper presents a set of experiments on parsing Basque, a morphologically rich and agglutinative language, studying the effect of using the morphological analyzer for Basque together with the morphological disambiguation module, in contrast to using the gold standard tags taken from the treebank. The objective is to obtain a first estimate of the effect of errors in morphological analysis and disambiguation on the parsers. We tested two freely available, state-of-the-art dependency parser generators, MaltParser and MST, which represent the two dominant approaches in data-driven dependency parsing. | Testing the Effect of Morphological Disambiguation in Dependency Parsing of Basque |
d15809561 | Argumentative texts have been thoroughly analyzed for their argumentative structure, and recent efforts aim at their automatic classification. This work investigates linguistic properties of argumentative texts and text passages in terms of their semantic clause types. We annotate argumentative texts with Situation Entity (SE) classes, which combine notions from lexical aspect (states, events) with the genericity and habituality of clauses. We analyze the correlation of SE classes with argumentative text genres, components of argument structures, and some functions of those components. Our analysis reveals interesting relations between the distribution of SE types and the argumentative text genre, compared to other genres like fiction or report. We also see tendencies in the correlations between argument components (such as premises and conclusions) and SE types, as well as between argumentative functions (such as support and rebuttal) and SE types. The observed tendencies can be deployed for automatic recognition and fine-grained classification of argumentative text passages. | Argumentative texts and clause types |
d14187 | About half of the discourse relations annotated in the Penn Discourse Treebank (Prasad et al., 2008) are not explicitly marked using a discourse connective. But we do not have extensive theories of when or why a discourse relation is marked explicitly or when the connective is omitted. Asr and Demberg (2012a) have suggested an information-theoretic perspective according to which discourse connectives are more likely to be omitted when they are marking a relation that is expected or predictable. This account is based on the Uniform Information Density theory (Levy and Jaeger, 2007), which suggests that speakers choose among alternative formulations that are allowed in their language the ones that achieve a roughly uniform rate of information transmission. Optional discourse markers should thus be omitted if they would lead to a trough in information density, and be inserted in order to avoid peaks in information density. We here test this hypothesis by observing how far a specific cue, negation in any form, affects the discourse relations that can be predicted to hold in a text, and how the presence of this cue in turn affects the use of explicit discourse connectives. | Uniform Information Density at the Level of Discourse Relations: Negation Markers and Discourse Connective Omission |
d2920667 | We introduce a method for the automatic construction of noun entries in a semantic lexicon. Using the entries already present in the lexicon, semantic features are inherited from known to yet unknown words along similar contexts. As contexts, we use three specific syntactic-semantic relations: modifying adjective, verb-deep-subject and verb-deep-object. The combination of evidence from different contexts yields very high precision for most semantic features, giving rise to the fully automatic incorporation into the lexicon. | Combining Contexts in Lexicon Learning for Semantic Parsing |
d244464108 | Parallel sentences extracted from comparable corpora can be useful to supplement parallel corpora when training machine translation (MT) systems. This is even more prominent in low-resource scenarios, where parallel corpora are scarce. In this paper, we present a system which uses three very different measures to identify and score parallel sentences from comparable corpora. We measure the accuracy of our methods in low-resource settings by comparing the results against manually curated test data for English-Icelandic, and by evaluating an MT system trained on the concatenation of the parallel data extracted by our approach and an existing data set. We show that the system is capable of extracting useful parallel sentences with high accuracy, and that the extracted pairs substantially increase translation quality of an MT system trained on the data, as measured by automatic evaluation metrics. | Effective Bitext Extraction From Comparable Corpora Using a Combination of Three Different Approaches |
d10188296 | This paper reports the latest development of The Halliday Centre Tagger (the Tagger), an online platform provided with semi-automatic features to facilitate text annotation and analysis. The Tagger features a web-based architecture with all functionalities and file storage space provided online, and a theory-neutral design where users can define their own labels for annotating various kinds of linguistic information. The Tagger is currently optimized for text annotation of Systemic Functional Grammar (SFG), providing by default a pre-defined set of SFG grammatical features, and the function of automatic identification of process types for English verbs. Apart from annotation, the Tagger also offers the features of visualization and summarization to aid text analysis. The visualization feature combines and illustrates multi-dimensional layers of annotation in a unified way of presentation, while the summarization feature categorizes annotated entries according to different SFG systems, i.e., transitivity, theme, logical-semantic relations, etc. Such features help users identify grammatical patterns in an annotated text. | The Halliday Centre Tagger: An Online Platform for Semi-automatic Text Annotation and Analysis |
d219310249 | Traditionally, historical phonologists have relied on tedious manual derivations to calibrate the sequences of sound changes that shaped the phonological evolution of languages. However, humans are prone to errors and cannot track thousands of parallel word derivations in any efficient manner. We propose to instead automatically derive each lexical item in parallel, and we demonstrate forward reconstruction both as a computational task with metrics to optimize and as an empirical tool for inquiry. To this end we present DiaSim, a user-facing application that simulates "cascades" of diachronic developments over a language's lexicon and provides diagnostics for "debugging" those cascades. We test our methodology on a Latin-to-French reflex prediction task, using a newly compiled dataset, FLLex, with 1368 paired Latin/French forms. We also present FLLAPS, which maps 310 Latin reflexes through five stages until Modern French, derived from Pope (1934)'s sound tables. Our publicly available rule cascades include the baselines BaseCLEF and BaseCLEF*, representing the received view of Latin to French development, and DiaCLEF, built by incremental corrections to BaseCLEF aided by DiaSim's diagnostics. DiaCLEF vastly outperforms the baselines, improving final accuracy on FLLex from 3.2% to 84.9%, with similar improvements across FLLAPS' stages. | Computerized Forward Reconstruction for Analysis in Diachronic Phonology, and Latin to French Reflex Prediction |
d245838241 | BERT, which has been successfully applied to many types of natural language processing (NLP) tasks, is also effective for various information retrieval (IR) tasks. However, it is not easy to obtain appropriate data for fine-tuning a BERT model. This paper proposes a method that can improve IR performance without fine-tuning the model on the target IR data. Focusing on words appearing in both a query and a document, we introduce local-similarity (LS). LS calculates the similarity of contextualized representations of the common words, encoded using a model pretrained for the semantic textual similarity task. To incorporate LS into IR scoring, we propose local-similarity scoring (LSS) functions. Experimental results show that LSS outperforms BM25 on several representative benchmarks. We also demonstrate that improvements to the pre-trained model used for LS translate into higher IR performance. Our code is available at https://github.com/nlp-titech/rerank_by_sts. | Incorporating Semantic Textual Similarity and Lexical Matching for Information Retrieval |
d21700478 | We create SPADE (Syntactic Phrase Alignment Dataset for Evaluation) for systematic research on syntactic phrase alignment in paraphrasal sentences. This is the first dataset to shed light on syntactic and phrasal paraphrases under linguistically motivated grammar. Existing datasets available for evaluation of phrasal paraphrase detection define the unit of phrase as simply a sequence of words without syntactic structure, due to difficulties caused by the non-homographic nature of phrase correspondences in sentential paraphrases. Different from these, SPADE provides annotations of gold parse trees by a linguistic expert and gold phrase alignments identified by three annotators. Consequently, 20,276 phrases are extracted from 201 sentential paraphrases, on which 15,721 alignments are obtained that at least one annotator regarded as paraphrases. SPADE is available at the Linguistic Data Consortium for future research on paraphrases. In addition, two metrics are proposed to evaluate to what extent automatic phrase alignment results agree with those identified by humans. These metrics allow objective comparison of the performance of different methods evaluated on SPADE. Benchmarks showing the performance of humans and the state-of-the-art method are presented as a reference for future SPADE users. | SPADE: Evaluation Dataset for Monolingual Phrase Alignment |
d1219107 | Statistical Parsing of Messages Message Processing | |
d250120905 | Securing sufficient data to enable automatic sign language translation modeling is challenging. The data insufficiency issue exists in both video and text modalities; however, fewer studies have been performed on text data augmentation compared to video data. In this study, we present three methods of augmenting sign language text modality data, comprising 3,052 Gloss-level Korean Sign Language (GKSL) and Word-level Korean Language (WKL) sentence pairs. Using each of the three methods, the following number of sentence pairs were created: blank replacement 10,654, sentence paraphrasing 1,494, and synonym replacement 899. Translation experiment results using the augmented data showed that when translating from GKSL to WKL and from WKL to GKSL, Bi-Lingual Evaluation Understudy (BLEU) scores improved by 0.204 and 0.170 respectively, compared to when only the original data was used. The three contributions of this study are as follows. First, we demonstrated that three different augmentation techniques used in existing Natural Language Processing (NLP) can be applied to sign language. Second, we propose an automatic data augmentation method which generates quality data by utilizing the Korean sign language gloss dictionary. Lastly, we publish the Gloss-level Korean Sign Language 13k dataset (GKSL13k), which has verified data quality through expert reviews. | Automatic Gloss-level Data Augmentation for Sign Language Translation |
d225047670 | We present a language-independent clausizer (clause splitter) based on Universal Dependencies (Nivre et al., 2016), and a clause-level tagger for grammatical tense, mood, voice and modality in German. The paper recapitulates verbal inflection in German, always juxtaposed with its close relative English, and transforms the linguistic theory into a rule-based algorithm. We achieve state-of-the-art accuracies of 92.6% for tense, 79.0% for mood, 93.8% for voice and 79.8% for modality in the literary domain. Our implementation is available at https://gitlab.gwdg.de/tillmann.doenicke/tense-tagger. | Clause-Level Tense, Mood, Voice and Modality Tagging for German |
d248779964 | Learning with Limited Text Data | |
d1007213 | This paper presents an English-Swedish Parallel Treebank, LinES, that is currently under development. LinES is intended as a resource for the study of variation in translation of common syntactic constructions from English to Swedish. For this reason, annotation in LinES is syntactically oriented, multi-level, complete and manually reviewed according to guidelines. Another aim of LinES is to support queries made in terms of types of translation shifts. | LinES: An English-Swedish Parallel Treebank |
d37593550 | The paper describes an ongoing effort aiming at building a sound-aligned corpus of Udmurt spoken texts. The corpus currently consists of about 3.5 hours of recordings, collected during fieldwork trips between 2014 and 2016. The recordings represent three dialect groups of Udmurt (Northern, Central and Southern). The recordings were transcribed with the help of native speakers. All morphological peculiarities characteristic of spoken or dialectal Udmurt were faithfully reflected; however, the transcription was somewhat normalized in order to facilitate morphological annotation and cross-dialectal search. The pipeline of our project includes aligning the texts with the sound in ELAN and annotating them with a morphological analyzer developed for standard Udmurt. We use automatic annotation as a much less time-consuming alternative to manual glossing and explore the resulting quality and the downsides of such annotation. We are specifically investigating how much and what kind of change the standard analyzer requires in order to achieve sufficiently good annotation of spoken/dialectal texts. The corpus has a web interface where users may execute search queries and listen to the audio. The online interface will be made publicly available in 2018. | Sound-aligned corpus of Udmurt dialectal texts |
d246016077 | Vocabulary learning is vital to foreign language learning. Correct and adequate feedback is essential to successful and satisfying vocabulary training. However, many vocabulary and language evaluation systems perform on simple rules and do not account for real-life user learning data. This work introduces the Multi-Language Vocabulary Evaluation Data Set (MuLVE), a data set consisting of vocabulary cards and real-life user answers, labeled to indicate whether the user answer is correct or incorrect. The data source is user learning data from the Phase6 vocabulary trainer. The data set contains vocabulary questions in German and English, Spanish, and French as target languages and is available in four different variations regarding pre-processing and deduplication. We experiment with fine-tuning pre-trained BERT language models on the downstream task of vocabulary evaluation with the proposed MuLVE data set. The fine-tuned models achieve outstanding results, with accuracy and F2-scores above 95.5. The data set is available on the European Language Grid. | MuLVE, A Multi-Language Vocabulary Evaluation Data Set |
d239020524 | Association for Machine Translation in the Americas. | Proto MT Evaluation for Humanitarian Assistance Disaster Response Scenarios |
d7802770 | The present paper reports on the advantages of using graph databases in the development of dynamic language models in Spoken Language Understanding applications, such as spoken dialogue systems. First of all, we introduce Neo4j graph databases and, specifically, MultiWordNet-Extended, a graph representing linguistic knowledge. After this first overview, we show how information included in graphs can be used in speech recognition grammars to automatically extend a generic rule structure. This can be the case of linguistic elements, such as synonyms, hypernyms, meronyms and phonological neighbours, which are semantically or structurally related to each other in our mental lexicon. In all AI-based approaches that depend on a training process using large and representative corpora, the probability of correctly predicting the creativity a speaker can show in using language and posing questions is lower than expected. Trying to capture most of the possible words and expressions a speaker could use is certainly necessary, but even an empirical, finite collection of cases might not be enough. For this reason, the use of our tool appears as an appealing solution, capable of including many pieces of information. In addition, we used the proposed tool to develop a spoken dialogue system for museums, and the preliminary results are shown and discussed in this paper. | Graph Databases for Designing High-Performance Speech Recognition Grammars |
d49394628 | Most word representation learning methods are based on the distributional hypothesis in linguistics, according to which words that are used and occur in the same contexts tend to possess similar meanings. As a consequence, emotionally dissimilar words, such as "happy" and "sad", occurring in similar contexts would purport more similar meaning than emotionally similar words, such as "happy" and "joy". This complication leads to rather undesirable outcomes in predictive tasks that relate to affect (emotional state), such as emotion classification and emotion similarity. In order to address this limitation, we propose a novel method of obtaining emotion-enriched word representations, which projects emotionally similar words into neighboring spaces and emotionally dissimilar ones far apart. The proposed approach leverages distant supervision to automatically obtain a large training dataset of text documents and two recurrent neural network architectures for learning the emotion-enriched representations. Through extensive evaluation on two tasks, including emotion classification and emotion similarity, we demonstrate that the proposed representations outperform several competitive general-purpose and affective word representations. | Learning Emotion-enriched Word Representations |
d232371994 | Recent progress in Natural Language Understanding (NLU) has seen the latest models outperform human performance on many standard tasks. These impressive results have led the community to introspect on dataset limitations and iterate on more nuanced challenges. In this paper, we introduce the task of HeadLine Grouping (HLG) and a corresponding dataset (HLGD) consisting of 20,056 pairs of news headlines, each labeled with a binary judgement as to whether the pair belongs within the same group. On HLGD, human annotators achieve high performance of around 0.9 F-1, while current state-of-the-art Transformer models only reach 0.75 F-1, opening the path for further improvements. We further propose a novel unsupervised Headline Generator Swap model for the task of HeadLine Grouping that achieves within 3 F-1 of the best supervised model. Finally, we analyze high-performing models with consistency tests, and find that models are not consistent in their predictions, revealing modeling limits of current architectures. | News Headline Grouping as a Challenging NLU Task |
d474465 | Effectively assessing the output of Natural Language Processing tasks is a challenge for research in the area. In the case of Machine Translation (MT), automatic metrics are usually preferred over human evaluation, given time and budget constraints. However, traditional automatic metrics (such as BLEU) are not reliable for absolute quality assessment of documents, often producing similar scores for documents translated by the same MT system. For scenarios where absolute labels are necessary for building models, such as document-level Quality Estimation, these metrics cannot be fully trusted. In this paper, we introduce a corpus of reading comprehension tests based on machine translated documents, where we evaluate documents based on answers to questions by fluent speakers of the target language. We describe the process of creating such a resource, the experiment design and the agreement between the test takers. Finally, we discuss ways to convert the reading comprehension test into document-level quality scores. | A Reading Comprehension Corpus for Machine Translation Evaluation |
d226262284 | In the task of Visual Question Answering (VQA), most state-of-the-art models tend to learn spurious correlations in the training set and achieve poor performance on out-of-distribution test data. Some methods of generating counterfactual samples have been proposed to alleviate this problem. However, the counterfactual samples generated by most previous methods are simply added to the training data for augmentation and are not fully utilized. Therefore, we introduce a novel self-supervised contrastive learning mechanism to learn the relationship between original samples, factual samples and counterfactual samples. With the better cross-modal joint embeddings learned from the auxiliary training objective, the reasoning capability and robustness of the VQA model are boosted significantly. We evaluate the effectiveness of our method by surpassing current state-of-the-art models on the VQA-CP dataset, a diagnostic benchmark for assessing the VQA model's robustness. | Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering |
d790934 | Meaning cannot be based on dictionary definitions all the way down: at some point the circularity of definitions must be broken in some way, by grounding the meanings of certain words in sensorimotor categories learned from experience or shaped by evolution. This is the "symbol grounding problem". We introduce the concept of a reachable set -a larger vocabulary whose meanings can be learned from a smaller vocabulary through definition alone, as long as the meanings of the smaller vocabulary are themselves already grounded. We provide simple algorithms to compute reachable sets for any given dictionary. | How Is Meaning Grounded in Dictionary Definitions? |
d17618880 | A primary motivation of the Dialog State Tracking Challenge (DSTC) is to allow for direct comparisons between alternative approaches to dialog state tracking. While results from DSTC 1 mention performance limitations, the errors made by dialog state trackers were not examined in depth. For the new challenge, DSTC 2, this paper describes several techniques for examining the errors made by the dialog state trackers in order to refine our understanding of the limitations of various approaches to the tracking process. The results indicate that no one approach is universally superior, and that different approaches yield different error type distributions. Furthermore, the results show that a pairwise comparative analysis of tracker performance is a useful tool for identifying dialogs where differential behavior is observed. These dialogs can provide a data source for a more careful analysis of the source of errors. | Comparative Error Analysis of Dialog State Tracking |
d247958074 | State-of-the-art neural (re)rankers are notoriously data-hungry, which, given the lack of large-scale training data in languages other than English, makes them rarely used in multilingual and cross-lingual retrieval settings. Current approaches therefore commonly transfer rankers trained on English data to other languages and cross-lingual setups by means of multilingual encoders: they fine-tune all parameters of pretrained massively multilingual Transformers (MMTs, e.g., multilingual BERT) on English relevance judgments, and then deploy them in the target language(s). In this work, we show that two parameter-efficient approaches to cross-lingual transfer, namely Sparse Fine-Tuning Masks (SFTMs) and Adapters, allow for a more lightweight and more effective zero-shot transfer to multilingual and cross-lingual retrieval tasks. We first train language adapters (or SFTMs) via Masked Language Modelling and then train retrieval (i.e., reranking) adapters (SFTMs) on top, while keeping all other parameters fixed. At inference, this modular design allows us to compose the ranker by applying the (re)ranking adapter (or SFTM) trained with source language data together with the language adapter (or SFTM) of a target language. We carry out a large-scale evaluation on the CLEF-2003 and HC4 benchmarks and additionally, as another contribution, extend the former with queries in three new languages: Kyrgyz, Uyghur and Turkish. The proposed parameter-efficient methods outperform standard zero-shot transfer with full MMT fine-tuning, while being more modular and reducing training times. The gains are particularly pronounced for low-resource languages, where our approaches also substantially outperform the competitive machine translation-based rankers. | Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval |
d12067517 | Streaming media provides a number of unique challenges for computational linguistics. This paper studies the temporal variation in word co-occurrence statistics, with application to event detection. We develop a spectral clustering approach to find groups of mutually informative terms occurring in discrete time frames. Experiments on large datasets of tweets show that these groups identify key real world events as they occur in time, despite no explicit supervision. The performance of our method rivals state-of-the-art methods for event detection on F-score, obtaining higher recall at the expense of precision. | Studying the Temporal Dynamics of Word Co-Occurrences: An Application to Event Detection |
d14295591 | In order to analyse the information present in medical records while maintaining patient privacy, there is a basic need for techniques to automatically de-identify the free text information in these records. This paper presents a machine learning de-identification system for clinical free text in Dutch, relying on best practices from the state of the art in de-identification of English-language texts. We combine string and pattern matching features with machine learning algorithms and compare the performance of three different experimental setups using Support Vector Machines and Random Forests on a limited data set of one hundred manually obfuscated texts provided by Antwerp University Hospital (UZA). The setup with the best balance of precision and recall during development was tested on an unseen set of raw clinical texts and evaluated manually at the hospital site. | De-Identification of Clinical Free Text in Dutch with Limited Training Data: A Case Study |
d7440991 | A | |
d225075985 | Unconscious biases continue to be prevalent in modern text and media, calling for algorithms that can assist writers with bias correction. For example, a female character in a story is often portrayed as passive and powerless ("She daydreams about being a doctor") while a man is portrayed as more proactive and powerful ("He pursues his dream of being a doctor"). We formulate Controllable Debiasing, a new revision task that aims to rewrite a given text to correct the implicit and potentially undesirable bias in character portrayals. We then introduce POWERTRANSFORMER as an approach that debiases text through the lens of connotation frames (Sap et al., 2017), which encode pragmatic knowledge of implied power dynamics with respect to verb predicates. One key challenge of our task is the lack of parallel corpora. To address this challenge, we adopt an unsupervised approach using auxiliary supervision with related tasks such as paraphrasing and self-supervision based on a reconstruction loss, building on pretrained language models. Through comprehensive experiments based on automatic and human evaluations, we demonstrate that our approach outperforms ablations and existing methods from related tasks. Furthermore, we demonstrate the use of POWERTRANSFORMER as a step toward mitigating the well-documented gender bias in character portrayal in movie scripts. | POWERTRANSFORMER: Unsupervised Controllable Revision for Biased Language Correction |
d10328681 | Most events described in a news article are background events; only a small number are noteworthy, and an even smaller number serve as the trigger for the writing of that article. Although these events are difficult to identify, they are crucial to NLP tasks such as first story detection, document summarization and event coreference, and to many applications of event analysis that depend on event counting and identifying trends. In this work, we introduce the notion of a news-peg, a concept borrowed from the political science literature, in an attempt to remedy this problem. A news-peg is an event which prompted the author to write the article, and it serves as a more fine-grained measure of noteworthiness than a summary. We describe a new task of news-peg identification and release an annotated dataset for its evaluation. We formalize an operational definition of a news-peg, on which human annotators achieve high inter-annotator agreement (over 80%), and present a rule-based system for this task, which exploits syntactic features derived from established journalistic devices. | "Making the News": Identifying Noteworthy Events in News Articles |
d18606218 | The continuing popularity of XML as a data exchange format and the concurrent rise of treebanks as natural language resources within various domains of language processing have led us to extend their domain of application to phonological data. Typically, treebanks are a language resource that provides annotations of natural languages at various levels of structure, and in this paper we present a tree-based format to capture phonological information at the syllable level, at the segment level and even including more fine-grained featural information. Two integrated modules in relation to phonological treebanks are described: the first uses a multilingual feature set to augment segment-annotated corpora in terms of a tree-based structure represented in XML. The second module allows these feature trees to be traversed and the data contained in them to be optimised in a purely data-driven manner. | Phonological Treebanks - Issues in Generation and Application |
d7816596 | Cybersecurity risks and malware threats are becoming increasingly dangerous and common. Despite the severity of the problem, there have been few NLP efforts focused on tackling cybersecurity. In this paper, we discuss the construction of a new database for annotated malware texts. An annotation framework is introduced based around the MAEC vocabulary for defining malware characteristics, along with a database consisting of 39 annotated APT reports with a total of 6,819 sentences. We also use the database to construct models that can potentially help cybersecurity researchers in their data collection and analytics efforts. | MalwareTextDB: A Database for Annotated Malware Articles |
d1447029 | We describe the results of a corpus study of more than 400 text excerpts that accompany graphics. We show that text and graphics play complementary roles in transmitting information from the writer to the reader and derive some observations for the automatic generation of texts associated with graphics. | Integrated generation of graphics and text: a corpus study |
d37154371 | Classifier optimization and combination in the English all words task. Véronique Hoste, Anne Kool and Walter Daelemans, CNTS - Language Technology Group, University of Antwerp. Abstract: We report on the use of machine learning techniques for word sense disambiguation in the English all words task of SENSEVAL-2. The task was to automatically assign the appropriate sense to a possibly ambiguous word form given its context. A word expert approach was adopted, leading to a set of classifiers, each specializing in one single word form-POS combination. Experts consist of multiple classifiers trained on Semcor using two types of learning techniques, viz. memory-based learning and rule induction. Through optimization by cross-validation of the individual classifiers and the voting scheme for combining them, the best possible word expert was determined. Results show that especially memory-based learning in a word-expert approach is a feasible method for unrestricted word-sense disambiguation, even with limited training data. | Classifier Optimization and Combination in the English All Words Task |
d220048573 | Abstract Meaning Representations (AMRs) capture sentence-level semantics as structural representations of broad-coverage natural sentences. We investigate parsing AMR with explicit dependency structures and interpretable latent structures. We generate the latent soft structure without additional annotations and fuse both dependency and latent structures via extended graph neural networks. The fused structural information helps our model achieve the best reported results on both AMR 2.0 (77.5% Smatch F1 on LDC2017T10) and AMR 1.0 (71.8% Smatch F1 on LDC2014T12). | AMR Parsing with Latent Structural Information |
d174800914 | There has been a significant investment in dialog systems (tools and runtime) for building conversational systems by major companies including Google, IBM, Microsoft, and Amazon. The question remains whether these tools are up to the task of building conversational, task-oriented dialog applications at the enterprise level. In our company, we are exploring and comparing several toolsets in an effort to determine their strengths and weaknesses in meeting our goals for dialog system development: accuracy, time to market, ease of replicating and extending applications, and efficiency and ease of use by developers. In this paper, we provide both quantitative and qualitative results in three main areas: natural language understanding, dialog, and text generation. While existing toolsets were all incomplete, we hope this paper will provide a roadmap of where they need to go to meet the goal of building effective dialog systems. | Are the Tools up to the Task? An evaluation of commercial dialog tools in developing conversational enterprise-grade dialog systems |
d21732448 | The paper presents the Public Multilingual Knowledge Management Infrastructure (PMKI) action launched by the European Commission (EC) to promote the Digital Single Market in the European Union (EU). PMKI aims to share maintainable and sustainable Language Resources, making them interoperable in order to support the language technology industry and public administrations with multilingual tools able to improve cross-border accessibility of digital services. The paper focuses on the main feature, interoperability, that distinguishes the PMKI platform from other existing frameworks. In particular, PMKI aims to create a set of tools and facilities, based on Semantic Web technologies, to establish semantic interoperability between multilingual lexicons. Such a task requires harmonizing multilingual language resources using a standardised representation with respect to a defined core data model under an adequate architecture. A comparative study among the main data models for representing lexicons, with recommendations for the PMKI service, was required as well. Moreover, synergies with other programs of the EU institutions, regarding systems interoperability and Machine Translation (MT) solutions, are foreseen. For instance, some interactions are foreseen between PMKI and the MT service provided by the EC, but also with other NLP applications. | PMKI: an European Commission action for the interoperability, maintainability and sustainability of Language Resources |
d29165442 |  | Fighting offensive language on social media with unsupervised text style transfer |
d196177793 | The training objective of neural machine translation (NMT) is to minimize the loss between the words in the translated sentences and those in the references. In NMT, there is a natural correspondence between the source sentence and the target sentence. However, this relationship has only been represented using the entire neural network, and the training objective is computed at the word level. In this paper, we propose a sentence-level agreement module to directly minimize the difference between the representations of the source and target sentences. The proposed agreement module can be integrated into NMT as an additional training objective function and can also be used to enhance the representation of the source sentences. Empirical results on the NIST Chinese-to-English and WMT English-to-German tasks show the proposed agreement module can significantly improve the NMT performance. (Mingming Yang was an internship research fellow at NICT when conducting this work.) | Sentence-Level Agreement for Neural Machine Translation |
d243865168 | Text classification is a fundamental task with broad applications in natural language processing. Recently, graph neural networks (GNNs) have attracted much attention due to their powerful representation ability. However, most existing methods for text classification based on GNNs consider only one-hop neighborhoods and low-frequency information within texts, which cannot fully utilize the rich context information of documents. Moreover, these models suffer from over-smoothing issues if many graph layers are stacked. In this paper, a Deep Attention Diffusion Graph Neural Network (DADGNN) model is proposed to learn text representations, bridging the chasm of interaction difficulties between a word and its distant neighbors. Experimental results on various standard benchmark datasets demonstrate the superior performance of the present approach. | Deep Attention Diffusion Graph Neural Networks for Text Classification |
d16282692 | Tree adjoining grammars (TAG) represent a derivational formalism to construct trees from a given set of initial and auxiliary trees. We present a logical language that simultaneously describes the generated TAG-tree and the corresponding derivation tree. Based on this language we formulate constraints indicating whether a tree and a derivation tree form a valid TAG-generated tree. A method is presented that extracts the underlying TAG from an (underspecified) TAG-tree and its derivation. This leads to an alternative approach to representing shared structures by means of TAGs. The result is a more general representation of movement which requires no indices since it basically makes use of the properties of the adjunction operation. | A Logical Approach to Structure Sharing in TAGs |
d16017682 | Detecting healthcare-associated infections poses a major challenge in healthcare. Using natural language processing and machine learning applied to electronic patient records is one approach that has been shown to work. However, the results indicate that there was room for improvement, and therefore we have applied deep learning methods. Specifically, we implemented a network of stacked sparse autoencoders and a network of stacked restricted Boltzmann machines. Our best results were obtained using the stacked restricted Boltzmann machines, with a precision of 0.79 and a recall of 0.88. | Applying deep learning on electronic health records in Swedish to predict healthcare-associated infections |
d13327514 | This article proposes ESA, a new unsupervised approach to word segmentation. ESA is an iterative process consisting of three phases: Evaluation, Selection, and Adjustment. In Evaluation, both the certainty and uncertainty of character sequence co-occurrence in corpora are considered as statistical evidence supporting goodness measurement. Additionally, the statistical data of character sequences with various lengths become comparable with each other by using a simple process called Balancing. In Selection, a local maximum strategy is adopted without thresholds, and the strategy can be implemented with dynamic programming. In Adjustment, a part of the statistical data is updated to improve successive results. In our experiment, ESA was evaluated on the SIGHAN Bakeoff-2 data set. The results suggest that ESA is effective on Chinese corpora. It is noteworthy that the F-measures of the results are basically monotone increasing and can rapidly converge to relatively high values. Furthermore, empirical formulae based on the results can be used to predict the parameter in ESA to avoid parameter estimation that is usually time-consuming. | A New Unsupervised Approach to Word Segmentation |
d251719340 | The phenomenon of compounding is ubiquitous in Sanskrit. It serves for achieving brevity in expressing thoughts, while simultaneously enriching the lexical and structural formation of the language. In this work, we focus on the Sanskrit Compound Type Identification (SaCTI) task, where we consider the problem of identifying semantic relations between the components of a compound word. Earlier approaches solely rely on the lexical information obtained from the components and ignore the most crucial contextual and syntactic information useful for SaCTI. However, the SaCTI task is challenging primarily due to the implicitly encoded context-sensitive semantic relation between the compound components. Thus, we propose a novel multi-task learning architecture which incorporates the contextual information and enriches the complementary syntactic information using morphological tagging and dependency parsing as two auxiliary tasks. Experiments on the benchmark datasets for SaCTI show 6.1 points (Accuracy) and 7.7 points (F1-score) absolute gain compared to the state-of-the-art system. Further, our multilingual experiments demonstrate the efficacy of the proposed architecture in English and Marathi. | A Novel Multi-Task Learning Approach for Context-Sensitive Compound Type Identification in Sanskrit |
d233240813 | Recall and ranking are two critical steps in personalized news recommendation. Most existing news recommender systems conduct personalized news recall and ranking separately with different models. However, maintaining multiple models leads to high computational cost and poses great challenges to meeting the online latency requirement of news recommender systems. In order to handle this problem, in this paper we propose UniRec, a unified method for recall and ranking in news recommendation. In our method, we first infer user embedding for ranking from the historical news click behaviors of a user using a user encoder model. Then we derive the user embedding for recall from the obtained user embedding for ranking by using it as the attention query to select a set of basis user embeddings which encode different general user interests and synthesize them into a user embedding for recall. The extensive experiments on benchmark dataset demonstrate that our method can improve both efficiency and effectiveness for recall and ranking in news recommendation. | Two Birds with One Stone: Unified Model Learning for Both Recall and Ranking in News Recommendation |
d252818915 | Keyphrase Prediction (KP) is an established NLP task, aiming to yield representative phrases to summarize the main content of a given document. Despite major progress in recent years, existing works on KP have mainly focused on formal texts such as scientific papers or weblogs. The challenges of KP in informal-text domains are not yet fully studied. To this end, this work studies new challenges of KP in transcripts of videos, an understudied domain for KP that involves informal texts and non-cohesive presentation styles. A bottleneck for KP research in this domain involves the lack of high-quality and large-scale annotated data that hinders the development of advanced KP models. To address this issue, we introduce a large-scale manually-annotated KP dataset in the domain of live-stream video transcripts obtained by automatic speech recognition tools. Concretely, transcripts of 500+ hours of videos streamed on the behance.net platform are manually labeled with important keyphrases. Our analysis of the dataset reveals the challenging nature of KP in transcripts. Moreover, for the first time in KP, we demonstrate the idea of improving KP for long documents (i.e., transcripts) by feeding models with paragraph-level keyphrases, i.e., hierarchical extraction. To foster future research, we will publicly release the dataset and code. | Keyphrase Prediction from Video Transcripts: New Dataset and Directions |
d1168649 | Pathology reports are used to store information about cells and tissues of a patient, and they are crucial to monitor the health of individuals and population groups. In this work we present an evaluation of supervised text classification models for the prediction of relevant categories in pathology reports. Our aim is to integrate automatic classifiers to improve the current workflow of medical experts, and we implement and evaluate different machine learning approaches for a large number of categories. Our results show that we are able to predict nominal categories with high average f-score (81.3%), and we can improve over the majority class baseline by relying on Naive Bayes and feature selection. We also find that the classification of numeric categories is harder, and deeper analysis would be required to predict these labels. | Information Extraction of Multiple Categories from Pathology Reports |
d234324760 |  | Modeling German Word Order Acquisition via Bayesian Inference |
d54447151 | This paper describes the USTC-NEL (short for "National Engineering Laboratory for Speech and Language Information Processing, University of Science and Technology of China") system for the speech translation task of the IWSLT Evaluation 2018. The system is a conventional pipeline system which contains 3 modules: speech recognition, postprocessing and machine translation. We train a group of hybrid-HMM models for our speech recognition, and for machine translation we train transformer-based neural machine translation models with speech recognition output style text as input. Experiments conducted on the IWSLT 2018 task indicate that, compared to the baseline system from KIT, our system achieved a 14.9 BLEU improvement. | The USTC-NEL Speech Translation system at IWSLT 2018 |
d250179899 | Le français inclusif est une variété du français standard mise en avant pour témoigner d'une conscience de genre et d'identité. Plusieurs procédés existent pour lutter contre l'utilisation générique du masculin (coordination de formes féminines et masculines, féminisation des fonctions, écriture inclusive, et neutralisation). Dans cette étude, nous nous intéressons aux performances des outils sur quelques tâches du TAL (étiquetage, lemmatisation, repérage d'entités nommées) appliqués sur des productions langagières de ce type. Les taux d'erreur sur l'étiquetage en parties du discours (TreeTagger et spaCy) augmentent de 3 à 7 points sur les portions rédigées en français inclusif par rapport au français standard, sans lemmatisation possible pour le TreeTagger. Sur le repérage d'entités nommées, les modèles sont sensibles aux contextes en français inclusif et font des prédictions erronées, avec une précision en baisse. ABSTRACT: Impact of French Inclusive Language on NLP Tools. French Inclusive language (Gender-Neutral language) is a variety of standard French that is used to highlight an awareness of gender and identity. Several processes exist to substitute the generic use of the masculine form (coordination of feminine and masculine forms, feminization of functions, inclusive writing, and neutralization). In this study, we focus on the performance of a few NLP tools (labeling, lemmatization, name entity recognition) applied to language productions in French Inclusive language. The error rate of TreeTagger and spaCy on Part-of-Speech tagging increases from 3 to 7 points on spans written in French inclusive language with respect to standard French language, and no lemmatization was possible using the TreeTagger. In named entity recognition, models are sensitive to contexts written in French Inclusive and produce erroneous predictions, implying a lower precision. MOTS-CLÉS : Français inclusif, Traitement Automatique des Langues, Taux d'erreur. | Impact du français inclusif sur les outils du TAL |
d1767630 | End-to-end neural network models for named entity recognition (NER) have been shown to achieve effective performance on general domain datasets (e.g. newswire), without requiring additional hand-crafted features. However, in the biomedical domain, recent studies have shown that hand-engineered features (e.g. orthographic features) should be used to attain effective performance, due to the complexity of biomedical terminology (e.g. the use of acronyms and complex gene names). In this work, we propose a novel approach that allows a neural network model based on a long short-term memory (LSTM) to automatically learn orthographic features and incorporate them into a model for biomedical NER. Importantly, our bi-directional LSTM model learns and leverages orthographic features on an end-to-end basis. We evaluate our approach by comparing against existing neural network models for NER using three well-established biomedical datasets. Our experimental results show that the proposed approach consistently outperforms these strong baselines across all of the three datasets. | Learning Orthographic Features in Bi-directional LSTM for Biomedical Named Entity Recognition |
d14236677 | In this paper we describe our system we designed and implemented for the crosslingual pronoun prediction task as a part of WMT 2016. The majority of the paper will be dedicated to the system whose outputs we submitted wherein we describe the simplified mathematical model, the details of the components and the working by means of an architecture diagram which also serves as a flowchart. We then discuss the results of the official scores and our observations on the same. | Shared Task Papers |
d11616343 | Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving an about 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank. | A Fast and Accurate Dependency Parser using Neural Networks |
d58124 | This paper addresses an important problem in Example-Based Machine Translation (EBMT), namely how to measure similarity between a sentence fragment and a set of stored examples. A new method is proposed that measures similarity according to both surface structure and content. A second contribution is the use of clustering to make retrieval of the best matching example from the database more efficient. Results on a large number of test cases from the CELEX database are presented. | A MATCHING TECHNIQUE IN EXAMPLE-BASED MACHINE TRANSLATION |
d8661 | This paper describes our results at the NLI shared task 2017. We participated in essays, speech, and fusion task that uses text, speech, and i-vectors for the task of identifying the native language of the given input. In the essay track, a linear SVM system using word bigrams and character 7-grams performed the best. In the speech track, an LDA classifier based only on i-vectors performed better than a combination system using text features from speech transcriptions and i-vectors. In the fusion task, we experimented with systems that used combination of i-vectors with higher order n-grams features, combination of i-vectors with word unigrams, a mean probability ensemble, and a stacked ensemble system. Our finding is that word unigrams in combination with i-vectors achieve higher score than systems trained with larger number of n-gram features. Our best-performing systems achieved F1scores of 87.16 %, 83.33 % and 91.75 % on the essay track, the speech track and the fusion track respectively. | Fewer features perform well at Native Language Identification task |
d14727849 | A constraint-based approach to morphological analysis (preliminaries) | A constraint-based approach to morphological analysis (preliminaries) |
d37620085 | Signifion is designing a preprocessor chip for speech-recognition systems operating in noisy environments. Current preprocessors, such as those based on moving Fourier transforms or linear predictive coding, are linear and not effective when speech is embedded in high levels of noise. Specifically we are: | A Robust Preprocessor for Speech-Recognition Systems |
d14043792 | Causal precedence between biochemical interactions is crucial in the biomedical domain, because it transforms collections of individual interactions, e.g., bindings and phosphorylations, into the causal mechanisms needed to inform meaningful search and inference. Here, we analyze causal precedence in the biomedical domain as distinct from open-domain, temporal precedence. First, we describe a novel, hand-annotated text corpus of causal precedence in the biomedical domain. Second, we use this corpus to investigate a battery of models of precedence, covering rule-based, feature-based, and latent representation models. The highest-performing individual model achieved a micro F1 of 43 points, approaching the best performers on the simpler temporal-only precedence tasks. Feature-based and latent representation models each outperform the rule-based models, but their performance is complementary to one another. We apply a sieve-based architecture to capitalize on this lack of overlap, achieving a micro F1 score of 46 points. | This before That: Causal Precedence in the Biomedical Domain |
d8825906 | Information Extraction from Indian languages requires effective shallow parsing, especially identification of "meaningful" noun phrases. Particularly, for an agglutinative and free word order language like Marathi, this problem is quite challenging. We model this task of extracting noun phrases as a sequence labelling problem. A Distant Supervision framework is used to automatically create a large labelled data for training the sequence labelling model. The framework exploits a set of heuristic rules based on corpus statistics for the automatic labelling. Our approach puts together the benefits of heuristic rules, a large unlabelled corpus as well as supervised learning to model complex underlying characteristics of noun phrase occurrences. In comparison to a simple English-like chunking baseline and a publicly available Marathi Shallow Parser, our method demonstrates a better performance. | Noun Phrase Chunking for Marathi using Distant Supervision |
d9458432 | We present results from our study, which uses syntactically and semantically motivated information to group segments of sentences into unbreakable units for the purpose of typesetting those sentences in a region of a fixed width, using an otherwise standard dynamic programming line breaking algorithm, to minimize raggedness. In addition to a rule-based baseline segmenter, we use a very modest size text, manually annotated with positions of breaks, to train a maximum entropy classifier, relying on an extensive set of lexical and syntactic features, which can then predict whether or not to break after a certain word position in a sentence. We also use a simple genetic algorithm to search for a subset of the features optimizing F1, to arrive at a set of features that delivers 89.2% Precision, 90.2% Recall (89.7% F1) on a test set, improving the rule-based baseline by about 11 points and the classifier trained on all features by about 1 point in F1. | Typesetting for Improved Readability using Lexical and Syntactic Information |
d29077308 | In this paper I report on the development of an application in which HTML forms serve as a front-end to a lexical database. Lexical information and data retrieval strategies are based on the Longman Language Activator. A Visual Basic CGI application connects a front-end HTML form with the back-end relational database implemented using Microsoft Access. Three aspects of the application are discussed in this paper: (1) the lexical database; (2) the HTML front end; and (3) the Visual Basic CGI programming necessary to connect (1) and (2). | Web Access to a Lexical Database using VB/Access CGI Programming |
d15955820 | When humans speak they often use grammatically incorrect sentences, which is a problem for grammar-based language processing methods, since they expect input that is valid for the grammar. We present two methods to transform spoken language into grammatically correct sentences. The first is an algorithm for automatic ellipsis detection, which finds ellipses in spoken sentences and searches in a combinatory categorial grammar for suitable words to fill the ellipses. The second method is an algorithm that computes the semantic similarity of two words using WordNet, which we use to find alternatives to words that are unknown to the grammar. In an evaluation, we show that the usage of these two methods leads to an increase of 38.64% more parseable sentences on a test set of spoken sentences that were collected during a human-robot interaction experiment. | Using Ellipsis Detection and Word Similarity for Transformation of Spoken Language into Grammatically Valid Sentences |
d17409918 | This paper presents a rule-based approach to Named Entity Recognition for the German language. The approach rests upon deep linguistic parsing and has already been applied to English and Russian. In this paper we present the first results of our system, ABBYY InfoExtractor, on the GermEval 2014 Shared Task corpus. We focus on the main challenges of German NER that we have encountered when adapting our system to German and possible solutions for them. | German NER with a Multilingual Rule Based Information Extraction System: Analysis and Issues |
d38743106 | On the Affinity of TAG with Projective, Bilexical Dependency Grammar | |
d67855860 | Attention mechanisms have seen wide adoption in neural NLP models. In addition to improving predictive performance, these are often touted as affording transparency: models equipped with attention provide a distribution over attended-to input units, and this is often presented (at least implicitly) as communicating the relative importance of inputs. However, it is unclear what relationship exists between attention weights and model outputs. In this work we perform extensive experiments across a variety of NLP tasks that aim to assess the degree to which attention weights provide meaningful "explanations" for predictions. We find that they largely do not. For example, learned attention weights are frequently uncorrelated with gradient-based measures of feature importance, and one can identify very different attention distributions that nonetheless yield equivalent predictions. Our findings show that standard attention modules do not provide meaningful explanations and should not be treated as though they do. Code to reproduce all experiments is available at https://github.com/successar/AttentionExplanation. | Attention is not Explanation |
d15357409 | Many sequence labeling tasks in NLP require solving a cascade of segmentation and tagging subtasks, such as Chinese POS tagging, named entity recognition, and so on. Traditional pipeline approaches usually suffer from error propagation. Joint training/decoding in the cross-product state space could cause too many parameters and high inference complexity. In this paper, we present a novel method which integrates graph structures of two subtasks into one using virtual nodes, and performs joint training and decoding in the factorized state space. Experimental evaluations on CoNLL 2000 shallow parsing data set and Fourth SIGHAN Bakeoff CTB POS tagging data set demonstrate the superiority of our method over cross-product, pipeline and candidate reranking approaches. | Joint Training and Decoding Using Virtual Nodes for Cascaded Segmentation and Tagging Tasks |
d251449141 | Document authoring involves a lengthy revision process, marked by individual edits that are frequently linked to comments. Modeling the relationship between edits and comments leads to a better understanding of document evolution, potentially benefiting applications such as content summarization and task triaging. Prior work on understanding revisions has primarily focused on classifying edit intents, but falls short of a deeper understanding of the nature of these edits. In this paper, we explore the challenge of describing an edit at two levels: identifying the edit intent, and describing the edit using free-form text. We begin by defining a taxonomy of general edit intents and introduce a new dataset of full revision histories of Wikipedia pages, annotated with each revision's edit intent. Using this dataset, we train a classifier that achieves a 90% accuracy in identifying edit intent. We use this classifier to train a distantly-supervised model that generates a high-level description of a revision in free-form text. Our experimental results show that incorporating edit intent information aids in generating better edit descriptions. We establish a set of baselines for the edit description task, achieving a best score of 28 ROUGE, thus demonstrating the effectiveness of our layered approach to edit understanding. | One Document, Many Revisions: A Dataset for Classification and Description of Edit Intents |
d17365569 | In spontaneous speech, emotion information is embedded at several levels: acoustic, linguistic, gestural (non-verbal), etc. For emotion recognition in speech, there is much attention to acoustic level and some attention at the linguistic level. In this study, we identify paralinguistic markers for emotion in the language. We study two Indian languages belonging to two distinct language families. We consider Marathi from Indo-Aryan and Kannada from Dravidian family. We show that there exist large numbers of specific paralinguistic emotion markers in these languages, referred to as emotiphons. They are inter-twined with prosody and semantics. Preprocessing of speech signal with respect to emotiphons would facilitate emotion recognition in speech for Indian languages. Some of them are common between the two languages, indicating cultural influence in language usage. | Emotiphons: Emotion markers in Conversational Speech - Comparison across Indian Languages |
d7792758 | Referent identification in human conversation is performed both by describing the objects in question and by pointing at them. Up till now, only the linguistic component could be simulated in dialog systems. But recently, technical innovations have made it possible to 'point' at the objects on a display as well. The paper has two intentions. First, it investigates natural pointing in more detail and offers some possibilities to classify the great variety of pointing actions. Then, it tries to clarify the extent to which pointing by technical means (especially mouse-clicks) can be regarded as a simulation of natural pointing or as a functional equivalent. Furthermore, some steps towards even more accurate simulation are briefly mentioned. | NATURAL AND SIMULATED POINTING |
d236486200 | For updating the translations of Japanese statutes based on their amendments, we need to consider the translation "focality;" that is, we should only modify expressions that are relevant to the amendment and retain the others to avoid misconstruing its contents. In this paper, we introduce an evaluation metric and a corpus to improve focality evaluations. Our metric is called an Inclusive Score for DIfferential Translation (ISDIT). ISDIT consists of two factors: (1) the n-gram recall of expressions unaffected by the amendment and (2) the n-gram precision of the output compared to the reference. This metric supersedes an existing one for focality by simultaneously calculating the translation quality of the changed expressions in addition to that of the unchanged expressions. We also newly compile a corpus for Japanese partial amendment translation that secures the focality of the post-amendment translations, while an existing evaluation corpus does not. With the metric and the corpus, we examine the performance of existing translation methods for Japanese partial amendment translations. | Evaluation Scheme of Focal Translation for Japanese Partially Amended Statutes |
d5146581 | The form of rules in combinatory categorial grammars (CCG) is constrained by three principles, called "adjacency", "consistency" and "inheritance". These principles have been claimed elsewhere to constrain the combinatory rules of composition and type raising in such a way as to make certain linguistic universals concerning word order under coordination follow immediately. The present paper shows that the three principles have a natural expression in a unification-based interpretation of CCG in which directional information is an attribute of the arguments of functions grounded in string position. The universals can thereby be derived as consequences of elementary assumptions. Some desirable results for grammars and parsers follow, concerning type-raising rules. | TYPE-RAISING AND DIRECTIONALITY IN COMBINATORY GRAMMAR |
d10582446 | This paper describes the design and application of time-enhanced, finite state models of discourse cues to the automated segmentation of broadcast news. We describe our analysis of a broadcast news corpus, the design of a discourse cue based story segmentor that builds upon information extraction techniques, and finally its computational implementation and evaluation in the Broadcast News Navigator (BNN) to support video news browsing, retrieval, and summarization. | Discourse Cues for Broadcast News Segmentation |
d3203102 | This paper introduces the problem of measuring coherence in Machine Translation. Previously, local coherence has been assessed in a monolingual context using essentially coherent texts. These are then artificially shuffled to create an incoherent one. We investigate existing models for the task of measuring the coherence of machine translation output. This is a much more challenging case where coherent source documents are machine translated into a target language and the task is to distinguish them from their human translated counterparts. We benchmark state-of-the-art coherence models, and propose a new model which explores syntax following a more principled method to learn the syntactic patterns. This extension outperforms existing ones in the monolingual shuffling task on news data, and performs well in our new, more challenging task. Additionally, we show that breaches in coherence in the translation task are much more difficult to capture by any model. | The Trouble with Machine Translation Coherence |
d52187067 | In this paper we present the results of an investigation of the importance of verbs in a deep learning QA system trained on SQuAD dataset. We show that main verbs in questions carry little influence on the decisions made by the system -in over 90% of researched cases swapping verbs for their antonyms did not change system decision. We track this phenomenon down to the insides of the net, analyzing the mechanism of self-attention and values contained in hidden layers of RNN. Finally, we recognize the characteristics of the SQuAD dataset as the source of the problem. Our work refers to the recently popular topic of adversarial examples in NLP, combined with investigating deep net structure. | Does it care what you asked? Understanding Importance of Verbs in Deep Learning QA System |
d18824729 | We present a simple method for learning part-of-speech taggers for languages like Akawaio, Aukan, or Cakchiquel - languages for which nothing but a translation of parts of the Bible exists. By aggregating over the tags from a few annotated languages and spreading them via word-alignment on the verses, we learn POS taggers for 100 languages, using the languages to bootstrap each other. We evaluate our cross-lingual models on the 25 languages where test sets exist, as well as on another 10 for which we have tag dictionaries. Our approach performs much better (20-30%) than state-of-the-art unsupervised POS taggers induced from Bible translations, and is often competitive with weakly supervised approaches that assume high-quality parallel corpora, representative monolingual corpora with perfect tokenization, and/or tag dictionaries. We make models for all 100 languages available. | If all you have is a bit of the Bible: Learning POS taggers for truly low-resource languages |
d22265397 | This paper aims to describe the linguistic-computational modeling of Frames [Fillmore 1982] and Qualia [Pustejovsky 1995] for the Sports domain, carried out in the FrameNet Brasil database. The modeling relied on a domainspecific corpus. By adding Qualia roles to the database, this work promoted the densification of FrameNet Brasil, focusing on its use in tools that deal with Natural Language Processing, such as parsers and machine translators. Resumo. Este trabalho busca descrever a modelagem linguísticocomputacional de frames [Fillmore 1982] e relações Qualia [Pustejovsky, 1995], realizada na base de dados da FrameNet Brasil para o domínio dos Esportes. A modelagem se sucedeu a partir de uma pesquisa em corpus. Ao adicionar papéis Qualia à base de dados, este trabalho promoveu o adensamento da FrameNet Brasil, visando a sua utilização em ferramentas que lidam com o Processamento de Língua Natural, tais como parsers e tradutores por máquina. | A Modelagem Computacional do Domínio dos Esportes na FrameNet Brasil |
d3195143 | Measuring the information content of news text is useful for decision makers in their investments since news information can influence the intrinsic values of companies. We propose a model to automatically measure the information content given news text, trained using news and corresponding cumulative abnormal returns of listed companies. Existing methods in finance literature exploit sentiment signal features, which are limited by not considering factors such as events. We address this issue by leveraging deep neural models to extract rich semantic features from news text. In particular, a novel tree-structured LSTM is used to find target-specific representations of news text given syntax structures. Empirical results show that the neural models can outperform sentiment-based models, demonstrating the effectiveness of recent NLP technology advances for computational finance. | Measuring the Information Content of Financial News |
d250391090 | This paper describes the approach developed by the LT3 team in the Intended Sarcasm Detection task at SemEval-2022 Task 6. We considered the binary classification subtask A for English data. The presented system is based on the fuzzy-rough nearest neighbor classification method using various text embedding techniques. Our solution reached 9th place in the official leader-board for English subtask A. | LT3 at SemEval-2022 Task 6: Fuzzy-Rough Nearest Neighbor Classification for Sarcasm Detection |
d445781 | Coherence in Machine Translation (MT) has received little attention to date. One of the main issues we face in work in this area is the lack of labelled data. While coherent (human authored) texts are abundant and incoherent texts could be taken from MT output, the latter also contains other errors which are not specifically related to coherence. This makes it difficult to identify and quantify issues of coherence in those texts. We introduce an initiative to create a corpus consisting of data artificially manipulated to contain errors of coherence common in MT output. Such a corpus could then be used as a benchmark for coherence models in MT, and potentially as training data for coherence models in supervised settings. | A Proposal for a Coherence Corpus in Machine Translation |
d17866765 | Spoken language contains disfluencies that, because of their irregular nature, may lead to reduced performance of data-driven parsers. This paper describes an experiment that quantifies the effects of disfluency detection and disfluency removal on data-driven parsing of spoken language data. The experiment consists of creating two reduced versions from a spoken language treebank, the Switchboard Corpus, mimicking a speech-recognizer output with and without disfluency detection and deletion. Two data-driven parsers are applied on the new data, and the parsers' output is evaluated and compared. | The Effects of Disfluency Detection in Parsing Spoken Language |
d248780042 | Question Answering (QA) is a Natural Language Processing (NLP) task that can measure language and semantics understanding ability; it requires a system not only to retrieve relevant documents from a large number of articles but also to answer corresponding questions according to documents. However, various language styles and sources of human questions and evidence documents form different embedding semantic spaces, which may introduce errors into the downstream QA task. To alleviate these problems, we propose a framework for enhancing downstream evidence retrieval by generating evidence, aiming at improving the performance of response generation. Specifically, we take the pre-training language model as a knowledge base, storing documents' information and knowledge into model parameters. With the Child-Tuning approach being designed, the knowledge storage and evidence generation avoid catastrophic forgetting for response generation. Extensive experiments carried out on the multi-documents dataset show that the proposed method can improve the final performance, which demonstrates the effectiveness of the proposed framework. | A Knowledge Storage and Semantic Space Alignment Method for Multi-documents Dialogue Generation |
d9821042 | We describe the Stanford entry to the BioNLP 2011 shared task on biomolecular event extraction (Kim et al., 2011a). Our framework is based on the observation that event structures bear a close relation to dependency graphs. We show that if biomolecular events are cast as these pseudosyntactic structures, standard parsing tools (maximum-spanning tree parsers and parse rerankers) can be applied to perform event extraction with minimum domain-specific tuning. The vast majority of our domain-specific knowledge comes from the conversion to and from dependency graphs. Our system performed competitively, obtaining 3rd place in the Infectious Diseases track (50.6% f-score), 5th place in Epigenetics and Post-translational Modifications (31.2%), and 7th place in Genia (50.0%). Additionally, this system was part of the combined system in Riedel et al. (2011) to produce the highest scoring system in three out of the four event extraction tasks. | Event Extraction as Dependency Parsing for BioNLP 2011 |
d210063827 | The paper describes a computational approach to produce functionally comparable monolingual corpus resources for translation studies and contrastive analysis. We exploit a text-external approach, based on a set of Functional Text Dimensions to model text functions, so that each text can be represented as a vector in a multidimensional space of text functions. These vectors can be used to find reasonably homogeneous subsets of functionally similar texts across different corpora. Our models for predicting text functions are based on recurrent neural networks and traditional feature-based machine learning approaches. In addition to using the categories of the British National Corpus as our test case, we investigated the functional comparability of the English parts from the two parallel corpora: CroCo (English-German) and RusLTC (English-Russian) and applied our models to define functionally similar clusters in them. Our results show that the Functional Text Dimensions provide a useful description for text categories, while allowing a more flexible representation for texts with hybrid functions. | Towards Functionally Similar Corpus Resources for Translation |
d43346911 | This paper evaluates the impact of machine translation on the software localization process and the daily work of professional translators when SMT is applied to low-resourced languages with rich morphology. Translation from English into six low-resourced languages (Czech, Estonian, Hungarian, Latvian, Lithuanian and Polish) from different language groups are examined. Quality, usability and applicability of SMT for professional translation were evaluated. The building of domain and project tailored SMT systems for localization purposes was evaluated in two setups. The results of the first evaluation were used to improve SMT systems and MT platform. The second evaluation analysed a more complex situation considering tag translation and its effects on the translator's productivity. | Application of Machine Translation in Localization into Low-Resourced Languages |
d12700615 | Recently, the attention mechanism plays a key role to achieve high performance for Neural Machine Translation models. However, as it computes a score function for the encoder states in all positions at each decoding step, the attention model greatly increases the computational complexity. In this paper, we investigate the adequate vision span of attention models in the context of machine translation, by proposing a novel attention framework that is capable of reducing redundant score computation dynamically. The term "vision span" means a window of the encoder states considered by the attention model in one step. In our experiments, we found that the average window size of vision span can be reduced by over 50% with modest loss in accuracy on English-Japanese and German-English translation tasks. | An Empirical Study of Adequate Vision Span for Attention-Based Neural Machine Translation |