_id | text | title |
|---|---|---|
d3052195 | An efficient context-free parsing algorithm is presented that can parse sentences with unknown parts of unknown length. It produces in finite form all possible parses (often infinite in number) that could account for the missing parts. The algorithm is a variation of the construction due to Earley. However, its presentation is such that it can readily be adapted to any chart parsing schema (top-down, bottom-up, etc.). | Parsing Incomplete Sentences |
d14625706 | German verbal inflection is frequently wrong in standard statistical machine translation approaches. German verbs agree with subjects in person and number, and they bear information about mood and tense. For subject-verb agreement, we parse German MT output to identify subject-verb pairs and ensure that the verb agrees with the subject. We show that this approach improves subject-verb agreement. We model tense/mood translation from English to German by means of a statistical classification model. Although our model shows good results on well-formed data, it does not systematically improve tense and mood in MT output. Reasons include the need for discourse knowledge, dependency on the domain, and stylistic variety in how tense/mood is translated. We present a thorough analysis of these problems. | Modeling verbal inflection for English to German SMT |
d229366028 | ||
d250390690 | This paper describes our system in the SemEval-2022 Task 12: 'linking mathematical symbols to their descriptions', achieving first place on the leaderboard for all the subtasks comprising named entity extraction (NER) and relation extraction (RE). Our system is a two-stage pipeline model based on SciBERT that detects symbols, descriptions, and their relationships in scientific documents. The system consists of 1) a machine reading comprehension (MRC)-based NER model, where each entity type is represented as a question and its entity mention span is extracted as an answer using an MRC model, and 2) span pair classification for RE, where two entity mentions and their type markers are encoded into span representations that are then fed to a Softmax classifier. In addition, we deploy a rule-based symbol tokenizer to improve the detection of the exact boundary of symbol entities. Regularization and ensemble methods are further explored to improve the RE model. | JBNU-CCLab at SemEval-2022 Task 12: Machine Reading Comprehension and Span Pair Classification for Linking Mathematical Symbols to Their Descriptions |
d16965576 | Cancer (a.k.a. neoplasms in a broader sense) is one of the leading causes of death worldwide, and its incidence is expected to worsen. In response to this critical societal need, the cancer research community has made rigorous attempts to develop treatments for cancer. Accordingly, we observe a surge in the sheer volume of research products and outcomes in relation to neoplasms. | Analyzing Impact, Trend, and Diffusion of Knowledge associated with Neoplasms Research |
d15143079 | Semantic databases are a stable starting point in developing knowledge-based systems. Since creating language resources demands considerable time, money, and human resources, a possible solution could be the import of a resource annotation from one language to another. This paper presents the creation of a semantic role database for Romanian, starting from the English FrameNet semantic resource. The intuition behind the importing program is that most of the frames defined in the English FN are likely to be valid cross-lingually, since semantic frames express conceptual structures that are language independent at the deep structure level. The surface realization, the surface level, is realized according to each language's syntactic constraints. In the paper we present the advantages of choosing to import the English FrameNet annotation instead of annotating a new corpus. We also take into account the mismatches encountered in the validation process. The rules created to manage particular situations are used to improve the import program. We believe the information and argumentation in this paper could be of interest for those who wish to develop FrameNet-like systems for other languages. | Romanian Semantic Role Resource |
d254336568 | We present our initial experiments on binary classification of sentences into linguistically correct versus incorrect ones in Swedish using the DaLAJ dataset (Volodina et al., 2021a). The nature of the task borders on linguistic acceptability judgments, on the one hand, and on the grammatical error detection task, on the other. The experiments include models trained with different input features and on different variations of the training, validation, and test splits. We also analyze the results focusing on different error types and errors made on different proficiency levels. Apart from insights into which features and approaches work well for this task, we present the first benchmark results on this dataset. The implementation is based on a bidirectional LSTM network and pretrained FastText embeddings, BERT embeddings, our own word and character embeddings, as well as part-of-speech tags and dependency labels as input features. The best model used BERT embeddings and a training and validation set enriched with additional correct sentences. It reached an accuracy of 73% on one of three test sets used in the evaluation. These promising results illustrate that the data and format of DaLAJ make it a valuable new resource for research in acceptability judgments in Swedish. | Exploring Linguistic Acceptability in Swedish Learners' Language |
d227905687 | ||
d259370843 | Syntactic probing methods have been used to examine whether and how pre-trained language models (PLMs) encode syntactic relations. However, the probing methods are usually biased by the PLMs' memorization of common word co-occurrences, even if they do not form syntactic relations. This paper presents a random-word-substitution and random-label-matching control task to reduce these biases and improve the robustness of syntactic probing methods. Our control tasks are also shown to notably improve the consistency of probing results between different probing methods and make the methods more robust with respect to the text attributes of the probing instances. Our control tasks make syntactic probing methods better at reconstructing syntactic relations and more generalizable to unseen text domains. Our experiments show that our proposed control tasks are effective on different PLMs, probing methods, and syntactic relations. | Improving Syntactic Probing Correctness and Robustness with Control Tasks |
d259376651 | Grammatical error correction (GEC) is a challenging task for non-native second language (L2) learners and learning machines. Data-driven GEC learning requires as much human-annotated genuine training data as possible. However, it is difficult to produce larger-scale human-annotated data, and synthetically generated large-scale parallel training data is valuable for GEC systems. In this paper, we propose a method for rebuilding a corpus of synthetic parallel data using target sentences predicted by a GEC model to improve performance. Experimental results show that our proposed pre-training outperforms pre-training on the original synthetic datasets. Moreover, it is also shown that our proposed training without human-annotated L2 learners' corpora is as practical as conventional full pipeline training with both synthetic datasets and L2 learners' corpora in terms of accuracy. | Training for Grammatical Error Correction Without Human-Annotated L2 Learners' Corpora |
d259376872 | For SemEval-2023 Task 5, we have submitted three DeBERTaV3 LARGE models to tackle the first subtask, classifying spoiler types (passage, phrase, multi) of clickbait web articles. Basic parameters such as sequence length were chosen with BERT BASE uncased; further approaches were then tested with DeBERTaV3 BASE, moving only the most promising ones to DeBERTaV3 LARGE. Our research showed that information placement on webpages is often optimized with respect to, e.g., ad placement. This information is usually described within the webpage's markup, which is why we devised an approach that takes this into account. Overall we could not manage to beat the baseline, which we attribute to three reasons: first, we only crawled markup for Huffington Post articles, extracting only <p>- and <a>-tags, which will not cover enough aspects of a webpage's design; second, Huffington Post articles are overrepresented in the given dataset, which, third, shows an imbalance across the spoiler tags. We highly suggest re-annotating the given dataset to use markup-optimized models like MarkupLM or TIE and to clear it of embedded articles like "Yahoo" or archives like "archive.is" or "web.archive" to avoid noise. The imbalance should also be tackled by adding articles from sources other than Huffington Post, and multi-tagged entries should be balanced against passage- and phrase-tagged ones. | Stephen Colbert at SemEval-2023 Task 5: Using Markup for Classifying Clickbait |
d53449351 | The paper investigates the potential effects user features have on hate speech classification. A quantitative analysis of Twitter data was conducted to better understand user characteristics, but no correlations were found between hateful text and the characteristics of the users who had posted it. However, experiments with a hate speech classifier based on datasets from three different languages showed that combining certain user features with textual features gave slight improvements of classification performance. While the incorporation of user features resulted in varying impact on performance for the different datasets used, user network-related features provided the most consistent improvements. | The Effects of User Features on Twitter Hate Speech Detection |
d226283834 | This paper presents the model submitted by the NIT COVID-19 team for identifying informative COVID-19 English tweets at WNUT-2020 Task 2. This shared task addresses the problem of automatically identifying whether an English tweet related to COVID-19 (novel coronavirus) is informative or not. These informative tweets provide information about recovered, confirmed, suspected, and death cases as well as the location or travel history of the cases. The proposed approach includes pre-processing techniques and pre-trained RoBERTa with suitable hyperparameters for English coronavirus tweet classification. The performance achieved by the proposed model for shared task WNUT-2020 Task 2 is 89.14% in the F1-score metric. | NIT COVID-19 at WNUT-2020 Task 2: Deep Learning Model RoBERTa for Identify Informative COVID-19 English Tweets |
d216151988 | ||
d188487 | Thesauruses are useful resources for NLP; however, manual construction of a thesaurus is time-consuming and suffers from low coverage. Automatic thesaurus construction was developed to solve these problems. The conventional way to automatically construct a thesaurus is to find similar words based on context vector models and then organize the similar words into a thesaurus structure. But the context vector methods suffer from the problems of vast feature dimensions and data sparseness. Latent Semantic Indexing (LSI) was commonly used to overcome these problems. In this paper, we propose a feature clustering method to overcome the same problems. The experimental results show that it performs better than the LSI models and does enhance contextual information for infrequent words. | Improving Context Vector Models by Feature Clustering for Automatic Thesaurus Construction |
d38898903 | Automatic identification of good arguments on a controversial topic has applications in civics and education, to name a few. While in the civics context it might be acceptable to create separate models for each topic, in the context of scoring of students' writing there is a preference for a single model that applies to all responses. Given that good arguments for one topic are likely to be irrelevant for another, is a single model for detecting good arguments a contradiction in terms? We investigate the extent to which it is possible to close the performance gap between topic-specific and across-topics models for identification of good arguments. | Detecting Good Arguments in a Non-Topic-Specific Way: An Oxymoron? |
d219300513 | ||
d755490 | Accurate dependency parsing requires large treebanks, which are only available for a few languages. We propose a method that takes advantage of shared structure across languages to build a mature parser using less training data. We propose a model for learning a shared "universal" parser that operates over an interlingual continuous representation of language, along with language-specific mapping components. Compared with supervised learning, our methods give a consistent 8-10% improvement across several treebanks in low-resource simulations. | A Neural Network Model for Low-Resource Universal Dependency Parsing |
d28893172 | In this paper, we evaluate for the first time the use of Machine Translation technology to repair general errors in second language (L2) authoring. Contrary to previously evaluated approaches which rely exclusively on unilingual models of L2, this method takes into account both languages, and is thus able to model linguistic interference phenomena where the author produces an erroneous word for word translation of his L1 intent. We evaluate a simple roundtrip MT approach on a corpus of foreign-sounding errors produced in the context of French as a Second Language. We show that the roundtrip approach is better at repairing linguistic interference errors than non-interference ones, and that it is better at repairing errors which only involve function words. We also show that the first leg of the roundtrip (inferring the author's L1 intent) is more sensitive to error type and more error prone than the second leg (rendering a correct L1 intent back into L2). | Using Automatic Roundtrip Translation to Repair General Errors in Second Language Writing |
d18331413 | This paper describes FERRET, an interactive question-answering (Q/A) system designed to address the challenges of integrating automatic Q/A applications into real-world environments. FERRET utilizes a novel approach to Q/A, known as predictive questioning, which attempts to identify the questions (and answers) that users need by analyzing how a user interacts with a system while gathering information related to a particular scenario. | FERRET: Interactive Question-Answering for Real-World Environments |
d8761718 | In this paper, we describe our approach to Semeval 2015 task 10 subtask B, message level sentiment detection. Our system implements a variety of classifiers and data preparation techniques from previous work. The set of features and classifiers used in the final system produced consistently strong results using cross-validation on the provided training data. Our final system achieved an F-score of 57.60 on the provided test data. The overall best performing system had an F-score of 64.84. | SWAT-CMW: Classification of Twitter Emotional Polarity using a Multiple-Classifier Decision Schema and Enhanced Emotion Tagging |
d705693 | This paper presents a prototype system for key term manipulation and visualization in a real-world commercial environment. The system consists of two components. A preprocessor generates a set of key terms from a text dataset which represents a specific topic. The generated key terms are organized in a hierarchical structure and fed into a graphic user interface (GUI). The friendly and interactive GUI toolkit allows the user to visualize the key terms in context and explore the content of the original dataset. | Construction and Visualization of Key Term Hierarchies |
d15548317 | Person cross-document coreference systems depend on context for making decisions on the possible coreferences between person name mentions. The amount of context required is a parameter that varies from corpus to corpus, which makes it difficult for the usual disambiguation methods. In this paper we show that the amount of context required can be dynamically controlled on the basis of the prior probabilities of coreference, and we present a new statistical model for the computation of these probabilities. The experiment we carried out on a news corpus proves that the prior probabilities of coreference are an important factor for maintaining a good balance between precision and recall for cross-document coreference systems. | Person Cross Document Coreference with Name Perplexity Estimates |
d12632594 | In this paper we describe a hybrid system that applies Maximum Entropy model (Max-Ent), language specific rules and gazetteers to the task of Named Entity Recognition (NER) in Indian languages designed for the IJCNLP NERSSEAL shared task. Starting with Named Entity (NE) annotated corpora and a set of features we first build a baseline NER system. Then some language specific rules are added to the system to recognize some specific NE classes. Also we have added some gazetteers and context patterns to the system to increase the performance. As identification of rules and context patterns requires language knowledge, we were able to prepare rules and identify context patterns for Hindi and Bengali only. For the other languages the system uses the MaxEnt model only. After preparing the one-level NER system, we have applied a set of rules to identify the nested entities. The system is able to recognize 12 classes of NEs with 65.13% f-value in Hindi, 65.96% f-value in Bengali and 44.65%, 18.74%, and 35.47% f-value in Oriya, Telugu and Urdu respectively. | A Hybrid Approach for Named Entity Recognition in Indian Languages |
d202766449 | We study how to sample negative examples to automatically construct a training set for effective model learning in retrieval-based dialogue systems. Following an idea of dynamically adapting negative examples to matching models in learning, we consider four strategies including minimum sampling, maximum sampling, semi-hard sampling, and decay-hard sampling. Empirical studies on two benchmarks with three matching models indicate that compared with the widely used random sampling strategy, although the first two strategies lead to performance drop, the latter two ones can bring consistent improvement to the performance of all the models on both benchmarks. | Sampling Matters! An Empirical Study of Negative Sampling Strategies for Learning of Matching Models in Retrieval-based Dialogue Systems |
d5881896 | We examine the task of temporal relation classification. Unlike existing approaches to this task, we (1) classify an event-event or event-time pair as one of the 14 temporal relations defined in the TimeBank corpus, rather than as one of the six relations collapsed from the original 14; (2) employ sophisticated linguistic knowledge derived from a variety of semantic and discourse relations, rather than focusing on morpho-syntactic knowledge; and (3) leverage a novel combination of rule-based and learning-based approaches, rather than relying solely on one or the other. Experiments with the TimeBank corpus demonstrate that our knowledge-rich, hybrid approach yields a 15-16% relative reduction in error over a state-of-the-art learning-based baseline system. | Classifying Temporal Relations with Rich Linguistic Knowledge |
d231821651 | ||
d3157653 | The purpose of COVAX (Contemporary Culture Virtual Archives in XML), financed by the European Commission in the IST Programme, was to analyse and draw up the technical solutions required to provide access through the Internet to homogeneously-encoded document descriptions of archive, library and museum collections based on the application of XML. The project demonstrated its feasibility through a prototype containing a meaningful sample of all the different types of documents to build a global system for search and retrieval. The aim of this paper is to create a new presentation of the documents in the COVAX system. A system capable of processing markup semantics declarations can act as an interactive environment for testing conjectures and validating hypotheses. Semantics is one of the ways of improving information retrieval performance; we will explore this problem in the COVAX case study. We will investigate the possibility of deriving semantic knowledge from COVAX repositories, in order to improve the site analysis process and the query answering process. | Investigation on Semantics to Improve the COVAX System |
d8060447 | We propose a novel approach to cross-lingual model transfer based on feature representation projection. First, a compact feature representation relevant for the task in question is constructed for either language independently, and then the mapping between the two representations is determined using parallel data. The target instance can then be mapped into the source-side feature representation using the derived mapping and handled directly by the source-side model. This approach displays competitive performance on model transfer for semantic role labeling when compared to direct model transfer and annotation projection, and suggests interesting directions for further research. | Cross-lingual Model Transfer Using Feature Representation Projection |
d28730125 | In this paper, we propose to learn word embeddings based on the recent fixed-size ordinally forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence into a fixed-size representation. We use FOFE to fully encode the left and right context of each word in a corpus to construct a novel word-context matrix, which is further weighted and factorized using truncated SVD to generate low-dimension word embedding vectors. We have evaluated this alternative method of encoding word-context statistics and show that the new FOFE method has a notable effect on the resulting word embeddings. Experimental results on several popular word similarity tasks have demonstrated that the proposed method outperforms many recently popular neural prediction methods as well as the conventional SVD models that use canonical count-based techniques to generate word-context matrices. | Word Embeddings based on Fixed-Size Ordinally Forgetting Encoding |
d51918639 | This demonstration paper presents a bilingual (Arabic-English) interactive human avatar dialogue system. The system is named TOIA (time-offset interaction application), as it simulates face-to-face conversations between humans using digital human avatars recorded in the past. TOIA is a conversational agent, similar to a chat bot, except that it is based on an actual human being and can be used to preserve and tell stories. The system is designed to allow anybody, simply using a laptop, to create an avatar of themselves, thus facilitating cross-cultural and cross-generational sharing of narratives to wider audiences. The system currently supports monolingual and cross-lingual dialogues in Arabic and English, but can be extended to other languages. | A Bilingual Interactive Human Avatar Dialogue System |
d8420751 | Different languages contain complementary cues about entities, which can be used to improve Named Entity Recognition (NER) systems. We propose a method that formulates the problem of exploring such signals on unannotated bilingual text as a simple Integer Linear Program, which encourages entity tags to agree via bilingual constraints. Bilingual NER experiments on the large OntoNotes 4.0 Chinese-English corpus show that the proposed method can improve strong baselines for both Chinese and English. In particular, Chinese performance improves by over 5% absolute F1 score. We can then annotate a large amount of bilingual text (80k sentence pairs) using our method, and add it as uptraining data to the original monolingual NER training corpus. The Chinese model retrained on this new combined dataset outperforms the strong baseline by over 3% F1 score. | Named Entity Recognition with Bilingual Constraints |
d8436443 | The existence, public availability, and widespread acceptance of a standard benchmark for a given information retrieval (IR) task are beneficial to research on this task, since they allow different researchers to experimentally compare their own systems by comparing the results they have obtained on this benchmark. The Reuters-21578 test collection, together with its earlier variants, has been such a standard benchmark for the text categorization (TC) task throughout the last ten years. However, the benefits that this has brought about have somehow been limited by the fact that different researchers have "carved" different subsets out of this collection, and tested their systems on one of these subsets only; systems that have been tested on different Reuters-21578 subsets are thus not readily comparable. In this paper we present a systematic, comparative experimental study of the three subsets of Reuters-21578 that have been most popular among TC researchers. The results we obtain allow us to determine the relative difficulty of these subsets, thus establishing an indirect means for comparing TC systems that have, or will be, tested on these different subsets. | An Analysis of the Relative Difficulty of Reuters-21578 Subsets |
d53380935 | In this paper, we describe a morpho-syntactic tagger of tweets, an important component of the CEA List DeepLIMA tool, which is a multilingual text analysis platform based on deep learning. This tagger is built for the Morpho-syntactic Tagging of Tweets (MTT) Shared Task of the 2018 VarDial Evaluation Campaign. The MTT task focuses on morpho-syntactic annotation of non-canonical Twitter varieties of three South-Slavic languages: Slovene, Croatian and Serbian. We propose to use a neural network model trained in an end-to-end manner for the three languages without any need for task- or domain-specific feature engineering. The proposed approach combines both character and word level representations. Considering the lack of annotated data in the social media domain for South-Slavic languages, we have also implemented a cross-domain Transfer Learning (TL) approach to exploit any available related out-of-domain annotated data. | Using Neural Transfer Learning for Morpho-syntactic Tagging of South-Slavic Languages Tweets |
d11868668 | This paper proposes a technique to build entity profiles starting from a set of defining corpora, i.e., a corpus considered as the definition of each entity. The proposed technique is applied in a classification task in order to determine how much a text, or corpus, is related to each of the profiled entities. The technique is general enough to be applied to any kind of entity; however, the experiments in this paper are conducted on entities describing a set of professors of a computer science graduate school through the M.Sc. theses and Ph.D. dissertations they advised. The profiles of each entity are applied to categorize other texts into one of the built profiles. The analysis of the obtained results illustrates the power of the proposed technique. | Building and Applying Profiles Through Term Extraction |
d219308751 | ||
d17847726 | We present the Database of Catalan Adjectives (DCA), a database with 2,296 adjective lemmata enriched with morphological, syntactic and semantic information. This set of adjectives has been collected from a fragment of the Corpus Textual Informatitzat de la Llengua Catalana of the Institut d'Estudis Catalans and constitutes a representative sample of the adjective class in Catalan as a whole. The database includes both manually coded and automatically extracted information regarding the most prominent properties used in the literature on the semantics of adjectives, such as morphological origin, suffix (if any), predicativity, gradability, adjective position with respect to the head noun, adjective modifiers, or semantic class. The DCA can be useful for NLP applications using adjectives (from POS-taggers to Opinion Mining applications) and for linguistic analysis regarding the morphological, syntactic, and semantic properties of adjectives. We now make it available to the research community under a Creative Commons Attribution Share Alike 3.0 Spain license. | The Database of Catalan Adjectives |
d17476563 | We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models. While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. Inspired by how people first skim the document, identify relevant parts, and carefully read these parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences and a more expensive RNN for producing the answer from those sentences. We treat sentence selection as a latent variable trained jointly from the answer only using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a new dataset, while speeding up the model by 3.5x-6.7x. | Coarse-to-Fine Question Answering for Long Documents |
d8136321 | Kashmiri is a resource-poor language with very few computational and language resources available for its text processing. As the main contribution of this paper, we present an initial version of the Kashmiri Dependency Treebank. The treebank consists of 1,000 sentences (17,462 tokens), annotated with part-of-speech (POS), chunk and dependency information. The treebank has been manually annotated using the Pāṇinian Computational Grammar (PCG) formalism (Begum et al., 2008; Bharati et al., 2009). This version of the Kashmiri treebank is an extension of its earlier version of 500 sentences (Bhat, 2012), a pilot experiment aimed at defining the annotation guidelines on a small subset of Kashmiri corpora. In this paper, we have refined the guidelines with some significant changes and have carried out inter-annotator agreement studies to ascertain its quality. We also present a dependency parsing pipeline, consisting of a tokenizer, a stemmer, a POS tagger, a chunker and an inter-chunk dependency parser. It, therefore, constitutes the first freely available, open source dependency parser of Kashmiri, setting the initial baseline for Kashmiri dependency parsing. | Towards building a Kashmiri Treebank: Setting up the Annotation Pipeline |
d10983275 | We present a natural language interface system which is based entirely on trained statistical models. The system consists of three stages of processing: parsing, semantic interpretation, and discourse. Each of these stages is modeled as a statistical process. The models are fully integrated, resulting in an end-to-end system that maps input utterances into meaning representation frames. | A FULLY STATISTICAL APPROACH TO NATURAL LANGUAGE INTERFACES |
d252186406 | With the increasing availability of large-scale parallel corpora derived from web crawling and bilingual text mining, data filtering is becoming an increasingly important step in neural machine translation (NMT) pipelines. This paper applies several available tools to the task of data filtration, and compares their performance in filtering out different types of noisy data. We also study the effect of filtration with each tool on model performance in the downstream task of NMT by creating a dataset containing a combination of clean and noisy data, filtering the data with each tool, and training NMT engines using the resulting filtered corpora. We evaluate the performance of each engine with a combination of MQM-based human evaluation and automated metrics. Our results show that cross-entropy filtering substantially outperforms the other tested methods for the types of noise we studied, and also leads to better NMT models. Our best results are obtained by training for a short time on all available data then filtering the corpus with cross-entropy filtering and training until convergence. | A Comparison of Data Filtering Methods for Neural Machine Translation |
d233305301 | ||
d202590292 | In this paper, we propose an approach for semi-automatically creating a data-to-text (D2T) corpus for Russian that can be used to learn a D2T natural language generation model. An error analysis of the output of an English-to-Russian neural machine translation system shows that 80% of the automatically translated sentences contain an error and that 53% of all translation errors bear on named entities (NE). We therefore focus on named entities and introduce two post-editing techniques for correcting wrongly translated NEs. | Creating a Corpus for Russian Data-to-Text Generation Using Neural Machine Translation and Post-Editing |
d14952042 | We describe a simple and efficient algorithm to disambiguate non-functional weighted finite state transducers (WFSTs), i.e. to generate a new WFST that contains a unique, best-scoring path for each hypothesis in the input labels along with the best output labels. The algorithm uses topological features combined with a tropical sparse tuple vector semiring. We empirically show that our algorithm is more efficient than previous work in a PoS-tagging disambiguation task. We use our method to rescore very large translation lattices with a bilingual neural network language model, obtaining gains in line with the literature. | Transducer Disambiguation with Sparse Topological Features |
d229365665 | We report the results of the first edition of the WMT shared task on Chat Translation. The task consisted of translating bilingual conversational text, in particular customer support chats for the English-German language pair (English agent, German customer). This task varies from the other translation shared tasks, i.e. news and biomedical, mainly due to the fact that the conversations are bilingual, less planned, more informal, and often ungrammatical. Furthermore, such conversations are usually characterized by shorter and simpler sentences and contain more pronouns. We received 14 submissions from 6 participating teams, all of them covering both directions, i.e. En→De for agent utterances and De→En for customer messages. We used automatic metrics (BLEU and TER) for evaluating the translations of both agent and customer messages and human document-level direct assessments to evaluate the agent translations. | Findings of the WMT 2020 Shared Task on Chat Translation |
d17247976 | Van der Sandt's algorithm for handling presupposition is based on a "presupposition as anaphora" paradigm and is expressed in the realm of Kamp's DRT. In recent years, we have proposed a type-theoretic rebuilding of DRT that allows Montague's semantics to be combined with discourse dynamics. Here we explore van der Sandt's theory along the lines of this formal framework. It then results that presupposition handling may be expressed in a purely Montagovian setting, and that presupposition accommodation amounts to exception handling. | Presupposition Accommodation as Exception Handling |
d52966647 | A surprising property of word vectors is that vector algebra can often be used to solve word analogies. However, it is unclear why, and when, linear operators correspond to nonlinear embedding models such as skip-gram with negative sampling (SGNS). We provide a rigorous explanation of this phenomenon without making the strong assumptions that past theories have made about the vector space and word distribution. Our theory has several implications. Past work has conjectured that linear structures exist in vector spaces because relations can be represented as ratios; we prove that this holds for SGNS. We provide novel justification for the addition of SGNS word vectors by showing that it automatically downweights the more frequent word, as weighting schemes do ad hoc. Lastly, we offer an information theoretic interpretation of Euclidean distance in vector spaces, justifying its use in capturing word dissimilarity. | Towards Understanding Linear Word Analogies |
d6387329 | We present the dependency parser Maxuxta for the linguistic processing of Basque, which can serve as a representative of agglutinative languages that are also characterized by the free order of their constituents. The dependency syntactic model is applied to establish the dependency-based grammatical relations between the components within the clause. Such a deep analysis is used to improve the output of the shallow parsing, where syntactic structure ambiguity is not fully and explicitly resolved. Prior to the completion of the grammar for dependency parsing, the design of the Dependency Structure-based Scheme had to be accomplished; we concentrated on issues that must be resolved by any practical system that uses such models. This scheme was used both for the manual tagging of the corpus and for developing the parser. The manually tagged corpus has been used to evaluate the accuracy of the parser. We have evaluated the application of the grammar to the corpus, measuring the linking of the verb with its dependents, with satisfactory results. | Towards a Dependency Parser for Basque |
d11843300 | Crowdsourcing offers a convenient means of obtaining labeled data quickly and inexpensively. However, crowdsourced labels are often noisier than expert-annotated data, making it difficult to aggregate them meaningfully. We present an aggregation approach that learns a regression model from crowdsourced annotations to predict aggregated labels for instances that have no expert adjudications. The predicted labels achieve a correlation of 0.594 with expert labels on our data, outperforming the best alternative aggregation method by 11.9%. Our approach also outperforms the alternatives on third-party datasets. | Finding Patterns in Noisy Crowds: Regression-based Annotation Aggregation for Crowdsourced Data |
d258463969 | This study investigated the do-constructions in Chinese, Russian, and Czech, a predicate-argument structure comprised of the light verb 'to do' (zuò in Chinese, delat' in Russian, and dělat in Czech) and a verbal noun as the head in the accusative role, considering the linguistic traits and pragmatic use of the constructions in spoken and written discourse. The corpus results attested that the three languages not only have lexical and grammatical equivalences, but also demonstrate a functional equivalence in packaging information to define a type of action within the construction. Similar lexicogrammatical strategies are employed to encode tense and aspectual information of the predicates and various kinds of information about the nominal heads. The preference for the do-usage in the written genre is unequivocal in Chinese and Russian, suggesting that the structural change could have started as a writing style. The relative novelty of the do-usage to communicate generic or specific action events in Czech is evidence of language-specificity in pragmatic use. | The Information Packaging of the Do-Constructions in Chinese, Russian, and Czech |
d14958830 | Arabic words are often ambiguous between name and non-name interpretations, frequently leading to incorrect name translations. We present a technique to disambiguate and transliterate names even if name interpretations do not exist or have relatively low probability distributions in the parallel training corpus. The key idea comprises named entity classing at the pre-processing step, decoding of a simple confusion network created from the name class label and the input word at the statistical machine translation step, and transliteration of names at the post-processing step. Human evaluations indicate that the proposed technique leads to a statistically significant translation quality improvement on highly ambiguous evaluation data sets without degrading the translation quality of a data set with very few names. | Confusion Network for Arabic Name Disambiguation and Transliteration in Statistical Machine Translation |
d7764845 | In this opinion piece, I present four somewhat controversial suggestions for the design of future treebanks: a) Treebanks should be based on adversarial samples, rather than pseudo-representative samples. b) Treebanks should include multiple splits of the data, rather than just a single split, as in most treebanks today. c) They should include multiple annotations of each sentence, whenever possible, instead of adjudicated annotations. d) There is no real motivation for adhering to a notion of well-formedness, since we now have parsers based on deep learning that generalize easily and perform well on any type of graphs, and treebanks therefore do not have to limit themselves to trees or directed acyclic graphs. | What I think when I think about treebanks |
d53590978 | This paper describes our Automatic Dialect Recognition (ADI) system for the VarDial 2018 challenge, with the goal of distinguishing four major Arabic dialects, as well as Modern Standard Arabic (MSA). The training and development ADI VarDial 2018 data consists of 16,157 utterances, their word transcriptions, their phonetic transcriptions obtained with four non-Arabic phoneme recognizers, and acoustic embedding data. Our overall system is a combination of four different systems. One system uses the word transcriptions and tries to recognize the speaker's dialect by modeling the sequence of words for each dialect. Another system tries to recognize the dialect by modeling the phone sequences produced by non-Arabic phone recognizers, whereas the other two systems use GMMs trained on the acoustic features for recognizing the dialect. The best performance was achieved by the fused system which combines the four systems together, with an F1 micro of 68.77%. | Birzeit Arabic Dialect Identification System for the 2018 VarDial Challenge |
d2780011 | The goal of this paper is to explore some consequences of the dichotomy between competence and performance from the point of view of incrementality. We introduce a TAG-based formalism that encodes a strong notion of incrementality directly into the operations of the formal system. A left-associative operation is used to build a lexicon of extended elementary trees. Extended elementary trees allow derivations in which a single fully connected structure is maintained through the course of a left-to-right word-by-word derivation. In the paper, we describe the consequences of this view for semantic interpretation, and we also evaluate some of the computational consequences of enlarging the lexicon in this way. | Competence and Performance Grammar in Incremental Processing |
d2048218 | RESOLVING A PRAGMATIC PREPOSITIONAL PHRASE ATTACHMENT AMBIGUITY | |
d15531056 | This paper presents the Russian information retrieval evaluation initiative and the results obtained during its first year. In particular, we describe the first ROMIP seminar, the Cyrillic Web collection and search tasks used, as well as ongoing efforts on ROMIP'2004. | Russian Information Retrieval Evaluation Seminar |
d29627503 | ||
d5024817 | This paper presents a hybrid approach to question answering in the clinical domain that combines techniques from summarization and information retrieval. We tackle a frequently-occurring class of questions that takes the form "What is the best drug treatment for X?" Starting from an initial set of MEDLINE citations, our system first identifies the drugs under study. Abstracts are then clustered using semantic classes from the UMLS ontology. Finally, a short extractive summary is generated for each abstract to populate the clusters. Two evaluations, a manual one focused on short answers and an automatic one focused on the supporting abstracts, demonstrate that our system compares favorably to PubMed, the search system most widely used by physicians today. | Answer Extraction, Semantic Clustering, and Extractive Summarization for Clinical Question Answering |
d17204537 | This article makes two contributions towards the use of lexical resources and corpora; specifically making use of them for gaining access to and using word associations. The direct application of our approach is for detecting linguistic and conceptual metaphors automatically in text. We describe our method of building conceptual spaces, that is, defining the vocabulary that characterizes a Source Domain (e.g., Disease) of a conceptual metaphor (e.g., Poverty is a Disease). We also describe how these conceptual spaces are used to group linguistic metaphors into conceptual metaphors. Our method works in multiple languages, including English, Spanish, Russian and Farsi. We provide details of how our method can be evaluated and evaluation results that show satisfactory performance across all languages. | Discovering Conceptual Metaphors Using Source Domain Spaces |
d12887137 | This paper introduces EasyText, a fully operational NLG system. This application processes numerical data (in tables) in order to generate specific analytical commentaries on these tables. We start by describing the context of this particular NLG application (communicative goal, user profiles, etc.). We then briefly present the theoretical background which underlies EasyText, before describing its implementation, realization and evaluation. | EasyText: an Operational NLG System |
d236486298 | In recent years, language models (LMs) have become almost synonymous with NLP. Pre-trained to "read" a large text corpus, such models are useful both as a representation layer and as a source of world knowledge. But how well do they represent MWEs? This talk will discuss various problems in representing MWEs, and the extent to which LMs address them: • Do LMs capture the implicit relationship between constituents in compositional MWEs (from baby oil through parsley cake to cheeseburger stabbing)? • Do LMs recognize when words are used non-literally in non-compositional MWEs (e.g. do they know whether there are fleas in the flea market)? • Do LMs know idioms, and can they infer the meaning of new idioms from the context as humans often do? | A Long Hard Look at MWEs in the Age of Language Models |
d245289877 | Question answering (QA) is one of the most challenging and impactful tasks in natural language processing. Most research in QA, however, has focused on the open-domain or monolingual setting while most real-world applications deal with specific domains or languages. In this tutorial, we attempt to bridge this gap. Firstly, we introduce standard benchmarks in multi-domain and multilingual QA. In both scenarios, we discuss state-of-the-art approaches that achieve impressive performance, ranging from zero-shot transfer learning to out-of-the-box training with open-domain QA systems. Finally, we will present open research problems that this new research agenda poses such as multi-task learning, cross-lingual transfer learning, domain adaptation and training large scale pre-trained multilingual language models. | Multi-Domain Multilingual Question Answering |
d14360681 | We report on a rule-based procedure for extracting and labeling English verb collocates from a dependency-parsed corpus. Instead of relying on the syntactic labels provided by the parser, we use a simple topological sequence that we fill with the extracted collocates in a prescribed order. A more accurate syntactic labeling will be obtained from the topological fields by comparison of corresponding collocate positions across the most common syntactic alternations. So far, we have extracted and labeled verb forms and predicate complements according to their morphosyntactic structure. In the near future, we will provide the syntactic labeling of the complements. | Rule-based extraction of English verb collocates from a dependency-parsed corpus |
d220046608 | This paper proposes a simple and effective approach to address the problem of posterior collapse in conditional variational autoencoders (CVAEs). It thus improves performance of machine translation models that use noisy or monolingual data, as well as in conventional settings. Extending Transformer and conditional VAEs, our proposed latent variable model measurably prevents posterior collapse by (1) using a modified evidence lower bound (ELBO) objective which promotes mutual information between the latent variable and the target, and (2) guiding the latent variable with an auxiliary bag-of-words prediction task. As a result, the proposed model yields improved translation quality compared to existing variational NMT models on WMT Ro↔En and De↔En. With latent variables being effectively utilized, our model demonstrates improved robustness over non-latent Transformer in handling uncertainty: exploiting noisy source-side monolingual data (up to +3.2 BLEU), and training with weakly aligned web-mined parallel data (up to +4.7 BLEU). | Addressing Posterior Collapse with Mutual Information for Improved Variational Neural Machine Translation |
d34083209 | We present opinion recommendation, a novel task of jointly generating a review with a rating score that a certain user would give to a certain product which is unreviewed by the user, given existing reviews of the product by other users, and the reviews that the user has given to other products. A characteristic of opinion recommendation is the reliance on multiple data sources for multi-task joint learning. We use a single neural network to model users and products, generating customised product representations using a deep memory network, from which customised ratings and reviews are constructed jointly. Results show that our opinion recommendation system gives ratings that are closer to real user ratings on Yelp.com data compared with Yelp's own ratings. Our methods give better results compared to several pipeline baselines. | Opinion Recommendation Using A Neural Model |
d541807 | The objective of this project is to develop a robust and high-performance speech recognition system using a segment-based approach to phonetic recognition. The recognition system will eventually be integrated with natural language processing to achieve spoken language understanding. SUMMARY OF ACCOMPLISHMENTS: Developed a phonetic recognition front-end and achieved 77% and 71% classification accuracy under speaker-dependent and -independent conditions, respectively, using a set of 38 context-independent models. Collaborated with researchers at SRI in the development of the MISTRI system, making explicit use of acoustic-phonetic and phonological knowledge. Developed the SUMMIT speech recognition system that incorporates auditory modelling and explicit segmentation, and achieved a speaker-independent accuracy of 87% on the DARPA 1000-word Resource Management task using 75 phoneme models. Developed the probabilistic natural language system TINA, and achieved a test-set coverage of 78% with perplexity of 42 for the Resource Management task. Transcribed all 6300 sentences for the TIMIT database. Developed a set of research tools for the DARPA speech research community in order to facilitate data collection, parameter computation, statistical analysis, and speech synthesis. PLANS: Improve the speech recognition performance by incorporating context-dependency in phoneme modelling. Integrate TINA into SUMMIT in order to develop spoken language understanding capabilities. Develop a back-end on the task of a Knowledgeable Navigator, and integrate it with the spoken language system. Begin hardware development, such that the system will soon be able to execute in near real-time. | Acoustic-Phonetics Based Speech Recognition |
d14549911 | In this paper, we present the foundations and the properties of the Dislog language, a logic-based language designed to describe and implement discourse structure analysis. Dislog has the flexibility and the expressiveness of a rule-based system, it offers the possibility to include knowledge and reasoning capabilities, and it allows the expression of a variety of well-formedness constraints proper to discourse. Dislog is embedded into the <TextCoop> platform that offers an engine with various processing capabilities and a programming environment. | DISLOG: A logic-based language for processing discourse structures |
d237099275 | Automated Frequently Asked Question (FAQ) retrieval provides an effective procedure for giving prompt responses to natural language queries, providing an efficient platform for large-scale service-providing companies to present readily available information pertaining to customers' questions. We propose DTAFA, a novel multi-lingual FAQ retrieval system that aims at improving the top-1 retrieval accuracy with the least number of parameters. We propose two decoupled deep learning architectures trained for (i) candidate generation via text classification for a user question, and (ii) learning fine-grained semantic similarity between user questions and the FAQ repository for candidate refinement. We validate our system using real-life enterprise data as well as an open-source dataset. Empirically we show that DTAFA achieves better accuracy compared to existing state-of-the-art systems while requiring nearly 30× fewer training parameters. | DTAFA: Decoupled Training Architecture for Efficient FAQ Retrieval |
d2174286 | This paper reports the results of research into the current state of affairs with respect to teaching MT/CAT tools in Greece. Although a variety of methods is employed, this research essentially takes the form of a survey. According to the data provided by respondents, the current status of MT/CAT tools, now described as "poor" and "nonexistent" or at best as beginning to take off, appears somewhat disappointing. Yet, with respect to the future, it is our contention that the growing awareness, at least among respondents, of the need to integrate MT/CAT tools in their curricula will sooner or later bring about the changes now anticipated. | Teaching MT/CAT tools in Greece: The State of the Art |
d5512173 | Keyphrase annotation is the task of identifying textual units that represent the main content of a document. Keyphrase annotation is carried out either by extracting the most important phrases from a document (keyphrase extraction) or by assigning entries from a controlled domain-specific vocabulary (keyphrase assignment). Assignment methods are generally more reliable. They provide better-formed keyphrases, as well as keyphrases that do not occur in the document. But they often remain silent, contrary to extraction methods, which do not depend on manually built resources. This paper proposes a new method to perform both keyphrase extraction and keyphrase assignment in an integrated and mutually reinforcing manner. Experiments have been carried out on datasets covering different domains of humanities and social sciences. They show statistically significant improvements compared to both keyphrase extraction and keyphrase assignment state-of-the-art methods. | Keyphrase Annotation with Graph Co-Ranking |
d237155060 | ||
d218594780 | ||
d10523514 | This paper discusses sampling strategies for building a dependency-analyzed corpus and analyzes them with different kinds of corpora. We used the Kyoto Text Corpus, a dependency-analyzed corpus of newspaper articles, and prepared the IPAL corpus, a dependency-analyzed corpus of example sentences in dictionaries, as a new and different kind of corpus. The experimental results revealed that the length of the test set controlled the accuracy and that the longest-first strategy was good for an expanding corpus, but this was not the case when constructing a corpus from scratch. | Analysis of Selective Strategies to Build a Dependency-Analyzed Corpus |
d9595259 | We present a method to store additional information in a minimal automaton so that it is possible to compute a corresponding tree node number for a state. The number can then be used to retrieve additional information. The method works for minimal (and any other) deterministic acyclic finite state automata (DFAs). We also show how to compute the inverse mapping. | Preserving Trees in Minimal Automata |
d18803813 | On the other hand, the results suggest that negation, when addressed from a broader pragmatic perspective, far from being a nuisance, is an ideal application domain for distributional semantic methods. | There Is No Logical Negation Here, But There Are Alternatives: Modeling Conversational Negation with Distributional Semantics |
d235482362 | In recent years, the concept of conversational commerce has arisen among major technology giants, and human-computer interaction has shifted from graphical interfaces to conversational interfaces. Natural language has thus become a key factor in human-computer interaction. However, teaching a machine how to communicate with humans in order to complete a concrete task is quite challenging. One of the difficulties to overcome is natural language understanding, including how to identify what kind of question the user is asking and how to obtain the information hidden in the text. It is therefore very important for the machine to understand the intent and information behind the user's question. This study uses deep learning models to recognize user intent in de-identified Chinese customer-service dialogue data. To handle unknown Chinese words more effectively and to reduce misrecognition, we compare different pre-trained word vector models combined with deep learning models for intent recognition. Compared with random word embeddings, the BERT-WWM-Chinese (BWC) model improves accuracy by nearly 10%, indicating that the vectors produced by the BWC model better capture the semantic relations among the words in user questions, so that semantically similar words yield similar vectors and thereby improve the accuracy of user intent recognition. | 預訓練詞向量模型應用於客服對話系統意圖偵測之研究 Study on Pre-trained Word Vector Model Applied to Intent Detection of Customer Service Dialogue System |
d11616352 | Imbalanced training sets, where one class is heavily underrepresented compared to the others, have an adverse effect on the classification of rare class instances. We apply One-sided Sampling for the first time to a lexical acquisition task (learning verb complements from Modern Greek corpora) to remove redundant and misleading training examples of verb non-dependents and thereby balance our training set. We experiment with well-known learning algorithms to classify new examples. Performance improves by up to 22% in recall and 15% in precision after balancing the dataset. | Learning Greek Verb Complements: Addressing the Class Imbalance |
d27258039 | Annotation sémantique du French Treebank à l'aide de la réécriture modulaire de graphes | |
d235719387 | The task of semantic code search is to retrieve code snippets from a source code corpus based on an information need expressed in natural language. The semantic gap between natural language and programming languages has long been regarded as one of the most significant obstacles to the effectiveness of keyword-based information retrieval (IR) methods. It is a common assumption that "traditional" bag-of-words IR methods are poorly suited for semantic code search: our work empirically investigates this assumption. Specifically, we examine the effectiveness of two traditional IR methods, namely BM25 and RM3, on the CodeSearchNet Corpus, which consists of natural language queries paired with relevant code snippets. We find that the two keyword-based methods outperform several pre-BERT neural models. We also compare several code-specific data pre-processing strategies and find that specialized tokenization improves effectiveness. Code for reproducing our experiments is available at https://github.com/crystina-z/CodeSearchNet-baseline. | Bag-of-Words Baselines for Semantic Code Search |
d16147833 | Explicit Semantic Analysis (ESA) is an approach to calculate the semantic relatedness between two words or natural language texts with the help of concepts grounded in human cognition. ESA has received much attention in the fields of natural language processing, information retrieval and text analysis; however, the performance of the approach depends on several parameters included in the model, as well as on the type of text data used for evaluation. In this paper, we investigate the behavior of ESA models built with different numbers of Wikipedia articles when calculating semantic relatedness for different types of text pairs: word-word, phrase-phrase and document-document. Based on our findings, we further propose an approach to improve ESA semantic relatedness scores for words by enriching the words with their explicit context, such as synonyms, glosses and Wikipedia definitions. | Exploring ESA to Improve Word Relatedness |
d2659217 | A lexicon is an essential component in a generation system but few efforts have been made to build a rich, large-scale lexicon and make it reusable for different generation applications. In this paper, we describe our work to build such a lexicon by combining multiple, heterogeneous linguistic resources which have been developed for other purposes. Novel transformation and integration of resources is required to reuse them for generation. We also applied the lexicon to the lexical choice and realization component of a practical generation application by using a multi-level feedback architecture. The integration of the lexicon and the architecture is able to effectively improve the system paraphrasing power, minimize the chance of grammatical errors, and simplify the development process substantially. | Combining Multiple, Large-Scale Resources in a Reusable Lexicon for Natural Language Generation |
d222132851 | Neural network NLP models are vulnerable to small modifications of the input that maintain the original meaning but result in a different prediction. In this paper, we focus on the robustness of text classification against word substitutions, aiming to provide guarantees that the model prediction does not change if a word is replaced with a plausible alternative, such as a synonym. As a measure of robustness, we adopt the notion of the maximal safe radius for a given input text, which is the minimum distance in the embedding space to the decision boundary. Since computing the exact maximal safe radius is not feasible in practice, we instead approximate it by computing a lower and an upper bound. For the upper bound computation, we employ Monte Carlo Tree Search in conjunction with syntactic filtering to analyse the effect of single and multiple word substitutions. The lower bound computation is achieved through an adaptation of the linear bounding techniques implemented in the tools CNN-Cert and POPQORN, respectively for convolutional and recurrent network models. We evaluate the methods on sentiment analysis and news classification models for four datasets (IMDB, SST, AG News and NEWS) and a range of embeddings, and provide an analysis of robustness trends. We also apply our framework to interpretability analysis and compare it with LIME. | Assessing Robustness of Text Classification through Maximal Safe Radius Computation |
d15074076 | We are developing a multilingual machine translation system to provide foreign tourists with a multilingual speech translation service at the Winter Olympic Games to be held in Korea in 2018. To learn the knowledge needed for multilingual expansion, we required large bilingual corpora. In Korea there are many Korean-English bilingual corpora, but Korean-French and Korean-Spanish bilingual corpora are severely lacking. Korean-English-French and Korean-English-Spanish triangle corpora were therefore constructed by crowdsourced translation of the existing large Korean-English corpus. However, we found many translation errors in the triangle corpora. This paper addresses the filtering of translation errors in a large triangle corpus constructed by crowdsourced translation, in order to reduce the translation loss of a triangle corpus with English as a pivot language. Experiments show that our method improves +0.34 BLEU points over the baseline system. | Semi-automatic Filtering of Translation Errors in Triangle Corpus |
d7218020 | Extraction was attempted of broad-scale, high-precision Japanese-English parallel translation expressions from large aligned parallel corpora. To acquire broad-scale parallel translation expressions, a new method was used to extract single Japanese and English word n-grams, by which as many parallel translation expressions as possible could then be extracted. To achieve high extraction precision, first, hand-crafted rules were used to prune the unnecessary words often found in expressions extracted on the basis of word n-grams, and lexical information was used to refine the parallel translation expressions. Computer experiments with aligned parallel corpora consisting of about 280,000 pairs of Japanese-English parallel sentences found that more than 125,000 pairs of parallel translation expressions could be extracted with a precision of 0.96. These figures show that the proposed methods for extracting a broad range of parallel translation expressions have reached a level high enough for practical use. | Extraction of Broad-Scale, High-Precision Japanese-English Parallel Translation Expressions Using Lexical Information and Rules |
d15792551 | LDA-frames is an unsupervised approach for identifying semantic frames from semantically unlabeled text corpora, and appears to be a useful competitor to manually created databases of selectional preferences. The most limiting property of the algorithm is that the number of frames and roles must be predefined. In this paper we present a modification of the LDA-frames algorithm that allows the number of frames and roles to be determined automatically, based on the character and size of the training data. | Parameter Estimation for LDA-Frames |
d227217629 | ||
d16197187 | We introduce a new method for learning to detect grammatical errors in learner's writing and provide suggestions. The method involves parsing a reference corpus and inferring grammar patterns in the form of a sequence of content words, function words, and parts-of-speech (e.g., "play ~ role in Ving" and "look forward to Ving"). At runtime, the given passage submitted by the learner is matched using an extended Levenshtein algorithm against the set of pattern rules in order to detect errors and provide suggestions. We present a prototype implementation of the proposed method, EdIt, that can handle a broad range of errors. Promising results are illustrated with three common types of errors in nonnative writing. | EdIt: A Broad-Coverage Grammar Checker Using Pattern Grammar |
d5769757 | In this paper we investigate the Presentational Relative Clause (PRC) construction. In both the linguistic and NLP literature, relative clauses have been considered to contain background information that is not directly relevant or highly useful in semantic analysis. In text summarization in particular, the information contained in the relative clauses is often removed, being viewed as non-central content to the topic or discourse. We discuss the importance of distinguishing the PRC construction from other relative clause types. We show that in the PRC, the relative clause, rather than the main clause, contains the assertion of the utterance. Based on linguistic analysis, we suggest informative features that may be used in automatic extraction of PRC constructions. We believe that identifying this construction will be useful in discriminating central information from peripheral. | Identifying Assertions in Text and Discourse: The Presentational Relative Clause Construction |
d235097696 | ||
d3920857 | Some Psycholinguistic Constraints on the Construction and Interpretation of Definite Descriptions I | |
d464513 | We investigate the task of open-domain opinion relation extraction. Given a large number of unlabelled texts, we propose an efficient distantly supervised framework based on pattern matching and neural network classifiers. The patterns are designed to automatically generate training data, and the deep learning model is designed to capture various lexical and syntactic features. The resulting algorithm is fast and scalable on large-scale corpora. We test the system on the Amazon online review dataset, and show that the proposed model is able to achieve promising performance without any human annotations. | Large-scale Opinion Relation Extraction with Distantly Supervised Neural Network |
d10446942 | The automatic syllabification process is an important prerequisite for speech synthesis systems. However, the task is not trivial and several techniques have been adopted over the last decade. Furthermore, there are few studies and public resources dedicated to Brazilian Portuguese compared to other languages. This paper discusses efforts to reduce this deficiency by implementing and testing a syllabification algorithm based on linguistic rules. All developed code and databases will be made publicly available. | Implementação de um Separador Silábico Gratuito Baseado em Regras Linguísticas para o Português Brasileiro |
d18106982 | We present online learning techniques for statistical machine translation (SMT). The availability of large training data sets that grow constantly over time is becoming more and more common in the field of SMT, for example in the context of translation agencies or the daily translation of government proceedings. When new knowledge is to be incorporated in the SMT models, the use of batch learning techniques requires very time-consuming estimation processes over the whole training set that may take days or weeks to be executed. By means of the application of online learning, new training samples can be processed individually in real time. For this purpose, we define a state-of-the-art SMT model composed of a set of submodels, as well as a set of incremental update rules for each of these submodels. To test our techniques, we have studied two well-known SMT applications that can be used in translation agencies: post-editing and interactive machine translation. In both scenarios, the SMT system collaborates with the user to generate high-quality translations. These user-validated translations can be used to extend the SMT models by means of online learning. Empirical results in the two scenarios under consideration show the great impact of frequent updates on system performance. The time cost of such updates was also measured, comparing the efficiency of a batch learning SMT system with that of an online learning system, showing that online learning is able to work in real time whereas the time cost of batch retraining soon becomes infeasible. Empirical results also showed that the performance of online learning is comparable to that of batch learning. Moreover, the proposed techniques were able to learn from previously estimated models or from scratch. We also propose two new measures to predict the effectiveness of online learning in SMT tasks. The translation system with online learning capabilities presented here is implemented in the open-source Thot toolkit for SMT. | Online Learning for Statistical Machine Translation |
d16215847 | While there has been much work on computational models to predict readability based on the lexical, syntactic and discourse properties of a text, there are also interesting open questions about how computer generated text should be evaluated with target populations. In this paper, we compare two offline methods for evaluating sentence quality, magnitude estimation of acceptability judgements and sentence recall. These methods differ in the extent to which they can differentiate between surface level fluency and deeper comprehension issues. We find, most importantly, that the two correlate. Magnitude estimation can be run on the web without supervision, and the results can be analysed automatically. The sentence recall methodology is more resource intensive, but allows us to tease apart the fluency and comprehension issues that arise. | Offline Sentence Processing Measures for testing Readability with Users |
d16243263 | This paper describes the IIRG system entered in SemEval-2013, the 7th International Workshop on Semantic Evaluation. We participated in Task 5, Evaluating Phrasal Semantics. We adopted a token-based approach to solve this task using 1) Naïve Bayes methods and 2) Word Overlap methods, both of which rely on the extraction of syntactic features. We found that the word overlap method significantly outperforms the Naïve Bayes methods, achieving our highest overall score with an accuracy of approximately 78%. | IIRG: A Naïve Approach to Evaluating Phrasal Semantics |
d1909021 | We present a first approach to the application of a data mining technique, Multiple Sequence Alignment, to the systematization of a polemic aspect of discourse, namely the expression of contrast, concession, counterargument and semantically similar discursive relations. The representation of the phenomena under study is carried out with very simple techniques, mostly pattern matching, but the results allow us to draw insightful conclusions about the organization of this aspect of discourse: equivalence classes of discourse markers are established, and systematic patterns are discovered, which will be applied to enhance a discourse parser. | Multiple Sequence Alignment for characterizing the lineal structure of revision |
d15583749 | Word order transfer is a compulsory stage and has a great effect on the translation result of a transfer-based machine translation system. To solve this problem, we can use fixed rules (rule-based) or stochastic methods (corpus-based) that extract word order transfer rules between two languages. However, each approach has its own advantages and disadvantages. In this paper, we present a hybrid approach based on fixed rules and the Transformation-Based Learning (TBL) method. Our purpose is to transfer English word order into Vietnamese word order automatically. The learning process is trained on an annotated bilingual corpus (named EVC: English-Vietnamese Corpus) that has been automatically word-aligned, phrase-aligned and POS-tagged. The transfer result is used by the transfer module in the English-Vietnamese transfer-based machine translation system. | A Hybrid Approach to Word Order Transfer in the English-to-Vietnamese Machine Translation |
d218515665 | Understanding the discourse structure of news articles is vital to effectively contextualizing the occurrence of a news event. To enable computational modeling of news structures, we apply an existing theory of functional discourse structure for news articles that revolves around the main event, and create a human-annotated corpus of 802 documents spanning four domains and three media sources. Next, we propose several document-level neural-network models to automatically construct news content structures. Finally, we demonstrate that incorporating system-predicted news structures yields new state-of-the-art performance for event coreference resolution. The news documents we annotated are openly available and the annotations are publicly released for future research. | Discourse as a Function of Event: Profiling Discourse Structure in News Articles around the Main Event |
d18610564 | The INTERA and ECHO projects were partly intended to create a critical mass of open and linked metadata descriptions of language resources, helping researchers to understand the benefits of an increased visibility of language resources in the Internet and motivating them to participate. The work was based on the new IMDI version 3.0.3 which is a result of experiences with the earlier versions and new requirements coming from the involved partners. While in INTERA major data centers in Europe are participating, the ECHO project focuses on resources that can be seen as part of cultural heritage. Currently, 27 institutions and projects are active with the goal of having a large browsable and searchable domain by the summer of 2004. Experience shows that the creation of high quality metadata is not trivial and asks for a considerable amount of effort and skills, since manual work alone is too time consuming. | A Large Metadata Domain of Language Resources |
d6010928 | We present DSim, a new sentence-aligned Danish monolingual parallel corpus extracted from 3701 pairs of news telegrams and corresponding professionally simplified short news articles. The corpus is intended for building automatic text simplification for adult readers. We compare DSim to different examples of monolingual parallel corpora, and we argue that this corpus is a promising basis for future development of automatic data-driven text simplification systems in Danish. The corpus contains both the collection of paired articles and a sentence-aligned bitext, and we show that sentence alignment using simple tf*idf weighted cosine similarity scoring is in line with the state of the art when evaluated against a hand-aligned sample. The alignment results are compared to the state of the art for English sentence alignment. We finally compare the source and simplified sides of the corpus in terms of lexical and syntactic characteristics and readability, and find that the one-to-many sentence-aligned corpus is representative of the sentence simplifications observed in the unaligned collection of article pairs. | DSim, a Danish Parallel Corpus for Text Simplification |