d2747931
We present minimum Bayes-risk system combination, a method that integrates consensus decoding and system combination into a unified multi-system minimum Bayes-risk (MBR) technique. Unlike other MBR methods that re-rank translations of a single SMT system, MBR system combination uses the MBR decision rule and a linear combination of the component systems' probability distributions to search for the minimum-risk translation among all the finite-length strings over the output vocabulary. We introduce expected BLEU, an approximation to the BLEU score that allows us to apply MBR efficiently in these conditions. MBR system combination is a general method that is independent of specific SMT models, enabling us to combine systems with heterogeneous structure. Experiments show that our approach brings significant improvements over single-system-based MBR decoding and achieves results comparable to different state-of-the-art system combination methods.
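The MBR decision rule can be sketched over a fixed candidate list, with a smoothed sentence-level BLEU standing in for the paper's expected-BLEU approximation and `probs` standing in for the combined multi-system distribution (a toy sketch; the paper searches over all output strings, not an n-best list):

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hyp, ref, max_n=2):
    # add-one smoothed sentence-level BLEU, used here as the gain function
    if not hyp or not ref:
        return 0.0
    log_p = 0.0
    for n in range(1, max_n + 1):
        h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
        matches = sum(min(c, r[g]) for g, c in h.items())
        log_p += math.log((matches + 1) / (sum(h.values()) + 1))
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # brevity penalty
    return bp * math.exp(log_p / max_n)

def mbr_decode(candidates, probs):
    # minimum risk = maximum expected gain under the (combined) distribution
    def expected_gain(h):
        return sum(p * sentence_bleu(h, r) for r, p in zip(candidates, probs))
    return max(candidates, key=expected_gain)
```

Here the consensus hypothesis wins because it shares n-grams with other probable candidates, even when no single candidate dominates the distribution.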
Minimum Bayes-risk System Combination
d532096
In this paper, we propose a new Bayesian inference method to train statistical machine translation systems using only non-parallel corpora. Following a probabilistic decipherment approach, we first introduce a new framework for decipherment training that is flexible enough to incorporate any number and type of features (besides simple bag-of-words) as side information for estimating translation models. To perform fast, efficient Bayesian inference in this framework, we then derive a hash sampling strategy inspired by the work of Ahmed et al. (2012). The new translation hash sampler enables us to scale elegantly, for the first time, to complex models and large vocabulary and corpus sizes. We show empirical results on the OPUS data: our method yields the best BLEU scores compared to existing approaches, while achieving significant computational speedups (several orders of magnitude faster). We also report, for the first time, BLEU score results for a large-scale MT task using only non-parallel data (the EMEA corpus).
Scalable Decipherment for Machine Translation via Hash Sampling
d5177306
The paper studies the diversity of ways to express entity aspects in users' reviews. Besides explicit aspect terms, it is possible to distinguish implicit aspect terms and sentiment facts. These subtypes of aspect terms were annotated during the SentiRuEval evaluation of Russian sentiment analysis systems organized in 2014-2015. The created annotation makes it possible to analyze the contribution of non-explicit aspects to the overall sentiment of a review, their main patterns, and possible uses.
Types of Aspect Terms in Aspect-Oriented Sentiment Labeling
d16417902
In this paper, we show that the lexical function model for composition of distributional semantic vectors can be improved by adopting a more advanced regression technique. We use the pathwise coordinate-descent optimized elastic-net regression method to estimate the composition parameters, and compare the resulting model with several recent alternative approaches in the task of composing simple intransitive sentences, adjective-noun phrases and determiner phrases. Experimental results demonstrate that the lexical function model estimated by elastic-net regression achieves better performance, and it provides good qualitative interpretability through sparsity constraints on model parameters.
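The estimation step can be illustrated with a minimal single-penalty coordinate-descent elastic net (the paper uses pathwise optimization over a full regularization path, glmnet-style; this self-contained sketch with hypothetical tiny data solves one `(lam, alpha)` setting, where rows of `X` would be argument vectors and `y` one component of the observed phrase vector):

```python
def soft_threshold(rho, lam):
    # soft-thresholding operator handling the L1 part of the penalty
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def elastic_net(X, y, lam=0.1, alpha=0.5, iters=100):
    # minimize (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)
    n, d = len(X), len(X[0])
    b = [0.0] * d
    for _ in range(iters):
        for j in range(d):
            # correlation of feature j with the partial residual (excluding j)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k] for k in range(d) if k != j))
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam * alpha) / (z + lam * (1 - alpha))
    return b
```

With `lam = 0` this reduces to ordinary least squares; increasing `lam * alpha` drives coefficients exactly to zero, which is the source of the interpretable sparsity the abstract mentions.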
Improving the Lexical Function Composition Model with Pathwise Optimized Elastic-Net Regression
d1494032
d27865020
DiMLex is a lexicon of German connectives that can be used for various language understanding purposes. We enhanced the coverage to 275 connectives, which we regard as covering all known German discourse connectives in current use. In this paper, we consider the task of adding the semantic relations that can be expressed by each connective. After discussing different approaches to retrieving semantic information, we settle on annotating each connective with senses from the new PDTB 3.0 sense hierarchy. We describe our new implementation in the extended DiMLex, which will be available for research purposes.
Adding Semantic Relations to a Large-Coverage Connective Lexicon of German
d35427434
A common reason for errors in coreference resolution is the lack of semantic information to help determine the compatibility between mentions referring to the same entity. Distributed representations, which have been shown successful in encoding relatedness between words, could potentially be a good source of such knowledge. Moreover, being obtained in an unsupervised manner, they could help address data sparsity issues in labeled training data at a small cost. In this work we investigate whether and to what extent features derived from word embeddings can be successfully used for supervised coreference resolution. We experiment with several word embedding models and several different types of embedding-based features, including embedding-cluster and cosine-similarity-based features. Our evaluations show improvements in the performance of a state-of-the-art supervised coreference system.
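A minimal sketch of such embedding-derived mention-pair features (the exact feature set and vectors here are illustrative, not the paper's):

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def mention_pair_features(head1_vec, head2_vec):
    # cosine similarity of the two mention head-word vectors plus their
    # element-wise product, to be appended to hand-crafted features
    return [cosine(head1_vec, head2_vec)] + [a * b for a, b in zip(head1_vec, head2_vec)]
```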
Word Embeddings as Features for Supervised Coreference Resolution
d15677185
We present results from an eye-tracking study of automatic text summarization. Automatic text summarization is a growing field due to the modern world's Internet-based society, but automatically creating perfect summaries is challenging. One problem is that extraction-based summaries often contain cohesion errors. Using an eye-tracking camera, we studied the nature of four different types of cohesion errors occurring in extraction-based summaries. A total of 23 participants read and rated four different texts and marked the most difficult areas of each text. Statistical analysis of the data revealed that absent cohesion or context and broken anaphoric reference (pronouns) caused some disturbance in reading, but that the impact is restricted to the effort of reading rather than the comprehension of the text. However, erroneous anaphoric references (pronouns) were not always detected by the participants, which poses a problem for automatic text summarizers. The study also revealed other potential disturbing factors.
The Impact of Cohesion Errors in Extraction Based Summaries
d226541
In this paper we explore the power of surface text patterns for open-domain question answering systems. In order to obtain an optimal set of patterns, we have developed a method for learning such patterns automatically. A tagged corpus is built from the Internet in a bootstrapping process by providing a few hand-crafted examples of each question type to Altavista. Patterns are then automatically extracted from the returned documents and standardized. We calculate the precision of each pattern, and the average precision for each question type. These patterns are then applied to find answers to new questions. Using the TREC-10 question set, we report results for two cases: answers determined from the TREC-10 corpus and from the web.
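The per-pattern precision computation can be sketched as below, assuming a simple `<NAME>`/`<ANSWER>` slot notation for surface patterns (the slot syntax and the examples are illustrative, not the paper's exact format):

```python
import re

def compile_pattern(pattern, name):
    # "<NAME> was born in <ANSWER>." -> regex with a capture group for the answer
    parts = [re.escape(p) for p in pattern.split("<ANSWER>")]
    regex = r"(\w[\w\s]*?)".join(parts)
    return re.compile(regex.replace(re.escape("<NAME>"), re.escape(name)))

def pattern_precision(pattern, examples):
    # examples: (question term, gold answer, retrieved sentence) triples;
    # precision = correct extractions / sentences where the pattern fired
    matched = correct = 0
    for name, gold, sentence in examples:
        m = compile_pattern(pattern, name).search(sentence)
        if m:
            matched += 1
            correct += m.group(1).strip() == gold
    return correct / matched if matched else 0.0
```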
Learning Surface Text Patterns for a Question Answering System
d52099507
Paraphrases, rewordings of the same semantic meaning, are useful for improving generalization and translation. However, prior work only explores paraphrases at the word or phrase level, not at the sentence or corpus level. In contrast, we use different translations of the whole training data that are consistent in structure as paraphrases at the corpus level. We train on parallel paraphrases in multiple languages from various sources: we treat paraphrases as foreign languages, tag source sentences with paraphrase labels, and train in the style of multilingual Neural Machine Translation (NMT). Our multi-paraphrase NMT that trains only on two languages outperforms the multilingual baselines. Adding paraphrases improves rare-word translation and increases entropy and diversity in lexical choice. Adding source paraphrases boosts performance more than adding target ones, and combining both lifts performance further; combining paraphrases with multilingual data helps but has mixed performance. We achieve a BLEU score of 57.2 for French-to-English translation using 24 corpus-level paraphrases of the Bible, which outperforms the multilingual baselines and is +34.7 above the single-source single-target NMT baseline.
Paraphrases as Foreign Languages in Multilingual Neural Machine Translation
d26481271
This paper describes the University of Sheffield's submission to the WMT17 Multimodal Machine Translation shared task. We participated in Task 1 to develop an MT system to translate an image description from English to German and French, given its corresponding image. Our proposed systems are based on the state-of-the-art Neural Machine Translation approach. We investigate the effect of replacing the commonly-used image embeddings with an estimated posterior probability prediction for 1,000 object categories in the images.
Shared Task Papers
d67788297
In this paper we present our model on the task of emotion detection in textual conversations in SemEval-2019. Our model extends the Recurrent Convolutional Neural Network (RCNN) by using external fine-tuned word representations and DeepMoji sentence representations. We also explored several other competitive pre-trained word and sentence representations including ELMo, BERT and InferSent but found inferior performance. In addition, we conducted extensive sensitivity analysis, which empirically shows that our model is relatively robust to hyper-parameters. Our model requires no handcrafted features or emotion lexicons but achieved good performance with a micro-F1 score of 0.7463.
ntuer at SemEval-2019 Task 3: Emotion Classification with Word and Sentence Representations in RCNN
d40146954
In this participation in the WMT 2017 metrics shared task we implement a fuzzy match score for n-gram precisions in the BLEU metric. To do this we learn n-gram embeddings; we describe two ways of extending the WORD2VEC approach to do so. Evaluation results show that the introduced score beats the original BLEU metric at the system and segment level.
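A minimal sketch of the idea for unigrams: the hard 0/1 n-gram match in BLEU becomes a soft score via embedding cosine similarity (toy 2-d embeddings; the paper learns real n-gram embeddings and applies the fuzzy score at all n-gram orders):

```python
import math

def cos(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def fuzzy_precision(hyp, ref, emb):
    # each hypothesis token is credited with its best cosine match in the
    # reference, instead of the exact-match 0/1 credit of plain BLEU
    if not hyp:
        return 0.0
    return sum(max(cos(emb[h], emb[r]) for r in ref) for h in hyp) / len(hyp)
```

A near-synonym thus earns partial credit where exact n-gram matching would score zero.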
Shared Task Papers
d236460139
Generating image captions with user intention is an emerging need. The recently published Localized Narratives dataset takes mouse traces as an additional input to the image captioning task, which is an intuitive and efficient way for a user to control what to describe in the image. However, how to effectively employ traces to improve generation quality and controllability is still under exploration. This paper aims to solve this problem by proposing a novel model called LoopCAG, which connects Contrastive constraints and Attention Guidance in a Loop manner, applying explicit spatial and temporal constraints to the generation process. Precisely, each generated sentence is temporally aligned to the corresponding trace sequence through a contrastive learning strategy. Besides, each generated text token is supervised to attend to the correct visual objects under heuristic spatial attention guidance. Comprehensive experimental results demonstrate that our LoopCAG model learns better correspondence among the three modalities (vision, language, and traces) and achieves SOTA performance on the trace-controlled image captioning task. Moreover, the controllability and explainability of LoopCAG are validated by analyzing spatial and temporal sensitivity during the generation process.
Control Image Captioning Spatially and Temporally
d251980399
Users and Translators Track Nagoya
d236486192
We introduce a method for the classification of texts into fine-grained categories of sociopolitical events. This particular method is responsive to all three Subtasks of Task 2, Fine-Grained Classification of Socio-Political Events, introduced at the CASE workshop of ACL-IJCNLP 2021. We frame Task 2 as textual entailment: given an input text and a candidate event class ("query"), the model predicts whether the text describes an event of the given type. The model is able to correctly classify in-sample event types with an average F1-score of 0.74 but struggles with some out-of-sample event types. Despite this, the model shows promise for the zero-shot identification of certain sociopolitical events by achieving an F1-score of 0.52 on one wholly out-of-sample event class.
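The entailment framing can be sketched as follows; `entail_score` would be the fine-tuned transformer NLI model in the paper, replaced here by a toy lexical-overlap stand-in (and a hypothetical hypothesis template) purely for illustration:

```python
def zero_shot_classify(text, classes, entail_score):
    # frame classification as textual entailment: one hypothesis per class,
    # pick the class whose hypothesis is most strongly entailed by the text
    hypotheses = {c: "this text describes a {}".format(c) for c in classes}
    scores = {c: entail_score(text, h) for c, h in hypotheses.items()}
    return max(scores, key=scores.get)

def overlap_score(premise, hypothesis):
    # toy stand-in for a transformer NLI entailment probability
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(p & h) / len(h)
```

Because classes enter only through the hypothesis text, unseen (zero-shot) event types need no retraining, only a new hypothesis string.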
CASE 2021 Task 2: Zero-Shot Classification of Fine-Grained Sociopolitical Events with Transformer Models
d9124334
We describe a new corpus of over 180,000 hand-annotated dialog act tags and accompanying adjacency pair annotations for roughly 72 hours of speech from 75 naturally-occurring meetings. We provide a brief summary of the annotation system and labeling procedure, inter-annotator reliability statistics, overall distributional statistics, a description of auxiliary files distributed with the corpus, and information on how to obtain the data.
The ICSI Meeting Recorder Dialog Act (MRDA) Corpus
d1209346
This paper presents experiments with the evaluation of automatically produced summaries of literary short stories. The summaries are tailored to a particular purpose: helping a reader decide whether she wants to read the story. The evaluation procedure includes extrinsic and intrinsic measures, as well as subjective and factual judgments about the summaries pronounced by human subjects. The experiments confirm the experience of summarizing more conventional genres: sentence overlap between human- and machine-made summaries is not a complete picture of the quality of a summary. In fact, in our case, sentence overlap does not correlate well with human judgment. We explain the evaluation procedures and discuss several challenges of evaluating summaries of works of fiction.
Challenges in Evaluating Summaries of Short Stories
d29063930
The text mining community has demonstrated significant impact of text mining in drug discovery and basic research. So far, there have been many point solutions that solve a particular problem or augment a valuable database. The next step is integrating text mining into the entire literature-analytics solution and delivering it effectively and comprehensively to support research in academia, biotechs, and pharmaceutical companies.
Text Mining -Next Steps for Drug Discovery
d27855180
Terms are pervasive in specific domains, and term translation plays a critical role in domain-specific machine translation (MT) tasks. However, translating terms correctly is challenging because of the huge number of pre-existing terms and the endless stream of new ones. To achieve better term translation quality, it is necessary to inject external term knowledge into the underlying MT system. Fortunately, there is plenty of term translation knowledge in parenthetical sentences on the Internet. In this paper, we propose a simple, straightforward and effective framework to improve term translation by learning from parenthetical sentences. This framework includes: (1) a focused web crawler; (2) a parenthetical sentence filter, acquiring parenthetical sentences that include bilingual term pairs; (3) a term translation knowledge extractor, extracting bilingual term translation candidates; and (4) a probability learner, generating the term translation table for MT decoders. Extensive experiments demonstrate that our proposed framework significantly improves the translation quality of both terms and sentences.
Learning from Parenthetical Sentences for Term Translation in Machine Translation
d6777537
Unsupervised approaches to multi-document summarization consist of two steps: finding a content model of the documents to be summarized, and then generating a summary that best represents the most salient information in the documents. In this paper, we present a sentence selection objective for extractive summarization in which sentences are penalized for containing content that is specific to the documents they were extracted from. We modify an existing system, HIERSUM (Haghighi & Vanderwende, 2009), to use our objective, which significantly outperforms the original HIERSUM in pairwise user evaluation. Additionally, our ROUGE scores advance the current state of the art for both supervised and unsupervised systems with statistical significance.
Extractive Multi-Document Summaries Should Explicitly Not Contain Document-Specific Content
d18175352
In this paper we present a simple-to-use, web-based error analysis tool to help computational linguists, researchers building language applications, and non-technical personnel managing the development of language tools to analyze the predictions made by their machine learning models. The only expectation is that users of the tool convert their data into an intuitive XML format. Once the XML is ready, several error analysis functionalities that promote principled feature engineering are a click away.
An Error Analysis Tool for Natural Language Processing and Applied Machine Learning
d241583555
Language models used in speech recognition are often either evaluated intrinsically using perplexity on test data, or extrinsically with an automatic speech recognition (ASR) system. The former evaluation does not always correlate well with ASR performance, while the latter can be specific to particular ASR systems. Recent work proposed to evaluate language models by using them to classify ground-truth sentences among alternative phonetically similar sentences generated by a finite-state transducer. Underlying such an evaluation is the assumption that the generated sentences are linguistically incorrect. In this paper, we first put this assumption into question, and observe that alternatively generated sentences can often be linguistically correct when they differ from the ground truth by only one edit. Second, we show that by using multilingual BERT, we can achieve better performance than previous work on two code-switching data sets. Our implementation is publicly available on GitHub.
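The classification-based evaluation can be sketched with a stand-in language model: score each phonetically similar candidate and check whether the ground truth (by convention the first candidate here) scores highest. The add-one unigram LM below is only a placeholder for a real LM such as multilingual BERT:

```python
import math
from collections import Counter

def unigram_logprob(corpus_tokens):
    counts = Counter(corpus_tokens)
    total, vocab = sum(counts.values()), len(counts)
    # add-one smoothed unigram log-probability (toy stand-in for a real LM)
    return lambda w: math.log((counts[w] + 1) / (total + vocab))

def pick_best_candidate(candidates, logprob):
    # the LM is credited when the true transcript outscores its
    # phonetically similar alternatives
    scores = [sum(logprob(w) for w in c.split()) for c in candidates]
    return scores.index(max(scores))
```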
Intrinsic evaluation of language models for code-switching
d11050250
Over the last few years there has been substantial research on text summarization, but comparatively little research has been carried out on adaptable components that allow rapid development and evaluation of summarization solutions. This paper presents a set of adaptable summarization components together with well-established evaluation tools, all within the GATE paradigm. The toolkit includes resources for the computation of summarization features, which are combined in order to provide functionalities for single-document, multi-document, query-based, and multi/cross-lingual summarization. The summarization tools have been successfully used in a number of applications, including a fully-fledged information access system.
SUMMA: A Robust and Adaptable Summarization Tool
d15547235
In this paper we present our work on the MTPIL-2012 dependency parsing task for Hindi using MaltParser. We experimented with MaltParser by selecting different parsing algorithms and different feature selections. Finally, we achieved an unlabeled attachment score (UAS) of 91.80%, a labeled attachment score (LAS) of 86.51%, and a labeled accuracy (LA) of 88.47%.
ISI-Kolkata at MTPIL-2012
d18701203
This paper reports psycholinguistic research on human intuition in sense classification. The goal of this research is to find the computational model that best fits our experiments on human intuition. To this end, we compare three different computational models: the Boolean model, the probabilistic model, and the probabilistic inference model. We first measured the values of each model on the semantically annotated Sejong corpus. The experimental results were then compared with the values from the initial measurements. Kappa statistics support that the agreement in this experiment is homogeneously coincidental. The Pearson correlation coefficient test shows that the Boolean model is strongly correlated with human intuition.
Word Sense Disambiguation and Human Intuition for Semantic Classification on Homonyms
d8338282
We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence-to-Sequence (Seq2Seq) generation models. AliMe Chat uses an attentive Seq2Seq-based rerank model to optimize the joint results. Extensive experiments show our engine outperforms both IR and generation-based models. We have launched AliMe Chat in a real-world industrial application and observe better results than those of another public chatbot.
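The hybrid control flow can be sketched as below; `retrieve`, `rerank_score`, and `generate` are placeholders for the IR engine, the attentive Seq2Seq rerank model, and the generation model, and the confidence threshold is an assumed parameter:

```python
def answer(query, retrieve, rerank_score, generate, threshold=0.5):
    # AliMe-style hybrid: rerank IR candidates with a Seq2Seq-based score;
    # fall back to the generation model when no candidate is confident enough
    candidates = retrieve(query)
    if candidates:
        best = max(candidates, key=lambda c: rerank_score(query, c))
        if rerank_score(query, best) >= threshold:
            return best
    return generate(query)
```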
AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine
d16724531
Automatic Speech Recognition (ASR) has received a greater level of acceptance as it enables speech recognition through the human-machine interface. This paper focuses on developing a syllable-based speech recognition system for the Malayalam language. The proposed system consists of three phases: preprocessing, segmentation and classification. Preprocessing is performed for noise reduction, DC component removal, pre-emphasis and framing. The segmentation process, implemented using a Syllable Segmentation Algorithm, segments the word utterances into syllables, which are in turn fed into the system for feature extraction. In the feature extraction step, we propose a novel approach by adding energy and zero-crossing features alongside the MFCC features. Classification is done using an Artificial Neural Network and is also compared with an HMM classifier. Experiments were carried out with real-time utterances of 100 words, and obtained 96.4% accuracy with the ANN, which outperformed the HMM.
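The two added per-frame features can be sketched as follows (a minimal sketch; framing, MFCC extraction, and the classifiers are out of scope here):

```python
def short_time_energy(frame):
    # mean squared amplitude of one analysis frame
    return sum(s * s for s in frame) / len(frame)

def zero_crossing_rate(frame):
    # fraction of adjacent sample pairs whose signs differ
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)
```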
Isolated Word Recognition System for Malayalam using Machine Learning
d2209224
Book Review
d259833782
Does the following hypothesis ENTAIL or NOT ENTAIL the premise?
Can In-context Learners Learn a Reasoning Concept from Demonstrations?
d13901352
In this paper we report our experiments in creating a parallel corpus using German/Simple German documents from the web. We require parallel data to build a statistical machine translation (SMT) system that translates from German into Simple German. Parallel data for SMT systems needs to be aligned at the sentence level. We applied an existing monolingual sentence alignment algorithm. We show the limits of the algorithm with respect to the language and domain of our data and suggest ways of circumventing them.
Building a German/Simple German Parallel Corpus for Automatic Text Simplification
d3094494
In a spoken dialog system that can handle natural conversation between a human and a machine, spoken language understanding (SLU) is a crucial component aiming at capturing the key semantic components of utterances. Building a robust SLU system is a challenging task due to variability in the usage of language, need for labeled data, and requirements to expand to new domains (movies, travel, finance, etc.). In this paper, we survey recent research on bootstrapping or improving SLU systems by using information mined or extracted from web search query logs, which include (natural language) queries entered by users as well as the links (web sites) they click on. We focus on learning methods that help unveiling hidden information in search query logs via implicit crowd-sourcing.
Mining Search Query Logs for Spoken Language Understanding
d256461426
Knowledge Graphs (KGs) are structured databases that capture real-world entities and their relationships. The task of entity retrieval from a KG aims at retrieving a ranked list of entities relevant to a given user query. While English-only entity retrieval has attracted considerable attention, user queries, as well as the information contained in the KG, may be represented in multiple, and possibly distinct, languages. Furthermore, KG content may vary between languages due to different information sources and points of view. Recent advances in language representation have enabled natural ways of bridging gaps between languages. In this paper, we therefore propose to utilise language models (LMs) and diverse entity representations to enable truly multilingual entity retrieval. We propose two approaches: (i) an array of monolingual retrievers and (ii) a single multilingual retriever trained using queries and documents in multiple languages. We show that while our approach is on par with the significantly more complex state-of-the-art method for the English task, it can be successfully applied to virtually any language with an LM. Furthermore, it allows languages to benefit from one another, yielding significantly better performance for both low- and high-resource languages. * Research conducted when the author was doing an internship at Bloomberg.
Entity Retrieval from Multilingual Knowledge Graphs
d5668187
We propose a method to construct a phrase class n-gram model for Kana-Kanji Conversion by combining phrase and class methods. We use a word-pronunciation pair as the basic prediction unit of the language model. We compared the conversion accuracy and model size of a phrase class bi-gram model constructed by our method to a tri-gram model. The conversion accuracy was measured by F measure and model size was measured by the vocabulary size and the number of non-zero frequency entries. The F measure of our phrase class bi-gram model was 90.41%, while that of a word-pronunciation pair tri-gram model was 90.21%. In addition, the vocabulary size and the number of non-zero frequency entries in the phrase class bi-gram model were 5,550 and 206,978 respectively, while those of the tri-gram model were 22,801 and 645,996 respectively. Thus our method makes a smaller, more accurate language model.
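The F-measure over conversion output can be sketched as a bag-of-segments overlap between the converter's output and the reference segmentation (a simplification of how such conversion F-measures are typically computed; the segments here are illustrative ASCII stand-ins for word-pronunciation pairs):

```python
from collections import Counter

def conversion_f_measure(hyp_segments, ref_segments):
    # precision/recall over the multiset of output segments vs. the reference
    h, r = Counter(hyp_segments), Counter(ref_segments)
    overlap = sum(min(c, r[k]) for k, c in h.items())
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall) if overlap else 0.0
```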
Statistical Input Method based on a Phrase Class n-gram Model
d253541735
A variety of NLP applications use word2vec skip-gram, GloVe, and fastText word embeddings. These models learn two sets of embedding vectors, but most practitioners use only one of them, or alternately an unweighted sum of both. This is the first study to systematically explore a range of linear combinations between the first and second embedding sets. We evaluate these combinations on a set of six NLP benchmarks including IR, POS-tagging, and sentence similarity. We show that the default embedding combinations are often suboptimal and demonstrate up to 12.5% improvements. Notably, GloVe's default unweighted sum is its least effective combination across tasks. We provide a theoretical basis for weighting one set of embeddings more than the other according to the algorithm and task. We apply our findings to improve accuracy in applications of cross-lingual alignment and navigational knowledge by up to 15.2%.
Northern European Journal of Language Technology
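The linear combination of the two learned embedding sets can be sketched as below; `alpha = 0.5` corresponds (up to scale) to the unweighted sum, and `alpha = 1.0` to using only the first set:

```python
def combine(W1, W2, alpha):
    # convex combination of the two embedding sets, per word and dimension
    return {w: [alpha * a + (1 - alpha) * b for a, b in zip(W1[w], W2[w])]
            for w in W1}
```

Sweeping `alpha` over, say, a grid in [0, 1] and scoring each combined set on a downstream benchmark is the kind of systematic exploration the abstract describes.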
Task-dependent Optimal Weight Combinations for Static Embeddings
d150381007
The medical domain is an integral part of our lives due to health concerns, but the availability of medical information does not guarantee its correct understanding by patients. Several studies have addressed this issue and pointed out real difficulties in the understanding of health contents by patients. We propose to use eye-tracking methods to study this issue. For this, original technical and simplified versions of a de-identified clinical document are exploited. Eye-tracking methods make it possible to follow and record the gaze of participants and to detect reading indicators such as duration of fixations, regressions and saccades. These indicators are correlated with answers to questionnaires submitted to participants after reading. Our results indicate that there is a statistically significant difference in the reading and understanding of original and simplified versions of health documents. These results, in combination with another experiment, allow us to propose a typology of medical words which need to be explained or simplified for non-expert readers.
Study of readability of health documents with eye-tracking approaches
d2049933
This research aims to integrate embodiment with the generative lexicon. By analyzing the metaphorically used human body part terms in the Sinica Corpus, the first balanced modern Chinese corpus, we reveal how these two theories complement each other. Embodiment strengthens the generative lexicon by spelling out the cognitive reasons which underlie the production of meaning, and the generative lexicon, specifically the qualia structure, complements embodiment by accounting for the reason underlying the selection of a particular body part for metaphorization. Discussing how the four body part terms 血 xie "blood", 肉 rou "flesh", 骨 gu "bone", and 脈 mai "meridian" behave metaphorically, this research argues that visibility and the telic role of the qualia structure are the major reasons motivating the choice of a body part to represent a comparatively abstract notion. The finding accounts for what constrains the selection of body parts for metaphorical uses. It also facilitates the prediction of the behavior of the four body part terms in these uses, which can function as a starting point to examine whether the two factors, visibility and telicity, also motivate the metaphorization of the remaining human body parts.
PACLIC 29
When Embodiment Meets Generative Lexicon: The Human Body Part Metaphors in Sinica Corpus
d3066240
We present a very efficient statistical incremental parser for LTAG-spinal, a variant of LTAG. The parser supports the full adjoining operation, dynamic predicate coordination, and non-projective dependencies, in a formalism with provably stronger generative capacity than CFG. Using gold-standard POS tags as input, on Section 23 of the PTB, the parser achieves an f-score of 89.3% for syntactic dependency defined on LTAG derivation trees, which are deeper than the dependencies extracted from the PTB alone with head rules (for example, in Magerman's style).
Incremental LTAG Parsing
d10339151
We present a simple history-based model for sentence generation from LFG f-structures, which improves on the accuracy of previous models by breaking down PCFG independence assumptions so that more f-structure conditioning context is used in the prediction of grammar rule expansions. In addition, we present work on experiments with named entities and other multi-word units, showing a statistically significant improvement of generation accuracy. Tested on section 23 of the Penn Wall Street Journal Treebank, the techniques described in this paper improve BLEU scores from 66.52 to 68.82, and coverage from 98.18% to 99.96%.
Exploiting Multi-Word Units in History-Based Probabilistic Generation
d243864634
Frame-semantic parsers traditionally predict predicates, frames, and semantic roles in a fixed order. This paper explores the 'chicken-or-egg' problem of interdependencies between these components theoretically and practically. We introduce a flexible BERT-based sequence labeling architecture that allows for predicting frames and roles independently from each other or combining them in several ways. Our results show that our setups can approximate the performance of more complex traditional models, while allowing for a clearer view of the interdependencies between the pipeline's components, and of how frame and role prediction models make different use of BERT's layers.
Breeding Fillmore's Chickens and Hatching the Eggs: Recombining Frames and Roles in Frame-Semantic Parsing
d1564433
There have been two opposing views on the so-called head internal relative construction (HIRC) in Korean/Japanese, i.e., a view that analyzes the HIRC categorially as a nominal projection and functionally as an argument (Kuroda 1992, Watanabe 1992, Hoshi 1994, Jhang 1991/1994) vs. a view that analyzes the HIRC categorially as an adjunct clause and functionally as a non-argument (Murasugi 1994). This paper on the one hand points out several phenomena indicating that Murasugi's analysis is more viable, while on the other hand proposing a more complex structure than Murasugi's to account for other facts as well. The no/kes clause in the HIRC will be analyzed as the complement of a null perception verb whose projection constitutes part of an adjunct clause.
On the Structure of the So-called Head Internal Relative Construction
d24018550
d650496
d64631730
Data security and privacy is an issue of growing importance in the healthcare domain. In this paper, we present an auditing system to detect privacy violations for unstructured text documents such as healthcare records. Given a sensitive document, we present an anomaly detection algorithm that can find the top-k suspicious keyword queries that may have accessed the sensitive document. Since unstructured healthcare data, such as medical reports and query logs, are not easily available for public research, in this paper, we show how one can use the publicly available DBLP data to create an equivalent healthcare data and query log, which can then be used for experimental evaluation.
Auditing Keyword Queries Over Text Documents
d227231315
d252624742
While reading financial documents, investors need to know the causes and their effects. This empowers them to make data-driven decisions. Thus, there is a need to develop an automated system for extracting causes and their effects from financial texts using Natural Language Processing. In this paper, we present the approach our team LIPI followed while participating in the FinCausal 2022 shared task. This approach is based on the winning solution of the first edition of FinCausal held in the year 2020.
LIPI at FinCausal 2022: Mining Causes and Effects from Financial Texts
d16883302
We propose a method to improve the accuracy of parsing bilingual texts (bitexts) with the help of statistical machine translation (SMT) systems. Previous bitext parsing methods use human-annotated bilingual treebanks that are hard to obtain. Instead, our approach uses an auto-generated bilingual treebank to produce bilingual constraints. However, because the auto-generated bilingual treebank contains errors, the bilingual constraints are noisy. To overcome this problem, we use large-scale unannotated data to verify the constraints and design a set of effective bilingual features for parsing models based on the verified results. The experimental results show that our new parsers significantly outperform state-of-the-art baselines. Moreover, our approach is still able to provide improvement when we use a larger monolingual treebank that results in a much stronger baseline. Especially notable is that our approach can be used in a purely monolingual setting with the help of SMT.
SMT Helps Bitext Dependency Parsing
d19272313
Approximately 80% to 95% of patients with Amyotrophic Lateral Sclerosis (ALS) eventually develop speech impairments (Beukelman et al., 2011), such as defective articulation, slow laborious speech and hypernasality (Duffy, 2013). The relationship between impaired speech and asymptomatic speech may be seen as a divergence from a baseline. This relationship can be characterized in terms of measurable combinations of phonological characteristics that are indicative of the degree to which the two diverge. We demonstrate that divergence measurements based on phonological characteristics of speech correlate with physiological assessments of ALS. Speech-based assessments offer benefits over commonly-used physiological assessments in that they are inexpensive, non-intrusive, and do not require trained clinical personnel for administering and interpreting the results.
Characterization of Divergence in Impaired Speech of ALS Patients
d219307974
d7695235
We study subjective language in social media and create Twitter-specific lexicons via bootstrapping sentiment-bearing terms from multilingual Twitter streams. Starting with a domain-independent, high-precision sentiment lexicon and a large pool of unlabeled data, we bootstrap Twitter-specific sentiment lexicons, using a small amount of labeled data to guide the process. Our experiments on English, Spanish and Russian show that the resulting lexicons are effective for sentiment classification for many underexplored languages in social media.
Exploring Sentiment in Social Media: Bootstrapping Subjectivity Clues from Multilingual Twitter Streams
d227231824
d8938603
Most computational approaches to metaphor have focused on discerning between metaphorical and literal text. Recent work on computational metaphor identification (CMI) instead seeks to identify overarching conceptual metaphors by mapping selectional preferences between source and target corpora. This paper explores using semantic role labeling (SRL) in CMI. Its goals are two-fold: first, to demonstrate that semantic roles can effectively be used to identify conceptual metaphors, and second, to compare SRL to the current use of typed dependency parsing in CMI. The results show that SRL can be used to identify potential metaphors and that it overcomes some of the limitations of using typed dependencies, but also that SRL introduces its own set of complications. The paper concludes by suggesting future directions, both for evaluating the use of SRL in CMI, and for fostering critical and creative thinking about metaphors.
Comparing Semantic Role Labeling with Typed Dependency Parsing in Computational Metaphor Identification
d45336072
Work on the definition and recognition of named entities is generally carried out in the open domain, through the design of generic categories (person names, locations, etc.) and their application to textual data from the press (spoken as well as written). Moreover, mining call-centre data is strategic for a company like EDF, given the crucial role played by opinion in marketing applications, which requires the definition of domain-specific entities of interest. We compare the two types of entity models, generic and domain-specific, in order to observe their overlap, via the manual annotation of a corpus of call-centre conversations. Our aim is to study the contribution of generic named entity detection to business information extraction in a restricted domain. In this preliminary work, we annotated a sub-corpus extracted from a large corpus of spoken dialogues recorded in an EDF call centre and estimated the respective proportions of generic versus domain-specific named entities; implications for future work on building EDF domain-specific entity models are discussed.
What is the contribution of named entity detection to information extraction in a restricted domain?
d15598604
We present an adaptive translation quality estimation (QE) method to predict the human-targeted translation error rate (HTER) for a document-specific machine translation model. We first introduce features derived internally from the translation decoding process as well as externally from source sentence analysis. We show the effectiveness of such features in both classification and regression of MT quality. By dynamically training the QE model for the document-specific MT model, we are able to achieve consistency and prediction quality across multiple documents, demonstrated by the higher correlation coefficient and F-scores in finding Good sentences. Additionally, the proposed method is applied to an IBM English-to-Japanese MT post-editing field study, and we observe strong correlation with human preference, with a 10% increase in human translators' productivity.
Adaptive HTER Estimation for Document-Specific MT Post-Editing
d253628232
Due to the surge in global demand for English as a second language (ESL), the development of automated methods for grading speaking proficiency has gained considerable attention. This paper presents a computerized regime for grading the spontaneous spoken language of ESL learners. Based on a speech corpus of ESL learners recently collected in Taiwan, we first extract multi-view features (e.g., pronunciation, fluency, and prosody features) from either automatic speech recognition (ASR) transcriptions or audio signals. These extracted features are, in turn, fed into a tree-based classifier to produce a new set of indicative features as the input of the automated assessment system, viz. the grader. Finally, we use different machine learning models to predict ESL learners' respective speaking proficiency and map the result into the corresponding CEFR level. The experimental results and analysis conducted on the speech corpus of ESL learners in Taiwan show that our approach holds great potential for use in automated speaking assessment, while offering more reliable predictions than human experts.
d15879777
We describe a purely confidence-based geographic term disambiguation system that crucially relies on the notion of "positive" and "negative" context and methods for combining confidence-based disambiguation with measures of relevance to a user's query.
A confidence-based framework for disambiguating geographic terms
d53101886
Word embeddings are powerful tools that facilitate better analysis of natural language. However, their quality highly depends on the resource used for training. There are various approaches relying on n-gram corpora, such as the Google n-gram corpus. However, n-gram corpora only offer a small window into the full text: 5 words for the Google corpus at best. This gives way to the concern whether the extracted word semantics are of high quality. In this paper, we address this concern with two contributions. First, we provide a resource containing 120 word-embedding models, one of the largest collections of embedding models. Furthermore, the resource contains the n-grammed versions of all used corpora, as well as our scripts used for corpus generation, model generation and evaluation. Second, we define a set of meaningful experiments that allow us to evaluate the aforementioned quality differences. We conduct these experiments using our resource to show its usage and significance. The evaluation results confirm that one generally can expect high quality for n-grams with n ≥ 3.
Resources to Examine the Quality of Word Embedding Models Trained on n-Gram Data
d235097598
In this paper we describe our contribution to the CMCL 2021 Shared Task, which consists in predicting 5 different eye tracking variables from English tokenized text. Our approach is based on a neural network that combines both raw textual features we extracted from the text and parser-based features that include linguistic predictions (e.g. part of speech) and complexity metrics (e.g., entropy of parsing). We found that both the features we considered as well as the architecture of the neural model that combined these features played a role in the overall performance. Our system achieved relatively high accuracy on the test data of the challenge and was ranked 2nd out of 13 competing teams and a total of 30 submissions.
TALEP at CMCL 2021 Shared Task: Non Linear Combination of Low and High-Level Features for Predicting Eye-Tracking Data
d16854945
E-infrastructure projects such as CLARIN do not only make research data available to the scientific community, but also deliver a growing number of web services. While the standard method of deploying web services on dedicated (virtual) servers may suffice in many circumstances, CLARIN centres are also faced with a growing number of services that are not frequently used and for which significant compute power needs to be reserved. This paper describes an alternative approach to service deployment capable of delivering on-demand services in a workflow using cloud infrastructure capabilities. Services are stored as disk images and deployed in a workflow scenario only when needed, thus helping to reduce the overall service footprint.
Dynamic web service deployment in a cloud environment
d6455706
We describe a probabilistic approach that combines information obtained from a lexicon with information obtained from a Naïve Bayes (NB) classifier for multi-way sentiment analysis. Our approach also employs grammatical structures to perform adjustments for negations, modifiers and sentence connectives. The performance of this method is compared with that of an NB classifier with feature selection, and MCST, a state-of-the-art system. The results of our evaluation show that the performance of our hybrid approach is at least as good as that of these systems. We also examine the influence of three factors on performance: (1) sentiment-ambiguous sentences, (2) probability of the most probable star rating, and (3) coverage of the lexicon and the NB classifier. Our results indicate that the consideration of these factors supports the identification of regions of improved reliability for sentiment analysis.
Experimental Evaluation of a Lexicon- and Corpus-based Ensemble for Multi-way Sentiment Analysis
d5272196
A high-level language for the description of inflectional morphology is presented, in which the organization of word formation rules into an inheritance hierarchy of paradigms allows for a natural encoding of the kinds of rules typically presented in grammar books. We show how this language, composed of orthographic rules, word formation rules, and paradigm inheritance, can be compiled into a run-time data structure for efficient morphological analysis and generation with a dynamic secondary storage lexicon. Actes de COLING-92, Nantes, 23-28 août 1992.
A High-level Morphological Description Language Exploiting Inflectional Paradigms
d10005952
MultiUN is a multilingual parallel corpus extracted from the official documents of the United Nations. It is available in the six official languages of the UN and a small portion of it is also available in German. This paper presents a major update on the first public version of the corpus released in 2010. This version 2 consists of over 513,091 documents, including around 9% of new documents retrieved from the United Nations official document system. Compared to the first release, we applied several modifications to the corpus preparation method. In this paper, we describe the methods we used for processing the UN documents and aligning the sentences. The most significant improvement compared to the previous release is the newly added multilingual sentence alignment information. The alignment information is encoded together with the text in XML instead of additional files. Our representation of the sentence alignment allows quick construction of aligned texts parallel in an arbitrary number of languages, which is essential for building machine translation systems.
MultiUN v2: UN Documents with Multilingual Alignments
d18208555
The dramatic improvements shown by statistical machine translation systems in recent years clearly demonstrate the benefits of having large quantities of manually translated parallel text for system training and development. And while many competing evaluation metrics exist to evaluate MT technology, most of those methods also crucially rely on the existence of one or more high quality human translations to benchmark system performance. Given the importance of human translations in this framework, understanding the particular challenges of human translation-for-MT is key, as is comprehending the relative strengths and weaknesses of human versus machine translators in the context of an MT evaluation. Vanni (2000) argued that the metric used for evaluation of competence in human language learners may be applicable to MT evaluation; we apply similar thinking to improve the prediction of MT performance, which is currently unreliable. In the current paper we explore an alternate model based upon a set of genre-defining features that prove to be consistently challenging for both humans and MT systems.
Identifying Common Challenges for Human and Machine Translation: A Case Study from the GALE Program
d250390994
When listening comprehension is tested as a free-text production task, a challenge for scoring the answers is the resulting wide range of spelling variants. When judging whether a variant is acceptable or not, human raters perform a complex holistic decision. In this paper, we present a corpus study in which we analyze human acceptability decisions in a high stakes test for German. We show that for human experts, spelling variants are harder to score consistently than other answer variants. Furthermore, we examine how the decision can be operationalized using features that could be applied by an automatic scoring system. We show that simple measures like edit distance and phonetic similarity between a given answer and the target answer can model the human acceptability decisions with the same inter-annotator agreement as humans, and discuss implications of the remaining inconsistencies.
'Meet me at the ribary' - Acceptability of spelling variants in free-text answers to listening comprehension prompts
d17125962
We present the results of the experiment of bootstrapping a Treebank for Catalan by using a Dependency Parser trained with Spanish sentences. In order to save time and cost, our approach was to profit from the typological similarities between Catalan and Spanish to create a first Catalan data set quickly by (i) automatically annotating with a delexicalized Spanish parser, (ii) manually correcting the parses, and (iii) using the Catalan corrected sentences to train a Catalan parser. The results showed that the number of parsed sentences required to train a Catalan parser is about 1000, which were achieved in 4 months with 2 annotators.
Boosting the creation of a treebank
d1530263
The paper presents a semi-automatic approach to creating sentiment dictionaries in many languages. We first produced high-level gold-standard sentiment dictionaries for two languages and then translated them automatically into third languages. Those words that can be found in both target language word lists are likely to be useful because their word senses are likely to be similar to that of the two source languages. These dictionaries can be further corrected, extended and improved. In this paper, we present results that verify our triangulation hypothesis, by evaluating triangulated lists and comparing them to non-triangulated machine-translated word lists.
Creating Sentiment Dictionaries via Triangulation
d251403526
Emotion recognition in conversation is important for an empathetic dialogue system to understand the user's emotion and then generate appropriate emotional responses. However, most previous researches focus on modeling conversational contexts primarily based on the textual modality or simply utilizing multimodal information through feature concatenation. In order to exploit multimodal information and contextual information more effectively, we propose a multimodal directed acyclic graph (MMDAG) network by injecting information flows inside modality and across modalities into the DAG architecture. Experiments on IEMOCAP and MELD show that our model outperforms other state-of-the-art models. Comparative studies validate the effectiveness of the proposed modality fusion method.
MMDAG: Multimodal Directed Acyclic Graph Network for Emotion Recognition in Conversation
d17817934
A small fragment of the Systemic Functional Grammar of the PENMAN system is reformulated in the Typed Feature Structure language. Through this reformulation we gain full reversibility for the SFG description and access for unification-based grammar descriptions to the rich semantic levels of description that SFG supports. We illustrate this reformulation with respect to both generation and semantic analysis and set out the future goals for research that this result establishes.
THE NONDIRECTIONAL REPRESENTATION OF SYSTEMIC FUNCTIONAL GRAMMARS AND SEMANTICS AS TYPED FEATURE STRUCTURES
d199553035
The fields of cognitive science and philosophy have proposed many different theories for how humans represent "concepts". Multiple such theories are compatible with state-of-the-art NLP methods, and could in principle be operationalized using neural networks. We focus on two particularly prominent theories, Classical Theory and Prototype Theory, in the context of visually-grounded lexical representations. We compare when and how the behavior of models based on these theories differs in terms of categorization and entailment tasks. Our preliminary results suggest that Classical-based representations perform better for entailment and Prototype-based representations perform better for categorization. We discuss plans for additional experiments needed to confirm these initial observations.
Using Grounded Word Representations to Study Theories of Lexical Concepts
d13515597
Predicting the sense of a discourse relation is particularly challenging when connective markers are missing. To address this challenge, we propose a simple deep neural network approach that replaces manual feature extraction by introducing event vectors as an alternative representation, which can be pre-trained using a very large corpus, without explicit annotation. We model discourse arguments as a combination of word and event vectors. Event information is aggregated with word vectors and a Multi-Layer Neural Network is used to classify discourse senses. This work was submitted as part of the CoNLL 2016 shared task on Discourse Parsing. We obtain competitive results, reaching an accuracy of 38%, 34% and 34% on the development, test and blind test datasets respectively, comparable to the best-performing system at CoNLL 2015.
Adapting Event Embedding for Implicit Discourse Relation Recognition
d15299520
In recent years there has been a growing interest in clarifying the process of Information Extraction (IE) from documents, particularly when coupled with Machine Learning. We believe that a fundamental step forward in clarifying the IE process would be to be able to perform comparative evaluations on the use of different representations. However, this is difficult because most of the time the way information is represented is too tightly coupled with the algorithm at an implementation level, making it impossible to vary representation while keeping the algorithm constant. A further motivation behind our work is to reduce the complexity of designing, developing and testing IE systems. The major contribution of this work is in defining a methodology and providing a software infrastructure for representing language resources independently of the algorithm, mainly for Information Extraction but with application in other fields -we are currently evaluating its use for ontology learning and document classification.
A Methodology and Tool for Representing Language Resources for Information Extraction
d16622297
This article presents a comparative study of a subfield of morphology learning referred to as minimally supervised morphological segmentation. In morphological segmentation, word forms are segmented into morphs, the surface forms of morphemes. In the minimally supervised data-driven learning setting, segmentation models are learned from a small number of manually annotated word forms and a large set of unannotated word forms. In addition to providing a literature survey on published methods, we present an in-depth empirical comparison on three diverse model families, including a detailed error analysis. Based on the literature survey, we conclude that the existing methodology contains substantial work on generative morph lexicon-based approaches and methods based on discriminative boundary detection. As for which approach has been more successful, both the previous work and the empirical evaluation presented here strongly imply that the current state of the art is yielded by the discriminative boundary detection methodology.
A Comparative Study of Minimally Supervised Morphological Segmentation
d248780385
We propose our solution to the multimodal semantic role labeling task from the CONSTRAINT'22 workshop. The task aims at classifying entities in memes into classes such as "hero" and "villain". We use several pre-trained multi-modal models to jointly encode the text and image of the memes, and implement three systems to classify the role of the entities. We propose dynamic sampling strategies to tackle the issue of class imbalance. Finally, we perform qualitative analysis on the representations of the entities.
Fine-tuning and Sampling Strategies for Multimodal Role Labeling of Entities under Class Imbalance
d297068
Effectively exploring and analyzing large text corpora requires visualizations that provide a high level summary. Past work has relied on faceted browsing of document metadata or on natural language processing of document text. In this paper, we present a new web-based tool that integrates topics learned from an unsupervised topic model in a faceted browsing experience. The user can manage topics, filter documents by topic and summarize views with metadata and topic graphs. We report a user study of the usefulness of topics in our tool.
Topic Models and Metadata for Visualizing Text Corpora
d193765776
d218977371
Identifying statements related to suicidal behaviour in psychiatric electronic health records (EHRs) is an important step when modeling that behaviour, and when assessing suicide risk. We apply a deep neural network based classification model with a lightweight context encoder, to classify sentence level suicidal behaviour in EHRs. We show that incorporating information from sentences to left and right of the target sentence significantly improves classification accuracy. Our approach achieved the best performance when classifying suicidal behaviour in Autism Spectrum Disorder patient records. The results could have implications for suicidality research and clinical surveillance.
Using Deep Neural Networks with Intra-and Inter-Sentence Context to Classify Suicidal Behaviour
d5960273
This paper introduces a new task of cross-lingual slot filling which aims to discover attributes for entity queries from cross-lingual comparable corpora and then present answers in a desired language. It is a very challenging task which suffers from both information extraction and machine translation errors. In this paper we analyze the types of errors produced by five different baseline approaches, and present a novel supervised rescoring-based validation approach to incorporate global evidence from very large bilingual comparable corpora. Without using any additional labeled data this new approach obtained 38.5% relative improvement in Precision and 86.7% relative improvement in Recall over several state-of-the-art approaches. The ultimate system outperformed monolingual slot filling pipelines built on much larger monolingual corpora.
Cross-lingual Slot Filling from Comparable Corpora
d5666119
As supervised machine learning methods are increasingly used in language technology, the need for high-quality annotated language data becomes imminent. Active learning (AL) is a means to alleviate the burden of annotation. This paper addresses the problem of knowing when to stop the AL process without having the human annotator make an explicit decision on the matter. We propose and evaluate an intrinsic criterion for committee-based AL of named entity recognizers.
An Intrinsic Stopping Criterion for Committee-Based Active Learning
d186206281
We investigate the relationship between basic principles of human morality and the expression of opinions in user-generated text data. We assume that people's backgrounds, culture, and values are associated with their perceptions and expressions of everyday topics, and that people's language use reflects these perceptions. While personal values and social effects are abstract and complex concepts, they have practical implications and are relevant for a wide range of NLP applications. To extract human values (in this paper, morality) and measure social effects (morality and stance), we empirically evaluate the usage of a morality lexicon that we expanded via a quality-controlled, human-in-the-loop process. As a result, we enhanced the Moral Foundations Dictionary in size (from 324 to 4,636 syntactically disambiguated entries) and scope. We used both lexica for feature-based and deep learning classification (SVM, RF, and LSTM) to test their usefulness for measuring social effects. We find that the enhancement of the original lexicon led to measurable improvements in prediction accuracy for the selected NLP tasks.
Enhancing the Measurement of Social Effects by Capturing Morality
d14123779
Decision rules that explicitly account for non-probabilistic evaluation metrics in machine translation typically require special training, often to estimate parameters in exponential models that govern the search space and the selection of candidate translations. While the traditional Maximum A Posteriori (MAP) decision rule can be optimized as a piecewise linear function in a greedy search of the parameter space, the Minimum Bayes Risk (MBR) decision rule is not well suited to this technique, a condition that makes past results difficult to compare. We present a novel training approach for non-tractable decision rules, allowing us to compare and evaluate these and other decision rules on a large scale translation task, taking advantage of the high dimensional parameter space available to the phrase based Pharaoh decoder. This comparison is timely, and important, as decoders evolve to represent more complex search space decisions and are evaluated against innovative evaluation metrics of translation quality.
Training and Evaluating Error Minimization Rules for Statistical Machine Translation
d235097333
Low-resource polysynthetic languages pose many challenges in NLP tasks, such as morphological analysis and machine translation, due to the limited available resources and tools and to their morphological complexity. This research focuses on morphological segmentation, adapting an unsupervised approach based on Adaptor Grammars to a low-resource setting. Experiments and evaluations on Inuinnaqtun, a member of the Inuit language family spoken in Northern Canada and expected to be extinct in less than two generations, have shown promising results.
Towards a First Automatic Unsupervised Morphological Segmentation for Inuinnaqtun
d237099280
d7380620
We have developed a toolkit in which an annotation tool, a syntactic tree editor, and an extraction rule editor interact dynamically. Its output can be stored in a database for further use. In the field of biomedicine, there is a critical need for automatic text processing. However, current language processing approaches suffer from insufficient basic data incorporating both human domain expertise and domain-specific language processing capabilities. With the annotation tool presented here, a set of "gold standards" can be collected, representing what should be extracted. At the same time, any change in annotation can be viewed on an associated syntactic tree. These facilities provide a clear picture of the relationship between the extraction target and the syntactic tree. Underlying sentences can be analyzed with a parser which can be plugged in, or a set of parsed sentences can be used to generate the tree. Extraction rules written with the integrated editor can be applied at once, and their validity can immediately be verified both on the syntactic tree and on the sentence string by coloring the corresponding segments. Thus our toolkit enables the user to efficiently construct parse-based extraction rules.
PBIE: A DATA PREPARATION TOOLKIT toward DEVELOPING a PARSING-BASED INFORMATION EXTRACTION SYSTEM
d15154323
This paper reports results of the 1992 Evaluation of machine translation (MT) systems in the DARPA MT initiative and results of a Pre-test to the 1993 Evaluation. The DARPA initiative is unique in that the evaluated systems differ radically in languages translated, theoretical approach to system design, and intended end-user application. In the 1992 suite, a Comprehension Test compared the accuracy and interpretability of system and control outputs; a Quality Panel for each language pair judged the fidelity of translations from each source version. The 1993 suite evaluated adequacy and fluency and investigated three scoring methods.
EVALUATION OF MACHINE TRANSLATION
d232021660
With the increasing availability of wordnets for ancient languages, such as Ancient Greek and Latin, gaps remain in the coverage of less studied languages of antiquity. This paper reports on the construction and evaluation of a new wordnet for Coptic, the language of Late Roman, Byzantine and Early Islamic Egypt in the first millennium CE. We present our approach to constructing the wordnet, which uses multilingual Coptic dictionaries and wordnets for five different languages. We further discuss the results of this effort and outline our ongoing and future work.
The Making of Coptic Wordnet
d18615417
Simple text classification algorithms perform remarkably well when used for detecting famous quotes in literary or philosophical text, with f-scores approaching 95%. We compare the task to topic classification, polarity classification and authorship attribution.
Mining wisdom (Workshop on Computational Linguistics for Literature, pages 54-58)
d244081300
We develop a minimally-supervised model for spelling correction and evaluate its performance on three datasets annotated for spelling errors in Russian. The first corpus is a dataset of Russian social media data that was recently used in a shared task on Russian spelling correction. The other two corpora contain texts produced by learners of Russian as a foreign language. Evaluating on three diverse datasets allows for a cross-corpus comparison. We compare the performance of the minimally-supervised model to two baseline models that do not use context for candidate re-ranking, as well as to a character-level statistical machine translation system with context-based re-ranking. We show that the minimally-supervised model outperforms all of the other models. We also present an analysis of the spelling errors and discuss the difficulty of the task compared to the spelling correction problem in English. [1] Context in this work refers to the sentence (or the n-gram window) in which the misspelled word occurs.
Spelling Correction for Russian: A Comparative Study of Datasets and Methods
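As a hedged illustration of the context-based candidate re-ranking the abstract compares against (not the authors' minimally-supervised model), the sketch below generates one-edit candidates and re-ranks them by a toy bigram count with the left-context word; the vocabulary and counts are invented:

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # All strings one edit away: deletes, transpositions, substitutions, inserts.
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    swaps = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    subs = [l + c + r[1:] for l, r in splits if r for c in alphabet]
    inserts = [l + c + r for l, r in splits for c in alphabet]
    return set(deletes + swaps + subs + inserts)

def correct(word, prev_word, vocab, bigrams):
    # Candidate set: the word itself plus in-vocabulary one-edit neighbours.
    candidates = ({word} | edits1(word)) & vocab
    if not candidates:
        return word
    # Context-based re-ranking: prefer the candidate that most often
    # follows the left-context word in the (toy) bigram counts.
    return max(candidates, key=lambda c: bigrams.get((prev_word, c), 0))

vocab = {"red", "car", "cat", "cap"}
bigrams = {("red", "car"): 5, ("red", "cat"): 2}
print(correct("caf", "red", vocab, bigrams))  # → 'car'
```

The contrast the paper draws is exactly between models like `correct` above (context-aware) and baselines that score candidates without the `prev_word` signal.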
d9331790
This paper presents a reordering framework for statistical machine translation (SMT) where source-side reorderings are integrated into SMT decoding, allowing for a highly constrained reordered search graph. The monotone search is extended by means of a set of reordering patterns (linguistically motivated rewrite patterns). Patterns are automatically learnt in training from word-to-word alignments and source-side Part-Of-Speech (POS) tags. Traversing the extended search graph, the decoder evaluates every hypothesis making use of a group of widely used SMT models, helped by an additional N-gram language model of source-side POS tags. Experiments are reported on the Europarl task (Spanish-to-English and English-to-Spanish). Results are presented regarding translation accuracy (using human and automatic evaluations) and computational efficiency, showing significant improvements in translation quality for both translation directions at a very low computational cost.
Integration of POStag-based source reordering into SMT decoding by an extended search graph
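The pattern-learning step the abstract describes (rewrite patterns from word alignments and source POS tags) can be sketched on a toy one-to-one alignment. This is a simplified illustration, not the paper's extraction procedure:

```python
from collections import Counter

def extract_patterns(source_pos, alignment, max_len=3):
    """Collect source-side POS reordering patterns from one aligned
    sentence pair. alignment[i] = target position of source word i
    (assumed 1-to-1 for simplicity of this sketch).
    """
    patterns = Counter()
    n = len(source_pos)
    for i in range(n):
        for j in range(i + 2, min(i + max_len, n) + 1):
            order = alignment[i:j]
            # Record a pattern only when the target order permutes the span.
            if order != sorted(order):
                perm = tuple(sorted(range(j - i), key=lambda k: order[k]))
                patterns[(tuple(source_pos[i:j]), perm)] += 1
    return patterns

# Spanish-like NOUN-ADJ span translated in ADJ-NOUN order:
# "la casa roja" -> "the red house"
pos = ["DET", "NOUN", "ADJ"]
align = [0, 2, 1]
print(extract_patterns(pos, align))
```

Accumulated over a training corpus, counts like `(("NOUN", "ADJ"), (1, 0))` become the rewrite patterns that extend the monotone search graph at decoding time.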
d8318920
This corpus-based study analyzes the meanings of khɨn3 'ascend' and loŋ1 'descend' in Thai in comparison with up and down in English. Data came from three corpora: the Thai National Corpus (TNC) (Aroonmanakun et al., 2009), the British National Corpus (BNC), and the English-Thai Parallel Concordance (Aroonmanakun, 2009). Results of the analyses show that there are senses of the vertical spatial terms khɨn3 and loŋ1 in Thai that overlap with those of up and down in English. This reflects a universal image schema of vertical movement and similar semantic extension processes in the two languages. Data from the parallel corpus also reveal that the vertical spatial terms khɨn3 and loŋ1 do not always occur in the same contexts as up and down. But when they do, the frequently shared meaning involves vertical movement, which is the basic sense of the terms. The use of corpora as a tool to study the semantics of vertical spatial terms in Thai and English makes it possible to obtain objective and naturalistic data as well as to observe the frequency of the various senses in use.
The Semantics of khɨn3 and loŋ1 in Thai Compared to up and down in English: A Corpus-Based Study
d7968497
The objective of this work is to disambiguate transducers of the form T = R • D and to be able to apply the determinization algorithm described in (Mohri, 1997). Our approach to disambiguating T = R • D consists first of computing the composition T and thereafter disambiguating the transducer T. We give an important consequence of this result that allows us to compose any number of transducers R with the transducer D, in contrast to the previous approach, which consisted in first disambiguating the transducers D and R to produce D′ and R′ respectively, then computing T′ = R′ • D′, where T′ is unambiguous. We present results for the case of a transducer D representing a dictionary and R representing phonological rules.
Disambiguation of Finite-State Transducers
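The composition T = R • D at the heart of the abstract can be illustrated at the level of finite string relations. This is only a toy, set-level sketch (real FST composition and the Mohri (1997) determinization operate on transducer graphs, not enumerated pairs), and the example rule and dictionary pairs are invented:

```python
def compose(R, D):
    """Relational composition T = R • D over finite transductions,
    each given as a set of (input, output) string pairs: (x, z) is in
    T iff some intermediate y has (x, y) in R and (y, z) in D.
    """
    return {(x, z) for (x, y1) in R for (y2, z) in D if y1 == y2}

def is_ambiguous(T):
    # A transduction is ambiguous when some input maps to several outputs.
    inputs = [x for x, _ in T]
    return len(inputs) != len(set(inputs))

# Toy phonological rule R (ambiguous) and dictionary D.
R = {("cats", "kats"), ("cats", "katz")}
D = {("kats", "KATS"), ("katz", "KATZ")}
T = compose(R, D)
print(T, is_ambiguous(T))
```

The point of the paper's ordering is visible even here: the ambiguity of R survives into T, so disambiguating the composed T once is what licenses determinization, rather than disambiguating R and D separately before composing.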
d252819375
Predicting the difficulty of questions is crucial for technical interviews. However, such questions are long-form and more open-ended than the factoid and multiple-choice questions explored so far for question difficulty prediction. Existing models also require large volumes of candidate response data for training. We study weak supervision and use unsupervised algorithms for both question generation and difficulty prediction. We create a dataset of interview questions with difficulty scores for Deep Learning and use it to evaluate SOTA models for question difficulty prediction trained using weak supervision. Our analysis brings out the task's difficulty as well as the promise of weak supervision for it.
A Weak Supervision Approach for Predicting Difficulty of Technical Interview Questions
d8661407
We propose a transition system for dependency parsing with a left-corner parsing strategy. Unlike parsers with conventional transition systems, such as arc-standard or arc-eager, a parser with our system correctly predicts the processing difficulties people have, such as with center-embedding. We characterize our transition system by comparing its oracle behaviors with those of other transition systems on treebanks of 18 typologically diverse languages. A cross-linguistic analysis confirms the universality of the claim that a parser with our system requires less memory for parsing naturally occurring sentences.

Introduction

It is sometimes argued that transition-based dependency parsing is appealing not only from an engineering perspective, due to its efficiency, but also from a scientific perspective: these parsers process a sentence incrementally, similar to a human parser, which has motivated several studies concerning their cognitive plausibility (Nivre, 2004; Boston and Hale, 2007; Boston et al., 2008). A cognitively plausible dependency parser is attractive for many reasons, one of the most important being that dependency treebanks are available in many languages, making it suitable for cross-linguistic studies of human language processing (Keller, 2010). However, current transition systems based on shift-reduce actions fully or partially employ a bottom-up strategy [1], which is problematic from a psycholinguistic point of view: bottom-up and top-down strategies are known to fail in predicting the difficulty of certain sentences, such as center-embedded ones, which people have trouble comprehending (Abney and Johnson, 1991). We propose a transition system for dependency parsing with a left-corner strategy. For constituency parsing, unlike other strategies, the arc-eager left-corner strategy is known to correctly predict the processing difficulties people have (Abney and Johnson, 1991).
To the best of our knowledge, however, the idea of a left-corner strategy has not been introduced in the dependency parsing literature. We define the memory cost of a transition system as the number of unconnected subtrees on the stack. Under this definition, the proposed system incurs non-constant memory cost only when encountering center-embedded structures. After developing the transition system, we characterize it by looking into the following question: is it true that naturally occurring sentences can be parsed on this system with a lower memory overhead? This should be true under the assumptions that 1) people avoid generating sentences that cause difficulty for them, and 2) center-embedding is such a structure. Specifically, we focus on analyzing the oracle transitions of the system, i.e., the parser actions that recover the gold dependency tree for a sentence. In English, it is known that left-corner transformed treebank sentences can be parsed with less memory (Schuler et al., 2010), but our focus in this paper is on the language universality of this claim in a crosslingual setting. Two different but relevant motivations exist for this analysis. The first is to answer the following scientific question: is the claim that people tend to avoid generating center-embedded sentences language universal? This is unclear, since the observation that a center-embedded sentence is [1] The top-down parser of Hayashi et al. (2012) is an exception, but its processing is not incremental.
Left-corner Transitions on Dependency Parsing
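The memory-cost notion the abstract defines (unconnected subtrees on the stack) can be made concrete with a small oracle simulation. This sketch is not the paper's left-corner system; it runs the standard arc-standard oracle on projective gold trees and reports the maximum stack size, illustrating the bottom-up problem the authors point out: a right-branching chain, easy for humans, still forces the stack to grow with sentence length.

```python
def arc_standard_max_stack(heads):
    """Run the arc-standard oracle on a projective gold tree and return
    the maximum stack size, a proxy for the 'unconnected subtrees'
    memory cost. heads[i] is the head index of token i (-1 for root).
    """
    n = len(heads)
    n_deps = [heads.count(i) for i in range(n)]  # unattached dependents
    stack, buf, max_stack = [], list(range(n)), 0
    while buf or len(stack) > 1:
        if len(stack) >= 2:
            i, j = stack[-2], stack[-1]
            if heads[i] == j and n_deps[i] == 0:   # LEFT-ARC: attach i to j
                stack.pop(-2); n_deps[j] -= 1; continue
            if heads[j] == i and n_deps[j] == 0:   # RIGHT-ARC: attach j to i
                stack.pop(); n_deps[i] -= 1; continue
        stack.append(buf.pop(0))                   # SHIFT
        max_stack = max(max_stack, len(stack))
    return max_stack

# Right-branching chain (easy for humans) vs left-branching chain:
print(arc_standard_max_stack([-1, 0, 1, 2]))  # grows with length: 4
print(arc_standard_max_stack([1, 2, 3, -1]))  # constant: 2
```

A left-corner system, by contrast, is designed so that only center-embedded trees, not right-branching ones, drive this cost up.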
d232021732
d14045921
Multi-task learning is the problem of maximizing the performance of a system across a number of related tasks. When applied to multiple domains for the same task, it is similar to domain adaptation, but symmetric, rather than limited to improving performance on a target domain. We present a more principled, better performing model for this problem, based on the use of a hierarchical Bayesian prior. Each domain has its own domain-specific parameter for each feature but, rather than a constant prior over these parameters, the model instead links them via a hierarchical Bayesian global prior. This prior encourages the features to have similar weights across domains, unless there is good evidence to the contrary. We show that the method of (Daumé III, 2007), which was presented as a simple "preprocessing step," is actually equivalent, except our representation explicitly separates hyperparameters which were tied in his work. We demonstrate that allowing different values for these hyperparameters significantly improves performance over both a strong baseline and (Daumé III, 2007) within both a conditional random field sequence model for named entity recognition and a discriminatively trained dependency parser.
Hierarchical Bayesian Domain Adaptation
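The "simple preprocessing step" of Daumé III (2007) that the abstract shows to be equivalent to a hierarchical Bayesian prior is easy to sketch: every feature is duplicated into a shared copy and a domain-specific copy, so the learner can separate weights that generalize across domains from weights that do not (the feature names below are invented):

```python
def augment(features, domain):
    """Daumé III (2007) feature augmentation: each input feature is
    duplicated into a 'shared' version and a domain-specific version.
    features: dict mapping feature name -> value; domain: domain label.
    """
    out = {}
    for name, value in features.items():
        out[("shared", name)] = value   # weight tied across all domains
        out[(domain, name)] = value     # weight specific to this domain
    return out

feats = {"word=bank": 1.0, "suffix=nk": 1.0}
print(augment(feats, "finance"))
```

In the hierarchical-Bayesian reading the paper develops, the "shared" copies play the role of the global prior mean, and the paper's gain comes from untying the hyperparameters that this flat augmentation implicitly fixes.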
d26184111
The paper describes the architecture and functionality of LTC Communicator, a software product from the Language Technology Centre Ltd, which offers an innovative and cost-effective response to the growing need for multilingual web based communication in various user contexts. LTC Communicator was originally developed to support software vendors operating in international markets facing the need to offer web based multilingual support to diverse customers in a variety of countries, where end users may not speak the same language as the helpdesk. This is followed by a short description of several additional application areas of this software for which LTC has received EU funding: The AMBIENT project carries out a market validation for multilingual and multimodal eLearning for business and innovation management, the EUCAM project tests multilingual eLearning in the automotive industry, including a major car manufacturer and the German and European Metal Workers Associations, and the ALADDIN project provides a mobile multilingual environment for tour guides, interacting between tour operators and tourists, with the objective of optimising their travel experience. Finally, a case study of multilingual email exchange in conjunction with web based product sales is described.
Computer-assisted multilingual e-communication in a variety of application areas
d198230526
d252847558
Image captioning is a prominent research area in computer vision and natural language processing, which automatically generates natural language descriptions for images. Most of the existing works have focused on developing models for image captioning in the English language. The current paper introduces a novel deep learning architecture based on encoder-decoder with an attention mechanism for image captioning in the Hindi language. For encoder, decoder, and attention, several deep learning-based architectures have been explored. Hindi is the third-most spoken language globally; it is extensively spoken in India and South Asia; it is one of India's official languages. The proposed encoder-decoder architecture employs scaling in convolution neural networks to achieve better accuracy than existing image captioning methods in Hindi. The proposed method's performance is compared with state-of-the-art methods in terms of BLEU scores and manual evaluation. The results show that the proposed method is more effective than existing methods.
A Scaled Encoder Decoder Network for Image Captioning in Hindi
d32509855
Tree-structured Long Short-Term Memory (Tree-LSTM) has proven to be an effective method for the sentiment analysis task. It extracts structural information from text and uses the Long Short-Term Memory (LSTM) cell to prevent gradient vanishing. However, even with the LSTM cell, it remains a model that extracts structural information while extracting almost no serialization information. In this paper, we propose three new models that combine these two kinds of information: the structural information generated by the Constituency Tree-LSTM and the serialization information generated by a Long Short-Term Memory neural network. Our experiments show that combining these two kinds of information improves performance on the sentiment analysis task compared with the single Constituency Tree-LSTM model and the LSTM model.
Text Sentiment Analysis based on Fusion of Structural Information and Serialization Information
d14189632
Word segmentation is helpful in many aspects of Chinese natural language processing. However, it has been shown that different word segmentation strategies do not significantly affect the performance of Statistical Machine Translation (SMT) from English to Chinese. In addition, word segmentation causes some confusion in the evaluation of English-to-Chinese SMT. We therefore make an empirical attempt to translate English to Chinese at the character level, in both the alignment model and the language model. A series of empirical comparison experiments has been conducted to show how different factors affect the performance of character-level English-to-Chinese SMT. We also apply the recently popular continuous-space language model to English-to-Chinese SMT. The best performance obtained is a BLEU score of 41.56, which improves on the baseline system (40.31) by around 1.2 BLEU points.
English to Chinese Translation: How Chinese Character Matters?
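Character-level preprocessing of the kind the abstract describes can be sketched as follows; this is an assumed tokenization scheme (CJK characters split individually, non-CJK runs such as numbers and acronyms kept whole), not necessarily the authors' exact setup:

```python
import re

def to_characters(zh_sentence):
    """Character-level 'segmentation' for Chinese MT: split each CJK
    character into its own token while keeping non-CJK runs intact.
    """
    tokens = []
    for chunk in zh_sentence.split():
        # Alternation: a single CJK character, or a maximal non-CJK run.
        tokens.extend(re.findall(r"[\u4e00-\u9fff]|[^\u4e00-\u9fff]+", chunk))
    return tokens

print(to_characters("我爱NLP 技术"))  # → ['我', '爱', 'NLP', '技', '术']
```

Replacing a word segmenter with a deterministic splitter like this removes segmentation ambiguity from both alignment training and evaluation, which is the motivation the abstract gives.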
d14160045
In this paper, the crucial ingredients for our submission to SemEval-2014 Task 4 "Aspect Level Sentiment Analysis" are discussed. We present a simple aspect detection algorithm, a co-occurrence based method for category detection and a dictionary based sentiment classification algorithm. The dictionary for the latter is based on co-occurrences as well. The failure analysis and related work section focus mainly on the category detection method as it is most distinctive for our work.
COMMIT-P1WP3: A Co-occurrence Based Approach to Aspect-Level Sentiment Analysis
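A minimal sketch of co-occurrence-based category detection in the spirit of the abstract (a simplification, not the submitted system; the toy restaurant data is invented):

```python
from collections import defaultdict

def train_cooccurrence(labeled_sentences):
    """Count word/category co-occurrences from sentences labeled with
    aspect categories: word_cat[w][c] = times word w appeared in a
    sentence of category c.
    """
    word_cat = defaultdict(lambda: defaultdict(int))
    for words, category in labeled_sentences:
        for w in words:
            word_cat[w][category] += 1
    return word_cat

def predict_category(words, word_cat):
    # Score each category by summing P(category | word) over the words.
    scores = defaultdict(float)
    for w in words:
        total = sum(word_cat[w].values())
        for cat, count in word_cat[w].items():
            scores[cat] += count / total
    return max(scores, key=scores.get) if scores else None

train = [
    (["great", "pizza"], "food"),
    (["tasty", "pasta"], "food"),
    (["rude", "waiter"], "service"),
]
model = train_cooccurrence(train)
print(predict_category(["tasty", "pizza"], model))  # → 'food'
```

The same co-occurrence idea, applied between words and sentiment labels instead of categories, would yield the dictionary-based sentiment classifier the abstract mentions.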