_id | text | title |
|---|---|---|
d13632095 | Practitioners and researchers need to stay up-to-date with the latest advances in their fields, but the constant growth in the amount of literature available makes this task increasingly difficult. We investigated the literature browsing task via a user requirements analysis, and identified the information needs that biomedical researchers commonly encounter in this application scenario. Our analysis reveals that a number of literature-based research tasks are performed which can be served by both generic and contextually tailored preview summaries. Based on this study, we describe the design of an implemented literature browsing support tool which helps readers of scientific literature decide whether or not to pursue and read a cited document. We present findings from a preliminary user evaluation, suggesting that our prototype helps users make relevance judgements about cited documents. | Designing a Citation-Sensitive Research Tool: An Initial Study of Browsing-Specific Information Needs |
d252819095 | As an emerging research topic in the natural language processing community, emotion recognition in multi-party conversations has attracted increasing interest. Previous approaches that focus either on dyadic or multi-party scenarios exert much effort to cope with the challenge of emotional dynamics and achieve appealing results. However, since emotional interactions among speakers are often more complicated within entangled multi-party conversations, these works are limited in capturing effective emotional clues in conversational context. In this work, we propose the Mutual Conversational Detachment Network (MuCDN) to clearly understand the conversational context by separating conversations into detached threads. Specifically, two detachment ways are devised to perform context and speaker-specific modeling within detached threads, and they are bridged through a mutual module. Experimental results on two datasets show that our model achieves better performance over the baseline models. | MuCDN: Mutual Conversational Detachment Network for Emotion Recognition in Multi-Party Conversations |
d232021617 | ||
d259370498 | Classic approaches to content moderation typically apply a rule-based heuristic approach to flag content. While rules are easily customizable and intuitive for humans to interpret, they are inherently fragile and lack the flexibility or robustness needed to moderate the vast amount of undesirable content found online today. Recent advances in deep learning have demonstrated the promise of using highly effective deep neural models to overcome these challenges. However, despite the improved performance, these data-driven models lack transparency and explainability, often leading to mistrust from everyday users and a lack of adoption by many platforms. In this paper, we present Rule By Example (RBE): a novel exemplar-based contrastive learning approach for learning from logical rules for the task of textual content moderation. RBE is capable of providing rule-grounded predictions, allowing for more explainable and customizable predictions compared to typical deep learning-based approaches. We demonstrate that our approach is capable of learning rich rule embedding representations using only a few data examples. Experimental results on 3 popular hate speech classification datasets show that RBE is able to outperform state-of-the-art deep learning classifiers as well as the use of rules in both supervised and unsupervised settings while providing explainable model predictions via rule-grounding. | Rule By Example: Harnessing Logical Rules for Explainable Hate Speech Detection |
d9627105 | Although much work in NLP has focused on simply determining what a document means, we also must know whether or not to believe it. Fact-finding algorithms attempt to identify the "truth" among competing claims in a corpus, but fail to take advantage of the user's prior knowledge and presume that truth itself is universal and objective rather than subjective. We introduce a framework for incorporating prior knowledge into any fact-finding algorithm, expressing both general "common-sense" reasoning and specific facts already known to the user as first-order logic and translating this into a tractable linear program. As our results show, this approach scales well to even large problems, both reducing error and allowing the system to determine truth respective to the user rather than the majority. Additionally, we introduce three new fact-finding algorithms capable of outperforming existing fact-finders in many of our experiments. | Knowing What to Believe (when you already know something) |
d241583450 | ||
d18531389 | This paper presents a semantic classification of reflexive verbs in Bulgarian, augmenting the morphosyntactic classes of verbs in the large Bulgarian Lexical Data Base, a language resource utilized in a number of Language Engineering (LE) applications. The semantic descriptors conform to the Unified Eventity Representation (UER), developed by Andrea Schalley. The UER is a graphical formalism, introducing the object-oriented system design to linguistic semantics. Reflexive/non-reflexive verb pairs are analyzed, where the non-reflexive member of the opposition, a two-place predicate, is considered the initial linguistic entity from which the reflexive correlate is derived. The reflexive verbs are distributed into initial syntactic-semantic classes which serve as the basis for defining the relevant semantic descriptors in the form of EVENTITY FRAME diagrams. The factors that influence the categorization of the reflexives are the lexical paradigmatic approach to the data, the choice of only one reading for each verb, and top-level generalization of the semantic descriptors. The language models described in this paper provide the possibility for building linguistic components utilizable in knowledge-driven systems. | Semantic Descriptors: The Case of Reflexive Verbs |
d1409630 | Automatic summarization and information extraction are two important Internet services. MUC and SUMMAC play their appropriate roles in the next generation Internet. This paper focuses on automatic summarization and proposes two different models to extract sentences for summary generation under two tasks initiated by SUMMAC-1. For the categorization task, positive feature vectors and negative feature vectors are used cooperatively to construct generic, indicative summaries. For the ad hoc task, a text model based on the relationship between nouns and verbs is used to filter out irrelevant discourse segments, to rank relevant sentences, and to generate user-directed summaries. The results show that the NormF of the best summary and that of the fixed summary for the ad hoc task are 0.456 and 0.447. The NormF of the best summary and that of the fixed summary for the categorization task are 0.4090 and 0.4023. Our system outperforms the average system in the categorization task but performs only at an average level in the ad hoc task. | AN NTU-APPROACH TO AUTOMATIC SENTENCE EXTRACTION FOR SUMMARY GENERATION |
d8045155 | Large scale annotated corpora are prerequisite to developing high-performance semantic role labeling systems. Unfortunately, such corpora are expensive to produce, limited in size, and may not be representative. Our work aims to reduce the annotation effort involved in creating resources for semantic role labeling via semi-supervised learning. Our algorithm augments a small number of manually labeled instances with unlabeled examples whose roles are inferred automatically via annotation projection. We formulate the projection task as a generalization of the linear assignment problem. We seek to find a role assignment in the unlabeled data such that the argument similarity between the labeled and unlabeled instances is maximized. Experimental results on semantic role labeling show that the automatic annotations produced by our method improve performance over using hand-labeled instances alone. | Semi-Supervised Semantic Role Labeling |
d236486300 | ||
d7314668 | We propose a unified model of syntax and discourse in which text structure is viewed as a tree structure augmented with anaphoric relations and other secondary relations. We describe how the model accounts for discourse connectives and the syntax-discourse-semantics interface. Our model is dependency-based, i.e., words are the basic building blocks in our analyses. The analyses have been applied cross-linguistically in the Copenhagen Dependency Treebanks, a set of parallel treebanks for Danish, English, German, Italian, and Spanish which are currently being annotated with respect to discourse, anaphora, syntax, morphology, and translational equivalence. | The unified annotation of syntax and discourse in the Copenhagen Dependency Treebanks |
d19034908 | We model scientific expertise as a mixture of topics and authority. Authority is calculated based on the network properties of each topic network. ThemedPageRank, our combination of LDA-derived topics with PageRank, differs from previous models in that topics influence both the bias and transition probabilities of PageRank. It also incorporates the age of documents. Our model is general in that it can be applied to all tasks which require an estimate of document-document, document-query, document-topic and topic-query similarities. We present two evaluations, one on the task of restoring the reference lists of 10,000 articles, the other on the task of automatically creating reading lists that mimic reading lists created by experts. In both evaluations, our system beats the state of the art, as well as Google Scholar and Google Search indexed against the corpus. Our experiments also allow us to quantify the beneficial effect of our two proposed modifications to PageRank. | Topical PageRank: A Model of Scientific Expertise for Bibliographic Search |
d251497225 | Irish underwent a major spelling standardization in the 1940s and 1950s, and as a result it can be challenging to apply language technologies designed for the modern language to older, "pre-standard" texts. Lemmatization, tagging, and parsing of these pre-standard texts play an important role in a number of applications, including the lexicographical work on Foclóir Stairiúil na Gaeilge, a historical dictionary of Irish covering the period from 1600 to the present. We have two main goals in this paper. First, we introduce a small benchmark corpus containing just over 3800 tokens, annotated according to the Universal Dependencies guidelines and covering a range of dialects and time periods since 1600. Second, we establish baselines for lemmatization, tagging, and dependency parsing on this corpus by experimenting with a variety of machine learning approaches. | Diachronic Parsing of Pre-Standard Irish |
d10452640 | This paper describes a rule-learning approach towards Chinese prosodic phrase prediction for TTS systems. Firstly, we prepared a speech corpus having about 3000 sentences and manually labelled the sentences with two-level prosodic structure. Secondly, candidate features related to prosodic phrasing and the corresponding prosodic boundary labels are extracted from the corpus text to establish an example database. A series of comparative experiments is conducted to figure out the most effective features from the candidates. Lastly, two typical rule learning algorithms (C4.5 and TBL) are applied on the example database to induce prediction rules. The paper also suggests general evaluation parameters for prosodic phrase prediction. With these parameters, our methods are compared with RNN and bigram based statistical methods on the same corpus. The experiments show that the automatic rule-learning approach can achieve better prediction accuracy than the non-rule based methods and yet retain the advantage of the simplicity and understandability of rule systems. Thus it is justified as an effective alternative to prosodic phrase prediction. | Learning Rules for Chinese Prosodic Phrase Prediction |
d15749857 | This paper describes experiments for statistical dependency parsing using two different parsers trained on a recently extended dependency treebank for Greek, a language with a moderately rich morphology. We show how scores obtained by the two parsers are influenced by morphology and dependency types as well as sentence and arc length. The best LAS obtained in these experiments was 80.16 on a test set with manually validated POS tags and lemmas. | Experiments for Dependency Parsing of Greek |
d233365349 | ||
d4958019 | Type checking defines and constrains system output and intermediate representations. We report on the advantages of introducing multiple levels of type checking in deep parsing systems, even with untyped formalisms. | Type-checking in Formally non-typed Systems |
d59336383 | A common need of NLP applications is to extract structured data from text corpora in order to perform analytics or trigger an appropriate action. The ontology defining the structure is typically application dependent and in many cases it is not known a priori. We describe the FRAMEIT System that provides a workflow for (1) quickly discovering an ontology to model a text corpus and (2) learning an SRL model that extracts the instances of the ontology from sentences in the corpus. FRAMEIT exploits data that is obtained in the ontology discovery phase as weak supervision data to bootstrap the SRL model and then enables the user to refine the model with active learning. We present empirical results and qualitative analysis of the performance of FRAMEIT on three corpora of noisy user-generated text. | FrameIt: Ontology Discovery for Noisy User-Generated Text |
d18561984 | A problem in dialogue research is that of finding and managing expectations. Adjacency pair theory has widespread acceptance, but traditional classification features (in particular, 'previous-tag' type features) do not exploit this information optimally. We suggest a method of dialogue segmentation that verifies adjacency pairs and allows us to use dialogue-level information within the entire segment and not just the previous utterance. We also use the χ² test for statistical significance as 'noise reduction' to refine a list of pairs. Together, these methods can be used to extend expectation beyond the traditional classification features. | Empirical Verification of Adjacency Pairs Using Dialogue Segmentation |
d250390956 | This paper describes our system for SemEval-2022 Task 8, where participants were required to predict the similarity of two multilingual news articles. In the task of pairwise sentence and document scoring, there are two main approaches: Cross-Encoder, which inputs pairs of texts into a single encoder, and Bi-Encoder, which encodes each input independently. The former method often achieves higher performance, but the latter gave us a better result in SemEval-2022 Task 8. This paper presents our exploration of a BERT-based Bi-Encoder approach for this task, and we report several findings concerning pretrained models, pooling methods, translation, data separation, and the number of tokens. The weighted average ensemble of the four models achieved a competitive result and ranked in the top 12. | Nikkei at SemEval-2022 Task 8: Exploring BERT-based Bi-Encoder Approach for Pairwise Multilingual News Article Similarity |
d399210 | The aims of the SpeechDat-Car project are to develop a set of speech databases to support training and testing of multilingual speech recognition applications in the car environment. As a result, a total of ten (10) equivalent and similar resources will be created. The 10 languages are Danish, British English, Finnish, Flemish/Dutch, | |
d44108850 | The task of automatic text summarization is to generate a short text that summarizes the most important information in a given set of documents. Sentence regression is an emerging branch in automatic text summarization. Its key idea is to estimate the importance of information via learned utility scores for individual sentences. These scores are then used for selecting sentences from the source documents, typically according to a greedy selection strategy. Recently proposed state-of-the-art models learn to predict ROUGE recall scores of individual sentences, which seems reasonable since the final summaries are evaluated according to ROUGE recall. In this paper, we show in extensive experiments that following this intuition leads to suboptimal results and that learning to predict ROUGE precision scores leads to better results. The crucial difference is to aim not at covering as much information as possible but at wasting as little space as possible in every greedy step. | Which Scores to Predict in Sentence Regression for Text Summarization? |
d219184224 | Heart failure is a global epidemic with debilitating effects. People with heart failure need to actively participate in home self-care regimens to maintain good health. However, these regimens are not as effective as they could be and are influenced by a variety of factors. Patients from minority communities like African American (AA) and Hispanic/Latino (H/L), often have poor outcomes compared to the average Caucasian population. In this paper, we lay the groundwork to develop an interactive dialogue agent that can assist AA and H/L patients in a culturally sensitive and linguistically accurate manner with their heart health care needs. This will be achieved by extracting relevant educational concepts from the interactions between health educators and patients. Thus far we have recorded and transcribed 20 such interactions. In this paper, we describe our data collection process, thematic and initiative analysis of the interactions, and outline our future steps. | Heart Failure Education of African American and Hispanic/Latino Patients: Data Collection and Analysis |
d10916532 | Domain adaptation is a time consuming and costly procedure calling for the development of algorithms and tools to facilitate its automation. This paper presents an unsupervised algorithm able to learn the main concepts in event summaries. The method takes as input a set of domain summaries annotated with shallow linguistic information and produces a domain template. We demonstrate the viability of the method by applying it to three different domains and two languages. We have evaluated the generated templates against human templates obtaining encouraging results. | Unsupervised Content Discovery from Concise Summaries |
d245838264 | ||
d13860564 | We examine an emerging NLP application that supports creative writing by automatically suggesting continuing sentences in a story. The application tracks users' modifications to generated sentences, which can be used to quantify their "helpfulness" in advancing the story. We explore the task of predicting helpfulness based on automatically detected linguistic features of the suggestions. We illustrate this analysis on a set of user interactions with the application using an initial selection of features relevant to story generation. | Linguistic Features of Helpfulness in Automated Support for Creative Writing |
d15807944 | Corry is a system for coreference resolution in English. It supports both local (Soon et al. (2001)-style) and global (Integer Linear Programming, Denis and Baldridge (2007)-style) models of coreference. Corry relies on a rich linguistically motivated feature set, which has, however, been manually reduced to 64 features for efficiency reasons. Three runs have been submitted for the SemEval task 1 on Coreference Resolution (Recasens et al., 2010), optimizing Corry's performance for BLANC (Recasens and Hovy, in prep), MUC (Vilain et al., 1995) and CEAF (Luo, 2005). Corry runs have shown the best performance level among all the systems in their track for the corresponding metric. | Corry: A System for Coreference Resolution |
d251436004 | Constructions are direct form-meaning pairs with possible schematic slots. These slots are simultaneously constrained by the embedded construction itself and the sentential context. We propose that this constraint can be described by a conditional probability distribution. However, as this conditional probability is inevitably complex, we utilize language models to capture this distribution. Therefore, we build CxLM, a deep learning-based masked language model explicitly tuned to constructions' schematic slots. We first compile a construction dataset consisting of over ten thousand constructions in Taiwan Mandarin. Next, an experiment is conducted on the dataset to examine to what extent a pretrained masked language model is aware of the constructions. We then fine-tune the model specifically to perform a cloze task on the opening slots. We find that the fine-tuned model predicts masked slots more accurately than baselines and generates both structurally and semantically plausible word samples. Finally, we release CxLM and its dataset as publicly available resources, which we hope will serve as new quantitative tools in studying construction grammar. | CxLM: A Construction and Context-aware Language Model |
d2785745 | Tree based translation models are a compelling means of integrating linguistic information into machine translation. Syntax can inform lexical selection and reordering choices and thereby improve translation quality. Research to date has focussed primarily on decoding with such models, but less on the difficult problem of inducing the bilingual grammar from data. We propose a generative Bayesian model of tree-to-string translation which induces grammars that are both smaller and produce better translations than the previous heuristic two-stage approach which employs a separate word alignment step. | A Bayesian Model of Syntax-Directed Tree to String Grammar Induction |
d174803409 | Word representations trained on text reproduce human implicit bias related to gender, race and age. Methods have been developed to remove such bias. Here, we present results showing that human stereotypes exist even for much more nuanced judgments such as personality, for a variety of person identities beyond the typically legally protected attributes, and that these are similarly captured in word representations. Specifically, we collected human judgments about a person's Big Five personality traits formed solely from information about the occupation, nationality or a common noun description of a hypothetical person. Analysis of the data reveals a large number of statistically significant stereotypes in people. We then demonstrate that the bias captured in lexical representations is statistically significantly correlated with the documented human bias. Our results, showing bias for a large set of person descriptors on such nuanced traits, cast doubt on the feasibility of broadly and fairly applying debiasing methods and call for the development of new methods for auditing language technology systems and resources. | Word Embeddings (Also) Encode Human Personality Stereotypes |
d219310354 | ||
d44135667 | This article describes the unsupervised strategy submitted by the CitiusNLP team to SemEval 2018 Task 10, a task which consists of predicting whether a word is a discriminative attribute between two other words. The proposed strategy relies on the correspondence between discriminative attributes and relevant contexts of a word. More precisely, the method uses transparent distributional models to extract salient contexts of words which are identified as discriminative attributes. The system performance reaches about 70% accuracy when it is applied on the development dataset, but its accuracy goes down (63%) on the official test dataset. | CitiusNLP at SemEval-2018 Task 10: The Use of Transparent Distributional Models and Salient Contexts to Discriminate Word Attributes |
d765181 | In this paper, we propose to enhance the modulation spectrum of the spectrograms for speech signals via the technique of non-negative matrix factorization (NMF). In the training phase, the clean speech and noise in the training set are separately transformed to spectrograms and modulation spectra in turn, and then the magnitude modulation spectra are used to train the NMF-based basis matrices for clean speech and noise, respectively. In the test phase, the test signal is converted to its modulation spectrum, which is then enhanced via NMF with the basis matrices obtained in the training phase. The updated modulation spectrum is finally transformed back to the time domain as the enhanced signal. In addition, we propose two variants of the new method in order to reduce its relatively high computational complexity: one is to consider several adjacent acoustic frequencies as a whole for the subsequent processing, and the other is to process only the low modulation-frequency components. These new methods are validated via a subset of the Aurora-2 noisy connected-digit database. Preliminary experiments have indicated that these methods can achieve better signal quality relative to the baseline results in terms of the Perceptual Evaluation of Speech Quality (PESQ) index, and they outperform some well-known speech enhancement methods including spectral subtraction (SS), Wiener filtering (WF) and minimum mean squared error short-time spectral amplitude estimation (MMSE-STSA). | 非負矩陣分解法於語音調變頻譜強化之研究 A study of enhancing the modulation spectrum of speech signals via nonnegative matrix factorization |
d7431109 | For sentiment classification, it is often recognized that embeddings based on the distributional hypothesis are weak in capturing sentiment contrast: contrasting words may have similar local context. Based on broader context, we propose to incorporate Theta Pure Dependence (TPD) into the Paragraph Vector method to reinforce topical and sentimental information. TPD has a theoretical guarantee that the word dependency is pure, i.e., the dependence pattern has the integral meaning whose underlying distribution cannot be conditionally factorized. Our method outperforms the state of the art on text classification tasks. | Reinforcing the Topic of Embeddings with Theta Pure Dependence for Text Classification |
d14691823 | Statistical Machine Translation (SMT) systems are usually trained on large amounts of bilingual text and monolingual target language text. If a significant amount of out-of-domain data is added to the training data, the quality of translation can drop. On the other hand, training an SMT system on a small amount of training material for given in-domain data leads to narrow lexical coverage which again results in a low translation quality. In this paper, (i) we explore domain-adaptation techniques to combine large out-of-domain training data with small-scale in-domain training data for English-Hindi statistical machine translation and (ii) we cluster large out-of-domain training data to extract sentences similar to in-domain sentences and apply adaptation techniques to combine clustered sub-corpora with in-domain training data into a unified framework, achieving a 0.44 absolute improvement, corresponding to a 4.03% relative improvement, in terms of BLEU over the baseline. | Experiments on Domain Adaptation for English-Hindi SMT |
d6676324 | For extractive meeting summarization, previous studies have shown performance degradation when using speech recognition transcripts because of the relatively high speech recognition errors on meeting recordings. In this paper we investigated using confusion networks to improve the summarization performance on the ASR condition under an unsupervised framework by considering more word candidates and their confidence scores. Our experimental results showed improved summarization performance using our proposed approach, with more contribution from leveraging the confidence scores. We also observed that using these rich speech recognition results can extract similar or even better summary segments than using human transcripts. | Using Confusion Networks for Speech Summarization |
d261344688 | In recent years, the situation of telecom network fraud has been severe, and automated case classification can help fight crime. This article introduces the task-related classification system, and then introduces and displays the relevant information of this evaluation task from the aspects of data sets, task introduction, and competition results. A total of 60 participating teams signed up for this task, and finally 34 teams submitted results, of which 15 teams scored more than the baseline; the highest score was 0.8660, which was 1.6% higher than the baseline. According to the analysis of the results, most of the teams adopted BERT-like models. | Overview of CCL23-Eval Task 6: Telecom Network Fraud Case Classification |
d236460050 | ||
d1317549 | We propose a set of rules for the computation of prosody which are implemented in an existing generic Data-to-Speech system. The rules make crucial use of both sentence-internal and sentence-external semantic and syntactic information provided by the system. In a Text-to-Speech system, this information would have to be obtained through text analysis, but in Data-to-Speech it is readily available, and its reliable and detailed character makes it possible to compute the prosodic properties of generated sentences in a sophisticated way. This in turn allows for a close control of prosodic realization, resulting in natural-sounding intonation. | Computing prosodic properties in a data-to-speech system |
d16047614 | | Exploiting Lexical Regularities in Designing Natural Language Systems |
d208332297 | ||
d14127698 | ||
d229365773 | ||
d2773754 | Deep residual learning (ResNet) (He et al., 2016) is a new method for training very deep neural networks using identity mapping for shortcut connections. ResNet won the ImageNet ILSVRC 2015 classification task and achieved state-of-the-art performance in many computer vision tasks. However, the effect of residual learning on noisy natural language processing tasks is still not well understood. In this paper, we design a novel convolutional neural network (CNN) with residual learning, and investigate its impact on the task of distantly supervised noisy relation extraction. Contrary to the popular belief that ResNet only works well for very deep networks, we found that even with 9 layers of CNNs, using identity mapping could significantly improve the performance for distantly-supervised relation extraction. | Deep Residual Learning for Weakly-Supervised Relation Extraction |
d2595749 | This paper deals with the problem of predicting structures in the context of NLP. Typically, in structured prediction, an inference procedure is applied to each example independently of the others. In this paper, we seek to optimize the time complexity of inference over entire datasets, rather than individual examples. By considering the general inference representation provided by integer linear programs, we propose three exact inference theorems which allow us to re-use earlier solutions for certain instances, thereby completely avoiding possibly expensive calls to the inference procedure. We also identify several approximation schemes which can provide further speedup. We instantiate these ideas to the structured prediction task of semantic role labeling and show that we can achieve a speedup of over 2.5 using our approach while retaining the guarantees of exactness and a further speedup of over 3 using approximations that do not degrade performance. | On Amortizing Inference Cost for Structured Prediction |
d336625 | NLP systems often perform poorly because they rely on unreliable and heterogeneous knowledge. We show, on the task of non-anaphoric "it" identification, how to overcome these handicaps with the Bayesian Network (BN) formalism. The first results are very encouraging compared with state-of-the-art systems. | Bayesian Network, a model for NLP? |
d219310288 | ||
d31039 | The structure of a discourse is reflected in many aspects of its linguistic realization, including its lexical, prosodic, syntactic, and semantic nature. Multiparty dialog contains a particular kind of discourse structure, the dialog act (DA). Like other types of structure, the dialog act sequence of a conversation is also reflected in its lexical, prosodic, and syntactic realization. This paper presents a preliminary investigation into the realization of a particular class of dialog acts which play an essential structuring role in dialog, the backchannels or acknowledgement tokens. We discuss the lexical, prosodic, and syntactic realization of these and subsumed or related dialog acts like continuers, assessments, yes-answers, agreements, and incipient speakership. We show that lexical knowledge plays a role in distinguishing these dialog acts, despite the widespread ambiguity of words such as yeah, and that prosodic knowledge plays a role in DA identification for certain DA types, while lexical cues may be sufficient for the remainder. Finally, our investigation of the syntax of assessments suggests that at least some dialog acts have a very constrained syntactic realization, a per-dialog-act 'microsyntax'. | Lexical, Prosodic, and Syntactic Cues for Dialog Acts
d171268 | Extensive knowledge bases of entailment rules between predicates are crucial for applied semantic inference. In this paper we propose an algorithm that utilizes transitivity constraints to learn a globally-optimal set of entailment rules for typed predicates. We model the task as a graph learning problem and suggest methods that scale the algorithm to larger graphs. We apply the algorithm over a large data set of extracted predicate instances, from which a resource of typed entailment rules has been recently released (Schoenmackers et al., 2010). Our results show that using global transitivity information substantially improves performance over this resource and several baselines, and that our scaling methods allow us to increase the scope of global learning of entailment-rule graphs. | Global Learning of Typed Entailment Rules
d4894551 | ||
d2548389 | Topic modeling and word embedding are two important techniques for deriving latent semantics from data. General-purpose topic models typically work in coarse granularity by capturing word co-occurrence at the document/sentence level. In contrast, word embedding models usually work in fine granularity by modeling word co-occurrence within small sliding windows. With the aim of deriving latent semantics by capturing word co-occurrence information at different levels of granularity, we propose a novel model named Latent Topic Embedding (LTE), which seamlessly integrates topic generation and embedding learning in one unified framework. We further propose an efficient Monte Carlo EM algorithm to estimate the parameters of interest. By retaining the individual advantages of topic modeling and word embedding, LTE results in better latent topics and word embeddings. Experimental results verify the superiority of LTE over state-of-the-art methods in real-life applications. | Latent Topic Embedding
d9503114 | We present work on the automatic generation of short indicative-informative abstracts of scientific and technical articles. The indicative part of the abstract identifies the topics of the document, while the informative part of the abstract elaborates on some topics according to the reader's interest by motivating the topics, describing entities and defining concepts. We have defined our method of automatic abstracting by studying a corpus of professional abstracts. The method also considers the reader's interest as essential in the process of abstracting. | Using Linguistic Knowledge in Automatic Abstracting
d29080401 | In this paper we introduce the UD Annotatrix tool for manual annotation of Universal Dependencies. This tool has been designed with the aim that it should be tailored to the needs of the Universal Dependencies (UD) community, including that it should operate in fully offline mode, and it is freely available under the GNU GPL licence. In this paper, we provide some background to the tool, an overview of its development, and background on how it works. We compare it with some other widely-used tools for Universal Dependencies annotation, describe some features unique to UD Annotatrix, and finally outline some avenues for future work and provide a few concluding remarks. | UD Annotatrix: An annotation tool for Universal Dependencies
d32627127 | In distributional semantics words are represented by aggregated context features. The similarity of words can be computed by comparing their feature vectors. Thus, we can predict whether two words are synonymous or similar with respect to some other semantic relation. We will show on six different datasets of pairs of similar and non-similar words that a supervised learning algorithm on feature vectors representing pairs of words outperforms cosine similarity between vectors representing single words. We compared different methods to construct a feature vector representing a pair of words. We show that simple methods like pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is relatedness of terms in thesauri for intellectual document classification. Thus our findings can directly be applied for the maintenance and extension of such thesauri. To the best of our knowledge this relation was not considered before in the field of distributional semantics. | Learning Thesaurus Relations from Distributional Features |
d226283767 | ||
d393568 | We present a method for the extraction of stochastic lexicalized tree grammars (S-LTGs) of different complexities from existing treebanks, which allows us to analyze the relationship of a grammar automatically induced from a treebank with respect to its size, its complexity, and its predictive power on unseen data. Processing of different S-LTGs is performed by a stochastic version of the two-step Earley-based parsing strategy introduced in (Schabes and Joshi, 1991). | Automatic Extraction of Stochastic Lexicalized Tree Grammars from Treebanks
d227231686 | ||
d776531 | By exploring the relationship between parsing and deduction, a new and more general view of chart parsing is obtained, which encompasses parsing for grammar formalisms based on unification, and is the basis of the Earley Deduction proof procedure for definite clauses. The efficiency of this approach for an interesting class of grammars is discussed. | PARSING AS DEDUCTION
d5498777 | Phrase-based machine translation (PBMT) relies upon the phrase-table as the main resource for bilingual knowledge at decoding time. A phrase table in its basic form includes aligned phrases along with four probabilities indicating aspects of the co-occurrence statistics for each phrase pair. In this paper we add a new semantic similarity score as a statistical feature to enrich the phrase table. The new feature is inferred from a bilingual corpus by a neural network (NN), and estimates the semantic similarity of each source and target phrase pair. We observe a significant increase in system performance with the addition of the new feature. We evaluated our model on the English-French (En-Fr) and English-Farsi (En-Fa) language pairs. Experimental results show improvements for all translation directions of En↔Fr and En↔Fa. | Bilingual Distributed Phrase Representations for Statistical Machine Translation |
d34598816 | Recent years have seen increased interest within the speaker recognition community in high-level features including, for example, lexical choice, idiomatic expressions or syntactic structures. The promise of speaker recognition in forensic applications drives development toward systems robust to channel differences by selecting features inherently robust to such differences. Within the language recognition community, there is growing interest in differentiating not only languages but also mutually intelligible dialects of a single language. Decades of research in dialectology suggest that high-level features can enable systems to cluster speakers according to the dialects they speak. The Phanotics (Phonetic Annotation of Typicality in Conversational Speech) project seeks to identify high-level features characteristic of American dialects, annotate a corpus for these features, use the data to develop dialect recognition systems, and also use the categorization to create better models for speaker recognition. The data, once published, should be useful to other developers of speaker and dialect recognition systems and to dialectologists and sociolinguists. We expect the methods will generalize well beyond the speakers, dialects, and languages discussed here and should, if successful, provide a model for how linguists and technology developers can collaborate in the future for the benefit of both groups and toward a deeper understanding of how languages vary and change. | Bridging the Gap between Linguists and Technology Developers: Large-Scale, Sociolinguistic Annotation for Dialect and Speaker Recognition *
d26638502 | This report describes the development of an integrated knowledge-based machine-aided translation system called PANGLOSS in collaboration with the Center for Machine Translation (CMT) at CMU and the Computing Research Laboratory (CRL) at New Mexico State University. The ISI part of the collaboration is focused initially on providing the system's output capabilities, primarily in English and then in other languages, including (some of) German, Chinese, and Japanese. Additional tasks are the maintenance and continued distribution of the Penman sentence generator and text planner and the development of ancillary knowledge sources and software. RECENT RESULTS: Members of the project have participated in several aspects of the design and setting up of PANGLOSS and in the overall MT effort. Three major efforts are: 1. Incorporation of language generation: In the first-year version of PANGLOSS, the ULTRA analyzer of CRL is linked to the Penman generator, both being embedded in the Translator's Workstation (TWS) that includes several browsing, editing, and other user facilities. A process for converting ULTRA output to Penman input has been developed and is being debugged. Approximately 80 ULTRA output sentences (each with approximately 13 variant parses) have been used as a test suite; at present the conversion+Penman system produces roughly 25% correct throughput, 35% identifiable errors (which will be trapped and sent to the user for correction), 15% Penman grammar shortcomings, and 25% miscellaneous problems, mostly involving representational inconsistencies. Current work is focusing on extending the grammar, developing ways of interacting with the user, and ironing out the inconsistencies. Also, work on acquiring the system substrate to support PANGLOSS at ISI has been performed, including software acquisition and various licensing requirements. | IN-DEPTH KNOWLEDGE-BASED MACHINE TRANSLATION
d11721067 | We present a divide-and-conquer strategy based on finite state technology for shallow parsing of real-world German texts. In a first phase only the topological structure of a sentence (i.e., verb groups, subclauses) is determined. In a second phase the phrasal grammars are applied to the contents of the different fields of the main and sub-clauses. Shallow parsing is supported by suitably configured preprocessing, including: morphological and on-line compound analysis, efficient POS-filtering, and named entity recognition. The whole approach proved to be very useful for processing of free word order languages like German. Especially for the divide-and-conquer parsing strategy we obtained an f-measure of 87.14% on unseen data. | A Divide-and-Conquer Strategy for Shallow Parsing of German Free Texts
d7410354 | This paper addresses the problem of grapheme-to-phoneme conversion in order to create a pronunciation dictionary from a vocabulary of the most frequent words in European Portuguese. A system based on a mixed approach founded on a stochastic model with embedded rules for stressed vowel assignment is described. The model can generate pronunciations from unrestricted words; however, a dictionary with the 40k most frequent words was constructed and corrected interactively. The vocabulary was defined using the CETEMPúblico corpus. The model and dictionary are publicly available. | Generating a Pronunciation Dictionary for European Portuguese Using a Joint-Sequence Model with Embedded Stress Assignment
d17878213 | In this paper we conduct a detailed examination of the tough construction in Japanese with the main focus on some types of the nominative case particle ga. They are correlated with differences not only in the nominative-genitive case alternation but also in the semantic or pragmatic interpretation. Based on these data, we discuss the categories of the nominative case particles and derivations for tough predicates within the framework of Combinatory Categorial Grammar. | Nominative-marked Phrases in Japanese Tough Constructions
d18075251 | This paper presents a novel automatic sentence segmentation method for evaluating machine translation output with possibly erroneous sentence boundaries. The algorithm can process translation hypotheses with segment boundaries which do not correspond to the reference segment boundaries, or a completely unsegmented text stream. Thus, the method is especially useful for evaluating translations of spoken language. The evaluation procedure takes advantage of the edit distance algorithm and is able to handle multiple reference translations. It efficiently produces an optimal automatic segmentation of the hypotheses and thus allows application of existing well-established evaluation measures. Experiments show that the evaluation measures based on the automatically produced segmentation correlate with the human judgement at least as well as the evaluation measures which are based on manual sentence boundaries. | Evaluating Machine Translation Output with Automatic Sentence Segmentation |
d236460203 | ||
d236486310 | Scripts (Schank and Abelson, 1977) capture commonsense knowledge about everyday activities and their participants. Script knowledge has been shown to be useful in a number of NLP tasks, such as referent prediction, discourse classification, and story generation. A crucial step for the exploitation of script knowledge is script parsing, the task of tagging a text with the events and participants from a certain activity. This task is challenging: it requires information both about the ways events and participants are usually realized in surface language as well as the order in which they occur in the world. We show how to do accurate script parsing with a hierarchical sequence model. Our model improves the state of the art of event parsing by over 16 points F-score and, for the first time, accurately tags script participants. | Script Parsing with Hierarchical Sequence Modelling
d11589483 | ||
d28861894 | Pronouns are frequently dropped in Chinese sentences, especially in informal data such as text messages. In this work we propose a solution to recover dropped pronouns in SMS data. We manually annotate dropped pronouns in 684 SMS files and apply machine learning algorithms to recover them, leveraging lexical, contextual and syntactic information as features. We believe this is the first work on recovering dropped pronouns in Chinese text messages. | Recovering dropped pronouns from Chinese text messages |
d21680159 | Colloquial dialects of Arabic can be roughly categorized into five groups based on relatedness and geographic location (Egyptian, North African/Maghrebi, Gulf, Iraqi, and Levantine), but given that all dialects utilize much of the same writing system and share overlapping features and vocabulary, dialect identification and text classification is no trivial task. Furthermore, text classification by dialect is often performed at a coarse-grained level into these five groups or a subset thereof, and there is little work on sub-dialectal classification. The current study utilizes an n-gram based SVM to classify on a fine-grained sub-dialectal level, and compares it to methods used in dialect classification such as vocabulary pruning of shared items across dialects. A test case of the dialect Levantine is presented here, and results of 65% accuracy on a four-way classification experiment to sub-dialects of Levantine (Jordanian, Lebanese, Palestinian and Syrian) are presented and discussed. This paper also examines the possibility of leveraging existing mixed-dialectal resources to determine their sub-dialectal makeup by automatic classification. | Classification of Closely Related Sub-dialects of Arabic Using Support-Vector Machines |
d10340565 | Natural language interfaces to data services will be a key technology to guarantee access to huge data repositories in an effortless way. This involves solving the complex problem of recognizing a relevant service or service composition given an ambiguous, potentially ungrammatical natural language question. As a first step toward this goal, we study methods for identifying the salient terms (or foci) in natural language questions, classifying the latter according to a taxonomy of services and extracting additional relevant information in order to route them to suitable data services. While current approaches deal with single-focus (and therefore single-domain) questions, we investigate multi-focus questions in the aim of supporting conjunctive queries over the data services they refer to. Since such complex queries have seldom been studied in the literature, we have collected an ad-hoc dataset, SeCo-600, containing 600 multi-domain queries annotated with a number of linguistic and pragmatic features. Our experiments with the dataset have allowed us to reach very high accuracy in different phases of query analysis, especially when adopting machine learning methods. | Evaluating Multi-focus Natural Language Queries over Data Services |
d10145463 | Inferring lexical type labels for entity mentions in texts is an important asset for NLP tasks like semantic role labeling and named entity disambiguation (NED). Prior work has focused on flat and relatively small type systems where most entities belong to exactly one type. This paper addresses very fine-grained types organized in a hierarchical taxonomy, with several hundreds of types at different levels. We present HYENA for multi-label hierarchical classification. HYENA exploits gazetteer features and accounts for the joint evidence for types at different levels. Experiments and an extrinsic study on NED demonstrate the practical viability of HYENA. | HYENA: Hierarchical Type Classification for Entity Names |
d1378015 | Modern Standard Arabic (MSA) has a wealth of natural language processing (NLP) tools and resources. In comparison, resources for dialectal Arabic (DA), the unstandardized spoken varieties of Arabic, are still lacking. We present Elissa, a machine translation (MT) system from DA to MSA. Elissa (version 1.0) employs a rule-based approach that relies on morphological analysis, morphological transfer rules and dictionaries in addition to language models to produce MSA paraphrases of dialectal sentences. Elissa can be employed as a general preprocessor for dialectal Arabic when using MSA NLP tools. | Elissa: A Dialectal to Standard Arabic Machine Translation System
d7798552 | This paper extends previous work on extracting parallel sentence pairs from comparable data (Munteanu and Marcu, 2005). For a given source sentence S, a maximum entropy (ME) classifier is applied to a large set of candidate target translations. A beam-search algorithm is used to abandon target sentences as non-parallel early on during classification if they fall outside the beam. This way, our novel algorithm avoids any document-level prefiltering step. The algorithm increases the number of extracted parallel sentence pairs significantly, which leads to a BLEU improvement of about 1% on our Spanish-English data. | A Beam-Search Extraction Algorithm for Comparable Data
d5741058 | In this paper we propose a novel statistical language model to capture long-range semantic dependencies. Specifically, we apply the concept of semantic composition to the problem of constructing predictive history representations for upcoming words. We also examine the influence of the underlying semantic space on the composition task by comparing spatial semantic representations against topic-based ones. The composition models yield reductions in perplexity when combined with a standard n-gram language model over the n-gram model alone. We also obtain perplexity reductions when integrating our models with a structured language model. | Language Models Based on Semantic Composition |
d2803773 | This demo presents MAGES (multilingual angle-integrated grouping-based entity summarization), an entity summarization system for a large knowledge base such as DBpedia based on an entity-group-bound ranking in a single integrated entity space across multiple language-specific editions. MAGES offers a multilingual angle-integrated space model, which has the advantage of overcoming missing semantic tags (i.e., categories) caused by biases in different language communities, and can contribute to the creation of entity groups that are well-formed and more stable than under the monolingual condition. MAGES can help people quickly identify the essential points of the entities when they search or browse a large volume of entity-centric data. Evaluation results on the same experimental data demonstrate that our system produces a better summary compared with other representative DBpedia entity summarization methods. | MAGES: A Multilingual Angle-integrated Grouping-based Entity Summarization System
d7493838 | One problem of data-driven answer extraction in open-domain factoid question answering is that the class distribution of labeled training data is fairly imbalanced. This imbalance has a deteriorating effect on the performance of resulting classifiers. In this paper, we propose a method to tackle class imbalance by applying some form of cost-sensitive learning which is preferable to sampling. We present a simple but effective way of estimating the misclassification costs on the basis of the class distribution. This approach offers three benefits. Firstly, it maintains the distribution of the classes of the labeled training data. Secondly, this form of meta-learning can be applied to a wide range of common learning algorithms. Thirdly, this approach can be easily implemented with the help of state-of-the-art machine learning software. | Cost-Sensitive Learning in Answer Extraction |
d250390958 | In this paper, we formulate system combination for grammatical error correction (GEC) as a simple machine learning task: binary classification. We demonstrate that with the right problem formulation, a simple logistic regression algorithm can be highly effective for combining GEC models. Our method successfully increases the F0.5 score from the highest base GEC system by 4.2 points on the CoNLL-2014 test set and 7.2 points on the BEA-2019 test set. Furthermore, our method outperforms the state of the art by 4.0 points on the BEA-2019 test set, 1.2 points on the CoNLL-2014 test set with original annotation, and 3.4 points on the CoNLL-2014 test set with alternative annotation. We also show that our system combination generates better corrections with higher F0.5 scores than the conventional ensemble. | Frustratingly Easy System Combination for Grammatical Error Correction
d27834461 | We propose prefix constraints, a novel method to enforce constraints on target sentences in neural machine translation. It places a sequence of special tokens at the beginning of the target sentence (target prefix), while side constraints (Sennrich et al., 2016) place a special token at the end of the source sentence (source suffix). Prefix constraints can be predicted from the source sentence jointly with the target sentence, while side constraints must be provided by the user or predicted by some other method. In both methods, special tokens are designed to encode arbitrary features on the target side or metatextual information. We show that prefix constraints are more flexible than side constraints and can be used to control the behavior of neural machine translation, in terms of output length, bidirectional decoding, domain adaptation, and unaligned target word generation. | Controlling Target Features in Neural Machine Translation via Prefix Constraints
d13082435 | Hierarchical faceted metadata is a proven and popular approach to organizing information for navigation of information collections. More recently, digital libraries have begun to adopt faceted navigation for collections of scholarly holdings. A key impediment to further adoption is the need for the creation of subject-oriented faceted metadata. The Castanet algorithm was developed for the purpose of (semi) automated creation of such structures. This paper describes the application of Castanet to journal title content, and presents an evaluation suggesting its efficacy. This is followed by a discussion of areas for future work. | NLP Support for Faceted Navigation in Scholarly Collections |
d7968000 | We describe a system proposed for measuring the degree of relational similarity between a pair of words for Task #2 of SemEval 2012. The approach presented is based on a vectorial representation using the following features: i) the context surrounding the words with a window size of 3, ii) knowledge extracted from WordNet to discover several semantic relationships, such as meronymy, hyponymy, hypernymy, and part-whole relations between pairs of words, iii) the description of the pairs with their POS tags and morphological information (gender, person), and iv) the average number of words separating the two words in text. | BUAP: A First Approximation to Relational Similarity Measuring
d8743015 | Inferring the information structure of scientific documents has proved useful for supporting information access across scientific disciplines. Current approaches are largely supervised and expensive to port to new disciplines. We investigate primarily unsupervised discovery of information structure. We introduce a novel graphical model that can consider different types of prior knowledge about the task: within-document discourse patterns, cross-document sentence similarity information based on linguistic features, and prior knowledge about the correct classification of some of the input sentences when this information is available. We apply the model to the Argumentative Zoning (AZ) scheme and evaluate it on a fully unsupervised learning scenario and two transduction scenarios where the categories of some test sentences are known. The model substantially outperforms similarity and topic model based clustering approaches as well as traditional transduction algorithms. | Document and Corpus Level Inference For Unsupervised and Transductive Learning of Information Structure of Scientific Documents
d716050 | A system for the acquisition and management of reusable morphological dictionaries is clearly a useful tool for NLP. As such, most currently popular finite-state morphology systems have a number of drawbacks. In the development of Word Manager, these problems have been taken into account. As a result, its knowledge acquisition component is well-developed, and its knowledge representation enables more flexible use than typical finite-state systems. | A Knowledge Acquisition and Management System for Morphological Dictionaries |
d216804022 | We present an algorithm for generating strings from logical form encodings that improves upon previous algorithms in that it places fewer restrictions on the class of grammars to which it is applicable. In particular, unlike an Earley deduction generator (Shieber, 1988), it allows use of semantically nonmonotonic grammars, yet unlike top-down methods, it also permits left-recursion. The enabling design feature of the algorithm is its implicit traversal of the analysis tree for the string being generated in a semantic-head-driven fashion. | A Semantic-Head-Driven Generation Algorithm for Unification-Based Formalisms
d41485781 | We present a feature-rich knowledge tracing method that captures a student's acquisition and retention of knowledge during a foreign language phrase learning task. We model the student's behavior as making predictions under a log-linear model, and adopt a neural gating mechanism to model how the student updates their log-linear parameters in response to feedback. The gating mechanism allows the model to learn complex patterns of retention and acquisition for each feature, while the log-linear parameterization results in an interpretable knowledge state. We collect human data and evaluate several versions of the model. | Knowledge Tracing in Sequential Learning of Inflected Vocabulary |
d49743698 | Orthographic similarities across languages provide a strong signal for unsupervised probabilistic transduction (decipherment) for closely related language pairs. The existing decipherment models, however, are not well suited for exploiting these orthographic similarities. We propose a log-linear model with latent variables that incorporates orthographic similarity features. Maximum likelihood training is computationally expensive for the proposed log-linear model. To address this challenge, we perform approximate inference via Markov chain Monte Carlo sampling and contrastive divergence. Our results show that the proposed log-linear model with contrastive divergence outperforms the existing generative decipherment models by exploiting the orthographic features. The model both scales to large vocabularies and preserves accuracy in low- and no-resource contexts. | Feature-Based Decipherment for Machine Translation
d8514556 | In this paper, we propose a three-step multilingual dependency parser, which applies an efficient parsing algorithm in the first phase, and a root parser and post-processor in the second and third stages. The main focus of our work is to provide an efficient parser that is practical to use, combining only lexical and part-of-speech features, toward language-independent parsing. The experimental results show that our method outperforms MaltParser in 13 languages. We expect that such an efficient model is applicable to most languages. | The Exploration of Deterministic and Efficient Dependency Parsing
d15485185 | This paper examines the introduction of "Easy Japanese" by extracting important segments for translation. The need for Japanese language support has increased dramatically due to the recent influx of non-Japanese-speaking foreigners. Therefore, in order for non-native speakers of Japanese to successfully adapt to society, so-called Easy Japanese is being developed to aid them in every aspect, from basic conversation to translation of official documents. The materials of our project are official documents, since they are generally distributed in public offices, hospitals, and schools, and they include essential information that should be accessible to all residents. Through an analysis of Japanese language dependency as a pre-experiment, this paper introduces a translation method that extracts important segments to facilitate the acquisition of Easy Japanese. Upon effective completion, the project will be made available on the Internet and proposed for use by foreigners living in Japan as well as educators. | Automatic Easy Japanese Translation for information accessibility of foreigners
d10837748 | We argue that groups of unannotated texts with overlapping and non-contradictory semantics represent a valuable source of information for learning semantic representations. A simple and efficient inference method recursively induces joint semantic representations for each group and discovers correspondence between lexical entries and latent semantic concepts. We consider the generative semantics-text correspondence model (Liang et al., 2009) and demonstrate that exploiting the non-contradiction relation between texts leads to substantial improvements over natural baselines on a problem of analyzing human-written weather forecasts. | Bootstrapping Semantic Analyzers from Non-Contradictory Texts
d226283606 | ||
d11204982 | Text preprocessing is an important and necessary task for all NLP applications. A simple variation in any preprocessing step may drastically affect the final results. Moreover, replicability and comparability, as much as feasible, are among the goals of our scientific enterprise, so building systems that can ensure consistency across our various pipelines would contribute significantly to these goals. The problem has become quite pronounced with the abundance of NLP tools becoming more and more available, yet with different levels of specification. In this paper, we present a dynamic unified preprocessing framework and tool, SPLIT, that is highly configurable based on user requirements and serves as a preprocessing tool for several tools at once. SPLIT aims to standardize the implementations of the most important preprocessing steps by providing a unified API that can be exchanged across different researchers to ensure complete transparency in replication. The user is able to select the required preprocessing tasks from a long list of preprocessing steps. The user is also able to specify the order of execution, which in turn affects the final preprocessing output. | SPLIT: Smart Preprocessing (Quasi) Language Independent Tool
d233365250 | WikiHow is an open-domain repository of instructional articles for a variety of tasks, which can be revised by users. In this paper, we extract pairwise versions of an instruction before and after a revision was made. Starting from a noisy dataset of revision histories, we specifically extract and analyze edits that involve cases of vagueness in instructions. We further investigate the ability of a neural model to distinguish between two versions of an instruction in our data by adopting a pairwise ranking task from previous work and showing improvements over existing baselines. | A Computational Analysis of Vagueness in Revisions of Instructional Texts |
d184483278 | ||
d14411957 | We present a simple tool that enables the computer to read subtitles of movies and TV shows aloud. The tool extracts information from subtitle files, which can be freely downloaded or extracted from a DVD, and reads the text aloud through a speech synthesizer. The target audience is people who have trouble reading subtitles while watching a movie, for example people with visual impairments and people with reading difficulties such as dyslexia. The application will be evaluated together with users from these groups to see if this could be an accepted solution to their needs. | SubTTS: Light-weight automatic reading of subtitles
d1895032 | | Übersetzungstheorie und maschinelle Übersetzung
d9885054 | In this paper, we propose PAL, a prototype chatterbot for answering non-obstructive psychological domain-specific questions. This system focuses on providing primary suggestions and helping people relieve pressure by extracting knowledge from online forums, on which the chatterbot system is built. The strategies used by PAL, including semantic-extension-based question matching, solution management with personal information consideration, and XML-based knowledge pattern construction, are described and discussed. We also conduct a preliminary test of the feasibility of our system. | PAL: A Chatterbot System for Answering Domain-specific Questions
d261124745 | This paper describes a study on gender bias in French/English neural machine translation (MT) systems, investigating its causes by studying the different ways gender information can flow between the decoder and the encoder. We introduce a minimal, controlled corpus to measure the intensity of such biases in the two translation directions (from and into English). This corpus also allows us to investigate the information flow in an encoder-decoder architecture and to identify how gender information can be transferred between languages. Considering both probing and interventions on the internal representations of the MT system, we show that gender information is encoded in all token representations built by the encoder and the decoder, and that the selection of gender in the target language results from multiple interactions among the various units involved in translation. | Biais de genre dans un système de traduction automatique neuronale : une étude des mécanismes de transfert cross-langue
d13115581 | In neural image captioning systems, a recurrent neural network (RNN) is typically viewed as the primary 'generation' component. This view suggests that the image features should be 'injected' into the RNN. This is in fact the dominant view in the literature. Alternatively, the RNN can instead be viewed as only encoding the previously generated words. This view suggests that the RNN should only be used to encode linguistic features and that only the final representation should be 'merged' with the image features at a later stage. This paper compares these two architectures. We find that, in general, late merging outperforms injection, suggesting that RNNs are better viewed as encoders, rather than generators. | What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator? |
d250122126 | Obtaining linguistic annotation from novice crowdworkers is far from trivial. A case in point is the annotation of discourse relations, which is a complicated task. Recent methods have obtained promising results by extracting relation labels from either discourse connectives (DCs) or question-answer (QA) pairs that participants provide. The current contribution studies the effect of worker selection and training on the agreement on implicit relation labels between workers and gold labels, for both the DC and the QA method. In Study 1, workers were not specifically selected or trained, and the results show that there is much room for improvement. Study 2 shows that a combination of selection and training does lead to improved results, but the method is cost- and time-intensive. Study 3 shows that a selection-only approach is a viable alternative; it results in annotations of comparable quality to annotations from trained participants. The results generalized over both the DC and QA method and therefore indicate that a selection-only approach could also be effective for other crowdsourced discourse annotation tasks. | Design Choices in Crowdsourcing Discourse Relation Annotations: The Effect of Worker Selection and Training
d9170803 | The paper presents a system for the CoNLL-2011 shared task of coreference resolution. The system consists of two components: one for mention detection and one for coreference resolution. For mention detection, we adopted a number of heuristic rules from the syntactic parse tree perspective. For coreference resolution, we apply an SVM exploiting multiple syntactic and semantic features. The experiments on the CoNLL-2011 corpus show that our rule-based mention identification system obtains a recall of 87.69%, and the best result of the SVM-based coreference resolution system is an average F-score of 50.92% over the MUC, B-CUBED, and CEAFE metrics. | Combining Syntactic and Semantic Features by SVM for Unrestricted Coreference Resolution