_id | text | title |
|---|---|---|
d21719026 | In this paper, we describe a study we conducted to determine whether a person who is highly influential in a discussion on a familiar topic would retain influence when moving to a topic that is less familiar or perhaps not as interesting. For this research, we collected samples of realistic on-line chat room discussions on several topics related to current issues in education, technology, arts, sports, finances, current affairs, etc. The collected data allowed us to create models for specific types of conversational behavior, such as agreement, disagreement, support, persuasion, negotiation, etc. These models were used to study influence in online discussions. The data also allowed us to study how human influence works in online discussion and what affects a person's influence from one topic to another. We found that influence is impacted by topic familiarity, sometimes dramatically, and we explain how it is affected and why. | Gaining and Losing Influence in Online Conversation |
d10893099 | The use of Electronic Health Records (EHRs) is becoming more prevalent in healthcare institutions world-wide. These digital records contain a wealth of information on patients' health in the form of Natural Language text. The electronic format of the clinical notes has evident advantages in terms of storage and shareability, but also makes it easy to duplicate information from one document to another through copy-pasting. Previous studies have shown that (copy-paste-induced) redundancy can reach high levels in American EHRs, and that these high levels of redundancy have a negative effect on the performance of Natural Language Processing (NLP) tools that are used to process EHRs automatically. In this paper, we present a preliminary study on the level of redundancy in French EHRs. We study the evolution of redundancy over time, and its occurrence with respect to different document types and sections in a small corpus comprising three patient records (361 documents). We find that average redundancy levels in our subset are lower than those observed in U.S. corpora (33% vs. up to 78%, respectively), which may indicate different cultural practices between these two countries. Moreover, we find no evidence of the incremental increase (over time) of redundant text in clinical notes which has been found in American EHRs. These results suggest that redundancy mitigating strategies may not be needed when processing French EHRs. | Redundancy in French Electronic Health Records: A preliminary study |
d10042186 | This paper presents ongoing research on computational models for non-cooperative dialogue. We start by analysing different levels of cooperation in conversation. Then, inspired by findings from an empirical study, we propose a technique for measuring non-cooperation in political interviews. Finally, we describe a research programme towards obtaining a suitable model and discuss previous accounts for conflictive dialogue, identifying the differences with our work. | Non-Cooperation in Dialogue |
d31221949 | This thesis presents an intelligent Hakka pinyin input method built on the Android Input Method Framework (IMF), which lets users enter Hakka text in any text-editing app (Application, APP) on a mobile phone. When the user types the pinyin of a Hakka character or an abbreviated pinyin for a Hakka word, the input method searches the single-character pinyin lexicon, the abbreviation lexicon and the preceding/following-word lexicon stored in an SQLite database on Android, and generates candidate characters or words from the search results for the user to select and output. The single-character pinyin lexicon and the abbreviation lexicon contain 9361 characters and 32453 words respectively; the Hakka audio database contains 2427 monosyllable files, 3392 word files and 27 silence files, 5846 in total. Besides basic Hakka pinyin input, the input method provides several functions: (1) user-preference input, which records the characters and words a user commonly types so that frequently used items can be output more quickly according to the user's own preferences; (2) fast input of Hakka words by initial letters, which lets the user retrieve a Hakka word by searching letters in the abbreviation lexicon, reducing the number of keystrokes; (3) preceding/following-word prediction, which lets the user output Hakka words and sentences more quickly and is based on a Hakka word bi-gram trained from the data; (4) Hakka word pronunciation, which lets beginners hear the correct Hakka pronunciation and so supports language learning. Keywords: Hakka toneless pinyin input method, Hakka-friendly (Hao-ke) input method, Chinese-to-Hakka text-to-speech system, pinyin input method. | Research and Implementation of Sixian Hakka Pinyin Input Method for Mobile Cell APP |
d10432165 | In the translation industry, human translations are assessed by comparison with the source texts. In the Machine Translation (MT) research community, however, it is a common practice to perform quality assessment using a reference translation instead of the source text. In this paper we show that this practice has a serious issue: annotators are strongly biased by the reference translation provided, and this can have a negative impact on the assessment of MT quality. | Reference Bias in Monolingual Machine Translation Evaluation |
d1045460 | In this paper we apply distributional semantic information to document-level machine translation. We train monolingual and bilingual word vector models on large corpora and we evaluate them first in a cross-lingual lexical substitution task and then on the final translation task. For translation, we incorporate the semantic information in a statistical document-level decoder (Docent), by enforcing translation choices that are semantically similar to the context. As expected, the bilingual word vector models are more appropriate for the purpose of translation. The final document-level translator incorporating the semantic model outperforms the basic Docent (without semantics) and also performs slightly better than a standard sentence-level SMT system in terms of ULC (the average of a set of standard automatic evaluation metrics for MT). Finally, we also present some manual analysis of the translations of some concrete documents. | Document-Level Machine Translation with Word Vector Models |
d30366990 | Full text discourse parsing relies on texts comprehensively annotated with discourse relations. To this end, we address a significant gap in the inter-sentential discourse relations annotated in the Penn Discourse Treebank (PDTB), namely the class of cross-paragraph implicit relations, which account for 30% of inter-sentential relations in the corpus. We present our annotation study to explore the incidence rate of adjacent vs. non-adjacent implicit relations in cross-paragraph contexts, and the relative degree of difficulty in annotating them. Our experiments show a high incidence of non-adjacent relations that are difficult to annotate reliably, suggesting the practicality of backing off from their annotation to reduce noise for corpus-based studies. Our resulting guidelines follow the PDTB adjacency constraint for implicits while employing an underspecified representation of non-adjacent implicits, and yield 62% inter-annotator agreement on this task. | Towards Full Text Shallow Discourse Relation Annotation: Experiments with Cross-Paragraph Implicit Relations in the PDTB |
d7491150 | This paper describes on-going research on a practical question answering system for a home agent robot. Because the main concern of the QA system for the home robot is precision rather than coverage (no answer is better than a wrong answer), our approach tries to achieve high accuracy in QA. We restrict the question domains and extract answers from pre-selected, semi-structured documents on the Internet. A named entity tagger and a dependency parser are used to analyze the question accurately. User profiling and inference rules are used to infer hidden information that is required for finding a precise answer. Testing with a small set of queries on the weather domain, the QA system showed 90.9% precision and 75.0% recall. | A Practical QA System in Restricted Domains |
d10021284 | Existing plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. We use patterns of linguistic features (e.g. mood, verb form, sentence adverbials, thematic roles) to suggest a range of speech act interpretations for the utterance. These are filtered using plan-based conversational implicatures to eliminate inappropriate ones. Extended plan reasoning is available but not necessary for familiar forms. Taking speech act ambiguity seriously, with these two constraints, explains how "Can you pass the salt?" is a typical indirect request while "Are you able to pass the salt?" is not. | Two Constraints on Speech Act Ambiguity |
d252847542 | In recent years, libraries and archives have led important digitisation campaigns that opened access to vast collections of historical documents. While such documents are often available as XML ALTO documents, they lack information about their logical structure. In this paper, we address the problem of logical layout analysis applied to historical documents. We propose a method based on the study of a dataset in order to identify rules that assign logical labels to both blocks and lines of text from XML ALTO documents. Our dataset contains newspapers in French, published in the first half of the 20th century. The evaluation shows that our methodology performs well for the identification of first lines of paragraphs and text lines, with F1 above 0.9. The identification of titles obtains an F1 of 0.64. This method can be applied to preprocess XML ALTO documents in preparation for downstream tasks, and also to annotate large-scale datasets to train machine learning and deep learning algorithms. [1] ALTO: Technical metadata for layout and text objects: https://www.loc.gov/standards/alto/ | Logical Layout Analysis Applied to Historical Newspapers |
d37047 | Recent work on information extraction has suggested that fast, interactive tools can be highly effective; however, creating a usable system is challenging, and few publicly available tools exist. In this paper we present IKE, a new extraction tool that performs fast, interactive bootstrapping to develop high-quality extraction patterns for targeted relations. Central to IKE is the notion that an extraction pattern can be treated as a search query over a corpus. To operationalize this, IKE uses a novel query language that is expressive, easy to understand, and fast to execute - essential requirements for a practical system. It is also the first interactive extraction tool to seamlessly integrate symbolic (boolean) and distributional (similarity-based) methods for search. An initial evaluation suggests that relation tables can be populated substantially faster than by manual pattern authoring while retaining accuracy, and more reliably than fully automated tools, an important step towards practical KB construction. We are making IKE publicly available (http://allenai.org/software/interactive-knowledge-extraction). | IKE - An Interactive Tool for Knowledge Extraction |
d184483104 | In this paper we built several deep learning architectures to participate in the shared task OffensEval: Identifying and Categorizing Offensive Language in Social Media at SemEval-2019 (Zampieri et al., 2019b). The dataset was annotated with a three-level annotation scheme, and the task was to detect offensive vs. not offensive content, categorize the offence, and identify its target. Deep learning models with POS information as a feature were also leveraged for classification. The three models that performed best on the individual sub-tasks are a stacking of CNN-Bi-LSTM with Attention, a Bi-LSTM with POS information added to word features, and a Bi-LSTM for the third task. Our models achieved Macro F1 scores of 0.7594, 0.5378 and 0.4588 in Tasks A, B and C respectively, ranking 33rd, 54th and 52nd out of 103, 75 and 65 submissions. | NLP at SemEval-2019 Task 6: Detecting Offensive language using Neural Networks |
d9737436 | In this paper we propose a new method to express quantification and especially quantifier scope in French generation. Our approach is based on two points: the identification of the sentence components between which quantifier scope can indeed be expressed and a mechanism to reinforce the expression of quantifier scope. This approach is being integrated in a written French generator, called Hermes, which will become the generator of a portable natural language interface. | Expressing quantifier scope in French generation |
d196213305 | Phonemic Verbal Fluency (PVF) is a cognitive assessment task where a patient is asked to produce words constrained to a given alphabetical letter for a specified time duration. Patient productions are later evaluated based on strategies to reveal crucial diagnostic information by manually scoring results according to predetermined clinical criteria. In this paper, we propose four alternative similarity metrics and evaluate them in a two-fold argument, using the clinical criteria as a baseline. First, we consider the capacity of each metric to model PVF production using a rank-based approach, and then consider the metrics' ability to compute finer-resolution clinical measures that are indicative of the underlying strategy. Automation of the clinical criteria and proposed metrics are evaluated on PVF performances for 16 letters from 32 healthy German students (n=512). Weighted phonemic edit distance performed best overall for modelling both production and strategy. | Automatic Data-Driven Approaches for Evaluating the Phonemic Verbal Fluency Task with Healthy Adults |
d218973884 | ||
d15618468 | Keywords: contextual grammar, dependency tree, projective dependency tree. A new variant of structured contextual grammar, which generates dependency trees, is introduced. The new generative model, called dependency contextual grammar, improves both the strong and weak generative power of contextual grammars, while being a potential candidate for the mathematical description of dependency-based syntactic models. | Contextual Grammars and Dependency Trees |
d5931964 | It is common opinion that current hypertextual systems do not allow one to express objectively the information content of documents, but only the view of the "author". Hyperlink building requires heavy and highly specialised human intervention: this task is very expensive, whenever it is possible at all! A different approach, based on NLP methodologies and aiming at automating the development of a hypertext, is proposed here. Anchorage points are inferred both from the content and the structure of documents. A semantic lexicon based on conceptual graph structures is used to guide text understanding. Contextual roles are introduced to model domain-specific concepts relevant to navigation. An off-line activation of useful links has been defined according to explicit user specifications. A simple declarative language (HyDeL) for the definition of such links is available to the user to create his own views on the document base. HERMES is a prototype system implementing our approach. The paper discusses the semantic processing of a document base and highlights the performance of different hypertextual systems derived by HERMES over different languages and knowledge domains. | Might a semantic lexicon support hypertextual authoring? |
d17286939 | For SemEval-2013 Task 2, A and B (Sentiment Analysis in Twitter), we use a rule-based pattern matching system that is based on an existing 'Domain Independent' sentiment taxonomy for English, essentially a highly phrasal sentiment lexicon. We have made some modifications to our set of rules, based on what we found in the annotated training data that was made available for the task. The resulting system scores competitively, especially on task B. | teragram: Rule-based detection of sentiment phrases using SAS Sentiment Analysis |
d42438206 | A formalism for the representation of "semantic emphases" is introduced, using principal and accessory instantiations. It makes it possible to convert predicate expressions into network-like structures. As an application, criteria for obligatory and optional actants are dealt with. | INSTANTIATIONS AND (OBLIGATORY VS. OPTIONAL) ACTANTS |
d21085828 | A multilingual machine translation system. I briefly describe here the machine translation system Distributed Language Translation (DLT) and, in that connection, take up the Nordic languages. DLT is an extensive research and development project run by the Dutch software company Buro voor Systeemontwikkeling (BSO/Research in Utrecht) with funding from the Dutch Ministry of Economic Affairs. The project is in a so far non-commercial seven-year phase (1985-1991) that is to lead to a prototype of a translation system for non-literary English and French. The prototype covers only two languages, but DLT is intended from the outset to become multilingual, which means that it must be modularly extensible. Preparatory studies on the applicability of DLT's grammar model to other languages (and implementations of limited scope) are therefore already being carried out in cooperation with researchers at universities in the countries concerned and with other experts; among these languages are some of the Nordic languages. It is possible that the development of source- or target-language systems for further languages will begin within DLT before 1991. With this in view, I take up the question of how the Nordic languages can be connected to DLT. Extensibility and distribution: DLT's translation method is governed by two premises: the extensibility requirement and the idea from which the designation Distributed derives, namely the distribution of the translation process. The extensibility requirement makes it necessary to create a well-defined interface to which arbitrary source and target languages can be attached without existing parts of the system having to be adapted. In DLT this interface is an intermediate language; DLT's intermediate language is a slightly modified version of Esperanto. The choice of Esperanto | ATT KNYTA NORDENS SPRÅK TILL ETT MÅNGSPRÅKIGT DATORÖVERSÄTTNINGSSYSTEM (Connecting the Nordic Languages to a Multilingual Machine Translation System) |
d10275081 | We present a new framework for classifying common nouns that extends named-entity classification. We used a fixed set of 26 semantic labels, which we called supersenses. These are the labels used by lexicographers developing WordNet. This framework has a number of practical advantages. We show how information contained in the dictionary can be used as additional training data that improves accuracy in learning new nouns. We also define a more realistic evaluation procedure than cross-validation. | Supersense Tagging of Unknown Nouns in WordNet |
d31769438 | The main Lingenio MT products are based on rule-based architectures. In the presentation we show how knowledge from corpora is integrated into the systems using the language analysis and translation components in a bootstrapping approach. This relates to the bilingual dictionaries, but also to learning decisions concerning the selection of syntactic rules and semantic readings in parsing and semantic evaluation. These strategies contribute both to improving the quality of the systems and to shortening the go-to-market of new products significantly. A number of attractive spin-off functions can also be generated from them, which, in addition, can be used for designing new types of products and as preparatory and post-editing features in MT systems whose core is of type SMT. | Hybrid Strategies for better products and shorter time-to-market |
d227230517 | One of the remaining challenges for aspect term extraction in sentiment analysis resides in the extraction of phrase-level aspect terms, as it is non-trivial to determine the boundaries of such terms. In this paper, we aim to address this issue by incorporating the span annotations of constituents of a sentence to leverage syntactic information in neural network models. To this end, we first construct a constituency lattice structure based on the constituents of a constituency tree. Then, we present two approaches to encoding the constituency lattice using BiLSTM-CRF and BERT as the base models, respectively. We experimented on two benchmark datasets to evaluate the two models, and the results confirm their superiority, with respective gains of 3.17 and 1.35 points in F1-Measure over the current state of the art. The improvements justify the effectiveness of the constituency lattice for aspect term extraction. | Constituency Lattice Encoding for Aspect Term Extraction |
d1121208 | We present the CimS submissions to the 2014 Shared Task for the language pair EN→DE. We address the major problems that arise when translating into German: complex nominal and verbal morphology, productive compounding and flexible word ordering. Our morphology-aware translation systems handle word formation issues on different levels of morpho-syntactic modeling. | CimS - The CIS and IMS joint submission to WMT 2014 translating from English into German |
d14170272 | Cultural heritage, and other special domains, pose a particular problem for information retrieval: evaluation requires a dedicated test collection that takes the particular documents and information requests into account, but building such a test collection requires substantial human effort. This paper investigates methods of generating a document retrieval test collection from a search engine's transaction log, based on submitted queries and user-click data. We test our methods on a museum's search log file, and compare the quality of the generated test collections against a collection with manually generated and judged known-item topics. Our main findings are the following. First, the test collection derived from a transaction log corresponds well to the actual search experience of real users. Second, the ranking of systems based on the derived judgments corresponds well to the ranking based on the manual topics. Third, deriving pseudo-relevance judgments from a transaction log file is an attractive option in domains where dedicated test collections are not readily available. | Deriving a Domain Specific Test Collection from a Query Log |
d8683986 | This paper discusses the extension of ViewGen, a program for belief ascription, to the area of intensional object identification with applications to battle environments, and its combination in an overall system with MGR, a Model-Generative Reasoning system, and PREMO, a semantics-based parser for robust parsing of noisy message data. ViewGen represents the beliefs of agents as explicit, partitioned proposition-sets known as environments. Environments are convenient, even essential, for addressing important pragmatic issues of reasoning. The paper concentrates on showing that the transfer of information in intensional object identification and belief ascription itself can both be seen as different manifestations of a single environment-amalgamation process. The entities we shall be concerned with will be ones, for example, that the system itself believes to be separate entities while it is computing the beliefs and reasoning of a hostile agent that believes them to be the same entity (e.g. we believe enemy radar shows two of our ships to be the same ship, or vice versa; the KAL disaster should bring the right kind of scenario to mind). The representational issue we address is how to represent that fictional single entity in the belief space of the other agent, and what content it should have given that it is an amalgamation of two real entities. A major feature of the paper is our work on embedding within the ViewGen belief-and-point-of-view system the knowledge representation system of our MGR reasoner, and then bringing together the multiple viewpoints offered by ViewGen with the multiple representations of MGR. The fusing of these techniques, we believe, offers a very strong system for extracting message gists from texts and reasoning about them. PREMO: A ROBUST PARSER OF MESSAGES. PREMO, the PREference Machine Organization, is a knowledge-based Preference Semantics parser (Wilks | Belief Ascription and Model Generative Reasoning: joining two paradigms to a robust parser of messages |
d198976900 | Large amounts of user-generated data are posted online in social media platforms, including user preferences, dining and leisure activities, events, news and personal blogs. This has resulted in varying efforts to process social media data using NLP and ML algorithms for topic classification, sentiment analysis and detection, and event classification. Such information is problematic to process, as it tends to be short, informal, inconsistent, and highly contextualized. A series of tasks is involved, from collecting, pre-processing, classification and extraction, before social media data can be used. In this study, we built a multi-class classifier model to process Facebook posts in order to identify a user's online persona based on his/her preferences. Information extraction is then applied to find relevant data from the classified posts that can be used to generate a description of the user's online persona. The classifier currently achieves an accuracy of 76.02% and an F1 score of 73.10% using 10-fold cross validation on a dataset containing 16,682 posts. | Classifying and Extracting Data from Facebook Posts for Online Persona Identification |
d226284004 | ||
d252624619 | In the use and creation of current deep learning models, the only number that enters the overall computation is the frequency value associated with each word form in the corpus, which is used as its substitute. Frequency values come in two forms: absolute and relative. Absolute frequency is used indirectly when selecting the vocabulary against which word embeddings are created: the cutoff threshold is usually fixed at the 30/50K most frequent words. Relative frequency comes in directly when computing word embeddings based on co-occurrence values of the tokens included in a window of 2-5 adjacent tokens. The latter values are then used to compute similarity, mostly based on cosine distance. In this paper we evaluate the impact of these two frequency parameters on a small corpus of Italian sentences with two main features: the presence of very rare words and of non-canonical structures. The results, computed on the basis of a perusal of BERT's raw embeddings, show that the two parameters conspire to decide the level of predictability. | Measuring Similarity by Linguistic Features rather than Frequency |
d38854531 | RESULTS. The Synchronetics entry in the MUC-3 competition is a full-parser, semantic net-based system written in C. Our system attempts to fill the first four slots of each template and, in some cases, the three perpetrator slots and the human-target-ids slot. The Synchronetics system achieved the following official scores on the tst2 corpus (slot: REC PRE OVG FAL): template-id 31 51 49; incident-date 17 55 0; incident-type 19 61 0 0; category 24 56 28 1 1; indiv-perps 0 * *; org-perps 0 * *; perp-confidence 0 * * 0; phys-target-ids 0 * *; phys-target-num 0 * *; phys-target-types 0 * * 0; human-target-ids 2 100 0; human-target-num 0 * *; human-target-types 0 * * 0; target-nationality 0 * * 0; instrument-types 0 * * 0; incident-location 0 * *; phys-effects 0 * * 0; human-effects 0 * * | SYNCHRONETICS: MUC-3 TEST RESULTS AND ANALYSIS |
d16428009 | Text mining for biomedicine requires a significant amount of domain knowledge. Much of this information is contained in biomedical ontologies. Developers of text mining applications often look for appropriate ontologies that can be integrated into their systems, rather than develop new ontologies from scratch. However, there is often a lack of documentation of the qualities of the ontologies. A number of methodologies for evaluating ontologies have been developed, but it is difficult for users to select an ontology by means of these methods. In this paper, we propose a framework for selecting the most appropriate ontology for a particular text mining application. The framework comprises three components, each of which considers different aspects of the requirements that text mining applications place on ontologies. We also present an experiment, based on the framework, in choosing an ontology for a gene normalization system. | Selecting an Ontology for Biomedical Text Mining |
d812819 | One of the main tasks of the Natural Language Processing Group at the Faculty of Mathematics, University of Belgrade is the development of various lexical resources. Among them the two most important ones are: the system of morphological dictionaries of Serbian (SMD) in Intex format and the Serbian wordnet (SWN) developed in the scope of the Balkanet project. Although these two resources represent dictionaries of a different type, developed using different models, each of them contains information that can either be incorporated into the other dictionary or that can be used in its development. In this paper we outline some of the most interesting examples. We also present an integrated programming tool that enables the integration of these diverse lexical resources, as well as possible applications. We envisage the use of these resources in defining and linking lexical data in a way that will enable their more effective retrieval, integration, and reuse across various Web applications. | Combining Heterogeneous Lexical Resources |
d18044918 | In this paper, we present a new approach to writing tools that extends beyond the rudimentary spelling and grammar checking to the content of the writing itself. Linguistic methods have long been used to detect familiar lexical patterns in the text to aid automatic summarization and translation of documents. We apply these methods to determine the quality of the text and implement new techniques for measuring readability and providing feedback to authors on how to improve the quality of their documents. We take an extended view of readability that considers text cohesion, propositional density, and word familiarity. We provide simple feedback to the user detailing the most and least readable sentences, the sentences most densely packed with information and the most cohesive words in their document. Commonly used verbose words and phrases in the text, as identified by The Plain English Campaign, can be replaced with user-selected replacements. Our techniques were implemented as a free download extension to the Open Office word processor generating 6,500 downloads to date. | The Linguistics of Readability: The Next Step for Word Processing |
d14539387 | There are many examples in which a word changes its polarity from domain to domain. For example, unpredictable is positive in the movie domain, but negative in the product domain. Such words cannot be entered in a "universal sentiment lexicon" which is supposed to be a repository of words with polarity invariant across domains. Rather, we need to maintain separate domain specific sentiment lexicons. The main contribution of this paper is to present an effective method of generating a domain specific sentiment lexicon. For a word whose domain specific polarity needs to be determined, the approach uses the Chi-Square test to detect if the difference is significant between the counts of the word in positive and negative polarity documents. We extract 274 words that are polar in the movie domain, but are not present in the universal sentiment lexicon. Our overall accuracy is around 60% in detecting movie domain specific polar words. | Detecting Domain Dedicated Polar Words |
d16003130 | WordNet is extensively used as a major lexical resource in NLP. However, its quality is far from perfect, and this alters the results of applications using it. We propose here to complement previous efforts for "cleaning up" the top-level of its taxonomy with semi-automatic methods based on the detection of errors at the lower levels. The methods we propose test the coherence of two sources of knowledge, exploiting ontological principles and semantic constraints. | Towards semi-automatic methods for improving WordNet |
d259376681 | In this system paper, we describe our submission for the 11th task of SemEval-2023: Learning with Disagreements, or Le-Wi-Di for short. In the task, the assumption that there is a single gold label in NLP tasks such as hate speech or misogyny detection is challenged, and instead the opinions of multiple annotators are considered. The goal is instead to capture the agreements/disagreements of the annotators. For our system, we utilize the capabilities of modern large language models as our backbone and investigate various techniques built on top, such as ensemble learning, multi-task learning, or Gaussian processes. Our final submission shows promising results and we achieve an upper-half finish. | CICL_DMS at SemEval-2023 Task 11: Learning With Disagreements (Le-Wi-Di) |
d259376697 | MultiCoNER-II is a fine-grained Named Entity Recognition (NER) task that aims to identify ambiguous and complex named entities in multiple languages, with a small amount of contextual information available. To address this task, we propose a multi-stage information retrieval (IR) pipeline that improves the performance of language models for fine-grained NER. Our approach involves leveraging a combination of a BM25-based IR model and a language model to retrieve relevant passages from a corpus. These passages are then used to train a model that utilizes a weighted average of losses. The prediction is generated by a decoder stack that includes a projection layer and conditional random field. To demonstrate the effectiveness of our approach, we participated in the English track of the MultiCoNER-II competition. Our approach yielded promising results, which we validated through detailed analysis. | IITD at SemEval-2023 Task 2: A Multi-Stage Information Retrieval Approach for Fine-Grained Named Entity Recognition |
d257767742 | In order to provide personalized interactions in a conversational system, responses must be consistent with the user and agent persona while still being relevant to the context of the conversation. Existing personalized conversational systems increase the consistency of the generated response by leveraging persona descriptions, which sometimes tend to generate irrelevant responses to the context. To solve this problem, we propose to extend the persona-agnostic meta-learning (PAML) framework (Madotto et al., 2019) by adding knowledge from the ConceptNet knowledge graph (Speer et al.) with a multi-hop attention mechanism (Tran and Niederée, 2018). Knowledge is a concept in triple form that helps in conversational flow. The multi-hop attention mechanism helps select the most appropriate triples with respect to the conversational context and persona description, as not all triples are beneficial for generating responses. The meta-learning (PAML) framework allows quick adaptation to different personas by utilizing only a few dialogue samples from the same user. Our experiments on the Persona-Chat dataset show that our method outperforms in terms of persona-adaptability, resulting in more persona-consistent responses, as evidenced by the entailment (Entl) score in the automatic evaluation and the consistency (Con) score in human evaluation. | KnowPAML: A Knowledge Enhanced Framework for Adaptable Personalized Dialogue Generation Using Meta-Learning |
d219310178 | ||
d112119 | Wikipedia is a potentially very useful source of information, but intuitively it is difficult to have confidence in the quality of an encyclopedia that anyone can modify. One aspect of correctness is writing style, which we examine in a computer based study of the full Japanese Wikipedia. This is possible because Japanese is a language with clearly distinct writing styles using, e.g., different verb forms. We find that the writing style of the Japanese Wikipedia is largely consistent with the style guidelines for the project. Exceptions appear to occur primarily in articles with a small number of changes and editors. | Language Homogeneity in the Japanese Wikipedia |
d9225214 | Kernel-based methods are widely used for relation extraction task and obtain good results by leveraging lexical and syntactic information. However, in biomedical domain these methods are limited by the size of dataset and have difficulty in coping with variations in text. To address this problem, we propose Extended Dependency Graph (EDG) by incorporating a few simple linguistic ideas and include information beyond syntax. We believe the use of EDG will enable machine learning methods to generalize more easily. Experiments confirm that EDG provides up to 10% f-value improvement over dependency graph using mainstream kernel methods over five corpora. We conducted additional experiments to provide a more detailed analysis of the contributions of individual modules in EDG construction. | An extended dependency graph for relation extraction in biomedical texts |
d42452608 | We improved a sentiment classifier for predicting document-level sentiments from Twitter by using multi-channel lexicon embeddings. The core of the architecture is based on CNN-BiLSTM, which can capture high-level features and long-term dependencies in documents. We also applied the multi-channel method to the lexicon to improve lexicon features. The macro-averaged F1 score of our model outperformed the other classifiers in this paper by 1-4%. Our model achieved an F1 score of 64% on the SemEval Task 4 (2013-2016) datasets when multi-channel lexicon embedding was applied with 100 dimensions of word embedding. | Multi-Channel Lexicon Integrated CNN-BiLSTM Models for Sentiment Analysis |
d96462644 | In this paper we describe our 2nd-place FEVER shared-task system that achieved a FEVER score of 62.52% on the provisional test set (without additional human evaluation), and 65.41% on the development set. Our system is a four-stage model consisting of document retrieval, sentence retrieval, natural language inference and aggregation. Retrieval is performed leveraging task-specific features, and then a natural language inference model takes each of the retrieved sentences paired with the claimed fact. The resulting predictions are aggregated across retrieved sentences with a Multi-Layer Perceptron, and re-ranked corresponding to the final prediction. | UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF) |
d233029503 | This paper discusses the acquisition of Chinese tones by students with Indo-European language backgrounds. We conducted two experiments: (1) Using Zhang (2006)'s 'Somatically-Enhanced Approach' (SEA), we carried out small-scale teaching experiments on the effectiveness of SEA for error correction with intermediate French and Russian students. The Somatically-Enhanced Approach is centered on the body, teaching through humming, clapping, rhythm and movement to increase learners' sensitivity to tone and rhythm via the rhythm of the language. The data in this thesis come from the oral Chinese class test output of six French and Russian exchange students at a private university in Taiwan. (2) In the second experiment, the entire spoken corpus of the French and Russian students was provided to ten native-speaking Chinese teachers for analysis. After one semester of study with the Somatically-Enhanced Approach, the Russian and French students demonstrated that they could produce the correct tones when speaking Chinese, with enhanced fluency in natural speech. The results of this study are presented through quantitative (statistical) data and visualization, and Praat was used to analyze the collected classroom spoken data and explore the sources of the errors. | French and Russian students' production of Mandarin tones |
d3890388 | There need be no real dispute on this panel about what is meant, in the broadest terms, by formal semantics (FS) when opposed to common-sense semantics (CSS): after registering his complaints and worries, the opposition David Israel opts for in his paper is broadly the one adopted here, and the model-theoretic semanticists he mentions will do just fine for me, and I suspect for Karen Sparck-Jones in her characterisation of a "logicist" approach to natural language processing. As will appear below, though, I want to list a range of strengths of FS commitment, not all of which are model-theoretic. So, whatever we find to argue about on this panel, it needn't be those two terms. Formal semantics (henceforth FS), at least as it relates to computational language understanding, is in one way rather like connectionism, though without the crucial prop Sejnowski's work (1986) is now widely believed to give to the latter: both are old doctrines returned, like the Bourbons, having learned nothing and forgotten nothing, but FS has nothing to show as a showpiece success after all the intellectual groaning and effort. Here, I must register a small historical protest at Israel's claim that "until Montague, undeviatingly, the techniques of pure mathematical semantics were deployed for formal or artificial languages". It all depends on what you mean by techniques, but Carnap in his Meaning and Necessity (1947) certainly thought he was applying Tarskian insights to natural language analysis. And the arguments surrounding that work, and others, were very like those we are having now. I need that point if the Bourbon analogy is to stick: FS applied to natural languages is anything but new. But there have been recent changes in style and presentation in the purely computational area as a result of the return: many working in the computational semantics of natural language now choose to express their notations in ways more acceptable to FS than they would have bothered to do, say, ten years ago. That may be a gain for perspicuity or may be a waste of time in individual cases, but there are no clear examples, I suggest, of computational systems where a FS theory offers anything integral or fundamental to the success of the processes that could not have been achieved by those same processes described at a more common-sense level (what I am calling common-sense semantics, or CSS). However, I do not at all intend to define CSS by any particular type or level of notational description: by it I mean a primary commitment to the solution of the main problems of language processing, those problems that have obstructed progress in the field for thirty years. That set I take to include: large-scale lexical ambiguity (i.e. against realistically large sense ambiguity for lexical items of English), the problems of phrase, and other constituent, attachment, where those require meanings to fix, and the whole mass of problems that collect around the notions of expertise, plans, intentions, goals, common knowledge, reference and its relationship to topic assumption etc. | On keeping logic in its place |
d51875779 | Neural Machine Translation (NMT) is notorious for its need for large amounts of bilingual data. An effective approach to compensate for this requirement is Multi-Task Learning (MTL) to leverage different linguistic resources as a source of inductive bias. Current MTL architectures are based on the SEQ2SEQ transduction, and (partially) share different components of the models among the tasks. However, this MTL approach often suffers from task interference, and is not able to fully capture commonalities among subsets of tasks. We address this issue by extending the recurrent units with multiple blocks along with a trainable routing network. The routing network enables adaptive collaboration by dynamic sharing of blocks conditioned on the task at hand, input, and model state. Empirical evaluation on two low-resource translation tasks, English to Vietnamese and Farsi, shows +1 BLEU score improvements compared to strong baselines. | Adaptive Knowledge Sharing in Multi-Task Learning: Improving Low-Resource Neural Machine Translation |
d8086678 | We describe an on-going project whose primary aim is to establish the technology of producing closed captions for TV | Project for production of closed-caption TV programs for the hearing impaired |
d251980393 | | Journey around Neural Machine Translation quality |
d248779944 | AI systems embodied in the physical world face a fundamental challenge of partial observability: operating with only a limited view and knowledge of the environment. This creates challenges when AI systems try to reason about language and its relationship with the environment: objects referred to through language (e.g. when giving instructions) are not immediately visible. Actions by the AI system may be required to bring these objects into view. A good benchmark to study this challenge is the Dynamic Referring Expression Recognition (dRER) task, where the goal is to find a target location by dynamically adjusting the field of view (FoV) in partially observed 360° scenes. In this paper, we introduce HOLM, Hallucinating Objects with Language Models, to address the challenge of partial observability. HOLM uses large pre-trained language models (LMs) to infer object hallucinations for the unobserved part of the environment. Our core intuition is that if a pair of objects co-appears in an environment frequently, our usage of language should reflect this fact about the world. Based on this intuition, we prompt language models to extract knowledge about object affinities, which gives us a proxy for spatial relationships of objects. Our experiments show that HOLM performs better than the state-of-the-art approaches on two datasets for dRER, allowing us to study generalization for both indoor and outdoor settings. | HOLM: Hallucinating Objects with Language Models for Referring Expression Recognition in Partially-Observed Scenes |
d21918825 | Users will interact with an individual app on smart devices (e.g., phone, TV, car) to fulfill a specific goal (e.g. find a photographer), but users may also pursue more complex tasks that will span multiple domains and apps (e.g. plan a wedding ceremony). Planning and executing such multi-app tasks are typically managed by users, considering the required global context awareness. To investigate how users arrange domains/apps to fulfill complex tasks in their daily life, we conducted a user study on 14 participants to collect such data from their Android smart phones. This document 1) summarizes the techniques used in the data collection and 2) provides a brief statistical description of the data. This data guides future directions for researchers in the fields of conversational agents and personal assistants, etc. The data is available at http://AppDialogue.com. | AppDialogue: Multi-App Dialogues for Intelligent Assistants |
d14331079 | ||
d9610093 | This paper describes the supporting resources provided for the BioNLP Shared Task 2011. These resources were constructed with the goal to alleviate some of the burden of system development from the participants and allow them to focus on the novel aspects of constructing their event extraction systems. With the availability of these resources we also seek to enable the evaluation of the applicability of specific tools and representations towards improving the performance of event extraction systems. Additionally we supplied evaluation software and services and constructed a visualisation tool, stav, which visualises event extraction results and annotations. These resources helped the participants make sure that their final submissions and research efforts were on track during the development stages and evaluate their progress throughout the duration of the shared task. The visualisation software was also employed to show the differences between the gold annotations and those of the submitted results, allowing the participants to better understand the performance of their system. The resources, evaluation tools and visualisation tool are provided freely for research purposes and can be found at | BioNLP Shared Task 2011: Supporting Resources |
d36706044 | Simultaneous speech translation attempts to produce high quality translations while at the same time minimizing the latency between production of words in the source language and translation into the target language. The variation in syntactic structure between the source and target language can make this task challenging: translating from a language where the verb is at the end increases latency when translating incrementally into a language where the verb appears after the subject. | |
d3217392 | Multi-source statistical machine translation is the process of generating a single translation from multiple inputs. Previous work has focused primarily on selecting from potential outputs of separate translation systems, and solely on multi-parallel corpora and test sets. We demonstrate how multi-source translation can be adapted for multiple monolingual inputs. We also examine different approaches to dealing with multiple sources, including consensus decoding, and we present a novel method of input combination to generate lattices for multi-source translation within a single translation model. | Word Lattices for Multi-Source Translation |
d543369 | In this paper, we introduce a new framework for recognizing textual entailment which depends on extraction of the set of publicly held beliefs, known as discourse commitments, that can be ascribed to the author of a text or a hypothesis. Once a set of commitments has been extracted from a t-h pair, the task of recognizing textual entailment is reduced to the identification of the commitments from a t which support the inference of the h. Promising results were achieved: our system correctly identified more than 80% of examples from the RTE-3 Test Set, without the need for additional sources of training data or other web-based resources. | A Discourse Commitment-Based Framework for Recognizing Textual Entailment |
d588986 | We present a new method for detecting and disambiguating named entities in open domain text. A disambiguation SVM kernel is trained to exploit the high coverage and rich structure of the knowledge encoded in an online encyclopedia. The resulting model significantly outperforms a less informed baseline. | Using Encyclopedic Knowledge for Named Entity Disambiguation |
d220046429 | One great challenge in neural sequence labeling is the data sparsity problem for rare entity words and phrases. Most test set entities appear only a few times and are even unseen in the training corpus, yielding a large number of out-of-vocabulary (OOV) and low-frequency (LF) entities during evaluation. In this work, we propose approaches to address this problem. For OOV entities, we introduce local context reconstruction to implicitly incorporate contextual information into their representations. For LF entities, we present delexicalized entity identification to explicitly extract their frequency-agnostic and entity-type-specific representations. Extensive experiments on multiple benchmark datasets show that our model significantly outperforms all previous methods and achieves new state-of-the-art results. Notably, our methods surpass the model fine-tuned on pre-trained language models without external resources. | Handling Rare Entities for Neural Sequence Labeling |
d232021961 | ||
d38901491 | ||
d53234523 | Solving composite tasks, which consist of several inherent sub-tasks, remains a challenge in the research area of dialogue. Current studies have tackled this issue by manually decomposing the composite tasks into several sub-domains. However, much human effort is inevitable. This paper proposes a dialogue framework that autonomously models meaningful sub-domains and learns the policy over them. Our experiments show that our framework outperforms the baseline without sub-domains by 11% in terms of success rate, and is competitive with that with manually defined sub-domains. | Autonomous Sub-domain Modeling for Dialogue Policy with Hierarchical Deep Reinforcement Learning |
d30952313 | This paper tackles the problem of timeline generation from traditional news sources. Our system builds thematic timelines for a general-domain topic defined by a user query. The system selects and ranks events relevant to the input query. Each event is represented by a one-sentence description in the output timeline. We present an inter-cluster ranking algorithm that takes events from multiple clusters as input and selects the most salient and relevant events. A cluster, in our work, contains all the events happening on a specific date. Our algorithm utilizes the temporal information derived from a large collection of extensively temporally analyzed texts. Such temporal information is combined with textual contents into an event scoring model in order to rank events based on their salience and query-relevance. | Ranking Multidocument Event Descriptions for Building Thematic Timelines |
d202781416 | Contextualized word representations are able to give different representations for the same word in different contexts, and they have been shown to be effective in downstream natural language processing tasks, such as question answering, named entity recognition, and sentiment analysis. However, evaluation on word sense disambiguation (WSD) in prior work shows that using contextualized word representations does not outperform the state-of-the-art approach that makes use of non-contextualized word embeddings. In this paper, we explore different strategies of integrating pre-trained contextualized word representations, and our best strategy achieves accuracies exceeding the best prior published accuracies by significant margins on multiple benchmark WSD datasets. | Improved Word Sense Disambiguation Using Pre-Trained Contextualized Word Representations |
d8064454 | In this paper, we introduce our corpora under development, which are recorded in a real environment. These corpora comprise dialogues collected in hospitals with the aim of developing a nursing service support system through a comprehensive understanding of nursing activities. We use the corpora to analyze how nurses perform their nursing duties and how they express the performance of their tasks. To understand nursing activities, we investigated nursing services and the relevant medical charts by using the corpora. In the paper, we show features and promising applications of the corpora. | Building Dialogue Corpora for Nursing Activity Analysis |
d261349832 | This paper presents our participating system in the Chinese Abstract Meaning Representation Parsing Evaluation Task at the 22nd China National Conference on Computational Linguistics. Chinese Abstract Meaning Representation (CAMR) not only captures sentence semantics through graphical representation but also ensures the alignment of concepts and relations. Recently, generative large language models have demonstrated exceptional abilities in generation and generalization across various natural language processing tasks. Motivated by these advancements, we fine-tune the Baichuan-7B model to directly generate serialized CAMR from the provided text in an end-to-end manner. Experimental results demonstrate that our system achieves comparable performance to existing methods, eliminating the need for part-of-speech, dependency syntax, and complex rules. | System Report for CCL23-Eval Task 2: WestlakeNLP, Investigating Generative Large Language Models for Chinese AMR Parsing |
d250390599 | This paper contains words that are offensive. Lexicons play an important role in content moderation, often being the first line of defense. However, little or no literature exists analyzing the representation of queer-related words in them. In this paper, we consider twelve well-known English lexicons containing inappropriate words and analyze how gender and sexual minorities are represented in these lexicons. Our analyses reveal that several of these lexicons barely make any distinction between pejorative and non-pejorative queer-related words. We express concern that such unfettered usage of non-pejorative queer-related words may impact queer presence in mainstream discourse. Our analyses further reveal that the lexicons have poor overlap in queer-related words. We finally present a quantifiable measure of consistency and show that several of these lexicons are not consistent in how they include (or omit) queer-related words. | Revisiting Queer Minorities in Lexicons |
d4160293 | This paper describes the lemmatisation and tagging guidelines developed for the "Spoken Dutch Corpus", and lays out the philosophy behind the high granularity tagset that was designed for the project. To bootstrap the annotation of large quantities of material (10 million words) with this new tagset we tested several existing taggers and tagger generators on initial samples of the corpus. The results show that the most effective method, when trained on the small samples, is a high quality implementation of a Hidden Markov Model tagger generator. | Part of Speech Tagging and Lemmatisation for the Spoken Dutch Corpus |
d232021994 | ||
d10211215 | In this paper, we investigate the problem of Ezafe recognition in the Persian language. Ezafe is an unstressed vowel that is usually not written, but is intelligently recognized and pronounced by humans. The Ezafe marker can be placed in noun phrases, adjective phrases, and some prepositional phrases, linking the head and modifiers. Ezafe recognition in Persian is in fact a homograph disambiguation problem, which is useful for Persian language applications such as TTS. In this paper, Part of Speech tags augmented by the Ezafe marker (POSE) have been used to train a probabilistic model for Ezafe recognition. In order to build this model, a ten-million-word tagged corpus was used for training the system. For building the probabilistic model, three different approaches were used: a Maximum Entropy POSE tagger, a Conditional Random Fields (CRF) POSE tagger, and a statistical machine translation approach based on a parallel corpus. It is shown that, compared to previous work, the CRF POSE tagger achieves outstanding results. | A Probabilistic Approach to Persian Ezafe Recognition |
d14526181 | Multi-modal models that learn semantic representations from both linguistic and perceptual input outperform language-only models on a range of evaluations, and better reflect human concept acquisition. Most perceptual input to such models corresponds to concrete noun concepts and the superiority of the multi-modal approach has only been established when evaluating on such concepts. We therefore investigate which concepts can be effectively learned by multi-modal models. We show that concreteness determines both which linguistic features are most informative and the impact of perceptual input in such models. We then introduce ridge regression as a means of propagating perceptual information from concrete nouns to more abstract concepts that is more robust than previous approaches. Finally, we present weighted gram matrix combination, a means of combining representations from distinct modalities that outperforms alternatives when both modalities are sufficiently rich. | Multi-Modal Models for Concrete and Abstract Concept Meaning |
d11987101 | The event structure (aktionsart) is a widely discussed issue in the representation of verbal semantics across languages. However, there are still problems in classifying verbs into state, activity, accomplishment, achievement, and semelfactive. It is also not clear whether these differences are embedded at the lexical, semantic, or syntactic level. In this paper, we discuss the primitives of events from an ontological point of view. We suggest that event types should be discussed at the usage level of language. Based on the Generative Lexicon theory, we provide a semantic representation of verbs which can better explain how the semantics of verbs and their composition with complements determine the event type they denote. | Primitives of Events and the Semantic Representation |
d14438207 | In this paper, we report on a two-part experiment aiming to assess and compare the performance of two types of automatic speech recognition (ASR) systems on two different computational platforms when used to augment dictation workflows. The experiment was performed with a sample of speakers of three major languages and with different linguistic profiles: non-native English speakers; non-native French speakers; and native Spanish speakers. The main objective of this experiment is to examine ASR performance in translation dictation (TD) and medical dictation (MD) workflows without manual transcription vs. with transcription. We discuss the advantages and drawbacks of a particular ASR approach in different computational platforms when used by various speakers of a given language, who may have different accents and levels of proficiency in that language, and who may have different levels of competence and experience dictating large volumes of text, and with ASR technology. Lastly, we enumerate several areas for future research. | Assessing the Performance of Automatic Speech Recognition Systems When Used by Native and Non-Native Speakers of Three Major Languages in Dictation Workflows |
d14454247 | The pyramid method for content evaluation of automated summarizers produces scores that are shown to correlate well with manual scores used in educational assessment of students' summaries. This motivates the development of a more accurate automated method to compute pyramid scores. Of three methods tested here, the one that performs best relies on latent semantics. | Automated Pyramid Scoring of Summaries using Distributional Semantics |
d19032406 | This paper describes preliminary work on the prosody modeling aspect of a text-to-speech system for Thai. Specifically, the model is designed to predict symbolic markers from text (i.e., prosodic phrase boundaries, accent, and intonation boundaries), and then use these markers to generate pitch, intensity, and durational patterns for the synthesis module of the system. In this paper, a novel method for annotating the prosodic structure of Thai sentences based on a dependency representation of syntax is presented. The goal of the annotation process is to predict from text the rhythm of the input sentence when spoken according to its intended meaning. The encoding of the prosodic structure is established by minimizing speech disrhythmy while maintaining congruency with syntax. That is, each word in the sentence is assigned a prosodic feature called strength dynamic which is based on the dependency representation of syntax. The strength dynamics assigned are then used to obtain rhythmic groupings in terms of a phonological unit called the foot. Finally, the foot structure is used to predict the durational pattern of the input sentence. The aforementioned process has been tested on a set of ambiguous sentences, which represents various structural ambiguities involving five types of compounds in Thai. | Prosodic Annotation in a Thai Text-to-speech System * |
d252819466 | With the rapid growth of scientific papers, understanding the changes and trends in a research area is rather time-consuming. The first challenge is to find related and comparable articles for the research. Comparative citations compare co-cited papers in a citation sentence and can serve as good guidance for researchers to track a research area. We thus go through comparative citations to find comparable objects and build a comparative scientific summarization corpus (CSSC). We then propose the comparative graph-based summarization (CGSUM) method to create comparative summaries using citations as guidance. The comparative graph is constructed using sentences as nodes and three different relationships of sentences as edges. The relationship that sentences occur in the same paper is used to calculate the salience of sentences, the relationship that sentences occur in two different papers is used to calculate the difference between sentences, and the relationship that sentences are related to citations is used to calculate the commonality of sentences. Experiments show that CGSUM outperforms comparative baselines on CSSC and performs well on DUC2006 and DUC2007. | Comparative Graph-based Summarization of Scientific Papers Guided by Comparative Citations |
d235097374 | ||
d227230575 | ||
d37059607 | What do you do when you need to find terminology in a foreign language and available bilingual sources are of no help at all? With the fast-growing complexity in all fields of knowledge and the parallel creation of neologisms needed to distribute the new knowledge, dictionaries and term databases are lagging behind, in particular when you work with two less widely spoken languages. This paper investigates methods and strategies for solving the problem and proposes to lay the grounds for a Bilingual Web Dictionary on Demand. This virtual dictionary is conceived as a number of knowledge-based methodologies allowing users to get across the language barrier using standard search engines and the Web as corpus. The key methodology aims at identifying a target language text containing an equivalent to the source language term using a so-called Cluster Method. Once a candidate term in the target language is identified, it must be validated. Intended users are translators, lexicographers, terminologists who must be bilingual in order to be able to select and adjust clusters. | The Bilingual Web Dictionary on Demand |
d10835278 | For several years, chunking has been an integral part of MITRE's approach to information extraction. Our work exploits chunking in two principal ways. First, as part of our extraction system (Alembic) (Aberdeen et al., 1995), the chunker delineates descriptor phrases for entity extraction. Second, as part of our ongoing research in parsing, chunks provide the first level of a stratified approach to syntax - the second level is defined by grammatical relations, much as in the SPARKLE effort (Carroll et al., 1997). Because of our ongoing work with chunking, we were naturally interested in evaluating our approach on the common CoNLL task. In this note, we thus present three different evaluations of our work on phrase-level parsing. The first is a baseline of sorts, our own version of the "chunking as tagging" approach introduced by Ramshaw and Marcus (Ramshaw and Marcus, 1995). The second set of results reports the performance of a trainable rule-based system, the Alembic phrase rule parser. As a point of comparison, we also include a third set of measures produced by running the standard Alembic chunker on the common task with little or no adaptation. | Phrase Parsing with Rule Sequence Processors: an Application to the Shared CoNLL Task |
d225062769 | ||
d8803650 | In this paper we describe an approach that both creates crosslingual acoustic monophone model sets for speech recognition tasks and objectively predicts their performance without target-language speech data or acoustic measurement techniques. This strategy is based on a series of linguistic metrics characterizing the articulatory phonetic and phonological distances of target-language phonemes from source-language phonemes. We term these algorithms the Combined Phonetic and Phonological Crosslingual Distance (CPP-CD) metric and the Combined Phonetic and Phonological Crosslingual Prediction (CPP-CP) metric. The particular motivations for this project are the current unavailability and often prohibitively high production cost of speech databases for many strategically important low- and middle-density languages. First, we describe the CPP-CD approach and compare the performance of CPP-CD-specified models to both native language models and crosslingual models selected by the Bhattacharyya acoustic-model distance metric in automatic speech recognition (ASR) experiments. Results confirm that the CPP-CD approach nearly matches those achieved by the acoustic distance metric. We then test the CPP-CP algorithm on the CPP-CD models by comparing the CPP-CP scores to the recognition phoneme error rates. Based on this comparison, we conclude that the CPP-CP algorithm is a reliable indicator of crosslingual model performance in speech recognition tasks. | Borrowing Language Resources for Development of Automatic Speech Recognition for Low- and Middle-Density Languages |
d258463947 | Language and literacy development has come to be influenced by digital technology in the current information age. Comments posted on a video streaming site by a developing bilingual seven-year-old child speaking an autochthonous Philippine language and English were analyzed as to their mean length, syntactic form and pragmatic uses. Results revealed a language development in early English writing comparable to native speaker oral norms extant in the literature. Implications for further language and literacy development of the participant especially in the use of internet and digital technology in learning as well as for language policy in education are also discussed. | Early writing of a bilingual child: A content analysis of his YouTube video comments |
d243865572 | Healthcare has become an increasingly important research topic in recent years. With the growing amount of data in the healthcare domain, there is a great opportunity for deep learning to improve the quality of medical services. However, the complexity of electronic health record (EHR) data is a challenge for the application of deep learning. Specifically, the data produced during hospital admissions are monitored by the EHR system, which includes structured data such as daily body temperature, and unstructured data such as free text and laboratory measurements. Although some preprocessing frameworks have been proposed for specific EHR data, the clinical notes that contain significant clinical value are beyond the realm of their consideration. Besides, whether these different data from various views are all beneficial to medical tasks and how to best utilize these data remain unclear. Therefore, in this paper, we first extract the accompanying clinical notes from EHR and propose a method to integrate these data; we also comprehensively study different models and data leverage methods for better medical task prediction. The results on two medical prediction tasks show that our fused model with different data outperforms the state-of-the-art method that does not use clinical notes, which illustrates the importance of our fusion method and the value of clinical note features. Our code is available at https://github.com/emnlp-mimic/mimic. | How to Leverage Multimodal EHR Data for Better Medical Predictions? |
d51869843 | Neural machine translation systems have been shown to achieve state-of-the-art translation performance for many language pairs. In order to produce a correct translation, MT systems must learn how to disambiguate words with multiple senses and pick the correct translation. We explore the extent to which the word embeddings for ambiguous words are able to disambiguate senses at deeper layers of the NMT encoder, which are thought to represent words with surrounding context. Consistent with previous research, we find that the NMT system fails to translate many ambiguous words correctly. We provide an evaluation framework to use for proposed improvements to word sense disambiguation abilities of NMT systems. | Exploring Word Sense Disambiguation Abilities of Neural Machine Translation Systems |
d123763774 | Les familles de mots produites par deux analyseurs morphologiques, DériF (basé sur des règles) et Morphonette (basé sur l'analogie), appliqués à un même corpus lexical, sont comparées. Cette comparaison conduit à l'examen de trois sous-ensembles : un sous-ensemble commun aux deux systèmes, dont la taille montre que, malgré leurs différences, les approches expérimentées par chaque système sont valides et décrivent en partie la même réalité morphologique ; un sous-ensemble propre à DériF et un autre à Morphonette. Ces ensembles (a) nous renseignent sur les caractéristiques propres à chaque système, et notamment sur ce que l'autre ne peut pas produire, (b) ils mettent en évidence les erreurs d'un système, en ce qu'elles n'apparaissent pas dans l'autre, (c) ils font apparaître certaines limites de la description, notamment celles qui sont liées aux objets et aux notions théoriques comme les familles morphologiques, les bases, l'existence de RCL « transversales » entre les lexèmes qui n'ont pas de relation d'ascendance ou de descendance. Abstract. The word families produced by two morphological analyzers of French, DériF (rule-based) and Morphonette (analogy-based), applied to the same lexical corpus, have been compared. The comparison led us to examine three classes of relations: one subset of relations that are shared by both systems, which shows that, despite their differences, the approaches implemented in these systems are valid and describe, to some extent, one and the same morphological reality; one subset of relations specific to DériF and another one to Morphonette. These sets (a) give us information on the characteristics proper to each system, and especially on what the other system is unable to produce; (b) they highlight the errors of one system, in that they are absent from the results of the other; (c) they reveal some of the limits of the description, especially the ones related to theoretical objects and concepts such as morphological family, base, or the existence of transverse LCR (lexeme construction rules) between lexemes that are neither ascendants nor descendants of each other. | Règles et paradigmes en morphologie informatique lexématique |
d226239388 | ||
d15882618 | The growing amount of available information and the growing importance given to access to technical information enhance the potential role of NLP applications in enabling users to deal with information across a variety of knowledge domains. In this process, lexical resources are crucial. Using and comparing already existing wordnets for common and technical lexica, we set up a basis for integrating these resources without losing their specific information and properties. We demonstrate their compatibility and discuss strategies to overcome the issues arising in their merging, namely aspects concerning conceptual variation, subnet and synset merging, and the incorporation of technical and non-technical information in definitions. As we are using models of the lexicon that mirror the organization of the mental lexicon, the accomplishment of this goal can provide insights into the type of relations holding between common lexical items and terms. Also, the results of integrating such resources can contribute to better intercommunication between experts and non-experts, and provide a useful resource for NLP, particularly for tools simultaneously serving specialist and non-specialist audiences. | Towards Merging Common and Technical Lexicon Wordnets |
d145042669 | This paper presents the development of an annotation scheme for the syntax/semantics interface that may feed into the generation of (ISO-)TimeML style annotations. The annotation scheme accounts for compositionality and calculates the semantic contribution of tense and aspect. The annotation builds on output from syntactic parsers and links information from morphosyntactic cues to a representation grounded in formal semantics/pragmatics that may be used to automatize the process of annotating tense/aspect and temporal relations. | Annotation of the Syntax/Semantics interface as a Bridge between Deep Linguistic Parsing and TimeML |
d8949142 | We describe our linguistic rule-based tagger IceTagger, and compare its tagging accuracy to the TnT tagger, a state-of-the-art statistical tagger, when tagging Icelandic, a morphologically complex language. Evaluation shows that the average tagging accuracy is 91.54% and 90.44%, obtained by IceTagger and TnT, respectively. When tag profile gaps in the lexicon used by the TnT tagger are filled with tags produced by our morphological analyser IceMorphy, TnT's tagging accuracy increases to 91.18%. | Tagging Icelandic text using a linguistic and a statistical tagger |
d252624691 | The aim of this work is to describe the collection created from transcripts of Basque parliamentary speeches. This corpus follows the constraints of the ParlaMint project. The Basque ParlaMint corpus consists of two versions: the first version contains what was said in the Basque Parliament, that is, the original bilingual corpus in Basque and Spanish, to analyse what was said and how it was said, while the second is only in Basque, with the original and translated passages, to promote studies on the content of the parliamentary speeches. | Adding the Basque Parliament Corpus to ParlaMint Project |
d12819449 | A language independent model for recognition and production of word forms is presented. This "two-level model" is based on a new way of describing morphological alternations. All rules describing the morphophonological variations are parallel and relatively independent of each other. Individual rules are implemented as finite state automata, as in an earlier model due to Martin Kay and Ron Kaplan. The two-level model has been implemented as an operational computer program in several places. A number of operational two-level descriptions have been written or are in progress (Finnish, English, Japanese, Rumanian, French, Swedish, Old Church Slavonic, Greek, Lappish, Arabic, Icelandic). The model is bidirectional and is capable of both analyzing and synthesizing word forms. | A GENERAL COMPUTATIONAL MODEL FOR WORD-FORM RECOGNITION AND PRODUCTION |
d227231117 | This paper describes the SUMSUM systems submitted to the Financial Narrative Summarization Shared Task (FNS-2020). We explore a section-based extractive summarization method tailored to the structure of financial reports: our best system parses the report Table of Contents (ToC), splits the report into narrative sections based on the ToC, and applies a BERT-based classifier to each section to determine whether it should be included in the summary. Our best system ranks 4th, 1st, 2nd, and 17th on the Rouge-1, Rouge-2, Rouge-SU4, and Rouge-L official metrics, respectively. We also report results on the validation set using an alternative set of Rouge-based metrics that measure performance with respect to the best-matching of the available gold summaries. | SUMSUM@FNS-2020 Shared Task |
d52011828 | Scientific papers from all disciplines contain many abbreviations and acronyms. In many cases these acronyms are ambiguous. We present a method to choose the contextual correct definition of an acronym that does not require training for each acronym and thus can be applied to a large number of different acronyms with only few instances. We constructed a set of 19,954 examples of 4,365 ambiguous acronyms from image captions in scientific papers along with their contextually correct definition from different domains. We learn word embeddings for all words in the corpus and compare the averaged context vector of the words in the expansion of an acronym with the weighted average vector of the words in the context of the acronym. We show that this method clearly outperforms (classical) cosine similarity. Furthermore, we show that word embeddings learned from a 1 billion word corpus of scientific texts outperform word embeddings learned from much larger general corpora. | Using Word Embeddings for Unsupervised Acronym Disambiguation |
d2694479 | The two-level grammar is investigated as a notation for giving formal specification of the context-free and context-sensitive aspects of natural language syntax. In this paper, a large class of English declarative sentences, including post-noun-modification by relative clauses, is formalized using a two-level grammar. The principal advantages of two-level grammar are: 1) it is very easy to understand and may be used to give a formal description using a structured form of natural language; 2) it is formal with many well-known mathematical properties; and 3) it is directly implementable by interpretation. The significance of the latter fact is that once we have written a two-level grammar for natural language syntax, we can derive a parser automatically without writing any additional specialized computer programs. Because of the ease with which two-level grammars may express logic and their Turing computability, we expect that they will also be very suitable for future extensions to semantics and knowledge representation. | FORMAL SPECIFICATION OF NATURAL LANGUAGE SYNTAX |
d1670352 | Lexical-Functional Grammar (Kaplan and Bresnan, 1982) f-structures are bilexical labelled dependency representations. We show that the Naive Bayes classifier is able to guess missing grammatical function labels (i.e. bilexical dependency labels) with reasonably high accuracy (82-91%). In the experiments we use f-structure parser output for English and German Europarl data, automatically "broken" by replacing grammatical function labels with a generic UNKNOWN label and asking the classifier to restore the label. | Guessing the Grammatical Function of a Non-Root F-Structure in LFG |
d6863195 | We explore a novel application of Question Generation (QG) for authentication use, where questions are widely used to verify user identity for online accounts. In our approach, we prompt users to provide a few sentences about their personal life events. We transform user-provided input sentences into a set of simple fact-based authentication questions. We compared our approach with previous QG systems, and evaluation results show that our approach yielded better performance and the promise of future personalized authentication question generation. | Good Automatic Authentication Question Generation |
d252819318 | We report the construction of a Korean evaluation-annotated corpus, hereafter called 'Evaluation Annotated Dataset (EVAD)', and its use in Aspect-Based Sentiment Analysis (ABSA), extended in order to cover e-commerce reviews containing sentiment and non-sentiment linguistic patterns. The annotation process uses Semi-Automatic Symbolic Propagation (SSP). We built extensive linguistic resources formalized as a Finite-State Transducer (FST) to annotate corpora with detailed ABSA components in the fashion e-commerce domain. The ABSA approach is extended, in order to analyze user opinions more accurately and extract more detailed features of targets, by including aspect values in addition to topics and aspects, and by classifying aspect-value pairs depending on whether values are unary, binary, or multiple. For evaluation, the KoBERT and KcBERT models are trained on the annotated dataset, showing robust performances of F1 0.88 and F1 0.90, respectively, on recognition of aspect-value pairs. | SSP-Based Construction of Evaluation-Annotated Data for Fine-Grained Aspect-Based Sentiment Analysis |
d8216303 | Surface realisers divide into those used in generation (NLG-geared realisers) and those mirroring the parsing process (reversible realisers). While the first rely on grammars not easily usable for parsing, it is unclear how the second type of realisers could be parameterised to yield, from among the set of possible paraphrases, the paraphrase appropriate to a given generation context. In this paper, we present a surface realiser which combines a reversible grammar (used for parsing and doing semantic construction) with a symbolic means of selecting paraphrases. | A Symbolic Approach to Near-Deterministic Surface Realisation using Tree Adjoining Grammar |
d227141494 | ||
d220058061 | ||
d261349538 |