_id: string, length 4–10
text: string, length 0–18.4k
title: string, length 0–8.56k
d219302062
TITLE INDEX: Discourse-Oriented Anaphora Resolution: A Review (Hirst, Graeme); Efficient Easily Adaptable System for Interpreting Natural Language Queries
d2855592
Successful application of multi-view cotraining algorithms relies on the ability to factor the available features into views that are compatible and uncorrelated. This can potentially preclude their use on problems such as coreference resolution that lack an obvious feature split. To bootstrap coreference classifiers, we propose and evaluate a single-view weakly supervised algorithm that relies on two different learning algorithms in lieu of the two different views required by co-training. In addition, we investigate a method for ranking unlabeled instances to be fed back into the bootstrapping loop as labeled data, aiming to alleviate the problem of performance deterioration that is commonly observed in the course of bootstrapping.
Bootstrapping Coreference Classifiers with Multiple Machine Learning Algorithms
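A minimal sketch of the single-view bootstrapping loop described in the abstract above, using two different learners in place of two views: in each round both classifiers label the unlabeled pool, and instances on which they agree are ranked by joint confidence and fed back as labeled data. The choice of learners, the pool-ranking heuristic, and the per-round quota are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def bootstrap(X_lab, y_lab, X_unlab, rounds=5, per_round=20):
    """Single-view bootstrapping with two different learning algorithms."""
    X_lab, y_lab, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    learners = [GaussianNB(), DecisionTreeClassifier(max_depth=5)]
    for _ in range(rounds):
        if len(pool) == 0:
            break
        for clf in learners:
            clf.fit(X_lab, y_lab)
        # Each learner labels the pool; keep instances where both agree,
        # ranked by the product of their confidences (one possible ranking).
        probs = [clf.predict_proba(pool) for clf in learners]
        preds = [clf.classes_[p.argmax(axis=1)] for clf, p in zip(learners, probs)]
        agree = preds[0] == preds[1]
        conf = probs[0].max(axis=1) * probs[1].max(axis=1) * agree
        chosen = np.argsort(-conf)[:per_round]
        chosen = chosen[agree[chosen]]
        X_lab = np.vstack([X_lab, pool[chosen]])
        y_lab = np.concatenate([y_lab, preds[0][chosen]])
        pool = np.delete(pool, chosen, axis=0)
    return learners
```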
d10410491
The RELISH project promotes language-oriented research by addressing a two-pronged problem: (1) the lack of harmonization between digital standards for lexical information in Europe and America, and (2) the lack of interoperability among existing lexicons of endangered languages, in particular those created with the Shoebox/Toolbox lexicon building software. The cooperation partners in the RELISH project are the University of Frankfurt (FRA), the Max Planck Institute for Psycholinguistics (MPI Nijmegen), and Eastern Michigan University, the host of the Linguist List (ILIT). The project aims at harmonizing key European and American digital standards whose divergence has hitherto impeded international collaboration on language technology for resource creation and analysis, as well as web services for archive access. Focusing on several lexicons of endangered languages, the project will establish a unified way of referencing lexicon structure and linguistic concepts, and develop a procedure for migrating these heterogeneous lexicons to a standards-compliant format. Once developed, the procedure will be generalizable to the large store of lexical resources involved in the LEGO and DoBeS projects.
"Rendering Endangered Lexicons Interoperable through Standards Harmonization": the RELISH Project
d554709
The length of a constituent (the number of syllables in a word or the number of words in a phrase), or rhythm, plays an important role in Chinese syntax. This paper systematically surveys the distribution of rhythm in Chinese constructions, using statistical data acquired from a shallow treebank. Based on this survey, we then applied rhythm as a statistical feature to augment a PCFG model in a practical shallow parsing task. Our results show that using the probabilistic rhythm feature significantly improves the performance of our shallow parser.
The Effect of Rhythm on Structural Disambiguation in Chinese
d11896512
This paper provides an algorithmic framework for learning statistical models involving directed spanning trees, or equivalently non-projective dependency structures. We show how partition functions and marginals for directed spanning trees can be computed by an adaptation of Kirchhoff's Matrix-Tree Theorem. To demonstrate an application of the method, we perform experiments which use the algorithm in training both log-linear and max-margin dependency parsers. The new training methods give improvements in accuracy over perceptron-trained models.
Structured Prediction Models via the Matrix-Tree Theorem
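The core computation referred to above is the partition function over directed spanning trees, which the Matrix-Tree Theorem reduces to a determinant of a Laplacian minor. A small sketch, assuming a dense non-negative weight matrix `W[h, m]` (weight of an edge from head h to modifier m) with node 0 as the root; the log-linear and max-margin training built on top of this is not shown.

```python
import numpy as np

def spanning_tree_partition(W):
    """Total weight of directed spanning trees (arborescences) rooted at node 0."""
    n = W.shape[0]
    W = W * (1.0 - np.eye(n))                      # no self-loops
    L = -W.copy()
    L[np.arange(n), np.arange(n)] = W.sum(axis=0)  # diagonal: weight entering each node
    # Matrix-Tree Theorem: delete the root's row and column, take the determinant.
    return np.linalg.det(L[1:, 1:])

# Example: uniform weights over 4 nodes -> 4^(4-2) = 16 spanning arborescences.
W = np.ones((4, 4))
print(spanning_tree_partition(W))  # ~16.0
```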
d17986977
This paper introduces association norms of German noun compounds as a lexical-semantic resource for cognitive and computational linguistics research on compositionality. Based on an existing database of German noun compounds, we collected human associations to the compounds and their constituents within a web experiment. The current study describes the collection process and a part-of-speech analysis of the association resource. In addition, we demonstrate that the associations provide insight into the semantic properties of the compounds, and perform a case study that predicts the degree of compositionality of the experiment's compound nouns, relying on the norms. Applying a comparatively simple measure of association overlap, we reach a Spearman rank correlation coefficient of rs = 0.5228, p < .000001, when comparing our predictions with human judgements.
Association Norms of German Noun Compounds
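A minimal sketch of the association-overlap idea: the compositionality of a compound is approximated by the share of its associations that also occur among the associations of a constituent, and the predictions are correlated with human judgements via Spearman's rho. All norms and judgement values below are toy, hypothetical data, not the collected resource.

```python
from scipy.stats import spearmanr

def association_overlap(compound_assocs, constituent_assocs):
    compound, constituent = set(compound_assocs), set(constituent_assocs)
    return len(compound & constituent) / len(compound) if compound else 0.0

# hypothetical association norms: compound -> associations
compound_norms = {
    "Ahornblatt":  ["Baum", "Herbst", "gruen", "Kanada"],
    "Fliegenpilz": ["Wald", "rot", "giftig", "Punkte"],
    "Loewenzahn":  ["Blume", "gelb", "Wiese", "pusten"],
}
# hypothetical associations to the modifier constituent of each compound
constituent_norms = {
    "Ahornblatt":  ["Baum", "Herbst", "Ast", "gruen"],
    "Fliegenpilz": ["summen", "Insekt", "nervig", "klein"],
    "Loewenzahn":  ["Tier", "Maehne", "bruellen", "Zoo"],
}
gold_compositionality = {"Ahornblatt": 6.1, "Fliegenpilz": 2.3, "Loewenzahn": 1.5}

preds = [association_overlap(compound_norms[c], constituent_norms[c]) for c in compound_norms]
gold = [gold_compositionality[c] for c in compound_norms]
print(spearmanr(preds, gold))  # rank correlation between overlap and human ratings
```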
d16763002
d17809778
Cross-language information retrieval consists of providing a query in one language and searching for documents in one or more other languages. These documents are ordered by the probability of being relevant to the user's request. The highest ranked document is considered to be the most likely relevant document. The LIC2M cross-language information retrieval system is a weighted Boolean search engine based on a deep linguistic analysis of the query and the documents. This system is composed of a linguistic analyzer, a statistical analyzer, a reformulator, a comparator and a search engine. The linguistic analysis processes both documents to be indexed and queries to extract concepts representing their content. This analysis includes a morphological analysis, a part-of-speech tagging and a syntactic analysis. In this paper, we present the deep linguistic analysis used in the LIC2M cross-lingual search engine and we particularly focus on the impact of the syntactic analysis on the retrieval effectiveness.
A Deep Linguistic Analysis for Cross-language Information Retrieval
d1626138
Rare diseases are not that rare: worldwide, one in 12-17 people will be affected by a rare disease. Newborn screening for rare diseases has been adopted by many European and North American jurisdictions. The results of genetic testing are given to millions of families and children's guardians who often turn to the Internet to find more information about the disease. We found 42 medical forums and blogs where parents and other related adults form virtual communities to discuss the disease diagnosis, share related knowledge and seek moral support. Many people (up to 75% in some population groups) look for professional medical publications to find reliable information. How can it be made easier for these nonmedical professionals to understand such texts? We suggest that recommender systems, installed on web sites of research and teaching health care organizations, can be a tool that helps parents to navigate a massive amount of available medical information. In this paper, we discuss the NLP architecture of such a system. We concentrate on processing epistemic modal expressions and helping the general public to evaluate the certainty of an event.
Helping parents to understand rare diseases
d2952345
This paper reports on challenges and results in developing NLP resources for spoken Rusyn. Being a Slavic minority language, Rusyn has hardly any existing resources to make use of. We propose to build a morphosyntactic dictionary for Rusyn, combining existing resources from the etymologically close Slavic languages Russian, Ukrainian, Slovak, and Polish. We adapt these resources to Rusyn by using vowel-sensitive Levenshtein distance, hand-written language-specific transformation rules, and combinations of the two. Compared to an exact match baseline, we increase the coverage of the resulting morphological dictionary by up to 77.4% relative (42.9% absolute), which in turn increases tagging recall by 11.6% relative (9.1% absolute). Our research confirms and expands the results of previous studies showing the efficiency of using NLP resources from neighboring languages for low-resourced languages.
Lexicon Induction for Spoken Rusyn -Challenges and Results
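A sketch of a vowel-sensitive Levenshtein distance of the kind mentioned above: substitutions between two vowels cost less than other substitutions, so etymologically related forms that differ mainly in their vowels come out closer. The cost values and the vowel inventory are assumptions for illustration.

```python
VOWELS = set("aeiouyаеиоуыэюяії")

def vowel_sensitive_levenshtein(a, b, vowel_sub=0.5, other_sub=1.0, indel=1.0):
    """Edit distance where vowel-vowel substitutions are cheaper."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel
    for j in range(1, n + 1):
        d[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            elif a[i - 1] in VOWELS and b[j - 1] in VOWELS:
                sub = vowel_sub
            else:
                sub = other_sub
            d[i][j] = min(d[i - 1][j] + indel,
                          d[i][j - 1] + indel,
                          d[i - 1][j - 1] + sub)
    return d[m][n]

print(vowel_sensitive_levenshtein("holova", "golova"))  # 1.0: consonant substitution
print(vowel_sensitive_levenshtein("holova", "holovy"))  # 0.5: vowel substitution
```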
d6578852
Recent work on distributional methods for similarity focuses on using the context in which a target word occurs to derive context-sensitive similarity computations. In this paper we present a method for computing similarity which builds vector representations for words in context by modeling senses as latent variables in a large corpus. We apply this to the Lexical Substitution Task and we show that our model significantly outperforms typical distributional methods.
Topic models for meaning similarity in context
d252819327
Automated theorem proving can benefit greatly from methods employed in natural language processing, knowledge graphs, and information retrieval: this non-trivial task combines formal language understanding, reasoning, and similarity search. We tackle this task by enhancing semantic similarity ranking with prompt engineering, which has become a new paradigm in natural language understanding. None of our approaches requires additional training. Despite encouraging results reported for prompt engineering approaches on a range of NLP tasks, for the premise selection task vanilla reranking by prompting GPT-3 does not outperform semantic similarity ranking with SBERT, but merging the two rankings shows better results.
TextGraphs-16 Natural Language Premise Selection Task: Zero-Shot Premise Selection with Prompting Generative Language Models
d7425969
This paper describes the structure of the LTH coreference solver used in the closed track of the CoNLL 2012 shared task (Pradhan et al., 2012). The solver core is a mention classifier that uses Soon et al. (2001)'s algorithm and features extracted from the dependency graphs of the sentences. This system builds on Björkelund and Nugues (2011)'s solver, which we extended so that it can be applied to the three languages of the task: English, Chinese, and Arabic. We designed a new mention detection module that removes pleonastic pronouns, prunes constituents, and recovers mentions when they do not exactly match a noun phrase. We carefully redesigned the features so that they reflect more complex linguistic phenomena as well as discourse properties. Finally, we introduced a minimal cluster model grounded in the first mention of an entity. We optimized the feature sets for the three languages: we carried out an extensive evaluation of pairs of features and complemented the single features with associations that improved the CoNLL score. We obtained the respective scores of 59.57, 56.62, and 48.25 on English, Chinese, and Arabic on the development set, 59.36, 56.85, and 49.43 on the test set, and a combined official score of 55.21.
Using Syntactic Dependencies to Solve Coreferences
d7482980
This paper proposes document oriented preference sets (DoPS) for the disambiguation of the dependency structure of sentences. The DoPS system extracts preference knowledge from a target document or other documents automatically. Sentence ambiguities can be resolved by using domain-targeted preference knowledge without using complicated large knowledge bases. Implementation and empirical results are described for the analysis of dependency structures of Japanese patent claim sentences. To solve this problem, we introduce Document oriented Preference Sets (DoPS). The concept of DoPS is that, to determine the most appropriate preference knowledge, preference knowledge is segregated into several domains, for example the language domain, the field domain, and the sentence domain, each of which has a different execution priority. By using the segregated preference knowledge in the fixed order, the most plausible interpretation can be obtained more rapidly and more accurately.
Sentence disambiguation by document oriented preference sets
d902350
Automated Reasoning techniques applied to the problem of natural language correctness allow the design of flexible training aids for the teaching of foreign languages. The approach involves important advantages for both the student and the
AUTOMATED REASONING ABOUT NATURAL LANGUAGE CORRECTNESS
d15847650
While much effort is expended in the curation of language resources, such investment is largely irrelevant if users cannot locate resources of interest. The Open Language Archives Community (OLAC) was established to define standards for the description of language resources and provide core infrastructure for a virtual digital library, thus addressing the resource discovery issue. In this paper we consider naturalistic user search behaviour in the Open Language Archives Community. Specifically, we collected the query logs from the OLAC Search Engine over a 2-year period, comprising in excess of 1.3 million queries in over 450K user search sessions. Subsequently we mined these to discover user search patterns of various types, all pertaining to the discovery of language resources. A number of interesting observations can be made based on this analysis; in this paper we report on a range of properties and behaviours based on empirical evidence.
Searching for Language Resources on the Web: User Behaviour in the Open Language Archives Community
d28126691
Question difficulty estimates guide test creation, but are too costly for small-scale testing. We empirically verify that Bloom's Taxonomy, a standard tool for difficulty estimation during question creation, reliably predicts question difficulty observed after testing in a short-answer corpus. We also find that difficulty can be approximated by the amount of variation in student answers, which can be computed before grading. We show that question difficulty and its approximations are useful for automated grading, allowing us to identify the optimal feature set for grading each question even in an unseen-question setting.
Question Difficulty -How to Estimate Without Norming, How to Use for Automated Grading
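A sketch of the "variation in student answers" proxy: difficulty is approximated by the average pairwise dissimilarity of the ungraded answers to a question. Token-level Jaccard dissimilarity is just one plausible choice here, not necessarily the measure used in the paper.

```python
from itertools import combinations

def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

def answer_variation(answers):
    """Average pairwise dissimilarity among student answers (0 = identical)."""
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 0.0
    return sum(1.0 - jaccard(x, y) for x, y in pairs) / len(pairs)

easy = ["the mitochondria", "mitochondria", "the mitochondria of the cell"]
hard = ["osmosis", "diffusion through the membrane", "active transport"]
print(answer_variation(easy))  # low variation -> presumably an easier question
print(answer_variation(hard))  # high variation -> presumably a harder question
```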
d56501000
d16102199
Collective intelligence is the capability of a group of people to collaborate in order to achieve goals in a context more complex than any individual member could handle alone. This concept is a growing topic of interest in many sciences, including computer science, where computers are brought in as group support elements. This paper presents a new platform, called Knowledge Unifying Initiator (KUI), for knowledge development, which enables connection and collaboration among individual intelligences in order to accomplish a complex mission. KUI is a platform to unify various thoughts following the process of thinking, i.e., initiating the topic of interest, collecting opinions on the selected topics, localizing the opinions through translation or customization, and posting them for public hearing to conceptualize the knowledge. The process of thinking is carried out under selectional preference, simulated by a voting mechanism in case many alternatives occur. By measuring the history of participation of each member, KUI adaptively manages the reliability of each member's opinion and vote according to the estimated ExpertScore.
KUI: an ubiquitous tool for collective intelligence development
d46203101
Recently, conversational robots have been widely used in mobile terminals as virtual assistants or companions. The goals of prevalent conversational robots mainly fall into four categories, namely chitchat, task completion, question answering and recommendation. In this paper, we present a Chinese intelligent conversational robot, Benben, which is designed to achieve these goals in a unified architecture. Moreover, it also has some featured functions such as a diet map, implicit-feedback-based conversation, interactive machine reading, news recommendation, etc. Since the release of Benben on June 6, 2016, there have been 2,505 users (as of Feb 22, 2017) and 11,107 complete human-robot conversations, which in total contain 198,998 single-turn conversation pairs.
Benben: A Chinese Intelligent Conversational Robot
d9581267
In this paper we propose using the distributional differences in the syntactic patterns of near-synonyms to deduce the relevant components of verb meaning. Our method involves determining the distributional differences in syntactic patterns, deducing the semantic features from the syntactic phenomena, and testing the semantic features in new syntactic frames. We determine the distributional differences in syntactic patterns through the following five steps: First, we search for all instances of the verb in the corpus. Second, we classify each of these instances into its type of syntactic function. Third, we classify each of these instances into its argument structure type. Fourth, we determine the aspectual type that is associated with each verb. Lastly, we determine each verb's sentential type. Once the distributional differences have been determined, then the relevant semantic features are postulated. Our goal is to tease out the lexical semantic features as the explanation, and as the motivation of the syntactic contrasts.
Towards a Representation of Verbal Semantics -- An Approach Based on Near-Synonyms
d232021907
d235097549
Usage-based analyses of teacher corpora and codeswitching (Boztepe, 2003) are an important next stage in understanding language acquisition. Multilingual corpora are difficult to compile and a classroom setting adds pedagogy to the mix of factors which make this data so rich and problematic to classify. Using quantitative methods to understand language learning and teaching is difficult work as the 'transcription bottleneck' constrains the size of datasets. We found that using an automatic speech recognition (ASR) toolkit with a small set of training data is likely to speed data collection in this context (Maxwell-Smith et al., 2020). For this study we used approximately 150 minutes of data from a project recording a single teacher's speech in a second-year, tertiary Indonesian language program. Our methodological considerations addressed the following: which ASR tool to use, how to prepare training data for this tool, and how to best manage the bias of the training data inherent in all transcription processes. We chose the Elpis ASR system, which combines user-friendly data processing scripts with a Kaldi HMM/GMM (Hidden Markov Model/Gaussian Mixture Model) recipe. Elpis generates transcripts as time-aligned ELAN files, which was a good fit with the broader project investigating Indonesian language teaching. A team of transcribers established guidelines which reflexively responded to a range of methodological considerations. Indonesian diglossic variants exist in a highly diverse linguistic ecosystem (Djenar and Ewing, 2015; Sneddon, 2003; Goebel, 2010). This was highlighted by transcriber subjectivity in the teaching context. For example, the task of analyzing and choosing orthography to transcribe teacher speech into over-simplified, binary L1 versus L2 categories (1st language: English, 2nd language: Indonesian) is influenced by transcriber expectations of language norms in 'high' vs. 'low' varieties of Indonesian. Further, the goal of modifying sociolinguistic norms which brings people to language classrooms precipitated a level of variance and unpredictability unusual in other speech contexts, as teachers respond to student acquisition processes. We also provided examples of the development of a Community of Practice (Wenger, 1998) as another layer of complexity in the group classroom environment. The dataset was transcribed using several "tiers" to create parallel structures for storing data. While predominantly working from a code-switching paradigm, the data structure allowed us to train multiple models for comparative evaluation. We trained three models, two of which included all training data and multilingual pronunciation lexicons, resonating with work on translanguaging in educational settings (Garcia and Wei, 2014). The third model was trained with Indonesian data only. Our preliminary result of 64% word error rate (WER) is high in comparison to monolingual ASR systems (Maxwell-Smith et al., 2020). However, WERs from code-switched bilingual data (Biswas et al., 2019) were more similar to our WER, especially given our small amount of training data. By analysing the text spans in the machine transcription, we found a high incidence of resyllabification (word splitting), particularly with omission of initial or middle consonants.
The analysis identified which model would include less disruptive errors than the others, and which would be more responsive to the addition of further training data. The application of ASR tools is limited in this setting given the small set of training data; however, using these tools has the potential to expedite the transcription of teacher corpora. These tools could change workflows and decrease cognitive load for human transcribers by generating a draft transcript for revision. We highlight some of the benefits and risks of using these emerging technologies to analyze the complex work of language teachers, and in education more generally.
Developing ASR for Indonesian-English Bilingual Language Teaching
d14941206
Two Methods for Learning ALT-J/E Translation Rules from Examples and a Semantic Hierarchy
d218974110
d252364990
Mental disorders are a serious and increasingly relevant public health issue. NLP methods have the potential to assist with automatic mental health disorder detection, but building annotated datasets for this task can be challenging; moreover, annotated data is very scarce for disorders other than depression. Understanding the commonalities between certain disorders is also important for clinicians who face the problem of shifting standards of diagnosis. We propose that transfer learning with linguistic features can be useful for approaching both the technical problem of improving mental disorder detection in the context of data scarcity, and the clinical problem of understanding the overlapping symptoms between certain disorders. In this paper, we target four disorders: depression, PTSD, anorexia and self-harm. We explore multi-aspect transfer learning for detecting mental disorders from social media texts, using deep learning models with multi-aspect representations of language (including multiple types of interpretable linguistic features). We explore different transfer learning strategies for cross-disorder and cross-platform transfer, and show that transfer learning can be effective for improving prediction performance for disorders where little annotated data is available. We offer insights into which linguistic features are the most useful vehicles for transferring knowledge, through ablation experiments, as well as error analysis.
Multi-Aspect Transfer Learning for Detecting Low Resource Mental Disorders on Social Media
d251402085
We present results from a study investigating how users perceive text quality and readability in extractive and abstractive summaries. We trained two summarisation models on Swedish news data and used these to produce summaries of articles. With the produced summaries, we conducted an online survey in which the extractive summaries were compared to the abstractive summaries in terms of fluency, adequacy and simplicity. We found statistically significant differences in perceived fluency and adequacy between abstractive and extractive summaries but no statistically significant difference in simplicity. Extractive summaries were preferred in most cases, possibly due to the types of errors the summaries tend to have.
Perceived Text Quality and Readability in Extractive and Abstractive Summaries
d15040797
As the popularity of Community Question Answering (CQA) increases, spamming activities have also picked up in number and variety. On CQA sites, spammers often pretend to ask questions and select answers which were published by their partners or themselves as the best answers. These fake best answers cannot be easily detected by either existing methods or common users. In this paper, we address the issue of detecting spammers on CQA sites. We formulate the task as an optimization problem. Social information is incorporated by adding graph regularization constraints to the text-based predictor. To evaluate the proposed approach, we crawled a data set from a CQA portal. Experimental results demonstrate that the proposed method can achieve better performance than some state-of-the-art methods.
Detecting Spammers in Community Question Answering
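A sketch of graph regularization in the spirit described above: text-based spam scores y are smoothed over the user interaction graph by minimizing ||f - y||^2 + lambda * f^T L f, whose closed-form solution is f = (I + lambda L)^{-1} y. The graph, scores, and lambda below are toy values, and the paper's exact objective may differ.

```python
import numpy as np

def smooth_scores(adjacency, text_scores, lam=1.0):
    """Smooth per-user spam scores over a social graph via Laplacian regularization."""
    A = np.asarray(adjacency, dtype=float)
    D = np.diag(A.sum(axis=1))
    L = D - A                                    # unnormalized graph Laplacian
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) + lam * L, text_scores)

A = [[0, 1, 1, 0],   # users 0-2 interact with each other, user 3 is isolated
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [0, 0, 0, 0]]
y = np.array([0.9, 0.2, 0.8, 0.1])               # scores from the text-based predictor
print(smooth_scores(A, y))                       # connected users' scores move together
```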
d3730096
Why do few working spoken dialogue systems make use of dialogue models in their dialogue management? We identify the causes and propose a generic dialogue model. It promises to bridge the gap between practical dialogue management and (pattern-based) dialogue models by integrating interaction patterns with the underlying tasks and modeling interaction patterns via utterance groups using a high-level construct different from the dialogue act.
Bridging the Gap Between Dialogue Management and Dialogue Models
d9422633
Feature and context aggregation play a large role in current NER systems, allowing significant opportunities for research into optimizing these features to cater to different domains. This work strives to reduce the noise introduced into aggregated features from disparate and generic training data in order to allow for contextual features that more closely model the entities in the target data. The proposed approach trains models based on only a part of the training set that is more similar to the target domain. To this end, models are trained for an existing NER system using the top documents from the training set that are similar to the target document in order to demonstrate that this technique can be applied to improve any pre-built NER system. Initial results show an improvement over the University of Illinois NE tagger with a weighted average F1 score of 91.67 compared to the Illinois tagger's score of 91.32. This research serves as a proof-of-concept for future planned work to cluster the training documents to produce a number of more focused models from a given training set, thereby reducing noise and extracting a more representative feature set.
Focused training sets to reduce noise in NER feature models
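A sketch of the focused-training-set idea: for each target document, keep only the k most similar training documents and retrain the NER model on those. TF-IDF cosine similarity is an assumption; the abstract does not commit to a particular similarity function.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_focused_training_set(train_docs, target_doc, k=3):
    """Return the k training documents most similar to the target document."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(train_docs + [target_doc])
    sims = cosine_similarity(X[-1:], X[:-1]).ravel()
    top = sims.argsort()[::-1][:k]
    return [train_docs[i] for i in top]

train_docs = ["stock markets fell sharply ...",
              "the senator introduced a bill ...",
              "the striker scored twice ...",
              "shares of the bank dropped ..."]
focused = select_focused_training_set(train_docs, "bank profits and share prices", k=2)
print(focused)  # the financial documents; an NER model would then be trained on these
```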
d10503556
To computationally model discourse phenomena such as argumentation we need corpora with reliable annotation of the phenomena under study. Annotating complex discourse phenomena poses two challenges: fuzziness of unit boundaries and the need for multiple annotators. We show that current metrics for inter-annotator agreement (IAA) such as P/R/F1 and Krippendorff's α provide inconsistent results for the same text. In addition, IAA metrics do not tell us what parts of a text are easier or harder for human judges to annotate and so do not provide sufficiently specific information for evaluating systems that automatically identify discourse units. We propose a hierarchical clustering approach that aggregates overlapping segments of text identified by multiple annotators; the more annotators who identify a text segment, the easier we assume that the text segment is to annotate. The clusters make it possible to quantify the extent of agreement judges show about text segments; this information can be used to assess the output of systems that automatically identify discourse units.
Annotating Multiparty Discourse: Challenges for Agreement Metrics
d218974492
d249204511
The work in progress on the CEF action CURLICAT is presented. The general aim of the action is to compile curated monolingual datasets in seven languages of the consortium in domains of relevance to European Digital Service Infrastructures (DSIs) in order to enhance the eTranslation services.
Curated Multilingual Language Resources for CEF AT (CURLICAT): Overall View
d3045000
Combining a naive Bayes classifier with the EM algorithm is one of the promising approaches to making use of unlabeled data for disambiguation tasks that rely on local context features, including word sense disambiguation and spelling correction. However, the use of unlabeled data via the basic EM algorithm often causes disastrous performance degradation instead of improving classification performance, resulting in poor classification performance on average. In this study, we introduce a class distribution constraint into the iteration process of the EM algorithm. This constraint keeps the class distribution of unlabeled data consistent with the class distribution estimated from labeled data, preventing the EM algorithm from converging into an undesirable state. Experimental results from using 26 confusion sets and a large amount of unlabeled data show that our proposed method for using unlabeled data considerably improves classification performance when the amount of labeled data is small.
Training a Naive Bayes Classifier via the EM Algorithm with a Class Distribution Constraint
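One straightforward reading of the class distribution constraint, shown in isolation from the full naive Bayes/EM loop: after each E-step, the posteriors over unlabeled data are iteratively rescaled so that their average matches the class distribution estimated from the labeled data. The rescaling scheme below is an assumption for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def constrain_posteriors(posteriors, target_dist, n_iter=50):
    """Rescale (n_unlabeled, n_classes) posteriors toward a target class distribution."""
    p = posteriors.copy()
    for _ in range(n_iter):
        current = p.mean(axis=0)                            # class distribution implied by p
        p = p * (target_dist / np.maximum(current, 1e-12))  # scale each class column
        p = p / p.sum(axis=1, keepdims=True)                # renormalize each instance
    return p

posteriors = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.6, 0.4]])
target = np.array([0.5, 0.5])                               # estimated from labeled data
print(constrain_posteriors(posteriors, target).mean(axis=0))  # ~[0.5, 0.5]
```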
d219720965
d252624631
The Libras Portal is a platform that makes available in one place a series of materials and tools related to the Brazilian Sign Language (Libras) that integrate the documentation of Libras. It can be used both for research and educational purposes. Among the artifacts developed are tools that support the constitution of an education network and/or community of practice, enabling the sharing of knowledge, data and interaction in Libras and Portuguese.
Libras Portal: a Way of Documentation, a Way of Sharing
d243865634
Recently, the focus of dialogue state tracking has expanded from single domains to multiple domains. The task is characterized by the slots shared between domains. As the scenario gets more complex, the out-of-vocabulary problem also becomes more severe. Current models are not satisfactory for addressing the challenges of ontology integration between domains and out-of-vocabulary problems. To address the problem, we explore the hierarchical semantics of the ontology and enhance the interrelation between slots with masked hierarchical attention. In the state value decoding stage, we address the out-of-vocabulary problem by combining a generation method and an extraction method. We evaluate the performance of our model on two representative datasets, MultiWOZ in English and CrossWOZ in Chinese. The results show that our model yields a significant performance gain over the current state-of-the-art state tracking model and is more robust to the out-of-vocabulary problem than other methods.
Generation and Extraction Combined Dialogue State Tracking with Hierarchical Ontology Integration
d980313
Characters play an important role in the Chinese language, yet computational processing of Chinese has been dominated by word-based approaches, with leaves in syntax trees being words. We investigate Chinese parsing from the character-level, extending the notion of phrase-structure trees by annotating internal structures of words. We demonstrate the importance of character-level information to Chinese processing by building a joint segmentation, part-of-speech (POS) tagging and phrase-structure parsing system that integrates character-structure features. Our joint system significantly outperforms a state-of-the-art word-based baseline on the standard CTB5 test, and gives the best published results for Chinese parsing.
Chinese Parsing Exploiting Characters
d248513012
In this paper, we ask the research question of whether all the datasets in the benchmark are necessary. We approach this by first characterizing the distinguishability of datasets when comparing different systems. Experiments on 9 datasets and 36 systems show that several existing benchmark datasets contribute little to discriminating top-scoring systems, while those less used datasets exhibit impressive discriminative power. We further, taking the text classification task as a case study, investigate the possibility of predicting dataset discrimination based on its properties (e.g., average sentence length). Our preliminary experiments promisingly show that given a sufficient number of training experimental records, a meaningful predictor can be learned to estimate dataset discrimination over unseen datasets. We released all datasets with features explored in this work on DataLab. 1
Are All the Datasets in Benchmark Necessary? A Pilot Study of Dataset Evaluation for Text Classification
d226283725
d6686868
We present a kernel-based approach for finegrained classification of named entities. The only training data for our algorithm is a few manually annotated entities for each class. We defined kernel functions that implicitly map entities, represented by aggregating all contexts in which they occur, into a latent semantic space derived from Wikipedia. Our method achieves a significant improvement over the state of the art for the task of populating an ontology of people, although requiring considerably less training instances than previous approaches.
Fine-Grained Classification of Named Entities Exploiting Latent Semantic Kernels
d15776405
Indian sub-continent is one of those unique
Sangam: A Perso-Arabic to Indic Script Machine Transliteration Model
d30501633
This paper describes a new method for extracting monolingual collocations. The method, based on statistical measures, extracts VN collocations from large textual corpora. Being able to extract a large number of collocations is very critical to machine translation and many other applications. The method has an element of snowballing in it. Initially, one identifies a pattern that will produce a large portion of VN collocations. We experimented with an implementation of the proposed method on a large corpus with satisfactory results. The patterns are further refined to improve the precision ratio. Introduction: Collocations are recurrent combinations of words that co-occur more often than chance. Collocations, like terminology, tend to be lexicalized and have a somewhat more restricted meaning than the surface form suggests (Justeson and Katz 1994). The words in a collocation may appear next to each other (rigid collocations) or otherwise (flexible/elastic collocations). On the other hand, collocations can be classified into lexical and grammatical collocations (Benson, Benson, Ilson, 1986). Lexical collocations are formed between content words, while a grammatical collocation has to do with a content word with a function word or a syntactic structure. Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Automatic extraction of monolingual and bilingual collocations is important for many applications, including Computer Assisted Language Learning, natural language generation, word sense disambiguation, machine translation, lexicography, and cross-language information retrieval. Hank and Church (1990) pointed out the usefulness of pointwise mutual information for identifying collocations in lexicography. Justeson and Katz (1995) proposed to identify technical terminology based on preferred linguistic patterns and the discourse property of repetition. Among the many general methods presented in Manning and Schutze (1999), the best is filtering based on both linguistic and statistical constraints. Smadja (1993) presented a program called XTRACT, based on the mean and variance of the distance between two words, that is capable of computing flexible collocations. Kupiec (1992) proposed to extract bilingual noun phrases using statistical analysis of the co-occurrence of phrases. Smadja, McKeown, and Hatzivassiloglou (1996) extended the XTRACT approach to the handling of bilingual collocations based mainly on the statistical measure of the Dice coefficient. Dunning (1993) pointed out the weakness of mutual information and showed that log likelihood ratios are more effective in identifying monolingual collocations, especially when the occurrence count is very low. Smadja's XTRACT is the seminal work on extracting collocation types. XTRACT involves three different statistical measures related to how likely a pair of words is to be part of a collocation type. It is complicated to set different thresholds for each of these statistical measures. We decided to research and develop a new and simpler method for extracting monolingual collocations. We describe the experiments and evaluation in Section 3. The limitations and related issues are taken up in Section 4. We conclude and give future directions in Section 5.
Extracting Verb-Noun Collocations from Text
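A sketch of one of the statistical measures discussed above, pointwise mutual information over verb-noun pair counts; log-likelihood ratios would be computed from the same contingency counts. The counts below are toy values for illustration.

```python
import math

def pmi(pair_count, verb_count, noun_count, total_pairs):
    """Pointwise mutual information of a (verb, noun) pair from corpus counts."""
    p_xy = pair_count / total_pairs
    p_x = verb_count / total_pairs
    p_y = noun_count / total_pairs
    return math.log2(p_xy / (p_x * p_y))

total = 100_000  # toy number of observed verb-noun pairs in the corpus
print(pmi(pair_count=120, verb_count=800, noun_count=300, total_pairs=total))   # high PMI: likely collocation
print(pmi(pair_count=5, verb_count=800, noun_count=9_000, total_pairs=total))   # low PMI: chance co-occurrence
```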
d16585956
Feature selection is a major hurdle for CRF- and SVM-based POS tagging. The features likely to have a strong impact on POS identification are listed, but selecting among the listed features is purely a manual effort using hit-and-trial methods. A better approach is to design a system that itself identifies the best combination of features. A Genetic Algorithm (GA) system is designed so that the best possible combination can be sorted out instead of relying on manual hit-and-trial feature selection. The system shows a Recall of 80.00%, Precision (P) of 90.43% and F-score (F) of 84.90%.
Genetic Algorithm (GA) Implementation for Feature Selection in Manipuri POS Tagging
d220045399
d13577644
This paper describes the latest developments in the design of a tool to monitor Patient Discharge Summaries to detect pieces of evidence related to Hospital-Acquired Infections. Anonymization, Named Entity detection, Temporal Expression analysis and Causality detection methods have been developed and evaluated. They are embedded in a tool designed to work in a Hospital Information Workflow.
Architecture and Systems for Monitoring Hospital Acquired Infections inside a Hospital Information Workflow
d541363
In this article we propose a rank aggregation method for the task of collocation detection. It consists of applying some well-known methods (e.g. the Dice method, the chi-square test, the z-test and the likelihood ratio) and then aggregating the resulting collocation rankings by rank distance and Borda score. These two aggregation methods are especially well suited for the task, since the results of each individual method naturally form a ranking of collocations. Combination methods are known to usually improve the results, and indeed, the proposed aggregation method performs better than each individual method taken in isolation.
Aggregation methods for efficient collocation detection
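A sketch of Borda-score aggregation over several collocation rankings: each candidate receives, from each input ranking, a score equal to the number of candidates ranked below it, and the per-ranking scores are summed. The individual rankings shown are toy inputs, not outputs of the actual association measures.

```python
def borda_aggregate(rankings):
    """rankings: list of lists, each ordered from best to worst candidate."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - 1 - pos)
    return sorted(scores, key=scores.get, reverse=True)

dice       = ["take place", "strong tea", "kick bucket", "red car"]
chi_square = ["kick bucket", "take place", "strong tea", "red car"]
log_like   = ["take place", "kick bucket", "red car", "strong tea"]
print(borda_aggregate([dice, chi_square, log_like]))
# ['take place', 'kick bucket', 'strong tea', 'red car']
```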
d8487570
The accuracy and coverage of existing methods for extracting attributes of instances from text in general, and Web search queries in particular, are limited by two main factors: availability of input textual data to which the methods can be applied, and inherent limitations of the underlying assumptions and algorithms being used. This paper proposes a weakly-supervised approach for the acquisition of attributes of instances from input data available in the form of synthetic queries automatically generated from submitted queries. The generated queries allow for the acquisition of additional attributes, leading to extracted lists of attributes of higher quality than with comparable previous methods.
Attribute Extraction from Synthetic Web Search Queries
d9159237
In this paper, we report our work on automatic image annotation by combining several textual features drawn from the text surrounding the image. Evaluation of our system is performed on a dataset of images and texts collected from the web. We report our findings through comparative evaluation with two gold standard collections of manual annotations on the same dataset.
Explorations in Automatic Image Annotation using Textual Features
d241583648
The knowledge of European silk textile production is a typical case for which the collected information is heterogeneous, spread across many museums and sparse, since it is rarely complete. Knowledge Graphs for this cultural heritage domain, when developed with appropriate ontologies and vocabularies, make it possible to integrate and reconcile this diverse information. However, many of these original museum records still have some metadata gaps. In this paper, we present a zero-shot learning approach that leverages the ConceptNet common sense knowledge graph to predict categorical metadata informing about the production of silk objects. We compared the performance of our approach with traditional supervised deep learning-based methods that do require training data. We demonstrate promising and competitive performance for similar datasets and circumstances and the ability to predict sometimes more fine-grained information. Our results can be reproduced using the code and datasets published at https://github.com/silknow/ZSL-KG-silk.
Zero-Shot Information Extraction to Enhance a Knowledge Graph Describing Silk Textiles
d14253932
We describe on-going work towards publishing language resources included in dialectal dictionaries in the Linked Open Data (LOD) cloud, and so to support wider access to the diverse cultural data associated with such dictionary entries, like the various historical and geographical variations of the use of such words. Beyond this, our approach allows the cross-linking of entries of dialectal dictionaries on the basis of the semantic representation of their senses, and also to link the entries of the dialectal dictionaries to lexical senses available in the LOD framework. This paper focuses on the description of the steps leading to a SKOS-XL and lemon encoding of the entries of two Austrian dialectal dictionaries, and how this work supports their cross-linking and linking to other language data in the LOD.
How to semantically relate dialectal Dictionaries in the Linked Data Framework
d8401482
This paper introduces the results of the integration of lexical and terminological resources, most of them developed within the Human Language Technology (HLT) Group at the University of Belgrade, with the Geological information system of Serbia (GeolISS), developed at the Faculty of Mining and Geology and funded by the Ministry of Environmental Protection. The approach to GeolISS development, which is aimed at the integration of existing geologic archives, data from published maps on different scales, newly acquired field data, and intranet and internet publishing of geologic data, is given, followed by a description of the geologic multilingual vocabulary and the other lexical and terminological resources used. Two basic results are outlined: multilingual map annotation and improvement of queries for the GeolISS geodatabase. Multilingual labelling and annotation of maps for their graphic display and printing have been tested with Serbian, which describes regional information in the local language, and English, used for sharing geographic information with the world, although the geological vocabulary offers the possibility of integrating other languages as well. The resources also enable semantic and morphological expansion of queries, the latter being very important in highly inflective languages such as Serbian.
GIS Application Improvement with Multilingual Lexical and Terminological Resources
d12301166
We explore a rule system and a machine learning (ML) approach to automatically harvest information on gene regulation events (GREs) from biological documents in two different evaluation scenarios - one uses self-supplied corpora in a clean lab setting, while the other incorporates a standard reference database of curated GREs from REGULONDB, real-life data generated independently from our work. In the lab condition, we test how feasible the automatic extraction of GREs really is and achieve F-scores, under different, not directly comparable test conditions though, for the rule and the ML systems which amount to 34% and 44%, respectively. In the REGULONDB condition, we investigate how robust both methodologies are by comparing them with this routinely used database. Here, the best F-scores for the rule and the ML systems amount to 34% and 19%, respectively.
How Feasible and Robust is the Automatic Extraction of Gene Regulation Events ? A Cross-Method Evaluation under Lab and Real-Life Conditions
d8780454
This paper presents an alternative algorithm based on the singular value decomposition (SVD) that creates vector representation for linguistic units with reduced dimensionality. The work was motivated by an application aimed to represent text segments for further processing in a multi-document summarization system. The algorithm tries to compensate for SVD's bias towards dominant-topic documents. Our experiments on measuring document similarities have shown that the algorithm achieves higher average precision with lower number of dimensions than the baseline algorithms -the SVD and the vector space model.
Clustered Sub-matrix Singular Value Decomposition
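A sketch of the baseline step the abstract builds on: project a term-by-document matrix into a low-dimensional space with a truncated SVD and compare documents by cosine similarity there. The paper's compensation for dominant-topic bias is not reproduced here.

```python
import numpy as np

def reduced_doc_vectors(term_doc_matrix, k=2):
    """One k-dimensional vector per document via truncated SVD."""
    U, S, Vt = np.linalg.svd(term_doc_matrix, full_matrices=False)
    return (np.diag(S[:k]) @ Vt[:k]).T

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

A = np.array([[2., 0., 1.],   # rows: terms, columns: documents
              [1., 0., 2.],
              [0., 3., 0.],
              [0., 2., 1.]])
docs = reduced_doc_vectors(A, k=2)
print(cosine(docs[0], docs[2]), cosine(docs[0], docs[1]))
```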
d174800296
Concept map-based multi-document summarization has recently been proposed as a variant of the traditional summarization task with graph-structured summaries. As shown by previous work, the grouping of coreferent concept mentions across documents is a crucial subtask of it. However, while the current state-of-the-art method suggested a new grouping method that was shown to improve the summary quality, its use of pairwise comparisons leads to polynomial runtime complexity that prohibits the application to large document collections. In this paper, we propose two alternative grouping techniques based on locality sensitive hashing, approximate nearest neighbor search and a fast clustering algorithm. They exhibit linear and log-linear runtime complexity, making them much more scalable. We report experimental results that confirm the improved runtime behavior while also showing that the quality of the summary concept maps remains comparable.
Fast Concept Mention Grouping for Concept Map-based Multi-Document Summarization
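A sketch of locality-sensitive hashing with random hyperplanes, one of the grouping techniques named above: mention vectors that share a signature bucket become candidates for the same concept group, avoiding all-pairs comparison. The bit width and toy vectors are illustrative, and the paper's clustering step on top of the buckets is omitted.

```python
import numpy as np
from collections import defaultdict

def lsh_group(vectors, n_bits=8, seed=0):
    """Bucket vectors by the signs of their projections onto random hyperplanes."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, vectors.shape[1]))
    buckets = defaultdict(list)
    for i, v in enumerate(vectors):
        signature = tuple((planes @ v) > 0)   # one bit per hyperplane
        buckets[signature].append(i)
    return list(buckets.values())             # each bucket ~ one candidate mention group

mentions = np.array([[1.0, 0.1, 0.0],   # e.g. "global warming"
                     [0.9, 0.2, 0.1],   # e.g. "climate change" (near the first)
                     [0.0, 0.1, 1.0]])  # e.g. "carbon tax"     (far away)
print(lsh_group(mentions, n_bits=4))
```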
d10125102
In this paper we present the results of the University of Sheffield (SHEF) submissions for the WMT16 shared task on document-level Quality Estimation (Task 3). Our submissions explore discourse and document-aware information and word embeddings as features, with Support Vector Regression and Gaussian Processes used to train the Quality Estimation models. The use of word embeddings (combined with baseline features) and a Gaussian Process model with two kernels led to the winning submission in the shared task.
Shared Task Papers
d8786258
Typed lexicons that encode knowledge about the semantic types of an entity name, e.g., that 'Paris' denotes a geolocation, product, or person, have proven useful for many text processing tasks. While lexicons may be derived from large-scale knowledge bases (KBs), KBs are inherently imperfect, in particular they lack coverage with respect to long tail entity names. We infer the types of a given entity name using multi-source learning, considering information obtained by alignment to the Freebase knowledge base, Web-scale distributional patterns, and global semi-structured contexts retrieved by means of Web search. Evaluation in the challenging domain of social media shows that multi-source learning improves performance compared with rule-based KB lookups, boosting typing results for some semantic categories.
Multi-source named entity typing for social media
d2545652
This paper describes the systems submitted by Avaya Labs (AVAYA) to SemEval-2013 Task 2 -Sentiment Analysis in Twitter. For the constrained conditions of both the message polarity classification and contextual polarity disambiguation subtasks, our approach centers on training high-dimensional, linear classifiers with a combination of lexical and syntactic features. The constrained message polarity model is then used to tag nearly half a million unlabeled tweets. These automatically labeled data are used for two purposes: 1) to discover prior polarities of words and 2) to provide additional training examples for self-training. Our systems performed competitively, placing in the top five for all subtasks and data conditions. More importantly, these results show that expanding the polarity lexicon and augmenting the training data with unlabeled tweets can yield improvements in precision and recall in classifying the polarity of non-neutral messages and contexts.
AVAYA: Sentiment Analysis on Twitter with Self-Training and Polarity Lexicon Expansion
d227231518
d5706166
In this paper, we describe the construction of a parallel Chinese-English patent sentence corpus which is created from noisy parallel patents. First, we use a publicly available sentence aligner to find parallel sentence candidates in the noisy parallel data. Then we compare and evaluate three individual measures and different ensemble techniques to sort the parallel sentence candidates according to the confidence score and filter out those with low scores as the noisy data. The experiment shows that the combination of measures outperforms the individual measures, and that filtering out low-quality sentence pairs is readily justified as it can improve SMT performance. Finally, we arrive at the final corpus consisting of 160K sentence pairs in which about 90% are correct or partially correct alignments.
The Construction of a Chinese-English Patent Parallel Corpus
d15958621
Previous work on quantifier scope annotation focuses on scoping sentences with only two quantified noun phrases (NPs), where the quantifiers are restricted to a predefined list. It also ignores negation, modal/logical operators, and other sentential adverbials. We present a comprehensive scope annotation scheme. We annotate the scope interaction between all scopal terms in the sentence from quantifiers to scopal adverbials, without putting any restriction on the number of scopal terms in a sentence. In addition, all NPs, explicitly quantified or not, with no restriction on the type of quantification, are investigated for possible scope interactions.
A Corpus of Scope-disambiguated English Text
d2586102
One of the crucial issues in semantic parsing is how to reduce costs of collecting a sufficiently large amount of labeled data. This paper presents a new approach to cost-saving annotation of example sentences with predicate-argument structure information, taking Japanese as a target language. In this scheme, a large collection of unlabeled examples are first clustered and selectively sampled, and for each sampled cluster, only one representative example is given a label by a human annotator. The advantages of this approach are empirically supported by the results of our preliminary experiments, where we use an existing similarity function and naive sampling strategy.
Augmenting a Semantic Verb Lexicon with a Large Scale Collection of Example Sentences
d8416674
Deep-syntactic" dependency structures bridge the gap between the surface-syntactic structures as produced by state-of-the-art dependency parsers and semantic logical forms in that they abstract away from surfacesyntactic idiosyncrasies, but still keep the linguistic structure of a sentence. They have thus a great potential for such downstream applications as machine translation and summarization. In this demo paper, we propose an online version of a deep-syntactic parser that outputs deep-syntactic structures from plain sentences and visualizes them using the Brat tool. Along with the deep-syntactic structures, the user can also inspect the visual presentation of the surface-syntactic structures that serve as input to the deep-syntactic parser and that are produced by the joint tagger and syntactic transition-based parser ran in the pipeline before deep-syntactic parsing takes place.
Visualizing Deep-Syntactic Parser Output
d9402109
We describe how we constructed an automatic scoring function for machine translation quality; this function makes use of arbitrarily many pieces of natural language processing software that have been designed to process English language text. By machine-learning values of functions available inside the software and by constructing functions that yield values based upon the software output, we are able to achieve preliminary, positive results in machine-learning the difference between human-produced English and machine-translation English. We suggest how the scoring function may be used for MT system development.
Toward a Scoring Function for Quality-Driven Machine Translation
d28522152
Quality estimation (QE) for machine translation has emerged as a promising way to provide real-world applications with methods to estimate at run-time the reliability of automatic translations. Real-world applications, however, pose challenges that go beyond those of current QE evaluation settings. For instance, the heterogeneity and the scarce availability of training data might contribute to significantly raise the bar. To address these issues we compare two alternative machine learning paradigms, namely online and multi-task learning, measuring their capability to overcome the limitations of current batch methods. The results of our experiments, which are carried out in the same experimental setting, demonstrate the effectiveness of the two methods and suggest their complementarity. This indicates, as a promising research avenue, the possibility to combine their strengths into an online multi-task approach to the problem.
Towards a Combination of Online and Multitask Learning for MT Quality Estimation: a Preliminary Study
d850161
Entity linking (EL) is the task of linking a textual named entity mention to a knowledge base entry. Traditional approaches have addressed the problem by dividing the task into separate stages: entity recognition/classification, entity filtering, and entity mapping, in which different constraints are used to improve the system's performance. Nevertheless, these constraints are executed separately and cannot be used interactively. In this paper, we propose an integrated solution to the task based on a Markov logic network (MLN). We show how the stage decisions can be formulated and combined in an MLN. We conducted experiments on the biomedical EL task, gene mention linking (GML), and compared our model's performance with those of two other GML approaches. Our experimental results provide the first comprehensive GML evaluations from three different perspectives: article-wide precision/recall/F-measure (PRF), instance-based PRF, and question answering accuracy. This paper also provides formal definitions of all of the above EL tasks. Experimental results show that our method outperforms the baseline and state-of-the-art systems under all three evaluation schemes.
Joint Learning of Entity Linking Constraints Using a Markov-Logic Network
d219304740
d9740797
While text-based deception in computer mediated communication has been studied, e.g., Zhou (2005) and Duran et al. (2010), there has been less focus on the differentiation of strategies for deception, especially those which may manifest in modern communication, such as found in social media. In this paper, we extend our previous work on the evaluation of linguistic indicators to strategic deception (Appling et al., 2015), by evaluating the relationship between personality and deceptive strategy use and the utilization of linguistic features for inferring both (personality and deception). We find that even with a relatively small corpus, there is evidence that personality is related to particular deception strategies, though in short social media communications, these personality traits are difficult to infer using standard linguistic measures (e.g., LIWC). We also describe the corpus we collected from an experiment in which subjects engaged in deception through a social media platform.
Individual Differences in Strategic Deception
d248780048
In this paper, we propose a neural model EPT-X (Expression-Pointer Transformer with Explanations), which utilizes natural language explanations to solve an algebraic word problem. To enhance the explainability of the encoding process of a neural model, EPT-X adopts the concepts of plausibility and faithfulness which are drawn from math word problem solving strategies by humans. A plausible explanation is one that includes contextual information for the numbers and variables that appear in a given math word problem. A faithful explanation is one that accurately represents the reasoning process behind the model's solution equation. The EPT-X model yields an average baseline performance of 69.59% on our PEN dataset and produces explanations with quality that is comparable to human output. The contribution of this work is two-fold. (1) EPT-X model: An explainable neural model that sets a baseline for algebraic word problem solving task, in terms of model's correctness, plausibility, and faithfulness. (2) New dataset: We release a novel dataset PEN (Problems with Explanations for Numbers), which expands the existing datasets by attaching explanations to each number/variable.
EPT-X: An Expression-Pointer Transformer model that generates eXplanations for numbers
d4227779
In this paper we compare two competing approaches to part-of-speech tagging, statistical and constraint-based disambiguation, using French as our test language. We imposed a time limit on our experiment: the amount of time spent on the design of our constraint system was about the same as the time we used to train and test the easy-to-implement statistical model. We describe the two systems and compare the results. The accuracy of the statistical method is reasonably good, comparable to taggers for English. But the constraint-based tagger seems to be superior even with the limited time we allowed ourselves for rule development.
Tagging French: comparing a statistical and a constraint-based method
d15069542
Prior work on training the IBM-3 translation model is based on suboptimal methods for computing Viterbi alignments. In this paper, we present the first method guaranteed to produce globally optimal alignments. This not only results in improved alignments, it also gives us the opportunity to evaluate the quality of standard hillclimbing methods. Indeed, hillclimbing works reasonably well in practice but still fails to find the global optimum for between 2% and 12% of all sentence pairs and the probabilities can be several tens of orders of magnitude away from the Viterbi alignment. By reformulating the alignment problem as an Integer Linear Program, we can use standard machinery from global optimization theory to compute the solutions. We use the well-known branch-and-cut method, but also show how it can be customized to the specific problem discussed in this paper. In fact, a large number of alignments can be excluded from the start without losing global optimality.
Computing Optimal Alignments for the IBM-3 Translation Model
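As a rough illustration of the ILP reformulation mentioned in the abstract above, the Python sketch below encodes a toy alignment problem with PuLP. It is deliberately simplified: the objective contains only hypothetical log translation probabilities and omits the fertility and distortion components of the real IBM-3 model, so it shows the encoding style rather than the paper's actual program.

import math
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, value

# Much-simplified sketch of casting word alignment as an Integer Linear Program.
# We maximise the summed log translation probabilities, with each target word
# linked to exactly one source position (position 0 = NULL).  All numbers below
# are made up for illustration.

src = ["NULL", "das", "Haus"]
tgt = ["the", "house"]
logp = {                       # hypothetical log t(f|e) table, keyed by (j, i)
    (0, 0): math.log(0.05), (0, 1): math.log(0.7), (0, 2): math.log(0.1),
    (1, 0): math.log(0.05), (1, 1): math.log(0.1), (1, 2): math.log(0.8),
}

prob = LpProblem("alignment", LpMaximize)
x = {(j, i): LpVariable(f"x_{j}_{i}", cat=LpBinary)
     for j in range(len(tgt)) for i in range(len(src))}

prob += lpSum(logp[j, i] * x[j, i] for (j, i) in x)      # objective
for j in range(len(tgt)):                                 # one link per target word
    prob += lpSum(x[j, i] for i in range(len(src))) == 1

prob.solve()
alignment = {tgt[j]: src[i] for (j, i) in x if value(x[j, i]) > 0.5}
print(alignment)   # expected: {'the': 'das', 'house': 'Haus'}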
d14400645
Accurate recovery of predicate-argument dependencies is vital for interpretation tasks like information extraction and question answering, and unbounded dependencies may account for a significant portion of the dependencies in any given text. This paper describes a categorial grammar which, like other categorial grammars, imposes a small, uniform, and easily learnable set of semantic composition operations based on functor-argument relations, but like HPSG, is generalized to limit the number of categories used to those needed to enforce grammatical constraints. The paper also describes a novel reannotation system used to map existing resources based on Government and Binding Theory, like the Penn Treebank, into this categorial representation. This grammar is evaluated on an existing unbounded dependency recovery task (Rimell et al., 2009; Nivre et al., 2010).
Accurate Unbounded Dependency Recovery using Generalized Categorial Grammars
d18205410
Clinical depression is a mental disorder involving genetics and environmental factors. Although much work has studied its genetic causes and numerous candidate genes have consequently been examined and reported in the biomedical literature, no gene expression changes or mutations regarding depression have yet been adequately collected and analyzed for its full pathophysiology. In this paper, we present a depression-specific annotated corpus for text mining systems that aim to provide a concise review of depression-gene relations, as well as to capture complex biological events such as gene expression changes. We describe the annotation scheme and the annotation procedure in detail. We discuss issues regarding proper recognition of depression terms and entity interactions for future approaches to the task. The corpus is available at http://www.biopathway.org/CoMAGD.
CoMAGD: Annotation of Gene-Depression Relations
d250179936
The use of Machine Learning (ML) algorithms in opinion mining, particularly supervised learning algorithms, requires an annotated corpus to train the classification model in order to predict results that are close to reality. Unfortunately, there are still no resources for the automatic processing of textual data expressed in the Senegalese urban language. The objective of this paper is to build a multilingual corpus for opinion mining (COMFO). The process of building the COMFO corpus is composed of three steps: presentation of the data source, data collection and preparation, and annotation by a lexicon-based approach. The particularity of COMFO lies in the integration of foreign languages (French and English) and local languages, notably urban Wolof, in order to reflect the collective opinion of Senegalese readers. KEYWORDS: opinion mining, online comments, corpus construction, COMFO
COMFO : Corpus Multilingue pour la Fouille d'Opinions
d5527143
In this paper, we describe the pronominal anaphora resolution module of Lucy, a portable English understanding system. The design of this module was motivated by the observation that, although there exist many theories of anaphora resolution, no one of these theories is complete. Thus we have implemented a blackboard-like architecture in which individual partial theories can be encoded as separate modules that can interact to propose candidate antecedents and to evaluate each other's proposals.
An Architecture for Anaphora Resolution
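A minimal Python sketch of the blackboard idea described in the abstract above, not Lucy's implementation: each partial theory is a function that can inspect the shared blackboard and contributes scores for candidate antecedents, and the candidate with the highest combined score is selected. The modules, scores, and tiny lexicon below are invented for illustration.

# Schematic blackboard-style resolver: partial theories propose and weigh
# candidate antecedents; their contributions are accumulated on a shared
# blackboard and the best-scoring candidate wins.

def recency(pronoun, candidates, blackboard):
    # closer antecedents (smaller clause distance) get higher scores
    return {c: 1.0 / (1 + d) for c, d in candidates.items()}

def gender_agreement(pronoun, candidates, blackboard):
    female = {"Mary"}                      # toy gender lexicon
    return {c: (1.0 if (pronoun == "she") == (c in female) else -2.0)
            for c in candidates}

def resolve(pronoun, candidates, modules):
    blackboard = {c: 0.0 for c in candidates}
    for module in modules:                 # each module may read the current scores
        for cand, score in module(pronoun, candidates, blackboard).items():
            blackboard[cand] += score
    return max(blackboard, key=blackboard.get)

# candidate antecedents with their distance (in clauses) from the pronoun
candidates = {"Mary": 1, "John": 0}
print(resolve("she", candidates, [recency, gender_agreement]))   # -> Mary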
d9287023
This paper describes Japanese-English-Chinese aligned parallel treebank corpora of newspaper articles. They have been constructed by translating each sentence in the Penn Treebank and the Kyoto University text corpus into a corresponding natural sentence in a target language. Each sentence is translated so as to reflect its contextual information and is annotated with morphological and syntactic structures and phrasal alignment. This paper also describes the possible applications of the parallel corpus and proposes a new framework to aid in translation. In this framework, parallel translations whose source language sentence is similar to a given sentence can be semiautomatically generated. In this paper we show that the framework can be achieved by using our aligned parallel treebank corpus.
Multilingual Aligned Parallel Treebank Corpus Reflecting Contextual Information and Its Applications
d13038257
It is a well-known fact that the amount of content that needs to be translated and localized far exceeds the translation resources currently available. Automation in general, and Machine Translation (MT) in particular, is one of the key technologies that can help improve this situation. However, a tool that integrates all of the components needed for the localization process is still missing, and MT is still out of reach for most localization professionals. In this paper we present an online translation environment which empowers users with MT by enabling engines to be created from their data, without a need for technical knowledge or special hardware requirements and at low cost. Documents in a variety of formats can then be post-edited after being processed with their Translation Memories, MT engines and glossaries. We give an overview of the tool and present a case study of a project for a large games company, showing the applicability of our tool.
SmartMATE: An Online End-To-End MT Post-Editing Framework
d12069271
d16577840
This paper introduces a semantic theory DLPW, Dynamic Logic with Possible World, which extends Groenendijk's DPL and Cresswell's Indices Semantics. The semantics can interpret temporal and modal senses and anaphora. At present there are three main aspects in the semantic field: 1. Transformation of sentences or discourses into formulas in higher-order logic.
DYNAMIC LOGIC WITH POSSIBLE WORLD
d15139655
The paper describes a procedure for the automatic generation of a large full-form lexicon of English. We put emphasis on two statistical methods for lexicon extension and adjustment: one based on a letter-based HMM and one based on a detector of spelling variants and misspellings. The resulting resource, ColLex.EN, is evaluated with respect to two tasks: text categorization and lexical coverage, using the SUSANNE corpus and the Open ANC as examples.
Automatically Generating and Evaluating a Full-form Lexicon for English
d15399459
This paper proposes an approach to processing Japanese compound functional expressions by identifying them and analyzing their dependency relations through a machine learning technique. First, we formalize the task of identifying Japanese compound functional expressions in a text as a machine learning based chunking problem. Next, we apply dependency analysis based on the cascaded chunking model to the results of identifying compound functional expressions. The results of experimental evaluation show that the dependency analysis model achieves improvements when applied after identifying compound functional expressions, compared with the case where it is applied without identifying them.
Learning Dependency Relations of Japanese Compound Functional Expressions
d6009007
Within computational linguistics, the use of statistical pattern matching is generally restricted to speech processing. We have attempted to apply statistical techniques to discover a grammatical classification system from a corpus of 'raw' English text. A discovery procedure is simpler for a simpler
PATTERN RECOGNITION APPLIED TO THE ACQUISITION OF A GRAMMATICAL CLASSIFICATION SYSTEM FROM UNRESTRICTED ENGLISH TEXT
d167401
In this paper we perform a preliminary evaluation on how Semantic Web technologies such as RDF and OWL can be used to perform textual encoding. Among the potential advantages, we notice how RDF, given its conceptual graph structure, appears naturally suited to deal with overlapping hierarchies of annotations, something notoriously problematic using classic XML based markup. To conclude, we show how complex querying can be performed using slight modifications of already existing Semantic Web query tools.
A novel Textual Encoding paradigm based on Semantic Web tools and semantics
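To make the overlapping-hierarchies point in the abstract above concrete, here is a small rdflib sketch under an invented annotation vocabulary (it is not the encoding scheme evaluated in the paper): two standoff annotations with character offsets can overlap freely, and a SPARQL query retrieves the overlapping pairs.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Standoff text encoding in RDF: annotations are resources with start/end
# offsets, so overlapping spans pose no problem, unlike in a single XML tree.

EX = Namespace("http://example.org/anno#")
g = Graph()

def add_annotation(graph, anno_id, kind, start, end):
    a = EX[anno_id]
    graph.add((a, RDF.type, EX[kind]))
    graph.add((a, EX.start, Literal(start)))
    graph.add((a, EX.end, Literal(end)))

add_annotation(g, "a1", "Line", 0, 40)        # a verse line
add_annotation(g, "a2", "Sentence", 25, 70)   # a sentence crossing the line break

# SPARQL query for annotation pairs whose spans overlap (each pair is
# reported in both orders).
q = """
PREFIX ex: <http://example.org/anno#>
SELECT ?x ?y WHERE {
  ?x ex:start ?xs ; ex:end ?xe .
  ?y ex:start ?ys ; ex:end ?ye .
  FILTER (?x != ?y && ?xs < ?ye && ?ys < ?xe)
}
"""
for row in g.query(q):
    print(row.x, "overlaps", row.y)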
d184482787
This paper describes our system (Fermi) for Task 6: OffensEval: Identifying and Categorizing Offensive Language in Social Media of SemEval-2019. We participated in all the three sub-tasks within Task 6. We evaluate multiple sentence embeddings in conjunction with various supervised machine learning algorithms and evaluate the performance of simple yet effective embedding-ML combination algorithms. Our team (Fermi)'s model achieved an F1-score of 64.40%, 62.00% and 62.60% for sub-task A, B and C respectively on the official leaderboard. Our model for subtask C, which uses pretrained ELMo embeddings for transforming the input and uses SVM (RBF kernel) for training, scored third position on the official leaderboard. Through the paper we provide a detailed description of the approach, as well as the results obtained for the task.
Fermi at SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media using Sentence Embeddings
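A minimal sketch of the embedding-plus-classifier recipe described in the abstract above: precomputed sentence embeddings are fed to an SVM with an RBF kernel via scikit-learn. The random vectors stand in for real sentence embeddings such as ELMo, and the hyperparameters are illustrative, not the team's tuned values.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Fake "sentence embeddings" and labels; in the paper's setting these would be
# ELMo-based vectors for each tweet and the offensive/not-offensive labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))                 # 200 stand-in embeddings
y = rng.integers(0, 2, size=200)                # 0 = not offensive, 1 = offensive

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))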
d62579571
We present an overview of recent work in which eye movements are monitored as people follow spoken instructions to move objects or pictures in a visual workspace. Subjects naturally make saccadic eye-movements to objects that are closely time-locked to relevant information in the instruction. Thus the eye-movements provide a window into the rapid mental processes that underlie spoken language comprehension. We review studies of reference resolution, word recognition, and pragmatic effects on syntactic ambiguity resolution. Our studies show that people seek to establish reference with respect to their behavioral goals during the earliest moments of linguistic processing. Moreover, referentially relevant non-linguistic information immediately affects how the linguistic input is initially structured.
INVITED TALK Eye Movements and Spoken Language Comprehension
d6928130
In this paper, we address statistical machine translation of public conference talks. Modeling the style of this genre can be very challenging given the shortage of available in-domain training data. We investigate the use of a hybrid LM, where infrequent words are mapped into classes. Hybrid LMs are used to complement word-based LMs with statistics about the language style of the talks. Extensive experiments comparing different settings of the hybrid LM are reported on publicly available benchmarks based on TED talks, from Arabic to English and from English to French. The proposed models are shown to better exploit in-domain data than conventional word-based LMs for the target language modeling component of a phrase-based statistical machine translation system.
Cutting the Long Tail: Hybrid Language Models for Translation Style Adaptation
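The class-mapping step described in the abstract above can be sketched in a few lines of Python. The snippet below replaces infrequent words with a single placeholder class before LM training; this is a simplification (the paper maps words into classes rather than one generic token), and the threshold and example sentences are invented.

from collections import Counter

# Toy preprocessing for a "hybrid LM": words below a frequency threshold are
# replaced by a class token, so the LM is not spread thin over rare words.

def map_infrequent(sentences, min_count=2, cls="<RARE>"):
    counts = Counter(w for s in sentences for w in s)
    return [[w if counts[w] >= min_count else cls for w in s] for s in sentences]

talks = [["so", "today", "i", "want", "to", "talk", "about", "exoplanets"],
         ["today", "i", "want", "to", "show", "you", "something"]]
print(map_infrequent(talks))
# Infrequent words such as "exoplanets" come out as <RARE>; a standard n-gram
# LM trained on the mapped corpus can then be combined with a word-based LM.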
d3447736
In this paper we present an English grammar and style checker for non-native English speakers. The main characteristic of this checker is the use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesizes that a piece of text not found on the Web is probably badly written. The system also hypothesizes that the Web will provide examples of how the content of the text segment can be expressed in a grammatical and idiomatic way. So, after the checker warns the user about the odd character of a text segment, the Internet engine searches for contexts that will help the user decide whether or not to correct the segment. By means of the search engine, the checker also suggests expressions which are more frequent on the Web than the expression the writer actually wrote. Although the system is currently being developed for teachers of the Open University of Catalonia, the checker can also be useful for second-language learners, translators, and post-editors.
A Grammar and Style Checker Based on Internet Searches
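The frequency heuristic in the abstract above can be sketched as follows. No search engine is actually queried here: web_hits is a hypothetical stand-in for a hit count, and the threshold and phrase table are invented, but the control flow mirrors the idea of flagging rare segments and proposing more frequent alternatives.

# Hypothetical sketch of a web-frequency grammar check; web_hits simulates a
# search-engine hit count with a tiny hard-coded table.

def web_hits(phrase: str) -> int:
    # Placeholder: a real system would query a search engine here.
    fake_index = {"depends on the": 5_000_000, "depends of the": 40_000}
    return fake_index.get(phrase, 0)

def check_phrase(phrase, alternatives, threshold=100_000):
    count = web_hits(phrase)
    if count >= threshold:
        return f"'{phrase}' looks fine ({count} hits)"
    best = max(alternatives, key=web_hits)
    return f"'{phrase}' is rare ({count} hits); consider '{best}' ({web_hits(best)} hits)"

print(check_phrase("depends of the", ["depends on the"]))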
d11989149
In this paper, we describe a sentence-position-based summarizer that is built on a sentence position policy created from the evaluation testbed of recent summarization tasks at the Document Understanding Conferences (DUC). We show that the summarizer thus built is able to outperform most systems participating in task-focused summarization evaluations at Text Analysis Conferences (TAC) 2008. Our experiments also show that such a method performs better at producing short summaries (up to 100 words) than longer summaries. Further, we discuss the baselines traditionally used for summarization evaluation and suggest the revival of an old baseline to suit the current summarization task at TAC: the Update Summarization task.
Sentence Position revisited: A robust light-weight Update Summarization 'baseline' Algorithm
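Under the assumption that the learned position policy favours leading sentences (as is typical for newswire), a bare-bones Python version of such a baseline looks like the sketch below; the policy, budget, and example document are illustrative only.

# Minimal position-based summarizer: take sentences in document order until
# the word budget is reached.

def position_summary(sentences, max_words=100):
    summary, used = [], 0
    for sent in sentences:
        n = len(sent.split())
        if used + n > max_words:
            break
        summary.append(sent)
        used += n
    return " ".join(summary)

doc = ["A strong earthquake struck the region early on Monday.",
       "Officials said dozens of buildings were damaged.",
       "The area has a long history of seismic activity."]
print(position_summary(doc, max_words=20))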
d7966312
Many obstacles stand in the way of computer programs that could read and digest volumes of natural language text. The foremost of these difficulties is the quantity and variety of knowledge about language and about the world that seems to be a prerequisite for any substantial language understanding. In its most general form, the robust text processing problem remains insurmountable; yet practical applications of text processing are realizable through a combination of knowledge representation and language analysis strategies. This project note describes the GE NLToolset and its use in two text processing applications. In the first domain, the system selects and analyzes stories about corporate mergers and acquisitions as they come across a real-time news feed. In the second domain, the program uses naval operations messages to fill a 10-field template. In both cases, users can ask natural language questions about the contents of the texts, and the system responds with direct answers along with the original text. The GE NLToolset is a software foundation for text processing. The NLToolset derives from a research effort aimed at preserving the capabilities of natural language text processing across domains. The program achieves this transportability by using a core knowledge base and lexicon that customizes easily to new applications, along with a flexible text processing strategy tolerant of gaps in the program's knowledge base. Developed over the last four years, it runs in real time on a SUN workstation in Common Lisp under UNIX. It performs the following tasks: • The lexical analysis of the input character stream, including names, dates, numbers, and contractions. • The separation of the raw news feed into story structures, with separate headline, byline and dateline designations. • A topic determination for each story, indicating whether it is about a corporate merger. • The natural language analysis of each selected story using an integration of two interpretation strategies: "bottom-up" linguistic analysis and "top-down" conceptual interpretation. • The storage and retrieval of conceptual representations of the processed texts into and out of a knowledge base. The design of the NLToolset combines artificial intelligence (AI) methods, especially natural language processing, knowledge representation, and information retrieval techniques, with more robust but superficial methods, such as lexical analysis and word-based text search. This approach provides the broad functionality of AI systems without sacrificing robustness or processing speed. In fact, the system has a throughput for real text greater than any other text extraction system we have seen (e.g., [Sondheimer, 1986; Sundheim, 1990]), while providing knowledge-based capabilities such as producing answers to English questions and identifying key conceptual roles in the text (such as the suitor, target, and per-share price of a merger offer). The NLToolset consists of roughly 50,000 lines of Common Lisp code. It was developed entirely on SUN workstations. 1 Technical Overview The NLToolset's design provides each system component with access to a rich hand-coded knowledge base, but each component applies the knowledge selectively, avoiding the computation that a complete analysis of each text would require. The architecture of the system allows for levels of language analysis, from rough skimming to in-depth conceptual interpretation [Jacobs, 1987].
A custom-built 10,000 word-root lexicon and concept hierarchy provides a rich source of lexical information. Entries are separated by their senses, and contain special context clues to help in the sense-disambiguation process. A morphological analyzer contains semantics for about 75 affixes, and can automatically derive the meanings of inflected entries not separately represented in the lexicon. Domain-specific words and phrases are added to the lexicon by connecting them to higher-level concepts and categories present in the system's core lexicon and concept hierarchy. This is one aspect of the NLToolset that makes it highly portable from one domain to another. The language analysis strategy used in the NLToolset combines full syntactic (bottom-up) parsing and conceptual expectation-driven (top-down) parsing. Four knowledge sources, including syntactic and semantic information and domain knowledge, interact in a flexible manner. This integration produces a more robust semantic analyzer that deals gracefully with gaps in lexical and syntactic knowledge, trans-
The GE NLToolset: A Software Foundation for Intelligent Text Processing
d33557569
A large portion of Chinese compound words are formed from verb-complement structures, and their word segmentation in corpora is often inconsistent or erroneous. Taking "V 到" (V-dao), the structure with the most complex segmentation behaviour among verb-complement constructions, as a case study, this paper investigates its word segmentation and sense distinction. Based on whether 到 carries the sense of "arriving", we manually annotated instances using seven principles and a simple criterion, namely the object type {location, time, state}, and evaluated segmentation accuracy. The results show that the segmentation accuracy for 吃到 can be raised from the current 70.6% to 94.5%. For the V 到 structure as a whole, on 500 randomly selected example sentences for 9 example words, segmentation accuracy reaches 93.4%, and the accuracy of sense composition based on this segmentation reaches 86%. This shows that the difficult segmentation of complex verb-complement structures can be improved through simple manual rules. In future work we plan to convert the manual rules into an automatic procedure and verify its correctness. Keywords: V 到, word segmentation, Chinese verb-complement structure, semantic representation. 1. Introduction. A large portion of Chinese compound words are formed from verb-complement structures, e.g., 看到, 想來. For the sake of segmentation efficiency, systems usually list these high-frequency verb-complement structures as compound-word lexicon entries. However, in certain contexts they need to be segmented, e.g., 看 到 傻了, 想 來沾點邊; that is, handling verb-complement compounds purely by lexical listing makes segmentation errors hard to avoid. In fact, segmentation errors and inconsistencies for verb-complement structures are quite common in corpora, as in (1)
「V 到」結構的合分詞及語意區分 Word segmentation and sense representation for V-dao structure in Chinese
d233364956
d16289611
The availability of semantically tagged corpora is becoming a very important and urgent need for training and evaluation in a large number of applications; such corpora are also the natural application and accompaniment of semantic lexicons, for which they constitute both a useful testbed for evaluating their adequacy and a repository of corpus examples for the attested senses. It is therefore essential that sound criteria are defined for their construction and that a specific methodology is set up for the treatment of the various semantic phenomena relevant to this level of description. In this paper we present some observations and results concerning an experiment in manual lexical-semantic tagging of a small Italian corpus performed within the framework of the ELSNET project. The ELSNET experimental project has to be considered a feasibility study. It is part of a preparatory and training phase, started with the Romanseval/Senseval experiment (Calzolari et al., 1998), and ending with the lexical-semantic annotation of larger quantities of semantically annotated text, such as the syntactic-semantic Treebank which is going to be annotated within an Italian national project (SI-TAL). Indeed, the results of the ELSNET experiment have been of utmost importance for the definition of the technical guidelines for the lexical-semantic level of description of the Treebank.
An Experiment of Lexical-Semantic Tagging of an Italian Corpus
d3844866
The study of automatically recognizing usages of modern Chinese adverbs is an important part of the NLP-oriented research on the Chinese Functional Words Knowledge Base. To address the problems of the existing rule-based method for recognizing adverb usages, building on previous work, this paper studies automatic recognition of common Chinese adverbs' usages using statistical methods. Three statistical models, viz. CRF, ME, and SVM, are used to label several common Chinese adverbs' usages on the segmented and part-of-speech tagged corpus of People's Daily (Jan 1998). The experimental results show that the statistics-based method is effective in automatically recognizing several common adverbs' usages and has good application prospects.
Studies on Automatic Recognition of Common Chinese Adverb's Usages Based on Statistical Methods
d273749
The GIVE Challenge is a recent shared task in which NLG systems are evaluated over the Internet. In this paper, we validate this novel NLG evaluation methodology by comparing the Internet-based results with results we collected in a lab experiment. We find that the results delivered by both methods are consistent, but the Internet-based approach offers the statistical power necessary for more fine-grained evaluations and is cheaper to carry out.
Validating the web-based evaluation of NLG systems
d17414711
In this paper we explore the computational modelling of compositionality in distributional models of semantics. In particular, we model the semantic composition of pairs of adjacent English Adjectives and Nouns from the British National Corpus. We build a vector-based semantic space from a lemmatised version of the BNC, where the most frequent A-N lemma pairs are treated as single tokens. We then extrapolate three different models of compositionality: a simple additive model, a pointwise-multiplicative model and a Partial Least Squares Regression (PLSR) model. We propose two evaluation methods for the implemented models. Our study leads to the conclusion that regression-based models of compositionality generally out-perform additive and multiplicative approaches, and also show a number of advantages that make them very promising for future research.
A Regression Model of Adjective-Noun Compositionality in Distributional Semantics
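The additive and pointwise-multiplicative composition functions compared in the abstract above reduce to one line each; the sketch below shows them on invented 5-dimensional vectors (the PLSR model, which needs training pairs, is omitted).

import numpy as np

# Toy co-occurrence vectors; real vectors would come from the BNC-derived
# semantic space described in the paper.
adj  = np.array([0.2, 1.5, 0.0, 0.7, 0.3])   # e.g. "heavy"
noun = np.array([1.1, 0.4, 0.9, 0.2, 0.0])   # e.g. "rain"

additive       = adj + noun                   # p = u + v
multiplicative = adj * noun                   # p = u * v (pointwise)

print("additive:      ", additive)
print("multiplicative:", multiplicative)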
d243839750
We address the compositionality challenge presented by the SCAN benchmark. Using data augmentation and a modification of the standard seq2seq architecture with attention, we achieve SOTA results on all the relevant tasks from the benchmark, showing the models can generalize to words used in unseen contexts. We propose an extension of the benchmark by a harder task, which cannot be solved by the proposed method.
Solving SCAN Tasks with Data Augmentation and Input Embeddings
d2118369
We describe a practical parser for unrestricted dependencies. The parser creates links between words and names the links according to their syntactic functions. We first describe the older Constraint Grammar parser where many of the ideas come from. Then we proceed to describe the central ideas of our new parser. Finally, the parser is evaluated.
A non-projective dependency parser
d226239024
d2346992
d18869458
Recent work on information presentation in dialogue systems combines user modelling (UM) and stepwise refinement through clustering and summarisation (SR) in the UMSR approach. An evaluation in which participants rated dialogue transcripts showed that UMSR presents complex trade-offs understandably, provides users with a good overview of their options, and increases users' confidence that all relevant options have been presented (Demberg and Moore, 2006). In this paper, we evaluate the effectiveness of the UMSR approach in a more realistic setting, by incorporating this information presentation technique into a full end-to-end dialogue system in the city information domain, and comparing it with the traditional approach of presenting information sequentially. Our results suggest that despite complications associated with a real dialogue system setting, the UMSR model retains its advantages.
Evaluating the Effectiveness of Information Presentation in a Full End-To-End Dialogue System