Columns: _id (string, length 4-10), text (string, length 0-18.4k), title (string, length 0-8.56k)
d53299978
Recurrent Neural Networks (RNNs) are theoretically Turing-complete and have established themselves as a dominant model for language processing. Yet uncertainty remains regarding their language learning capabilities. In this paper, we empirically evaluate the inductive learning capabilities of Long Short-Term Memory networks, a popular extension of simple RNNs, on simple formal languages, in particular a^n b^n, a^n b^n c^n, and a^n b^n c^n d^n. We investigate the influence of various aspects of learning, such as training data regimes and model capacity, on generalization to unobserved samples. We find striking differences in model performance under different training settings and highlight the need for careful analysis and assessment when making claims about the learning capabilities of neural network models.
On Evaluating the Generalization of LSTM Models in Formal Languages
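As a concrete illustration of the kind of experiment this abstract describes, here is a minimal sketch (not the authors' code): a character-level LSTM trained to predict the next symbol of a^n b^n strings and then probed on a longer, unobserved n. The hidden size, the training range n <= 50, and the test point n = 100 are illustrative assumptions.

```python
# Illustrative sketch only; hyperparameters and n ranges are assumptions,
# not the paper's settings.
import torch
import torch.nn as nn

VOCAB = {'a': 0, 'b': 1, '$': 2}          # '$' marks end of string

def sample(n):
    """Input/target id tensors for a^n b^n $, framed as next-char prediction."""
    ids = [VOCAB[c] for c in 'a' * n + 'b' * n + '$']
    return torch.tensor(ids[:-1]), torch.tensor(ids[1:])

class CounterLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(3, 8)
        self.lstm = nn.LSTM(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 3)

    def forward(self, x):                  # x: (seq_len,)
        h, _ = self.lstm(self.emb(x.unsqueeze(0)))
        return self.out(h).squeeze(0)      # (seq_len, 3) logits

model = CounterLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):                   # train on n drawn from [1, 50]
    x, y = sample(torch.randint(1, 51, (1,)).item())
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

with torch.no_grad():                      # probe generalization at unseen n
    x, y = sample(100)
    acc = (model(x).argmax(-1) == y).float().mean().item()
    print(f'next-char accuracy at n=100: {acc:.2%}')
```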
d220059596
The earliest models for discontinuous constituency parsing used mildly context-sensitive grammars, but the fashion has changed in recent years to grammar-less transition-based parsers that use strong neural probabilistic models to greedily predict transitions. We argue that grammar-based approaches still have something to contribute on top of what is offered by transition-based parsers. Concretely, by using a grammar formalism to restrict the space of possible trees, we can use dynamic programming parsing algorithms for exact search for the most probable tree.
Span-Based LCFRS-2 Parsing
d13290232
This paper discusses the adaptation of the Stanford typed dependency model (de Marneffe and Manning 2008), initially designed for English, to the requirements of typologically different languages from the viewpoint of practical parsing. We argue for a framework of functional dependency grammar that is based on the idea of parallelism between syntax and semantics. There is a twofold challenge: (1) specifying the annotation scheme in order to deal with the morphological and syntactic peculiarities of each language and (2) maintaining crosslinguistically consistent annotations to ensure homogeneous analysis for similar linguistic phenomena. We applied a number of modifications to the original Stanford scheme in an attempt to capture the language-specific grammatical features present in heterogeneous CoNLL-encoded data sets for German, Dutch, French, Spanish, Brazilian Portuguese, Russian, Polish, Indonesian, and Traditional Chinese. From a multilingual perspective, we discuss features such as subject and object verb complements, comparative phrases, expletives, reduplication, copula elision, clitics and adpositions.
Towards Cross-language Application of Dependency Grammar
d15359717
There are obvious reasons for trying to automate the production of multilingual documentation, especially for routine subject-matter in restricted domains (e.g. technical instructions). Two approaches have been adopted: Machine Translation (MT) of a source text, and Multilingual Natural Language Generation (M-NLG) from a knowledge base. For MT, information extraction is a major difficulty, since the meaning must be derived by analysis of the source text; M-NLG avoids this difficulty but seems at first sight to require an expensive phase of knowledge engineering in order to encode the meaning. We introduce here a new technique which employs M-NLG during the phase of knowledge editing. A 'feedback text', generated from a possibly incomplete knowledge base, describes in natural language the knowledge encoded so far, and the options for extending it. This method allows anyone speaking one of the supported languages to produce texts in all of them, requiring from the author only expertise in the subject-matter, not expertise in knowledge engineering.
Multilingual authoring using feedback texts
d226307957
d6341459
Event schema induction is the task of learning high-level representations of complex events (e.g., a bombing) and their entity roles (e.g., perpetrator and victim) from unlabeled text. Event schemas have important connections to early NLP research on frames and scripts, as well as modern applications like template extraction. Recent research suggests event schemas can be learned from raw text. Inspired by a pipelined learner based on named entity coreference, this paper presents the first generative model for schema induction that integrates coreference chains into learning. Our generative model is conceptually simpler than the pipelined approach and requires far less training data. It also provides an interesting contrast with a recent HMM-based model. We evaluate on a common dataset for template schema extraction. Our generative model matches the pipeline's performance, and outperforms the HMM by 7 F1 points (20%).
Event Schema Induction with a Probabilistic Entity-Driven Model
d243896115
d250390912
Incorporating persona information allows diverse and engaging responses in dialogue response generation. Unfortunately, prior work has primarily focused on self personas and has overlooked the value of partner personas. Moreover, in practical applications, gold partner personas are often unavailable. This paper attempts to tackle these issues by offering a novel framework that leverages automatic partner persona generation to enhance the succeeding dialogue response generation. Our framework employs reinforcement learning with a dedicated critic network for reward judgement. Experimental results from automatic and human evaluations indicate that our framework is capable of generating relevant, interesting, coherent and informative partner personas, even compared to the ground truth partner personas. This enhances the succeeding dialogue response generation, which surpasses our competitive baselines that condition on the ground truth partner personas.
Partner Personas Generation for Dialogue Response Generation
d13356948
d219304069
d11156948
In this paper, we show how a computational semantic approach is well suited to addressing the translation of highly isolating languages. We use Chinese as an example and present the overall process of translation from Chinese to English, within the framework of Knowledge-Based Machine Translation (KBMT), using an overt semantics while de-emphasizing syntax. We focus here on two particular tasks: Word Sense Disambiguation (WSD) and compound translation.
Long Time No See: Overt Semantics for Machine Translation
d11746938
The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics. One unexamined assumption, however, is GEC evaluation's reliance on error-coded corpora, which contain specific labeled corrections. We examine current practices and show that GEC's reliance on such corpora unnaturally constrains annotation and automatic evaluation, resulting in (a) sentences that do not sound acceptable to native speakers and (b) system rankings that do not correlate with human judgments. In light of this, we propose an alternate approach that jettisons costly error coding in favor of unannotated, whole-sentence rewrites. We compare the performance of existing metrics over different gold-standard annotations, and show that automatic evaluation with our new annotation scheme has very strong correlation with expert rankings (ρ = 0.82). As a result, we advocate for a fundamental and necessary shift in the goal of GEC, from correcting small, labeled error types to producing text that has native fluency.
Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality
d219306558
d199022752
Pairs of sentences, phrases, or other text pieces can hold semantic relations such as paraphrasing, textual entailment, contradiction, specificity, and semantic similarity. These relations are usually studied in isolation, and no dataset exists where they can be compared empirically. Here we present a corpus of 520 sentence pairs annotated with these relations, together with an analysis of the annotations. We measure the annotation reliability of each individual relation and we examine their interactions and correlations. Among the unexpected results revealed by our analysis is that the traditionally assumed direct relationship between paraphrasing and bi-directional entailment does not hold in our data.
Annotating and analyzing the interactions between meaning relations
d17192110
In this paper we describe searchable translation memories, which allow translators to search their archives for possible translations of phrases. We describe how statistical machine translation can be used to align subsentential units in a translation memory, and rank them by their probability. We detail a data structure that allows for memory-efficient storage of the index. We evaluate the accuracy of translations retrieved from a searchable translation memory built from 50,000 sentence pairs, and find a precision of 86.6% for the top ranked translations.
A Compact Data Structure for Searchable Translation Memories
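A rough sketch of the idea, not the paper's actual compact structure: aligned subsentential units are indexed so that a source phrase can be looked up and its candidate translations returned ranked. Raw relative frequencies below stand in for the SMT-derived probabilities.

```python
# Toy index; the paper's structure is more memory-efficient and ranks by
# SMT alignment probabilities rather than raw relative frequencies.
from collections import Counter, defaultdict

class PhraseIndex:
    def __init__(self):
        self._table = defaultdict(Counter)   # source phrase -> target counts

    def add(self, src_phrase, tgt_phrase):
        """Record one aligned subsentential unit from the translation memory."""
        self._table[src_phrase][tgt_phrase] += 1

    def lookup(self, src_phrase, k=5):
        """Top-k candidate translations, ranked by relative frequency."""
        counts = self._table.get(src_phrase)
        if not counts:
            return []
        total = sum(counts.values())
        return [(tgt, n / total) for tgt, n in counts.most_common(k)]

index = PhraseIndex()
index.add('guten Morgen', 'good morning')
index.add('guten Morgen', 'good morning')
index.add('guten Morgen', 'morning')
print(index.lookup('guten Morgen'))   # ('good morning', ~0.67) ranked first
```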
d16418952
Building the interface between experts and linguists in the detection and characterisation of neology in the field of the neurosciences
d218974493
d251395478
We describe an open-source dataset providing metadata for about 2,800 language varieties used in the world today. Specifically, the dataset provides the attested writing system(s) for each of these 2,800+ varieties, as well as an estimated speaker count for each variety. This dataset was developed through internal research and has been used for analyses around language technologies. This is the largest publicly-available, machine-readable resource with writing system and speaker information for the world's languages. We analyze the distribution of languages and writing systems in our data and compare it to their representation in current NLP. We hope the availability of this data will catalyze research in under-represented languages.
Writing System and Speaker Metadata for 2,800+ Language Varieties
d226283871
d567912
Complex language models cannot be easily integrated into the first-pass decoding of a Statistical Machine Translation system: the decoder queries the LM a very large number of times, and the search process in the decoding builds the hypotheses incrementally and cannot make use of LMs that analyze the whole sentence. We present in this paper the Language Computer's system for WMT06, which employs LM-powered reranking on hypotheses generated by phrase-based SMT systems.
Language Models and Reranking for Machine Translation
d173648
The purpose of this paper is to present the development of a morphosyntactic disambiguation system (or part-of-speech tagging system) intended to be used as a component of a Text-to-Speech (TTS) system for European Portuguese. In the development of the tagger, we compared two approaches: a probabilistic approach and a hybrid approach. Besides comparing these two approaches, this paper considers the effects of the different classes of errors on the performance of the complete TTS system.
Morphosyntactic Disambiguation for TTS Systems
d6304246
In this paper, we introduce Vaidya, a spoken dialog system developed as part of the ITRA project. The system is capable of providing an approximate diagnosis by accepting symptoms as free-form speech in real time on both laptop and hand-held devices. The system focuses on challenges in speech recognition specific to Indian languages and on capturing the intent of the user. Another challenge is to create models which are memory- and CPU-efficient for hand-held devices. We describe our progress, experiences and approaches in building a system that can handle English as the input speech. The system is evaluated using a subjective statistical measure (Fleiss' kappa) to assess its usability.
Vaidya: A Spoken Dialog System for Health Domain
d1345080
We describe a method of word segmentation in Japanese in which a broad-coverage parser selects the best word sequence while producing a syntactic analysis. This technique is substantially different from traditional statistics- or heuristics-based models, which attempt to select the best word sequence before handing it to the syntactic component. By breaking up the task of finding the best word sequence into the identification of words (in the word-breaking component) and the selection of the best sequence (a by-product of parsing), we have been able to simplify the task of each component and achieve high accuracy over a wide variety of data. Word-breaking accuracy of our system is currently around 97-98%.
Using a Broad-Coverage Parser for Word-Breaking in Japanese
d53082590
Experimental performance on the task of relation classification has generally improved using deep neural network architectures. One major drawback of reported studies is that individual models have been evaluated on a very narrow range of datasets, raising questions about the adaptability of the architectures, while making comparisons between approaches difficult. In this work, we present a systematic large-scale analysis of neural relation classification architectures on six benchmark datasets with widely varying characteristics. We propose a novel multi-channel LSTM model combined with a CNN that takes advantage of all currently popular linguistic and architectural features. Our 'Man for All Seasons' approach achieves state-of-the-art performance on two datasets. More importantly, in our view, the model allowed us to obtain direct insights into the continued challenges faced by neural language models on this task. Example data and source code are available at: https://github.com/aidantee/MASS.
Large-scale Exploration of Neural Relation Classification Architectures
d16381878
This paper investigates how to improve performance on information extraction tasks by constraining and sequencing CRF-based approaches. We consider two different relation extraction tasks, both from the medical literature: dependence relations and probability statements. We explore whether adding constraints can lead to an improvement over standard CRF decoding. Results on our relation extraction tasks are promising, showing significant increases in performance from both (i) adding constraints to post-process the output of a baseline CRF, which captures "domain knowledge", and (ii) further allowing flexibility in the application of those constraints by leveraging a binary classifier as a pre-processing step.
Named Entity Recognition in the Medical Domain with Constrained CRF Models
d13860492
In this paper we present a new approach for obtaining the terminology of a given domain using the category and page structures of Wikipedia in a language-independent way. The idea is to take advantage of the Wikipedia category graph, starting with a top category that we identify with the name of the domain. After obtaining the full set of categories belonging to the selected domain, the collection of corresponding pages is extracted, using some constraints. To reduce noise, a bootstrapping approach involving several iterations is used. At each iteration, less reliable pages, according to the balance between on-domain and off-domain categories of the page, are removed, as well as less reliable categories. The resulting set of pages and categories is selected as the initial domain term vocabulary. This approach has been applied to three broad-coverage domains (astronomy, chemistry and medicine) and two languages (English and Spanish), showing promising performance. The resulting set of terms has been evaluated using as reference those terms occurring in WordNet (using Magnini's domain codes) and those appearing in SNOMED-CT (a reference resource for the medical domain available for Spanish).
Finding Domain Terms using Wikipedia
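The category-graph bootstrapping loop might be sketched as follows; `category_graph`, `page_categories`, the iteration count, and the 0.5 reliability threshold are all hypothetical stand-ins for the paper's actual Wikipedia-derived data and settings.

```python
# Schematic version of the bootstrapping loop, under assumed data structures.
from collections import deque

def collect_domain_terms(top_category, category_graph, page_categories,
                         iterations=3, threshold=0.5):
    """category_graph: category -> subcategories; page_categories: page -> set of categories."""
    # 1. Gather all categories reachable from the domain's top category.
    domain_cats, queue = {top_category}, deque([top_category])
    while queue:
        for sub in category_graph.get(queue.popleft(), ()):
            if sub not in domain_cats:
                domain_cats.add(sub)
                queue.append(sub)
    # 2. Bootstrapping: repeatedly drop pages whose categories are mostly off-domain.
    pages = {p for p, cats in page_categories.items() if domain_cats & cats}
    for _ in range(iterations):
        pages = {p for p in pages
                 if len(domain_cats & page_categories[p]) / len(page_categories[p])
                 >= threshold}
        # Keep only categories still supported by surviving pages.
        domain_cats &= (set().union(*(page_categories[p] for p in pages))
                        if pages else set())
    return pages, domain_cats

graph = {'Medicine': ['Diseases'], 'Diseases': []}
page_cats = {'Influenza': {'Diseases'}, 'Pop music': {'Music'}}
print(collect_domain_terms('Medicine', graph, page_cats))
```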
d2360421
We present two systems created for SemEval-2016 Task 11: Complex Word Identification. Our two systems, a regression tree and a decision tree, were trained with a word's unigram and lemma word counts, average age of acquisition, and a measure of concreteness. The systems ranked 5th and 6th, respectively, on the test set by G-score (the harmonic mean between accuracy and recall). With the regression tree's predictions earning a G-score of 0.766, and the decision tree's earning 0.765, the two systems scored within 1 percent of the best-performing system in the task.
HMC at SemEval-2016 Task 11: Identifying Complex Words Using Depth-limited Decision Trees
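A hedged sketch of the described setup: a depth-limited decision tree over the four stated features, scored with the G-score defined in the abstract. The toy feature values and the depth limit are invented for illustration.

```python
# Sketch only; feature values and max_depth are assumptions, and the real
# system is trained and evaluated on the SemEval-2016 Task 11 data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score

# Per word: [unigram count, lemma count, age of acquisition, concreteness]
X_train = np.array([[9.2, 9.5, 4.1, 4.8], [3.1, 3.4, 11.7, 2.0],
                    [8.7, 9.0, 5.0, 4.1], [2.5, 2.9, 13.2, 1.8]])
y_train = np.array([0, 1, 0, 1])           # 1 = complex word

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

X_test, y_test = X_train, y_train          # placeholder for the real test split
pred = clf.predict(X_test)
acc = accuracy_score(y_test, pred)
rec = recall_score(y_test, pred)
g_score = 2 * acc * rec / (acc + rec)      # harmonic mean of accuracy and recall
print(f'G-score: {g_score:.3f}')
```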
d219301277
d62986
This work discusses the evaluation of baseline algorithms for Web search results clustering. An analysis is performed over frequently used baseline algorithms and standard datasets. Our work shows that competitive results can be obtained by either fine-tuning or performing cascade clustering over well-known algorithms. In particular, the latter strategy can lead to a scalable and real-world solution, achieving results comparable to recent text-based state-of-the-art algorithms.
Easy Web Search Results Clustering: When Baselines Can Reach State-of-the-Art Algorithms
d6037458
Referring Expression Generation (REG) is the task that deals with references to entities appearing in a spoken or written discourse. If these referents are organized in terms of a taxonomy, there are two problems when establishing a reference that would distinguish an intended referent from its possible distractors. The first is the choice of the set of possible distractors, or contrast set, in the given situation. The second is to identify at what level of the taxonomy to phrase the reference so that it unambiguously picks out only the intended referent, leaving all possible distractors in different branches of the taxonomy. We discuss the use of ontologies to deal with the REG task, paying special attention to the choice of the contrast set and to the use of the information in the ontology to select the most appropriate type to be used for the referent.
Degree of Abstraction in Referring Expression Generation and its Relation with the Construction of the Contrast Set
d256461363
Commonsense knowledge graphs (CKGs) are increasingly applied in various natural language processing tasks. However, most existing CKGs are limited to English, which hinders related research in non-English languages. Meanwhile, directly generating commonsense knowledge from pretrained language models has recently received attention, yet it has not been explored in non-English languages. In this paper, we propose a large-scale Chinese CKG generated from multilingual PLMs, named CN-AutoMIC, aiming to fill the research gap of non-English CKGs. To improve efficiency, we propose a generate-by-category strategy to reduce invalid generation. To ensure filtering quality, we develop cascaded filters to discard low-quality results. To further increase diversity and density, we introduce a bootstrapping iteration process to reuse generated results. Finally, we conduct detailed analyses of CN-AutoMIC from different aspects. Empirical results show the proposed CKG has high quality and diversity, surpassing the direct translation version of similar English CKGs. We also find some interesting deficiency patterns and differences between relations, which reveal pending problems in commonsense knowledge generation. We share the resources and related models for further study.
d251402047
The disambiguation of causative-passive homonymy (CPH) is potentially tricky for machines, as the causative and the passive are not distinguished by the sentences' syntactic structure. By transforming CPH disambiguation to a challenging natural language inference (NLI) task, we present the first Chinese Adversarial NLI challenge set (CANLI). We show that the pretrained transformer model RoBERTa, fine-tuned on an existing large-scale Chinese NLI benchmark dataset, performs poorly on CANLI. We also employ Word Sense Disambiguation as a probing task to investigate to what extent the CPH feature is captured in the model's internal representation. We find that the model's performance on CANLI does not correspond to its internal representation of CPH, which is the crucial linguistic ability central to the CANLI dataset. CANLI is available on Hugging Face Datasets (Lhoest et al., 2021) at https://huggingface.co/datasets/sxu/CANLI
The Chinese Causative-Passive Homonymy Disambiguation: an Adversarial Dataset for NLI and a Probing Task
d42836852
d219304675
In this paper, we propose visualizing results of a corpus-based study on text complexity using radar charts. We argue that the added value of this type of visualisation is the polygonal shape that provides an intuitive grasp of text complexity similarities across the registers of a corpus. The results that we visualize come from a study where we explored whether it is possible to automatically single out different facets of text complexity across the registers of a Swedish corpus. To this end, we used factor analysis as applied in Biber's Multi-Dimensional Analysis framework. The visualization of text complexity facets with radar charts indicates that there is correspondence between linguistic similarity and similarity of shape across registers.
Visualizing Facets of Text Complexity across Registers
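A toy version of such a radar chart, e.g. with matplotlib; the facet names and per-register values below are invented, not the study's factor-analysis dimensions.

```python
# Invented facets/values; only the chart construction itself is illustrated.
import numpy as np
import matplotlib.pyplot as plt

facets = ['word length', 'sentence length', 'lexical variation',
          'noun density', 'subordination']
registers = {'news': [0.7, 0.6, 0.8, 0.7, 0.5],
             'fiction': [0.4, 0.5, 0.6, 0.3, 0.6]}

angles = np.linspace(0, 2 * np.pi, len(facets), endpoint=False).tolist()
angles += angles[:1]                       # close the polygon

ax = plt.subplot(polar=True)
for name, values in registers.items():
    vals = values + values[:1]
    ax.plot(angles, vals, label=name)      # polygon outline per register
    ax.fill(angles, vals, alpha=0.1)       # shaded shape aids comparison
ax.set_xticks(angles[:-1])
ax.set_xticklabels(facets)
ax.legend()
plt.savefig('complexity_radar.png')
```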
d158297
In this study, we address the problem of extracting relations between entities from Wikipedia's English articles. Our proposed method first anchors the appearance of entities in Wikipedia's articles, using neither a named entity recognizer (NER) nor a coreference resolution tool. It then classifies the relationships between entity pairs using an SVM with features extracted from the web structure and subtrees mined from the syntactic structure of the text. We evaluate our method on manually annotated data from actual Wikipedia articles.
Subtree Mining for Relation Extraction from Wikipedia
d2382276
Parse-tree paths are commonly used to incorporate information from syntactic parses into NLP systems. These systems typically treat the paths as atomic (or nearly atomic) features; these features are quite sparse due to the immense variety of syntactic expression. In this paper, we propose a general method for learning how to iteratively simplify a sentence, thus decomposing complicated syntax into small, easy-to-process pieces. Our method applies a series of hand-written transformation rules corresponding to basic syntactic patterns; for example, one rule "depassivizes" a sentence. The model is parameterized by learned weights specifying preferences for some rules over others. After applying all possible transformations to a sentence, we are left with a set of candidate simplified sentences. We apply our simplification system to semantic role labeling (SRL). As we do not have labeled examples of correct simplifications, we use labeled training data for the SRL task to jointly learn both the weights of the simplification model and of an SRL model, treating the simplification as a hidden variable. By extracting and labeling simplified sentences, this combined simplification/SRL system better generalizes across syntactic variation. It achieves a statistically significant 1.2% F1 measure increase over a strong baseline on the CoNLL-2005 SRL task, attaining near-state-of-the-art performance.
Sentence Simplification for Semantic Role Labeling
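Illustration only, not the paper's rule set: candidate simplifications can be produced by applying hand-written transformation rules and scored with learned per-rule weights. The single stub rule and its weight here are made up.

```python
# Stub rule and weight are hypothetical; the real system has many rules and
# learns the weights jointly with an SRL model.
def depassivize(sent):
    """Stub: rewrites 'X was kicked by Y.' as 'Y kicked X.' for this one verb."""
    if ' was kicked by ' in sent:
        obj, subj = sent.rstrip('.').split(' was kicked by ')
        return f'{subj} kicked {obj}.'
    return None

RULES = [('depassivize', depassivize)]
WEIGHTS = {'depassivize': 1.3}            # learned preference for this rule

def candidates(sent):
    """Apply every applicable rule; score each candidate by its rule's weight."""
    out = [(sent, 0.0)]                    # unchanged sentence is a candidate too
    for name, rule in RULES:
        simplified = rule(sent)
        if simplified is not None:
            out.append((simplified, WEIGHTS[name]))
    return sorted(out, key=lambda p: -p[1])

print(candidates('the ball was kicked by the goalie.'))
```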
d220249687
This paper describes our study on using multilingual BERT embeddings and new neural models to improve sequence tagging tasks for the Vietnamese language. We propose new model architectures and evaluate them extensively on two named entity recognition datasets of VLSP 2016 and VLSP 2018, and on two part-of-speech tagging datasets of VLSP 2010 and VLSP 2013. Our proposed models outperform existing methods and achieve new state-of-the-art results. In particular, we have pushed the accuracy of part-of-speech tagging to 95.40% on the VLSP 2010 corpus and to 96.77% on the VLSP 2013 corpus, and the F1 score of named entity recognition to 94.07% on the VLSP 2016 corpus and to 90.31% on the VLSP 2018 corpus. Our code and pre-trained models viBERT and vELECTRA are released as open source to facilitate adoption and further research. In the denoising auto-encoder approach, a small subset of the tokens of the unlabelled input sequence, typically 15%, is selected; these tokens are masked (e.g., BERT (Devlin et al., 2019)) or attended (e.g., XLNet (Yang et al., 2019)); and the network is trained to recover the original input. The networks are mostly transformer-based models which learn bidirectional representations. The main disadvantage of these models is that they often require a substantial compute cost, because only 15% of the tokens per example are learned from, while a very large corpus is usually required for the pre-trained models to be effective. In the replaced-token-detection approach, the model learns to distinguish real input tokens from plausible but synthetically generated replacements (e.g., ELECTRA (Clark et al., 2020)). Instead of masking, this method corrupts the input by replacing some tokens with samples from a proposal distribution. The network is pre-trained as a discriminator that predicts, for every token, whether it is an original or a replacement. The main advantage of this method is that the model can learn from all input tokens instead of just the small masked-out subset; it is therefore much more efficient, requiring less than 1/4 of the compute cost of RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019).
Improving Sequence Tagging for Vietnamese Text using Transformer-based Neural Models
d16185361
In recent years, large repositories of structured knowledge (DBpedia, Freebase, YAGO) have become a valuable resource for language technologies, especially for the automatic aggregation of knowledge from textual data. One essential component of language technologies which leverage such knowledge bases is the linking of words or phrases in specific text documents with elements from the knowledge base (KB). We call this semantic annotation. At the same time, initiatives like Wikidata try to make those knowledge bases less language-dependent in order to allow cross-lingual or language-independent knowledge access. This poses a new challenge to semantic annotation tools, which typically are language-dependent and link documents in one language to a structured knowledge base grounded in the same language. Ultimately, the goal is to construct cross-lingual semantic annotation tools that can link words or phrases in one language to a structured knowledge base in any other language or to a language-independent representation. To support this line of research we developed what we believe could serve as a gold standard Resource for Evaluating Cross-lingual Semantic Annotation (RECSA). We compiled a hand-annotated parallel corpus of 300 news articles in three languages with cross-lingual semantic groundings to the English Wikipedia and DBpedia. We hope that this new language resource, which is freely available, will help to establish a standard test set and methodology to comparatively evaluate cross-lingual semantic annotation technologies.
RECSA: Resource for Evaluating Cross-lingual Semantic Annotation
d17719483
We describe our ongoing work on an application of XML/XSL technology to a dictionary, from whose source representation various views for the human reader as well as for automatic text generation and understanding are derived. Our case study is a dictionary of discourse markers, the words (often, but not always, conjunctions) that signal the presence of a discourse relation between adjacent spans of text.
XML/XSL in the Dictionary: The Case of Discourse Markers
d7037231
Children can determine the meaning of a new word from hearing it used in a familiar context, an ability often referred to as fast mapping. In this paper, we study fast mapping in the context of a general probabilistic model of word learning. We use our model to simulate fast mapping experiments on children, such as referent selection and retention. The word learning model can perform these tasks through an inductive interpretation of the acquired probabilities. Our results suggest that fast mapping occurs as a natural consequence of learning more words, and provides explanations for the (occasionally contradictory) child experimental data.
Fast Mapping in Word Learning: What Probabilities Tell Us
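In the spirit of such a model (the update and scoring rules here are simplified assumptions, not the authors' formulation), a tiny cross-situational learner can show fast mapping emerging from accumulated word-referent associations: a novel word wins an unfamiliar referent because familiar referents are already claimed by other words.

```python
# Simplified cross-situational learner; not the paper's actual model.
from collections import defaultdict

meaning_prob = defaultdict(lambda: defaultdict(float))  # word -> referent -> score

def observe(words, referents):
    """Credit each co-occurring word-referent pair, split across referents."""
    for w in words:
        for r in referents:
            meaning_prob[w][r] += 1.0 / len(referents)

def best_referent(word, candidates):
    """Referent selection: normalize by how strongly each referent is already
    claimed by the whole vocabulary, so novel referents favor novel words."""
    def score(r):
        total = sum(meaning_prob[w][r] for w in list(meaning_prob))
        return meaning_prob[word][r] / total if total else 0.0
    return max(candidates, key=score)

observe(['look', 'a', 'ball'], ['BALL', 'DOG'])
observe(['the', 'ball', 'rolls'], ['BALL'])
observe(['the', 'dog', 'barks'], ['DOG'])
observe(['a', 'dax'], ['DAX', 'BALL'])      # one exposure to the novel word

print(best_referent('ball', ['BALL', 'DOG']))   # BALL
print(best_referent('dax', ['DAX', 'BALL']))    # DAX: fast mapping emerges
```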
d2779136
This paper develops computational tools for evaluating competing syllabic parses of a phonological string on the basis of temporal patterns in speech production data. This is done by constructing models linking syllable parses to patterns of coordination between articulatory events. Data simulated from different syllabic parses are evaluated against experimental data from American English and Moroccan Arabic, two languages claimed to parse similar strings of segments into different syllabic structures. Results implicate a tautosyllabic parse of initial consonant clusters in English and a heterosyllabic parse of initial clusters in Arabic, in accordance with theoretical work on the syllable structure of these languages. It is further demonstrated that the model can correctly diagnose syllable structure even when previously proposed phonetic heuristics for such structure do not clearly point to the correct diagnosis.
Quantitative evaluation of competing syllable parses
d16887184
In this article, we compare feedback-related multimodal behaviours in two different types of interactions: first encounters between two participants who do not know each other in advance, and naturally occurring conversations between two and three participants recorded at their homes. All participants are Danish native speakers. The interactions are transcribed using the same methodology, and the multimodal behaviours are annotated according to the same annotation scheme. In the study we focus on the most frequently occurring feedback expressions in the interactions and on feedback-related head movements and facial expressions. The analysis of the corpora, while confirming general facts about feedback-related head movements and facial expressions previously reported in the literature, also shows that the physical setting, the number of participants, the topics discussed, and the degree of familiarity influence the use of gesture types and the frequency of feedback-related expressions and gestures.
Multimodal Behaviour and Feedback in Different Types of Interaction
d6931165
In this paper, we describe our system which participated in the SemEval 2010 task of disambiguating sentiment ambiguous adjectives for Chinese. Our system uses text messages from Twitter, a popular microblogging platform, for building a dataset of emotional texts. Using the built dataset, the system classifies the meaning of adjectives into positive or negative sentiment polarity according to the given context. Our approach is fully automatic. It does not require any additional hand-built language resources and it is language independent.
Twitter Based System: Using Twitter for Disambiguating Sentiment Ambiguous Adjectives
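A schematic version of the emoticon-based distant supervision described: tweets with positive or negative emoticons serve as noisy labels, and an ambiguous adjective is classified by comparing its usage context against the two resulting corpora. The tweets and marker lists are placeholders.

```python
# Placeholder tweets stand in for data collected via the Twitter API; the
# marker lists and bag-of-words scoring are simplified assumptions.
from collections import Counter

POS_MARKS, NEG_MARKS = (':)', ':D'), (':(', ':-(')

tweets = ['great game today :)', 'traffic is huge and slow :(']

pos_words, neg_words = Counter(), Counter()
for text in tweets:
    words = [w for w in text.split() if w not in POS_MARKS + NEG_MARKS]
    target = pos_words if any(m in text for m in POS_MARKS) else neg_words
    target.update(words)

def adjective_polarity(adjective, context_words):
    """Label an ambiguous adjective by the polarity corpus its usage resembles."""
    words = [adjective] + list(context_words)
    pos = sum(pos_words[w] for w in words)
    neg = sum(neg_words[w] for w in words)
    return 'positive' if pos > neg else 'negative'

print(adjective_polarity('huge', ['traffic', 'is', 'slow']))   # negative
```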
d227217628
d5283021
We describe an unsupervised approach to the problem of automatically detecting subgroups of people holding similar opinions in a discussion thread. An intuitive way of identifying this is to detect the attitudes of discussants towards each other or towards the named entities or topics mentioned in the discussion. Sentiment tags play an important role in this detection, but we also note another dimension to the detection of people's attitudes in a discussion: if two persons share the same opinion, they tend to use similar language content. We consider the latter to be an implicit attitude. In this paper, we investigate the impact of implicit and explicit attitudes in two genres of social media discussion data: more formal Wikipedia discussions and a much more informal debate discussion forum. Experimental results strongly suggest that implicit attitude is an important complement to explicit attitudes (expressed via sentiment) and that it can improve sub-group detection performance independent of genre.
Genre Independent Subgroup Detection in Online Discussion Threads: A Pilot Study of Implicit Attitude using Latent Textual Semantics
d21713674
In this paper, we present our process to establish a PICO- and sentiment-annotated corpus of clinical trial publications. PICO stands for Population, Intervention, Comparison and Outcome; these four classes can be used for more advanced and specific search queries. For example, a physician can determine how well a drug works only in the subgroup of children. In addition to the PICO extraction, we conducted a sentiment annotation, where the sentiment refers to whether the conclusion of a trial was positive, negative or neutral. We created both corpora with the help of medical experts and non-experts as annotators.
Medical Entity Corpus with PICO Elements and Sentiment Analysis
d15749329
In formally syntax-based MT, a hierarchical tree generated by synchronous CFG rules associates the source sentence with the target sentence. In this paper, we propose a source dependency model to estimate the probability of the hierarchical tree generated in decoding. We develop this source dependency model from a word-aligned corpus, without using any linguistically motivated parsing. Our experimental results show that integrating the source dependency model into formally syntax-based machine translation significantly improves performance on Chinese-to-English translation tasks.
A Source Dependency Model for Statistical Machine Translation
d31579068
The paper presents the Bulgarian National Corpus project (BulNC), a large-scale, representative corpus of Bulgarian, available online. The BulNC is a monolingual general corpus, fully morpho-syntactically (and partially semantically) annotated, and manually provided with detailed metadata descriptions. Presently the Bulgarian National Corpus consists of about 320,000,000 graphical words and includes more than 10,000 samples. The corpus structure and the accepted criteria for representativeness and balance are briefly presented. The query language for advanced search of collocations and concordances is demonstrated with some examples: it allows retrieving word combinations, ordered queries, inflectionally and semantically related words, and part-of-speech tags, and supports Boolean operations and grouping. The BulNC already plays a significant role in natural language processing of Bulgarian, contributing to scientific advances in spelling and grammar checking, word sense disambiguation, speech recognition, text categorisation, topic extraction and machine translation. The BulNC can also be used in investigations going beyond linguistics: library studies, social sciences research, teaching methods studies, etc.
Bulgarian National Corpus Project
d236459873
The CommonsenseQA (CQA) dataset (Talmor et al., 2019) was recently released to advance research on the common-sense question answering (QA) task. Whereas prior work has mostly focused on proposing QA models for this dataset, our aim is to retrieve as well as generate explanations for a given (question, correct answer choice, incorrect answer choices) tuple from this dataset. Our definition of an explanation is based on certain desiderata, and translates an explanation into a set of positive and negative common-sense properties (aka facts) which not only explain the correct answer choice but also refute the incorrect ones. We human-annotate a first-of-its-kind dataset (called ECQA) of positive and negative properties, as well as free-flow explanations, for 11K QA pairs taken from the CQA dataset. We propose a latent-representation-based property retrieval model as well as a GPT-2-based property generation model with a novel two-step fine-tuning procedure. We also propose a free-flow explanation generation model. Extensive experiments show that our retrieval model beats the BM25 baseline by a relative gain of 100% in F1 score, our property generation model achieves a respectable F1 score of 36.4, and our free-flow generation model achieves a similarity score of 61.9, where the last two scores are based on a human-correlated semantic similarity metric.
Explanations for CommonsenseQA: New Dataset and Models
d6809934
This paper presents a method for interpreting metaphoric language in the context of a portable natural language interface. The method licenses metaphoric uses via coercions between incompatible ontological sorts. The machinery allows both previously-known and unexpected metaphoric uses to be correctly interpreted and evaluated with respect to the backend expert system.
METAPHORIC GENERALIZATION THROUGH SORT COERCION
d11335818
Books Received: Cognitive Models of Speech Processing; Neurocomputing 2: Directions for Research; Modeling Legal Argument: Representing Cases with Hypotheticals; Artificial Vision for Mobile Robots: Stereo Vision and Multisensory Perception; Dimensiones de la Lexicografía: A Propósito del Diccionario del Español de México [Dimensions of Lexicography: On the Dictionary of Mexican Spanish]
d16064656
This research describes the development of a supervised classifier of English Caused Motion Constructions (CMCs) (e.g. The goalie kicked the ball into the field). Consistent identification of CMCs is a necessary step towards a correct semantic interpretation of sentences in which the construction's meaning does not follow from the expected semantics of the verb (e.g. The crowd laughed the clown off the stage). We expand on a previous study on the classification of CMCs (Hwang et al., 2010) to show that CMCs can be successfully identified in corpus data. In this paper, we present the classifier and the series of experiments carried out to improve its performance.
Identification of Caused Motion Constructions
d44153001
Sentiment analysis plays an important role in e-commerce. Identifying ironic and sarcastic content in text plays a vital role in inferring the actual intention of the user, and is necessary to increase the accuracy of sentiment analysis. This paper describes work on identifying the irony level in Twitter texts. The system developed by the SSN MLRG1 team for SemEval-2018 Task 3 (irony detection) uses a rule-based approach for feature selection and a MultiLayer Perceptron (MLP) to build the model for the multiclass irony classification subtask, which classifies the given text into one of four class labels.
d169275769
d7666029
We work on detecting positive or negative sentiment towards named entities in very large volumes of news articles. The aim is to monitor changes over time, as well as to work towards media bias detection by comparing differences across news sources and countries. With a view to applying the same method to dozens of languages, we use linguistically lightweight methods: searching for positive and negative terms in bags of words around entity mentions (also considering negation). Evaluation results are good and better than a third-party baseline system, but precision is not sufficiently high to display the results publicly in our multilingual news analysis system Europe Media Monitor (EMM). In this paper, we focus on describing our effort to improve the English-language results by avoiding the biggest sources of errors. We also present new work on using a syntactic parser to identify safe opinion recognition rules, such as predicative structures in which sentiment words directly refer to an entity. The precision of this method is good, but recall is very low.
Large-scale news entity sentiment analysis
d218977361
d425189
Moore's book, based on her doctoral thesis, presents her work on the automatic generation of natural language explanations and also gives an excellent summary of previous work in the field. The book is well written, and should be accessible with limited prior knowledge of the field. I would recommend it both to those wanting a detailed overview of Moore's work (partly summarized by Moore and Paris [1993]) and to those wanting a solid introduction to recent work on explanation generation with an emphasis on dialog issues. The book should be of interest to researchers both in computational linguistics (particularly in text generation and discourse structure) and in expert systems. Moore focuses on textual explanations given by expert and advisory systems, and she describes in detail the explanation component of an expert system that advises the user on how to improve their Lisp program. However, it is clear how the basic ideas and techniques presented apply to a wide range of applications in which explanations need to be provided, such as help systems, online documentation systems, tutorial systems, and so on. Moore starts from the point of view that explanation should be an essentially interactive process, requiring a dialog between the person or system giving the explanation (the advisor) and the person receiving the explanation (the advisee). Without such a dialog the chances of the advisee obtaining the information required, and in a form they understand, are much reduced. In order to participate effectively in such a dialog the advisor must be able to respond appropriately to follow-up questions after the initial explanation is given. Moore argues that this requires that the advisor understand the context of these questions, and in particular the context created by the advisor's previous responses. To do this, the advisor must be able to reason about its own previous responses. This theme is developed throughout the book, through the detailed discussion of a particular implementation of the ideas. Moore starts off with two very useful and clearly written review chapters. Some people might choose to study the book primarily for these reviews, which might indeed be suitable as a basis for postgraduate-level introductory courses on explanation in expert systems. The first chapter argues that if an expert system is to provide effective explanations, then the system itself must be designed with that in mind, to ensure that the knowledge required for possible explanations is explicitly represented.
Participating in Explanatory Dialogues: Interpreting and Responding to Questions in Context
d226262345
As an important research issue in the natural language processing community, multi-label emotion detection has been drawing more and more attention in the last few years. However, almost all existing studies focus on one modality (e.g., the textual modality). In this paper, we focus on multi-label emotion detection in a multi-modal scenario. In this scenario, we need to consider both the dependence among different labels (label dependence) and the dependence between each predicted label and the different modalities (modality dependence). In particular, we propose a multi-modal sequence-to-set approach to effectively model both kinds of dependence in multi-modal multi-label emotion detection. A detailed evaluation demonstrates the effectiveness of our approach.
Multi-modal Multi-label Emotion Detection with Modality and Label Dependence
d209335890
In this paper, we present a novel data-to-text system for cancer patients, providing information on quality of life implications after treatment, which can be embedded in the context of shared decision making. Currently, information on quality of life implications is often not discussed, partly because (until recently) data has been lacking. In our work, we rely on a newly developed prediction model, which assigns patients to scenarios. Furthermore, we use data-to-text techniques to explain these scenario-based predictions in personalized and understandable language. We highlight the possibilities of NLG for personalization, discuss ethical implications and also present the outcomes of a first evaluation with clinicians.
A Personalized Data-to-Text Support Tool for Cancer Patients
d245855889
d218973996
d184482733
This paper presents the Know-Center system submitted for task 5 of the SemEval-2019 workshop. Given a Twitter message in either English or Spanish, the task is to first detect whether it contains hateful speech and second, to determine the target and level of aggression used. For this purpose our system utilizes word embeddings and a neural network architecture, consisting of both dilated and traditional convolution layers. We achieved average F1-scores of 0.57 and 0.74 for English and Spanish respectively.
Know-Center at SemEval-2019 Task 5: Multilingual Hate Speech Detection on Twitter using CNNs
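A schematic PyTorch rendering of the stated architecture, combining a traditional and a dilated 1-D convolution over word embeddings; all sizes are assumptions, and the original system's exact layer layout may differ.

```python
# Sizes (vocab, embedding dim, filters) are assumptions for illustration.
import torch
import torch.nn as nn

class HateSpeechCNN(nn.Module):
    def __init__(self, vocab=20000, dim=100, filters=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, filters, kernel_size=3, padding=1)
        self.dilated = nn.Conv1d(dim, filters, kernel_size=3, padding=2,
                                 dilation=2)   # wider receptive field
        self.fc = nn.Linear(2 * filters, classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)    # (batch, dim, seq_len)
        a = torch.relu(self.conv(x)).max(dim=2).values     # max-pool over time
        b = torch.relu(self.dilated(x)).max(dim=2).values
        return self.fc(torch.cat([a, b], dim=1))   # logits: hateful vs. not

logits = HateSpeechCNN()(torch.randint(0, 20000, (4, 50)))
print(logits.shape)   # torch.Size([4, 2])
```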
d219899617
d250390674
This paper introduces our submission for SemEval 2022 Task 8: Multilingual News Article Similarity. The task consisted of developing a model capable of determining the similarity between pairs of multilingual news articles. To address this challenge, we evaluated the Word Mover's Distance in conjunction with word embeddings from ConceptNet Numberbatch and term frequencies from WorldLex, as well as the Sentence Mover's Distance based on sentence embeddings generated by pretrained transformer models of Sentence-BERT. To facilitate the comparison of multilingual articles with Sentence-BERT models, we deployed a Neural Machine Translation system. All our models achieve stable results in multilingual similarity estimation without learning parameters.
LSX_team5 at SemEval-2022 Task 8: Multilingual News Article Similarity Assessment based on Word- and Sentence Mover's Distance
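For the Word Mover's Distance component, a hedged sketch using gensim; the vector file path is hypothetical, and gensim's wmdistance additionally requires its optional optimal-transport dependency to be installed.

```python
# Sketch only; 'numberbatch-en.txt' is a hypothetical local path to
# ConceptNet Numberbatch vectors in word2vec text format.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format('numberbatch-en.txt', binary=False)

doc_a = 'the president greets the press in chicago'.split()
doc_b = 'obama speaks to the media in illinois'.split()

# Word Mover's Distance: minimum cumulative embedding distance needed to
# "move" all the words of one document onto the other.
print(vectors.wmdistance(doc_a, doc_b))
```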
d56595638
Nominal compounds such as red wine and nut case display a continuum of compositionality, with varying contributions from the components of the compound to its semantics. This article proposes a framework for compound compositionality prediction using distributional semantic models, evaluating to what extent they capture idiomaticity compared to human judgments. For evaluation, we introduce data sets containing human judgments in three languages: English, French, and Portuguese. The results obtained reveal a high agreement between the models and human predictions, suggesting that they are able to incorporate information about idiomaticity. We also present an in-depth evaluation of various factors that can affect prediction, such as model and corpus parameters and compositionality operations. General crosslingual analyses reveal the impact of morphological variation and corpus size in the ability of the model to predict compositionality, and of a uniform combination of the components for best results.
Unsupervised Compositionality Prediction of Nominal Compounds
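One common compositionality operation in this line of work can be sketched as follows: compare the compound's corpus-learned vector with a weighted combination of its components' vectors. The random vectors below merely stand in for trained distributional embeddings.

```python
# Random vectors are placeholders for embeddings trained on a corpus.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def compositionality(embeddings, compound, head, modifier, alpha=0.5):
    """Higher score = more compositional: 'red_wine' vs. weighted red + wine."""
    combined = alpha * embeddings[modifier] + (1 - alpha) * embeddings[head]
    return cosine(embeddings[compound], combined)

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ('red_wine', 'red', 'wine')}
print(compositionality(emb, 'red_wine', 'wine', 'red'))
```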
d3891290
This paper describes and evaluates a novel feature set for stance classification of argumentative texts, i.e. deciding whether a post by a user is for or against the issue being debated. We model the debate both with attitude-bearing features, including a set of automatically acquired 'topic terms' associated with a Distributional Lexical Model (DLM) that captures the writer's attitude towards the topic term, and with dependency features that represent the points being made in the debate. The stance of the text towards the issue being debated is then learnt in a supervised framework as a function of these features. The main advantage of our feature set is that it is scrutable: the reasons for a classification can be explained to a human user in natural language. We also report that our method outperforms previous approaches to stance classification as well as a range of baselines based on sentiment analysis and topic-sentiment analysis.
Scrutable Feature Sets for Stance Classification
d221373758
d18312759
We describe the system entered by the National Research Council Canada in the SemEval-2014 L2 writing assistant task. Our system relies on a standard Phrase-Based Statistical Machine Translation trained on generic, publicly available data. Translations are produced by taking the already translated part of the sentence as fixed context. We show that translation systems can address the L2 writing assistant task, reaching out-of-five word-based accuracy above 80 percent for 3 out of 4 language pairs. We also present a brief analysis of remaining errors.
CNRC-TMT: Second Language Writing Assistant System Description
d18663038
Extended Abstract

Recent years have witnessed the transformation of the World Wide Web from an information-gathering and processing tool into an interactive communication medium in the form of online discussion forums, chat rooms, blogs, and so on. There is strong evidence suggesting that social networks facilitate new ways to interact with information in such media. Understanding the mechanisms and the patterns of such interactions can be important for many applications. Currently, there is not much work that adequately models the interaction between social networks and information content. From the perspective of social network analysis, most existing work is concerned with understanding static topological properties of social networks represented by such forums. For instance, Park and Maurer (2009) applied node clustering to identify consensus and consensus facilitators, while Kang et al. (2009) use discussion thread co-participation relations to identify (static) groups in discussions. On the discussion content analysis side, there have been approaches for classifying messages with respect to dialogue roles (Carvalho and Cohen, 2005; Ravi and Kim, 2007), but they often ignore the role and the impact of underlying social interactions. Thus, the current static network and content analysis approaches provide limited support for:
• Capturing dynamics of social interactions: the sequence of communication, or who is responding to whom, is important in understanding the nature of interactions.
• Relating social interactions to content analysis: the content can give hints on the nature of the interaction and vice versa (e.g., users with more social interactions are more likely to have common interests).
To address the above issues, one needs to go beyond the static analysis approach and develop dynamical models that will explicitly account for the interplay between the content of communication (topics) and the structure of communications (social networks). Such a framework and corresponding algorithmic base will allow us to infer "polarizing" topics discussed in forums, identify evolving communities of interest, and examine the link between social and content dynamics.
Towards Modeling Social and Content Dynamics in Discussion Forums
d9649143
Portable Software Modules for Speech Recognition
d252819469
In the field of Natural Language Processing (NLP), extracting method entities from biomedical text has been a challenging task. Scientific research papers commonly contain complex keywords and domain-specific terminologies, and new terminologies continuously appear. In this research, we find method terminologies in biomedical text using both rule-based and machine learning techniques. We first use linguistic features to extract method sentence candidates from a large corpus of biomedical text. Then, we construct a silver-standard biomedical corpus composed of these sentences. With a rule-based method that makes use of the Stanza dependency parsing module, we label the method entities in these sentences. Using this silver-standard corpus, we train two machine learning algorithms to automatically extract method entities from biomedical text. Our results show that it is possible to develop machine learning models that can automatically extract method entities with reasonable accuracy without the need for a gold-standard dataset.
Method Entity Extraction from Biomedical Texts
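A hedged sketch of a Stanza-based rule of the kind described; the trigger verbs and the object-of-trigger pattern are assumptions for illustration, not the paper's actual rules.

```python
# Assumes stanza.download('en') has been run once; TRIGGERS and the
# dependency pattern below are hypothetical, not the paper's rule set.
import stanza

nlp = stanza.Pipeline(lang='en', processors='tokenize,pos,lemma,depparse')

TRIGGERS = {'use', 'apply', 'perform'}    # verbs that often introduce a method

def method_candidates(text):
    """Return noun objects of method-introducing verbs as candidate entities."""
    doc = nlp(text)
    found = []
    for sent in doc.sentences:
        for word in sent.words:
            head = sent.words[word.head - 1] if word.head > 0 else None
            if (word.deprel == 'obj' and head is not None
                    and head.lemma in TRIGGERS):
                found.append(word.text)
    return found

print(method_candidates('We used immunohistochemistry to stain the samples.'))
```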
d235097431
It is generally agreed upon in the natural language processing (NLP) community that ethics should be integrated into any curriculum. Being aware of and understanding the relevant core concepts is a prerequisite for following and participating in the discourse on ethical NLP. We here present ready-made teaching material in the form of slides and practical exercises on ethical issues in NLP, which is primarily intended to be integrated into introductory NLP or computational linguistics courses. By making this material freely available, we aim at lowering the threshold to adding ethics to the curriculum. We hope that increased awareness will enable students to identify potentially unethical behavior.
A Crash Course on Ethics for Natural Language Processing
d21689243
We present an application of Semantic Web Technologies to computational lexicography. More precisely we describe the publication of the morphological layer of the Italian Parole Simple Clips lexicon (PSC-M) as linked open data. The novelty of our work is in the use of the Semantic Web Rule Language (SWRL) to encode morphological patterns, thereby allowing the automatic derivation of the inflectional variants of the entries in the lexicon. By doing so we make these patterns available in a form that is human readable and that therefore gives a comprehensive morphological description of a large number of Italian words.
One Language to rule them all: Modelling Morphological Patterns in a Large Scale Italian Lexicon with SWRL
d2381180
We describe experiments in parsing the German TIGER Treebank. In parsing the complete treebank, 86.44% of the sentences receive full parses; 13.56% receive fragment parses. We discuss the methods used to enhance coverage and parsing quality and we present an evaluation on a gold standard, to our knowledge the first one for a deep grammar of German. Considering the selection performed by our current version of a stochastic disambiguation component, we achieve an f-score of 84.2%, the upper and lower bounds being 87.4% and 82.3% respectively.
Improving coverage and parsing quality of a large-scale LFG for German
d2401184
Negated statements often carry positive implicit meaning. Regardless of the semantic representation one adopts, pinpointing the positive concepts within a negated statement is needed in order to encode the statement's meaning. In this paper, novel ideas to reveal positive implicit meaning using focus of negation are presented. The concept of granularity of focus is introduced and justified. New annotation and features to detect fine-grained focus are discussed and results reported.
Fine-Grained Focus for Pinpointing Positive Implicit Meaning from Negated Statements
d11365040
In this paper, we revisit the problem of pronouncing English words using a native phone set. Specifically, we investigate methods of pronouncing English words using the Telugu phone set in the context of Telugu text-to-speech. We compare phone-phone substitution and word-phone mapping for the pronunciation of English words using Telugu phones. We consider only the native-language phone set in all our experiments; this differentiates our approach from other work in polyglot speech synthesis.
Is word-to-phone mapping better than phone-phone mapping for handling English words?
d42653685
Multilinguality at NTCIR, and moving on
d150252449
This study explores the use of a bidirectional recurrent neural network (RNN) encoder/decoder based on an attention mechanism for a Spoken Language Understanding (SLU) task. First experiments carried out on the ATIS corpus confirm the quality of the state-of-the-art RNN baseline system used in this paper, by comparing its results on the ATIS corpus to results recently published in the literature. Additional experiments show that RNNs with an attention mechanism obtain better performance than RNN architectures recently proposed for the semantic concept tagging (slot filling) task. On the MEDIA corpus, a state-of-the-art French corpus for understanding dedicated to hotel reservation and tourist information, experiments show that a bidirectional RNN reaches an f-measure of 79.51, while the same system with the attention mechanism reaches an f-measure of 80.27. Keywords: Spoken Language Understanding, Recurrent Neural Networks, Attention Mechanism, Bidirectional.
Neural Networks with an Attention Mechanism for Spoken Language Understanding
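A minimal sketch, in PyTorch, of the attention step that such encoder/decoder slot fillers rely on: score each encoder state against the current decoder state, normalize with a softmax, and take the weighted sum as the context vector. The (simplified, additive-style) scoring form and all dimensions are assumptions, not the paper's exact architecture.

    import torch

    enc_states = torch.randn(12, 64)   # (source_len, hidden) encoder outputs
    dec_state = torch.randn(64)        # current decoder hidden state
    W = torch.randn(64, 64)            # hypothetical score parameters
    v = torch.randn(64)

    scores = torch.tanh(enc_states @ W + dec_state) @ v  # (source_len,)
    weights = torch.softmax(scores, dim=0)               # attention weights
    context = weights @ enc_states                       # weighted sum
    print(context.shape)  # torch.Size([64])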
d3960960
Chemistry research papers are a primary source of information about chemistry, as in any scientific field. The data are presented predominantly as unstructured information, and so are not immediately amenable to the information-processing techniques developed within chemical informatics for carrying out chemistry research. At one level, extracting the relevant information from research papers is a text-mining task, requiring both extensive language resources and specialised knowledge of the subject domain. However, the papers also encode information about the way the research is conducted and about the structure of the field itself. Applying language technology to research papers in chemistry can therefore facilitate eScience on several different levels. The SciBorg project sets out to provide an extensive, analysed corpus of published chemistry research. This relies on the cooperation of several journal publishers to provide papers in an appropriate form. The work is carried out as a collaboration involving the
Language Resources and Chemical Informatics
d233365064
d12175509
Recent years have seen an exponential growth in the amount of multilingual text available on the web. This situation raises the need for novel applications for organizing and accessing multilingual content. Common examples of such applications include multilingual topic tracking and cross-language information retrieval systems. Most of these applications rely on the availability of multilingual lexical resources, which require significant effort to create. In this paper we present an unsupervised method for building bilingual topic hierarchies. In a bilingual topic hierarchy, topics (where a topic is a distribution over words) are arranged hierarchically, with abstract topics near the root and more concrete topics near the leaves. Such bilingual topic hierarchies can be useful for organizing a bilingual corpus by common topics, cross-lingual information retrieval, and cross-lingual text classification. Our method builds on prior work on Bayesian non-parametric inference of topic hierarchies and on multilingual topic modeling to extract bilingual topic hierarchies from unaligned text. We demonstrate the effectiveness of our algorithm in extracting such topic hierarchies from a collection of bilingual text passages and FAQs.
Mining bilingual topic hierarchies from unaligned text
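A minimal sketch of the nested Chinese Restaurant Process on which this line of Bayesian non-parametric topic-hierarchy inference builds: each document samples a root-to-leaf path, choosing at every level between existing children (in proportion to their usage) and a fresh child (in proportion to a concentration parameter). The depth and gamma values are illustrative, and the paper's full model additionally ties the sampled hierarchies across the two languages.

    import random

    def ncrp_path(tree, depth, gamma=1.0):
        """Sample one root-to-leaf path; tree maps a path prefix (tuple)
        to a list of per-child usage counts."""
        path = ()
        for _ in range(depth):
            counts = tree.setdefault(path, [])
            r = random.uniform(0, sum(counts) + gamma)
            for i, c in enumerate(counts):
                if r < c:                    # reuse a popular child
                    counts[i] += 1
                    path = path + (i,)
                    break
                r -= c
            else:                            # open a new child node
                counts.append(1)
                path = path + (len(counts) - 1,)
        return path

    tree = {}
    paths = [ncrp_path(tree, depth=3) for _ in range(10)]
    print(paths)  # ten sampled topic paths; popular subtrees get reused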
d232021920
d21711688
In this paper, we assess the challenges of multi-domain, multi-lingual question answering, create the resources necessary for benchmarking, and develop a baseline model. We curate 500 articles in six different domains from the web. These articles form comparable corpora of 250 English documents and 250 Hindi documents. From these comparable corpora, we have created 5,495 question-answer pairs, with both the questions and the answers in English and Hindi. The questions can be either factoid or short-descriptive. The answers are categorized into 6 coarse and 63 finer types. To the best of our knowledge, this is the first attempt at creating a multi-domain, multi-lingual question-answering evaluation involving English and Hindi. We develop a deep learning based model for classifying an input question into the coarse and finer categories according to the expected answer. Answers are extracted through similarity computation and subsequent ranking. For factoid questions we obtain an MRR of 49.10%, and for short-descriptive questions we obtain a BLEU score of 41.37%. Evaluation of the question classification model shows accuracies of 90.12% and 80.30% for the coarse and finer classes, respectively.
MMQA: A Multi-domain Multi-lingual Question-Answering Framework for English and Hindi
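A minimal sketch of the "similarity computation and subsequent ranking" step, using TF-IDF cosine similarity from scikit-learn. The question and candidate sentences are invented, and the paper's system additionally conditions on the predicted coarse/finer answer type.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    question = "Which river flows through Varanasi?"
    candidates = [
        "Varanasi lies on the banks of the Ganges.",
        "The city is famous for its silk industry.",
        "Hindi and English are widely spoken there.",
    ]

    # Rank candidate sentences by TF-IDF cosine similarity to the question.
    vec = TfidfVectorizer().fit(candidates + [question])
    sims = cosine_similarity(vec.transform([question]),
                             vec.transform(candidates))[0]
    ranked = sorted(zip(sims, candidates), reverse=True)
    print(ranked[0])  # highest-scoring candidate answer sentence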
d1688505
We present a novel fully unsupervised algorithm for POS induction from plain text, motivated by the cognitive notion of prototypes. The algorithm first identifies landmark clusters of words, serving as the cores of the induced POS categories. The rest of the words are subsequently mapped to these clusters. We utilize morphological and distributional representations computed in a fully unsupervised manner. We evaluate our algorithm on English and German, achieving the best reported results for this task.
Improved Unsupervised POS Induction through Prototype Discovery
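A minimal sketch of the two-stage idea in the abstract above: first discover landmark clusters (the prototype cores), then map every word to its nearest prototype. Random vectors stand in for the paper's unsupervised morphological and distributional representations, and k-means stands in for its landmark-discovery procedure.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    word_vecs = {w: rng.normal(size=16) for w in
                 ["dog", "cat", "run", "jump", "blue", "tall"]}

    # Stage 1: landmark clusters, whose centers act as category prototypes.
    X = np.stack(list(word_vecs.values()))
    prototypes = KMeans(n_clusters=3, n_init=10,
                        random_state=0).fit(X).cluster_centers_

    # Stage 2: map each word to the nearest prototype.
    def assign(word):
        v = word_vecs[word]
        return int(np.argmin(np.linalg.norm(prototypes - v, axis=1)))

    print({w: assign(w) for w in word_vecs})  # induced category id per word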
d5292930
A string encoding for a subclass of bipartite graphs enables the graph rewriting used in autosegmental descriptions of tone phonology via existing, highly optimized finite-state transducer toolkits (Yli-Jyrä 2013). The current work offers a rigorous treatment of this code-theoretic approach, generalizing the methodology to all bipartite graphs having no crossing edges and unordered nodes. We present three bijectively related codes, each of which exhibits unique characteristics while preserving the freedom to violate or express the OCP constraint. The codes are infinite, finite-state representable and optimal (efficiently computable, invertible, locally iconic, compositional) in the sense of Kornai (1995). They extend the encoding approach with visualization, generality and flexibility, and they make encoded graphs a strong candidate when the formal semantics of autosegmental phonology or non-crossing alignment relations are implemented within the confines of regular grammar.
Three Equivalent Codes for Autosegmental Representations
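A minimal sketch of the non-crossing condition these codes exploit, together with one naive linearization of an association graph as a string of tier moves. The toy encoding illustrates why such graphs admit string codes at all; it is not one of the paper's three codes.

    # Two association lines (a1, b1) and (a2, b2) between the tiers cross
    # iff their endpoints are interleaved.
    def crossing(e, f):
        (a1, b1), (a2, b2) = e, f
        return (a1 - a2) * (b1 - b2) < 0

    edges = [(0, 0), (1, 0), (1, 1), (2, 2)]  # tone-to-segment associations
    assert not any(crossing(e, f) for e in edges for f in edges)

    # Linearize: emit "A"/"B" for tier advances and "*" for each association;
    # the result is invertible, since each "*" associates the current positions.
    def encode(edges):
        out, i, j = [], 0, 0
        for a, b in sorted(edges):
            out += ["A"] * (a - i) + ["B"] * (b - j) + ["*"]
            i, j = a, b
        return "".join(out)

    print(encode(edges))  # '*A*B*AB*'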
d220047209
We study the task of cross-database semantic parsing (XSP), where a system that maps natural language utterances to executable SQL queries is evaluated on databases unseen during training. Recently, several datasets, including Spider, were proposed to support the development of XSP systems. We propose a challenging evaluation setup for cross-database semantic parsing, focusing on variation across database schemas and on in-domain language use. We re-purpose eight semantic parsing datasets that have been well studied in the setting where in-domain training data is available, and instead use them as additional evaluation data for XSP systems. We build a system that performs well on Spider, and find that it struggles to generalize to our re-purposed set. Our setup uncovers several generalization challenges for cross-database semantic parsing, demonstrating the need to use and develop diverse training and evaluation datasets. * Work done during an internship at Google.
Exploring Unexplored Generalization Challenges for Cross-Database Semantic Parsing
d5291934
This paper investigates neural character-based morphological tagging for languages with complex morphology and large tag sets. Character-based approaches are attractive as they can handle rare and unseen words gracefully. We evaluate on 14 languages and observe consistent gains over a state-of-the-art morphological tagger across all languages except English and French, where we match the state of the art. We compare two architectures for computing character-based word vectors, using recurrent (RNN) and convolutional (CNN) nets. We show that the CNN-based approach performs slightly worse and less consistently than the RNN-based approach. Small but systematic gains are observed when combining the two architectures by ensembling.
An Extensive Empirical Evaluation of Character-Based Morphological Tagging for 14 Languages
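A minimal sketch, in PyTorch, of the CNN variant for computing character-based word vectors: embed the characters, convolve over the sequence, and max-pool over time. All sizes are illustrative, and the paper feeds the resulting word vectors into a sentence-level tagging model.

    import torch
    import torch.nn as nn

    class CharCNNWordEncoder(nn.Module):
        def __init__(self, n_chars=100, char_dim=16, out_dim=32, width=3):
            super().__init__()
            self.emb = nn.Embedding(n_chars, char_dim)
            self.conv = nn.Conv1d(char_dim, out_dim, kernel_size=width,
                                  padding=1)

        def forward(self, char_ids):                 # (batch, word_len)
            x = self.emb(char_ids).transpose(1, 2)   # (batch, char_dim, len)
            h = torch.relu(self.conv(x))             # (batch, out_dim, len)
            return h.max(dim=2).values               # (batch, out_dim)

    enc = CharCNNWordEncoder()
    word = torch.randint(0, 100, (1, 7))  # one word of 7 character ids
    print(enc(word).shape)                # torch.Size([1, 32])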
d6076022
Hierarchical Recognition of Propositional Arguments with Perceptrons
d13043395
We apply statistical machine translation (SMT) tools to generate novel paraphrases of input sentences in the same language. The system is trained on large volumes of sentence pairs automatically extracted from clustered news articles available on the World Wide Web. Alignment Error Rate (AER) is measured to gauge the quality of the resulting corpus. A monotone phrasal decoder generates contextual replacements. Human evaluation shows that this system outperforms baseline paraphrase generation techniques and, in a departure from previous work, offers better coverage and scalability than the current best-of-breed paraphrasing approaches.
Monolingual Machine Translation for Paraphrase Generation
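A minimal sketch of a greedy monotone phrasal decoder of the kind described above: scan the sentence left to right, substituting the longest phrase found in the table and copying everything else through. The phrase table here is invented; the paper learns its table from sentence pairs mined from clustered news articles.

    PHRASE_TABLE = {  # hypothetical paraphrase pairs
        ("a", "large", "number", "of"): ["many"],
        ("in", "order", "to"): ["to"],
    }
    MAX_LEN = max(len(k) for k in PHRASE_TABLE)

    def paraphrase(tokens):
        out, i = [], 0
        while i < len(tokens):
            # try the longest phrase first, then shorter ones
            for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
                span = tuple(tokens[i:i + n])
                if span in PHRASE_TABLE:
                    out.extend(PHRASE_TABLE[span])
                    i += n
                    break
            else:  # no phrase matched: copy the token unchanged
                out.append(tokens[i])
                i += 1
        return out

    print(paraphrase(
        "we met in order to discuss a large number of issues".split()))
    # ['we', 'met', 'to', 'discuss', 'many', 'issues']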
d10300270
This paper introduces a new SemEval task on Cross-Level Semantic Similarity (CLSS), which measures the degree to which the meaning of a larger linguistic item, such as a paragraph, is captured by a smaller item, such as a sentence. High-quality data sets were constructed for four comparison types using multi-stage annotation procedures with a graded scale of similarity. Nineteen teams submitted 38 systems. Most systems surpassed the baseline performance, with several attaining high performance for multiple comparison types. Further, our results show that comparisons of semantic representations increase performance beyond what is possible with text alone.
SemEval-2014 Task 3: Cross-Level Semantic Similarity
d44159347
Ontologies compartmentalize types and relations in a target domain and provide the semantic backbone needed for a plethora of practical applications. Very often different ontologies are developed independently for the same domain. Such "parallel" ontologies raise the need for a process that will establish alignments between their entities in order to unify and extend the existing knowledge. In this work, we present a novel entity alignment method which we dub DeepAlignment. DeepAlignment refines pre-trained word vectors aiming at deriving ontological entity descriptions which are tailored to the ontology matching task. The absence of explicit information relevant to the ontology matching task during the refinement process makes DeepAlignment completely unsupervised. We empirically evaluate our method using standard ontology matching benchmarks. We present significant performance improvements over the current state-of-the-art, demonstrating the advantages that representation learning techniques bring to ontology matching.
DeepAlignment: Unsupervised Ontology Matching With Refined Word Vectors
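A minimal sketch of word-vector refinement in the spirit of retrofitting (Faruqui et al., 2015): pull the vectors of linked entities together while keeping each near its pre-trained position. This stands in for DeepAlignment's own refinement, whose unsupervised training signal differs; the synonymy edge below is invented.

    import numpy as np

    rng = np.random.default_rng(1)
    vecs = {w: rng.normal(size=8) for w in ["car", "automobile", "banana"]}
    edges = [("car", "automobile")]  # hypothetical synonymy link
    pretrained = {w: v.copy() for w, v in vecs.items()}

    cos = lambda x, y: x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    print("before:", round(cos(vecs["car"], vecs["automobile"]), 3))

    # Iterative averaging: each vector moves toward its linked neighbour
    # while staying anchored to its pre-trained value.
    for _ in range(10):
        for a, b in edges:
            vecs[a] = (pretrained[a] + vecs[b]) / 2
            vecs[b] = (pretrained[b] + vecs[a]) / 2

    print("after: ", round(cos(vecs["car"], vecs["automobile"]), 3))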
d53561411
In this paper we present the experiments and results of the SUKI team in the German Dialect Identification shared task of the VarDial 2018 Evaluation Campaign. Our submission, using HeLI with adaptive language models, obtained the best results in the shared task, with a macro F1-score of 0.686, clearly higher than the other submitted results. Without some form of unsupervised adaptation on the test set, it might not be possible to reach as high an F1-score given the level of domain difference between the datasets of the shared task. We describe the methods used in detail, as well as some additional experiments carried out during the shared task.
HeLI-based Experiments in Swiss German Dialect Identification
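A minimal sketch of language identification with character n-gram language models in the spirit of HeLI, much simplified: a single n-gram order, add-one smoothing, no backoff, and none of the unsupervised test-set adaptation that the abstract credits for the result. The two one-line "dialect corpora" are invented.

    from collections import Counter
    import math

    def ngrams(text, n=3):
        text = f" {text} "  # pad with word boundaries
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    def train(corpus, n=3):
        return Counter(ng for line in corpus for ng in ngrams(line, n))

    def score(model, text, n=3):
        # add-one-smoothed log-probability of the text under the model
        total = sum(model.values())
        return sum(math.log((model[ng] + 1) / (total + 1))
                   for ng in ngrams(text, n))

    models = {"BE": train(["das isch guet"]),
              "ZH": train(["das isch guät"])}
    test = "isch das guet"
    print(max(models, key=lambda lang: score(models[lang], test)))  # BE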
d60611673
Statistical phrase-based machine translation is a promising approach. In this paper we present two complementary extensions. The first is a statistical language model based on a continuous representation of the words in the vocabulary; by these means we expect to take better advantage of the limited amount of training data. The second incorporates morpho-syntactic categories into the units handled by the translation model, in order to obtain lexical disambiguation. Both approaches are evaluated on the Tc-Star task. The most promising results are obtained by combining the two methods. Keywords: machine translation, statistical approach, continuous-space language modeling, morpho-syntactic analysis, lexical disambiguation.
Syntax-Enriched Statistical Models for Machine Translation
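A minimal sketch, in PyTorch, of a continuous-space n-gram language model of the kind described above: the history words are embedded, concatenated, and passed through a hidden layer to a softmax over the vocabulary. All sizes are illustrative.

    import torch
    import torch.nn as nn

    class NGramNLM(nn.Module):
        def __init__(self, vocab=1000, order=4, dim=32, hidden=64):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.ff = nn.Sequential(
                nn.Linear((order - 1) * dim, hidden), nn.Tanh(),
                nn.Linear(hidden, vocab))

        def forward(self, history):            # (batch, order-1) word ids
            e = self.emb(history).flatten(1)   # concatenated embeddings
            return torch.log_softmax(self.ff(e), dim=-1)

    lm = NGramNLM()
    hist = torch.randint(0, 1000, (2, 3))      # two 3-word histories
    print(lm(hist).shape)                      # torch.Size([2, 1000])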
d14395450
We demonstrate a proof-of-concept system that uses a shallow chunking-based technique for knowledge extraction from natural language text, in particular looking at the task of story understanding. This technique is extended with a reasoning engine that borrows techniques from dynamic ontology refinement to discover the semantic similarity of stories and to merge them together.
Merging Stories with Shallow Semantics
d9366280
This paper analyzes the suitability of different distance metrics for use in a bottom-up unsupervised algorithm for automatic word categorization. The proposed method uses a modified greedy-type algorithm. Formulations from fuzzy theory are also used to calculate the degree of membership of the elements in the linguistic clusters formed. The unigram and bigram statistics of a corpus of about two million words are used. Empirical comparisons are made in order to support the discussion of which type of distance metric is most suitable for measuring the similarity between linguistic elements.
CHOOSING A DISTANCE METRIC FOR AUTOMATIC WORD CATEGORIZATION
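A minimal sketch of the comparison at issue: represent each word by the counts of its bigram neighbours, then try two candidate distance metrics on the resulting vectors. The toy corpus is invented (the paper uses about two million words); here the two distributionally identical words come out at distance zero under both metrics.

    import numpy as np

    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}

    # Bigram context counts, counting both left and right neighbours.
    ctx = {w: np.zeros(len(vocab)) for w in vocab}
    for a, b in zip(corpus, corpus[1:]):
        ctx[a][idx[b]] += 1
        ctx[b][idx[a]] += 1

    def manhattan(x, y):
        return np.abs(x - y).sum()

    def cosine_dist(x, y):
        return 1 - x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

    for metric in (manhattan, cosine_dist):
        print(metric.__name__, metric(ctx["cat"], ctx["dog"]))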
d16474818
Crowdsourcing is an increasingly popular, collaborative approach for acquiring annotated corpora. Despite this, reuse of corpus conversion tools and user interfaces between projects is still problematic, since these are not generally made available. This demonstration will introduce the new, open-source GATE Crowdsourcing plugin, which offers infrastructural support for mapping documents to crowdsourcing units and back, as well as automatically generating reusable crowdsourcing interfaces for NLP classification and selection tasks. The entire workflow will be demonstrated on: annotating named entities; disambiguating words and named entities with respect to DBpedia URIs; annotation of opinion holders and targets; and sentiment.
The GATE Crowdsourcing Plugin: Crowdsourcing Annotated Corpora Made Easy
d19028605
Recent studies on deceptive language suggest that machine learning algorithms can be employed with good results for classification of texts as truthful or untruthful. However, the models presented so far do not attempt to take advantage of the differences between subjects. In this paper, models have been trained in order to classify statements issued in Court as false or not-false, not only taking into consideration the whole corpus, but also by identifying more homogenous subsets of producers of deceptive language. The results suggest that the models are effective in recognizing false statements, and their performance can be improved if subsets of homogeneous data are provided.
On the Use of Homogenous Sets of Subjects in Deceptive Language Analysis
d51991501
This paper presents a left-corner parser for minimalist grammars. The relation between the parser and the grammar is transparent in the sense that there is a very simple one-to-one correspondence between derivations and parses. Like left-corner context-free parsers, left-corner minimalist parsers can be non-terminating when the grammar has empty left corners, so an easily computed left-corner oracle is defined to restrict the search.
A Sound and Complete Left-Corner Parsing for Minimalist Grammars
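A minimal sketch of left-corner recognition for a plain context-free grammar rather than a minimalist grammar: recognize a left corner bottom-up from the input, then project rules upward toward the current goal. The toy grammar deliberately has no empty left corners, the case the abstract flags as a source of non-termination, and no oracle.

    RULES = [("S", ("NP", "VP")), ("NP", ("Det", "N")), ("VP", ("V", "NP"))]
    LEXICON = {"the": ["Det"], "dog": ["N"], "cat": ["N"], "saw": ["V"]}

    def recognize(goal, i, tokens):
        """Yield every end position j such that tokens[i:j] is a goal."""
        if i < len(tokens):
            for cat in LEXICON.get(tokens[i], []):
                yield from project(cat, goal, i + 1, tokens)

    def project(cat, goal, i, tokens):
        if cat == goal:
            yield i
        for lhs, rhs in RULES:          # cat is the left corner of lhs
            if rhs[0] == cat:
                yield from finish(rhs[1:], lhs, goal, i, tokens)

    def finish(rest, lhs, goal, i, tokens):
        if not rest:                    # rule complete: project further up
            yield from project(lhs, goal, i, tokens)
        else:                           # recognize the remaining daughters
            for j in recognize(rest[0], i, tokens):
                yield from finish(rest[1:], lhs, goal, j, tokens)

    sent = "the dog saw the cat".split()
    print(len(sent) in recognize("S", 0, sent))  # True: the sentence is an S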
d1417087
AUTOMATIC PROOFREADING OF FROZEN PHRASES IN GERMAN
d219299843