d12805692
Machine learning based relation extraction requires large annotated corpora to take into account the variability in the expression of relations. To deal with this problem, we propose a method for simplifying sentences, i.e. for reducing the syntactic variability of the relations. Simplification requires the annotation of a small corpus, which is then automatically augmented. The process starts with the annotation of the simplifications by a CRF-based classifier, followed by relation extraction, and lastly the automatic completion of the simplification training corpus using the results of the relation extraction. The first results we obtained for the relation extraction task of the i2b2 2010 challenge are very encouraging. KEYWORDS: Relation extraction, sentence simplification, machine learning.
Sentence Simplification for Relation Extraction
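The abstract above describes a bootstrap loop between a CRF simplification tagger and a relation extractor. A minimal sketch of that loop follows; `train_crf`, `simplify`, `extract_relations`, and `project_annotations` are hypothetical helpers standing in for the paper's components, not its actual code.

```python
# Hypothetical sketch of the simplification / relation-extraction bootstrap
# described above; every helper named here is a stand-in, not the paper's code.

def bootstrap(seed_annotated, unlabeled, rounds=3):
    train_set = list(seed_annotated)
    crf = None
    for _ in range(rounds):
        crf = train_crf(train_set)                 # 1. CRF-based simplification tagger
        simplified = [simplify(s, crf) for s in unlabeled]
        relations = extract_relations(simplified)  # 2. relation extraction
        # 3. extracted relations are projected back into new simplification
        #    annotations that automatically extend the training corpus
        train_set += project_annotations(relations, unlabeled)
    return crf
```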
d248512833
Keyphrase extraction is a fundamental task in natural language processing that aims to extract a set of phrases with important information from a source document. Identifying important keyphrases is the central component of keyphrase extraction, and its main challenge is learning to represent information comprehensively and discriminate importance accurately. In this paper, to address the above issues, we design a new hyperbolic matching model (HyperMatch) to explore keyphrase extraction in hyperbolic space. Concretely, to represent information comprehensively, HyperMatch first takes advantage of the hidden representations in the middle layers of RoBERTa and integrates them as the word embeddings via an adaptive mixing layer to capture the hierarchical syntactic and semantic structures. Then, considering the latent structure information hidden in natural languages, HyperMatch embeds candidate phrases and documents in the same hyperbolic space via a hyperbolic phrase encoder and a hyperbolic document encoder. To discriminate importance accurately, HyperMatch estimates the importance of each candidate phrase by explicitly modeling the phrase-document relevance via the Poincaré distance and optimizes the whole model by minimizing the hyperbolic margin-based triplet loss. Extensive experiments are conducted on six benchmark datasets and demonstrate that HyperMatch outperforms the recent state-of-the-art baselines.
Hyperbolic Relevance Matching for Neural Keyphrase Extraction
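The phrase-document relevance in this abstract is scored with the Poincaré distance. A minimal sketch of that distance and the ranking it induces, assuming toy 2-d embeddings already lying inside the Poincaré ball (the encoders themselves are not reproduced here):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    # Distance in the Poincare ball:
    # arccosh(1 + 2*|u - v|^2 / ((1 - |u|^2) * (1 - |v|^2)))
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / max(denom, eps)))

# Toy usage: rank candidate phrase embeddings by relevance to a document embedding.
doc = np.array([0.1, 0.2])
phrases = {"keyphrase a": np.array([0.12, 0.18]),
           "phrase b": np.array([-0.7, 0.5])}
ranked = sorted(phrases, key=lambda p: poincare_distance(phrases[p], doc))
print(ranked)  # closer in hyperbolic space = more relevant
```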
d17554801
Entity sense disambiguation becomes difficult with few or even zero training instances available, which is known as the imbalanced learning problem in machine learning. To overcome the problem, we create a new set of reliable training instances from a dictionary, called dictionary-based prototypes. A hierarchical classification system with a tree-like structure is designed to learn from both the prototypes and training instances, and three different types of classifiers are employed. In addition, supervised dimensionality reduction is conducted in a similarity-based space. Experimental results show our system outperforms three baseline systems by at least 8.3% as measured by macro F1 score.
Imbalanced Classification Using Dictionary-based Prototypes and Hierarchical Decision Rules for Entity Sense Disambiguation
d9977462
We explore the relationship between question answering and constraint relaxation in spoken dialog systems. We develop dialogue strategies for selecting and presenting information succinctly. In particular, we describe methods for dealing with the results of database queries in information-seeking dialogs. Our goal is to structure the dialogue in such a way that the user is neither overwhelmed with information nor left uncertain as to how to refine the query further. We present evaluation results obtained from a user study involving 20 subjects in a restaurant selection task.
Interactive Question Answering and Constraint Relaxation in Spoken Dialogue Systems
d2244458
This paper presents a multilingual Natural Language Generation system that produces technical instruction texts in Bulgarian, Czech and Russian. It generates several types of texts, common for software manuals, in two styles. We illustrate the system's functionality with examples of its input and output behaviour. We discuss the criteria and procedures adopted for evaluating the system and summarise their results. The system embodies novel approaches to providing multilingual documentation, ranging from the re-use of a large-scale, broad coverage grammar of English in order to develop the lexico-grammatical resources necessary for the generation in the three target languages, through to the adoption of a 'knowledge editing' approach to specifying the desired content of the texts to be generated independently of the target languages in which those texts finally appear.
AGILE -A System for Multilingual Generation of Technical Instructions
d11858650
Temporal analysis of events is a central problem in computational models of discourse. However, correctly recognizing temporal aspects of events poses serious challenges. This paper introduces a joint modeling framework and feature set for temporal analysis of events that utilizes Markov Logic. The feature set includes novel features derived from lexical ontologies. An evaluation suggests that introducing lexical relation features improves the overall accuracy of temporal relation models.
Exploring the Effectiveness of Lexical Ontologies for Modeling Temporal Relations with Markov Logic
d203690604
We present a broad-coverage model of Turkish morphology and an open-source morphological analyzer that implements it. The model captures the intricacies of the Turkish morphology-syntax interface and thus can be used as a baseline that guides language model development. It introduces a novel fine part-of-speech tagset and a fine-grained affix inventory, and represents morphotactics without zero-derivations. The morphological analyzer is freely available. It consists of modular reusable components of human-annotated gold standard lexicons, implements Turkish morphotactics as finite-state transducers using OpenFst, and implements morphophonemic processes as Thrax grammars.
A Syntactically Expressive Morphological Analyzer for Turkish
d6546892
This paper presents a system for drug name identification and classification in biomedical texts.
A preliminary approach to recognize generic drug names by combining UMLS resources and USAN naming conventions
d252365186
Machine learning (ML) approaches have dominated Natural Language Processing (NLP) during the last two decades. Beyond machine translation and speech technology, machine learning tools are now also in use for spellchecking and grammar checking, with a blurry distinction between the two. We unmask the myth of effortless big data by illuminating the efforts and time that lie behind building a multi-purpose corpus, with regard to collecting, marking up and building from scratch. We also discuss what kind of language technology tools minority language communities actually need, and to what extent the dominating paradigm has been able to deliver these tools. In this context we present our alternative to corpus-based language technology, namely knowledge-based language technology, and we show how this approach can provide language technology solutions for languages beyond the reach of machine learning procedures. We present a stable and mature infrastructure (GiellaLT) covering more than a hundred languages and providing a number of language technology tools that are useful for language communities.
Unmasking the Myth of Effortless Big Data - Making an Open Source Multilingual Infrastructure and Building Language Resources from Scratch
d8893652
This paper describes the NLMenu System, a menu-based natural language understanding system. Rather than requiring the user to type input to the system, input to NLMenu is made by selecting items from a set of dynamically changing menus. Active menus and items are determined by a predictive left-corner parser that accesses a semantic grammar and lexicon. The advantage of this approach is that all inputs to the NLMenu System can be understood, thus giving a 0% failure rate. A companion system that can automatically generate interfaces to relational databases is also discussed.
MENU-BASED NATURAL LANGUAGE UNDERSTANDING
d676609
Mechanical Translation Work at the University of Michigan
d5718698
We propose a new method to resolve ambiguity in translation and meaning interpretation using linguistic statistics extracted from dual corpora of source and target languages, in addition to the logical restrictions described in dictionary and grammar rules for ambiguity resolution. It provides reasonable criteria for determining a suitable equivalent translation or meaning by making the dependency relation in the source language be reflected in the translated text. The method is tractable because the required statistics can be computed semi-automatically in advance from a source language corpus and a target language corpus, while an ordinary corpus-based translation method needs a large volume of bilingual corpus of strict pairs of a sentence and its translation. Moreover, it also provides the means to compute the linguistic statistics on the pairs of meaning expressions.
TRANSLATION AMBIGUITY RESOLUTION BASED ON TEXT CORPORA OF SOURCE AND TARGET LANGUAGES
d14732452
This paper describes sociological fieldwork conducted in the autumn of 2008 in eleven rural communities of South Africa. The goal of the fieldwork was to evaluate the potential role of automated telephony services in improving access to important government information and services. Our interviews, focus group discussions and surveys revealed that Lwazi, a telephone-based spoken dialog system, could greatly support current South African government efforts to effectively connect citizens to available services, provided such services be toll free, in local languages, and with content relevant to each community.
Initial fieldwork for LWAZI: A Telephone-Based Spoken Dialog System for Rural South Africa
d15078437
Corpora annotated at semantic level play a crucial role both in research and in applicative contexts in which systems of natural language processing are studied and developed. In this paper we present the lexico-semantic annotation of an Italian treebank, a first attempt to recover the lack of such resource for Italian. We will describe the annotation realized, focusing on the methodology followed, the results achieved, and possible further work and applications.
The Lexico-semantic Annotation of an Italian Treebank
d13925548
This paper describes SimpLex, a Lexical Simplification system that participated in the English Lexical Simplification shared task at SemEval-2012. It operates on the basis of a linear weighted ranking function composed of context-sensitive and psycholinguistic features. The system outperforms a very strong baseline, and ranked first on the shared task.
UOW-SHEF: SimpLex -Lexical Simplicity Ranking based on Contextual and Psycholinguistic Features
d16293608
Distributed word representation is an efficient method for capturing semantic and syntactic word relations. In this work, we introduce an extension to the continuous bag-of-words model for learning word representations efficiently by using implicit structure information. Instead of relying on a syntactic parser which might be noisy and slow to build, we compute weights representing probabilities of syntactic relations based on the Huffman softmax tree in an efficient heuristic. The constructed "implicit graphs" from these weights show that these weights contain useful implicit structure information. Extensive experiments performed on several word similarity and word analogy tasks show gains compared to the basic continuous bag-of-words model.
Improved Word Embeddings with Implicit Structure Information
d5137112
This paper proposes a new framework for discourse analysis, in the spirit of Grosz and Sidner (1986), Webber (1987a,b) but differentiated
TEMPORAL REASONING IN NATURAL LANGUAGE UNDERSTANDING: THE TEMPORAL STRUCTURE OF THE NARRATIVE
d16680550
The University of Michigan's natural language processing system, called LINK, was used in the Fourth Message Understanding System Evaluation (MUC-4). LINK's performance on MUC-4's two test corpora is summarized in figure 1. Although we only tested LINK in a single configuration, there were several parameters that could have been varied in the system. They include the following: (1) What to do with undefined words. When the system identified a group of undefined words as a likely noun phrase, it was assumed that this noun phrase referred to some kind of HUMAN or PLACE. Thus, these noun phrases were potential candidates to fill the LOCATION, PERP, PHYS TGT, or HUM TGT fields of a template. (2) When to generate templates. A template was only generated if an appropriate filler for the PERP, PHYS TGT or HUM TGT field had been extracted from the text. (3) When to merge templates. Every time a new template was generated for an article, the system considered merging it with existing templates. A merge was performed if another template with the same INCIDENT TYPE already existed, and if there were no explicit contradictions between the existing template's filled fields and the new template. For example, if the two templates had different DATE fields, they were not merged. In addition, BOMBING and ATTACK templates were merged if they had no contradictory fields. Amount of effort: we estimate that 1.5 person-years were spent on our MUC-4 effort. Figure 2 shows the breakdown of this effort on different parts of the system. Prior to MUC-4, LINK had been used in several smaller-scale applications, including the extraction of information from free-form textual descriptions of automobile malfunctions and the repairs that were made to fix them, as well as an application involving free-form textual instructions for assembly line workers. Little modification was required of the parser itself for MUC-4. However, several new modules were built around the parser. In particular, since both of our prior applications involved
The LINK System : MUC-4 Test Results and Analysis Result s
d14516245
This paper describes a sentence pattern-based English-Korean machine translation system backed up by a rule-based module as a solution to the translation of long sentences. A rule-based English-Korean MT system typically suffers from low translation accuracy for long sentences due to poor parsing performance. In the proposed method we only use chunking information on the phrase level of the parse result (i.e. NP, PP, and AP). By applying a sentence pattern directly to a chunking result, high performance of analysis and a good quality of translation are expected. The parsing efficiency problem in the traditional RBMT approach is resolved by sentence partitioning, which is generally assumed to have many problems. However, we will show that sentence partitioning has little side effect, if any, in our approach, because we use only the chunking results for the transfer. The coverage problem of a pattern-based method is overcome by applying sentence pattern matching recursively to the sub-sentences of the input sentence, in case there is no exact matching pattern for the input sentence.
For the Proper Treatment of Long Sentences in a Sentence Pattern- based English-Korean MT System
d14166789
We show how to turn a large-scale syntactic dictionary into a dependency-based unification grammar where each piece of lexical information calls a separate rule, yielding a super granular grammar. Subcategorization, raising and control verbs, auxiliaries and copula, passivization, and tough-movement are discussed. We focus on the semantics-syntax interface and offer a new perspective on syntactic structure.
Encoding a syntactic dictionary into a super granular unification grammar
d248366447
Knowledge Graphs (KGs) are symbolically structured storages of facts. The KG embedding contains concise data used in NLP tasks requiring implicit information about the real world. Furthermore, the size of KGs that may be useful in actual NLP assignments is enormous, and creating embeddings over them has memory cost issues. We represent a KG as a 3rd-order binary tensor and move beyond the standard CP decomposition (Hitchcock, 1927) by using a data-specific generalized version of it (Hong et al., 2020). The generalization of the standard CP-ALS algorithm allows obtaining optimization gradients without a backpropagation mechanism. It reduces the memory needed in training while providing computational benefits. We propose MEKER, a memory-efficient KG embedding model, which yields SOTA-comparable performance on link prediction tasks and KG-based Question Answering.
MEKER: Memory Efficient Knowledge Embedding Representation for Link Prediction and Question Answering
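The scoring function underlying CP-based KG embedding is simple enough to sketch: a fact (h, r, t) is scored as the trilinear product of a head factor, a relation factor, and a tail factor. The factors below are random toys, not trained MEKER embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, rank = 5, 2, 4
A = rng.normal(size=(n_ent, rank))  # entity factors in the head role
B = rng.normal(size=(n_rel, rank))  # relation factors
C = rng.normal(size=(n_ent, rank))  # entity factors in the tail role

def cp_score(h, r, t):
    # CP decomposition of the binary fact tensor: sum_k A[h,k] * B[r,k] * C[t,k]
    return float(np.sum(A[h] * B[r] * C[t]))

# Link prediction: score every candidate tail for a (head, relation) query
# in one shot, then rank.
scores = C @ (A[0] * B[1])
best_tail = int(np.argmax(scores))
print(best_tail, cp_score(0, 1, best_tail))
```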
d220973727
d13043497
Historical cabinet protocols are a useful resource which enable historians to identify the opinions expressed by politicians on different subjects and at different points of time. While cabinet protocols are often available in digitized form, so far the only method to access their information content is by keyword-based search, which often returns sub-optimal results. We present a method for enriching German cabinet protocols with information about the originators of statements. This requires automatic speaker attribution. In order to avoid costly manual annotation of training data, we design a rule-based system which exploits morpho-syntactic cues. Unlike many other approaches, our method can also deal with cases in which the speaker is not explicitly identified in the sentence itself. This is an important capability as 45% of all sentences in the data constitute reported speech whose speakers are not explicitly marked. Our system is able to detect implicit speakers by taking into account signals of speaker continuity. We show that such a system obtains good results, especially with respect to recall which is particularly important for information access.
Speaker Attribution in Cabinet Protocols
d14423856
This paper investigates the appropriateness of using lexical cohesion analysis to assess Chinese readability. In addition to term frequency features, we derive features from the result of lexical chaining to capture the lexical cohesive information, where the E-HowNet lexical database is used to compute semantic similarity between nouns with high word frequency. Classification models for assessing readability of Chinese text are learned from the features using support vector machines. We select articles from textbooks of elementary schools to train and test the classification models. The experiments compare the prediction results of different sets of features. In contrast, research on readability assessment for Chinese text is still in its initial stage. This paper investigates the appropriateness of using lexical cohesion analysis to improve the performance of Chinese readability assessment. More specifically, we build lexical chains, which are sequences of semantically related terms, in an article to represent the lexical cohesive structure of texts, and then derive features from the result of lexical chaining to capture the lexical cohesive information. Consisting of term frequency features and lexical chain features, various combinations of features are evaluated for generating prediction models of Chinese readability using support vector machines (SVMs). The prediction models are trained and tested on articles selected from textbooks of elementary schools in Taiwan. The results are compared for different sets of features. This paper is organized as follows. Section 2 introduces related work in readability assessment and lexical cohesion analysis. Section 3 discusses the research methodology of our analysis, including problem definition, text processing, feature deriving, and prediction model building. Section 4 presents the experiments and the experimental results. Section 5 gives conclusions and directions for future work.
Assessing Chinese Readability using Term Frequency and Lexical Chain
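A lexical chain groups semantically related nouns; chain statistics then become classifier features. The greedy sketch below uses a placeholder similarity function where the paper uses E-HowNet, so it only illustrates the feature-derivation step:

```python
# Greedy lexical chaining sketch. `similarity` is a hypothetical stub standing
# in for the E-HowNet-based noun similarity used in the paper.
def similarity(w1, w2):
    return 1.0 if w1 == w2 else 0.0  # placeholder; the real system uses E-HowNet

def build_chains(nouns, threshold=0.5):
    chains = []
    for noun in nouns:
        for chain in chains:
            # attach the noun to the first chain containing a related member
            if any(similarity(noun, member) >= threshold for member in chain):
                chain.append(noun)
                break
        else:
            chains.append([noun])  # no related chain: start a new one
    return chains

nouns = ["dog", "cat", "dog", "car"]
chains = build_chains(nouns)
# Chain-derived features for the readability classifier, e.g. number of chains
# and average chain length:
features = [len(chains), sum(map(len, chains)) / len(chains)]
print(chains, features)
```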
d10381786
d233029474
This paper aims to investigate the variation between two Chinese causative auxiliaries, shi '使' and rang '讓', from a corpus-based perspective. We conduct a logistic regression analysis on the Chinese data extracted from two corpora and propose a direct/indirect distinction (Verhagen and Kemmer 1997) between the two auxiliary verbs. The results retrieved by the regression model show that the theory of direct/indirect causation provides a reasonable account for the characteristics and lexical meanings of the verbs. We indicate that the verb shi is correlated with "direct causation" because it is typically used when inanimate participants are involved in the causing event, in which the force initiated by the causer inevitably and directly leads to the resulted stage of the causee. On the other hand, the verb rang should be classified as "indirect causation" because it is typically used in scenarios where animate participants are both involved, and some extra force besides the causer also plays a role in the effected event.
Lectal Variation of the Two Chinese Causative Auxiliaries
d219309690
d7462891
Over the past few years, HNC has developed a neural network based, vector space approach to text retrieval. This approach, embodied in a system called MatchPlus, allows the user to retrieve information on the basis of meaning and context of a free text query. The MatchPlus system uses a neural network based, constrained self-organization technique to learn word stem interrelationships directly from a training corpus, thereby eliminating the need for hand crafted linguistic knowledge bases and their often substantial maintenance requirements. This paper presents results from recent enhancements to the basic MatchPlus concept. These enhancements include the development of a one-step learning law that greatly reduces the amount of time and/or computational resources required to train the system, and the development of a prototype multilingual (English and Spanish) text retrieval system.
RECENT ADVANCES IN HNC'S CONTEXT VECTOR INFORMATION RETRIEVAL TECHNOLOGY
d8137203
To advance information extraction and question answering technologies toward a more realistic path, the U.S. NIST (National Institute of Standards and Technology) initiated the KBP (Knowledge Base Population) task as one of the TAC (Text Analysis Conference) evaluation tracks. It aims to encourage research in automatic information extraction of named entities from unstructured texts with the ultimate goal of integrating such information into a structured Knowledge Base. The KBP track consists of two types of evaluation: Named Entity Linking (NEL) and Slot Filling. This paper describes the linguistic resource creation efforts at the Linguistic Data Consortium (LDC) in support of Named Entity Linking evaluation of KBP, focusing on annotation methodologies, process, and features of corpora from 2009 to 2011, with a highlighted analysis of the cross-lingual NEL data. Progressing from monolingual to cross-lingual Entity Linking technologies, the 2011 cross-lingual NEL evaluation targeted multilingual capabilities. Annotation accuracy is presented in comparison with system performance, with promising results from cross-lingual entity linking systems.
Linguistic Resources for Entity Linking Evaluation: from Monolingual to Cross-lingual
d235097241
We examine the effect of domain-specific external knowledge variations on deep large scale language model performance. Recent work in enhancing BERT with external knowledge has been very popular, resulting in models such as ERNIE (Zhang et al., 2019a). Using the ERNIE architecture, we provide a detailed analysis on the types of knowledge that result in a performance increase on the Natural Language Inference (NLI) task, specifically on the Multi-Genre Natural Language Inference Corpus (MNLI). While ERNIE uses general TransE embeddings, we instead train domain-specific knowledge embeddings and insert this knowledge via an information fusion layer in the ERNIE architecture, allowing us to directly control and analyze knowledge input. Using several different knowledge training objectives, sources of knowledge, and knowledge ablations, we find a strong correlation between knowledge and classification labels within the same polarity, illustrating that knowledge polarity is an important feature in predicting entailment. We also perform classification change analysis across different knowledge variations to illustrate the importance of selecting appropriate knowledge input regarding content and polarity, and show representative examples of these changes.
ERNIE-NLI: Analyzing the Impact of Domain-Specific External Knowledge on Enhanced Representations for NLI
d7595018
This paper describes a computational linguistics-based approach for providing interoperability between multi-lingual systems in order to overcome crucial issues like cross-language and cross-collection retrieval. Our proposal is a system which improves capabilities of language-technology-based information extraction. In the last few years various theories have been developed and applied for making multicultural and multilingual resources easy to access. Important initiatives, like the development of the European Library and Europeana, aim to increase the availability of digital content from various types of providers and institutions. Therefore the accessibility to these resources requires the development of environments enabling to manage multilingual complexity. In this respect, we present a methodological framework which allows mapping both the data and the metadata among the language-specific ontologies. The feasibility of cross-language information extraction and semantic search will be tested by implementing an early prototype system.
Cross-Lingual Information Retrieval and Semantic Interoperability for Cultural Heritage Repositories
d60119001
From a government user session on translation memory (TM) technology: the NVTC mission is to provide timely, accurate translation services for the security of the nation; its services cover translation (English to target languages, and foreign languages to target languages, usually English) and transcription of audio and video. Topics included NVTC requirements for multi-genre translation, motivations for the TM technology assessment (e.g., the automobile industry's documented return on investment from updating translations of manufacturing manuals, and standardized terminology for parts and components suppliers), a pilot study, results, and collaboration with NIST.
Translation Memory Technology Assessment (MT Summit XII Government User Session)
d198936975
In the fields of finance and litigation, data mining technology has broad market prospects but is also a challenging task. Past years have witnessed great successes of data mining in finance and lawsuit related applications. Most existing work focuses on providing litigation risk assessment and outcome prediction services for clients. However, research on legal litigation types for enterprises is limited. In this paper, we focus on enterprise lawsuit category prediction and propose a novel approach that refines the problem as a classification task. First, we evaluate the probability distribution of legal documents received by the enterprise, then distinguish the specific legal litigation type. We applied our method in the International Big Data Analysis Competition launched at the IEEE ISI Conference 2019 and took first place on the final leaderboard.
Step-wise Refinement Classification Approach for Enterprise Legal Litigation
d12779706
We discuss a semi-interactive approach to information retrieval which consists of two tasks performed in a sequence. First, the system assists the searcher in building a comprehensive statement of information need, using automatically generated topical summaries of sample documents. Second, the detailed statement of information need is automatically processed by a series of natural language processing routines in order to derive an optimal search query for a statistical information retrieval system. In this paper, we investigate the role of automated document summarization in building effective search statements. We also discuss the results of the latest evaluation of our system at the annual Text Retrieval Conference (TREC).
Summarization-based Query Expansion in Information Retrieval
d193761
We present VPS-GradeUp -a set of 11,400 graded human decisions on usage patterns of 29 English lexical verbs from the Pattern Dictionary of English Verbs by Patrick Hanks. The annotation contains, for each verb lemma, a batch of 50 concordances with the given lemma as KWIC, and for each of these concordances we provide a graded human decision on how well the individual PDEV patterns for this particular lemma illustrate the given concordance, indicated on a 7-item Likert scale for each PDEV pattern. With our annotation, we were pursuing a pilot investigation of the foundations of human clustering and disambiguation decisions with respect to usage patterns of verbs in context. The data set is publicly available at http://hdl.handle.net/11234/1-1585.
VPS-GradeUp: Graded Decisions on Usage Patterns
d9127634
Although parsing performance has greatly improved in recent years, grammar inference from treebanks for morphologically rich languages, especially from small treebanks, is still a challenging task. In this paper we investigate how state-of-the-art parsing performance can be achieved on Spanish, a language with a rich verbal morphology, with a non-lexicalized parser trained on a treebank containing only around 2,800 trees. We rely on accurate part-of-speech tagging and data-driven lemmatization to provide parsing models able to cope with lexical data sparseness. Providing state-of-the-art results on Spanish, our methodology is applicable to other languages with a high level of inflection.
Statistical Parsing of Spanish and Data Driven Lemmatization
d16435469
This paper presents a conditional random fields based labeling approach to Chinese punctuation prediction. To this end, we first reformulate Chinese punctuation prediction as a multiple-pass labeling task on a sequence of words, and then explore various features from three linguistic levels, namely words, phrase and functional chunks for punctuation prediction under the framework of conditional random fields. Our experimental results on the Tsinghua Chinese Treebank show that using multiple deeper linguistic features and multiple-pass labeling consistently improves performance.
A CRF Sequence Labeling Approach to Chinese Punctuation Prediction
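To make the labeling formulation concrete, here is a small CRF sequence labeler for punctuation prediction built with the third-party sklearn-crfsuite package; the features and label set are simplified stand-ins for the word/phrase/chunk features the paper explores:

```python
# Illustrative CRF labeler for punctuation prediction (sklearn-crfsuite,
# not the paper's original system; features and labels are toy stand-ins).
import sklearn_crfsuite

def word_features(words, i):
    return {
        "word": words[i],
        "prev": words[i - 1] if i > 0 else "<BOS>",
        "next": words[i + 1] if i < len(words) - 1 else "<EOS>",
    }

# Each training example: a word sequence and the punctuation following each word.
sentences = [["我", "来", "了", "你", "呢"]]
labels = [["O", "O", "COMMA", "O", "QUESTION"]]

X = [[word_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```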
d18039958
Up until now, most of the methods published for polarity classification have been applied to English texts. However, other languages on the Internet are becoming increasingly important. This paper presents a set of experiments on English and Spanish product reviews. Using a comparable corpus, a supervised method and two unsupervised methods have been assessed. Furthermore, a list of Spanish opinion words is presented as a valuable resource.
Bilingual Experiments on an Opinion Comparable Corpus
d13845050
This paper proposes a novel semantic-similarity-based approach to assign or recommend a hashtag for a given tweet. The work uses a Latent Dirichlet Allocation (LDA) based learning approach. In the training phase, we learn the latent concept space of a given set of training tweets via topic modeling, and identify a group of tweets that act as representatives of each topic. In the inference phase, we create a probability distribution of a given test tweet belonging to the learned topics, and find the semantic similarity of the test tweet with the representative tweets for each topic. We propose two assignment approaches. In one approach, we assign hashtags to a target tweet by obtaining them from the representative training tweets that have the highest semantic similarities with the target tweet. In the other approach, we combine (a) the semantic similarity of the target tweet with the representative tweets, and (b) the assignment probability of the target tweet to a given topic, and assign hashtags using this joint maximization. The hashtags are assigned to the target tweet by selecting the top-K values from the combination. Our system yields an F-score of 46.59%, improving over the LDA baseline by around a factor of six.
SemTagger: A Novel Approach for Semantic Similarity Based Hashtag Recommendation on Twitter
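The inference step can be sketched with gensim: infer the test tweet's topic distribution and borrow a hashtag from the most similar training tweet. The dot product over topic vectors below is a stand-in for the paper's semantic similarity and joint maximization, and the data is a toy assumption:

```python
# Toy sketch of the LDA-based inference step described above.
from gensim import corpora, models
import numpy as np

train = [("nlp is fun", "#nlp"), ("deep learning rocks", "#ml"),
         ("parsing with grammars", "#nlp")]
texts = [t.split() for t, _ in train]
dictionary = corpora.Dictionary(texts)
bows = [dictionary.doc2bow(t) for t in texts]
lda = models.LdaModel(bows, id2word=dictionary, num_topics=2, random_state=0)

def topic_vec(tokens):
    # Full topic distribution for a token list, in topic-id order.
    dist = lda.get_document_topics(dictionary.doc2bow(tokens),
                                   minimum_probability=0.0)
    return np.array([p for _, p in dist])

test = "fun nlp parsing".split()
sims = [topic_vec(test) @ topic_vec(t.split()) for t, _ in train]
print(train[int(np.argmax(sims))][1])  # recommended hashtag
```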
d14777125
This paper describes a new methodology for developing CAT tools that assist translators of technical and scientific texts by (i) on-the-fly highlighting of nominal and verbal terminology in a source language (SL) document, which lifts possible syntactic ambiguity and thus essentially raises the document's readability, and (ii) simultaneous translation of all SL document one- and multi-component lexical units. The methodology is based on a language-independent hybrid extraction technique used for document analysis, and language-dependent shallow linguistic knowledge. It is targeted at intelligent output and computationally attractive properties. The approach is illustrated by its implementation in a CAT tool for the Russian-English language pair. Such tools can also be integrated into full MT systems.
On-The-Fly Translator Assistant (Readability and Terminology Handling)
d16597272
This paper presents the first systematic study of the coreference resolution problem in a general inference-based discourse processing framework. Employing the mode of inference called weighted abduction, we propose a novel solution to the overmerging problem inherent to inference-based frameworks. The overmerging problem consists in erroneously assuming distinct entities to be identical. In discourse processing, overmerging causes establishing wrong coreference links. In order to approach this problem, we extend Hobbs et al. (1993)'s weighted abduction by introducing weighted unification and show how to learn the unification weights by applying machine learning techniques. For making large-scale processing and parameter learning in an abductive logic framework feasible, we employ a new efficient implementation of weighted abduction based on Integer Linear Programming. We then propose several linguistically motivated features for blocking incorrect unifications and employ different large-scale world knowledge resources for establishing unification via inference. We provide a large-scale evaluation on the CoNLL-2011 shared task dataset, showing that all features and almost all knowledge components improve the performance of our system.
Coreference Resolution with ILP-based Weighted Abduction
d226862555
Sports game summarization focuses on generating news articles from live commentaries. Unlike traditional summarization tasks, the source documents and the target summaries for sports game summarization tasks are written in quite different writing styles. In addition, live commentaries usually contain many named entities, which makes summarizing sports games precisely very challenging. To deeply study this task, we present SPORTSSUM, a Chinese sports game summarization dataset which contains 5,428 soccer games of live commentaries and the corresponding news articles. Additionally, we propose a two-step summarization model consisting of a selector and a rewriter for SPORTSSUM. To evaluate the correctness of generated sports summaries, we design two novel score metrics: name matching score and event matching score. Experimental results show that our model performs better than other summarization baselines on ROUGE scores as well as the two designed scores.
Generating Sports News from Live Commentary: A Chinese Dataset for Sports Game Summarization
d14249474
This paper discusses the parameterized Equivalence Class Method for Dutch, an approach developed to incorporate standard lexical representations for Dutch idioms into representations required by any specific NLP system with as minimal manual work as possible. The purpose of the paper is to give an overview of parameters applicable to Dutch, which are determined by examining a large set of data and two Dutch NLP systems. The effects of the introduced parameters are evaluated and the results presented.
Elaborating the parameterized Equivalence Class Method for Dutch
d13878778
This paper studies named entity translation and proposes "selective temporality" as a new feature, as using temporal features may be harmful for translating "atemporal" entities. Our key contribution is building an automatic classifier to distinguish temporal and atemporal entities then align them in separate procedures to boost translation accuracy by 6.1%.
Enriching Entity Translation Discovery using Selective Temporality
d218973909
d219310357
Parallel monolingual resources are imperative for data-driven sentence simplification research. We present the work of aligning, at the sentence level, a corpus of all Swedish public authorities and municipalities web texts in standard and simple Swedish. We compare the performance of three alignment algorithms used for similar work in English (Average Alignment, Maximum Alignment, and Hungarian Alignment), and the best-performing algorithm is used to create a resource of 15,433 unique sentence pairs. We evaluate the resulting corpus using a set of features that has proven to predict text complexity of Swedish texts. The results show that the sentences of the simple sub-corpus are indeed less complex than the sentences of the standard part of the corpus, according to many of the text complexity measures.
Is it simpler? An Evaluation of an Aligned Corpus of Standard-Simple Sentences
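Of the three algorithms compared, Hungarian alignment is the easiest to sketch: build a sentence-similarity matrix and solve the assignment problem over it. TF-IDF cosine below is an assumed stand-in for the paper's similarity measure, and the sentences are toy examples:

```python
# Hungarian alignment of standard/simple sentences over a similarity matrix.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.optimize import linear_sum_assignment

standard = ["Myndigheten ansvarar för tillsynen.", "Beslutet kan överklagas."]
simple = ["Beslutet går att överklaga.",
          "Myndigheten kontrollerar att reglerna följs."]

vec = TfidfVectorizer().fit(standard + simple)
sim = cosine_similarity(vec.transform(standard), vec.transform(simple))
rows, cols = linear_sum_assignment(-sim)  # negate: the solver minimizes cost
# Keep only assignments above a (toy) similarity threshold.
pairs = [(standard[r], simple[c]) for r, c in zip(rows, cols) if sim[r, c] > 0.1]
print(pairs)
```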
d31005847
This presentation introduces the imminent establishment of a new language resource infrastructure particularly focusing on languages spoken in Southern Africa, but with an eventual aim to become a hub for digital language resources within Sub-Saharan Africa. The Constitution of South Africa makes provision for 11 official languages, with equal status although they differ significantly with regard to the number of speakers. The current language Resource Management Agency (RMA) will be merged with the new Centre, which will have a much wider focus than that of data acquisition, management and distribution. The Centre (SADiLaR) will run two main programs: Digitisation and Digital Humanities. The digitisation program will focus on the systematic digitisation of relevant text, speech and multi-modal data across the official languages. Relevancy will be determined by a Scientific Advisory Board. This will take place on a continuous basis through specified projects allocated to national members of the Centre, as well as through open calls aimed at the academic as well as local communities. The digital resources will be enhanced, managed and distributed through a dedicated web-based portal. The development of the Digital Humanities (DH) program will entail extensive academic support for projects implementing digital language based data. SADiLaR will function as an enabling research infrastructure primarily supported by national government.
South African Centre for Digital Language Resources
d7917329
We restate the classical logical notion of generation/parsing reversibility in terms of feasible probabilistic sampling, and argue for an implementation based on finite-state factors. We propose a modular decomposition that reconciles generation accuracy with parsing robustness and allows the introduction of dynamic contextual factors. (Opinion Piece)
Reversibility reconsidered: finite-state factors for efficient probabilistic sampling in parsing and generation
d1643143
The current paper evaluates the performance of the PRESEMT methodology, which facilitates the creation of machine translation (MT) systems for different language pairs. This methodology aims to develop a hybrid MT system that extracts translation information from large, predominantly monolingual corpora, using pattern recognition techniques. PRESEMT has been designed to have the lowest possible requirements on specialised resources and tools, given that for many languages (especially less widely used ones) only limited linguistic resources are available. In PRESEMT, the main translation process is divided into two phases, the first determining the overall structure of a target language (TL) sentence, and the second disambiguating between alternative translations for words or phrases and establishing local word order. This paper describes the latest version of the system and evaluates its translation accuracy, while also benchmarking the PRESEMT performance by comparing it with other established MT systems using objective measures.
Evaluating the translation accuracy of a novel language-independent MT methodology
d8973727
The paper reports on a new approach to automatic generation of back-of-book indexes for Chinese books. Parsing on the level of complete sentential analysis is avoided because of the inefficiency and unavailability of a Chinese grammar with enough coverage. Instead, a fundamental analysis particular to Chinese text, called word segmentation, is performed to break up characters into a sequence of lexical units equivalent to words in English. The sequence of words then goes through part-of-speech tagging and noun phrase analysis. All these analyses are done using a corpus-based statistical algorithm. Experiments have shown satisfactory results.
A Corpus-Based Statistical Approach to Automatic Book Indexing
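After segmentation, tagging, and noun-phrase analysis, index-term selection reduces to ranking candidates statistically. A minimal frequency-based sketch (the candidates are pre-segmented toy input; the real pipeline derives them with the corpus-based analyses described above):

```python
# Minimal sketch of statistical index-term selection: candidate noun phrases,
# obtained from segmentation + tagging + NP analysis, ranked by frequency.
from collections import Counter

candidate_nps = ["自然 语言", "语言 处理", "自然 语言", "统计 方法"]  # toy input
index_terms = Counter(candidate_nps).most_common(2)
print(index_terms)  # highest-frequency noun phrases become index entries
```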
d982854
In this paper we report on our natural language information retrieval (NLIR) project as related to the recently concluded 5th Text Retrieval Conference (TREC-5). The main thrust of this project is to use natural language processing techniques to enhance the effectiveness of full-text document retrieval. One of our goals was to demonstrate that robust if relatively shallow NLP can help to derive a better representation of text documents for statistical search. Recently, we have turned our attention away from text representation issues and more towards query development problems. While our NLIR system still performs extensive natural language processing in order to extract phrasal and other indexing terms, our focus has shifted to the problems of building effective search queries. Specifically, we are interested in query construction that uses words, sentences, and entire passages to expand initial topic specifications in an attempt to cover their various angles, aspects and contexts. Based on our earlier results indicating that NLP is more effective with long, descriptive queries, we allowed for long passages from related documents to be liberally imported into the queries. This method appears to have produced a dramatic improvement in the performance of two different statistical search engines that we tested (Cornell's SMART and NIST's Prise) boosting the average precision by at least 40%. In this paper we discuss both manual and automatic procedures for query expansion within a new stream-based information retrieval model.
Building Effective Queries In Natural Language Information Retrieval
d15330994
In this paper we introduce the first version of noWaC, a large web-based corpus of Bokmål Norwegian currently containing about 700 million tokens. The corpus has been built by crawling, downloading and processing web documents in the .no top-level internet domain. The procedure used to collect the noWaC corpus is largely based on the techniques described by Ferraresi et al. (2008). In brief, first a set of "seed" URLs containing documents in the target language is collected by sending queries to commercial search engines (Google and Yahoo). The obtained seeds (overall 6900 URLs) are then used to start a crawling job using the Heritrix web-crawler limited to the .no domain. The downloaded documents are then processed in various ways in order to build a linguistic corpus (e.g. filtering by document size, language identification, duplicate and near duplicate detection, etc.).
NoWaC: a large web-based corpus for Norwegian
d252847463
A big unknown in Digital Humanities (DH) projects that seek to analyze previously untouched corpora is the question of how to adapt existing Natural Language Processing (NLP) resources to the specific nature of the target corpus. In this paper, we study the case of Emergent Modern Hebrew (EMH), an underresourced chronolect of the Hebrew language. The resource we seek to adapt, a diacritizer, exists for both earlier and later chronolects of the language. Given a small annotated corpus of our target chronolect, we demonstrate that applying transfer-learning from either of the chronolects is preferable to training a new model from scratch. Furthermore, we consider just how much annotated data is necessary. For our task, we find that even a minimal corpus of 50K tokens provides a noticeable gain in accuracy. At the same time, we also evaluate accuracy at three additional increments, in order to quantify the gains that can be expected by investing in a larger annotated corpus.
NLP in the DH pipeline: Transfer-learning to a Chronolect
d8428849
Information Extraction, Summarization and Question Answering all manipulate natural language texts and should benefit from the use of NLP techniques. Statistical techniques have until now outperformed symbolic processing of unrestricted text. However, Information Extraction and Question Answering require far more accurate results than what is currently produced by Bag-Of-Words approaches. Besides, we see that tasks such as Semantic Evaluation of Text Entailment or Similarity, as required by the RTE Challenge, impose much stricter performance in semantic terms to tell true from false pairs. We will speak in favour of a hybrid system, a combination of statistical and symbolic processing, with reference to a specific problem: that of Anaphora Resolution, which looms large and deep in text processing.
Hybrid Systems for Information Extraction and Question Answering
d49742750
Deep learning approaches for sentiment classification do not fully exploit sentiment linguistic knowledge. In this paper, we propose a Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the problem by integrating three kinds of sentiment linguistic knowledge (e.g., sentiment lexicon, negation words, intensity words) into the deep neural network via attention mechanisms. By using various types of sentiment resources, MEAN utilizes sentiment-relevant information from different representation subspaces, which makes it more effective to capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction. The experimental results demonstrate that MEAN has robust superiority over strong competitors.
A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
d6866837
The Universal Dependencies (UD) project was conceived after the substantial recent interest in unifying annotation schemes across languages. With its own annotation principles and abstract inventory for parts of speech, morphosyntactic features and dependency relations, UD aims to facilitate multilingual parser development, cross-lingual learning, and parsing research from a language typology perspective. This paper presents the Turkish IMST-UD Treebank, the first Turkish treebank to be in a UD release. The IMST-UD Treebank was automatically converted from the IMST Treebank, which was also recently released. We describe this conversion procedure in detail, complete with mapping tables. We also present our evaluation of the parsing performances of both versions of the IMST Treebank. Our findings suggest that the UD framework is at least as viable for Turkish as the original annotation framework of the IMST Treebank.
Universal Dependencies for Turkish
d227316717
d6487907
A verb paradigm is a set of inflectional categories for a single verb lemma. To obtain verb paradigms we extracted left and right bigrams for the 400 most frequent verbs from over 100 million words of text, calculated the Kullback-Leibler distance for each pair of verbs for left and right contexts separately, and ran a hierarchical clustering algorithm for each context. Our new method for finding unsupervised cut points in the cluster trees produced results that compared favorably with results obtained using supervised methods, such as gain ratio, a revised gain ratio and number of correctly classified items. Left context clusters correspond to inflectional categories, and right context clusters correspond to verb lemmas. For our test data, 91.5% of the verbs are correctly classified for inflectional category, 74.7% are correctly classified for lemma, and the correct joint classification for lemma and inflectional category was obtained for 67.5% of the verbs. These results are derived only from distributional information without use of morphological information.
Towards Unsupervised Extraction of Verb Paradigms from Large Corpora
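The core computation is a pairwise symmetrized Kullback-Leibler distance over context-count distributions, followed by hierarchical clustering. A runnable toy version with scipy (the verbs and counts are invented for illustration):

```python
# Symmetrized KL divergence between verbs' left-context distributions,
# then hierarchical clustering (toy counts, not the paper's data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def sym_kl(p, q, eps=1e-10):
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

verbs = ["walks", "walked", "runs", "ran"]
# Rows: left-bigram context counts for each verb (invented numbers).
counts = np.array([[9, 1, 0], [1, 8, 1], [8, 2, 0], [0, 9, 1]], float)

n = len(verbs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = sym_kl(counts[i], counts[j])

tree = linkage(squareform(dist), method="average")
print(fcluster(tree, t=2, criterion="maxclust"))  # e.g. present vs. past clusters
```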
d171949614
d16425409
Using fuzzy context-free grammars one can easily describe a finite number of ways to derive incorrect strings together with their degree of correctness. However, in general there is an infinite number of ways to perform a certain task wrongly. In this paper we introduce a generalization of fuzzy context-free grammars, the so-called fuzzy context-free K-grammars, to model the situation of making a finite choice out of an infinity of possible grammatical errors during each context-free derivation step. Under minor assumptions on the parameter K this model happens to be a very general framework to describe correctly as well as erroneously derived sentences by a single generating mechanism. Our first result characterizes the generating capacity of these fuzzy context-free K-grammars. As consequences we obtain: (i) bounds on modeling grammatical errors within the framework of fuzzy context-free grammars, and (ii) the fact that the family of languages generated by fuzzy context-free K-grammars shares closure properties very similar to those of the family of ordinary context-free languages. The second part of the paper is devoted to a few algorithms to recognize fuzzy context-free languages: viz. a variant of a functional version of Cocke-Younger-Kasami's algorithm and some recursive descent algorithms. These algorithms turn out to be robust in some very elementary sense and they can easily be extended to corresponding parsing algorithms.
A FUZZY APPROACH TO ERRONEOUS INPUTS IN CONTEXT-FREE LANGUAGE RECOGNITION
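The CYK variant mentioned in this abstract carries degrees of correctness through the chart. A tiny fuzzy recognizer in that spirit, with an invented toy grammar (min combines the rule applications of one derivation, max ranges over alternative derivations):

```python
# Tiny fuzzy CYK recognizer: each CNF production carries a degree in [0, 1].
from collections import defaultdict

term_rules = {("N", "time"): 1.0, ("V", "flies"): 1.0, ("N", "flies"): 0.6}
bin_rules = {("S", "N", "V"): 1.0, ("S", "N", "N"): 0.4}

def fuzzy_cyk(words, start="S"):
    n = len(words)
    # table[i][j]: best degree per nonterminal covering words[i:j]
    table = [[defaultdict(float) for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        for (lhs, word), deg in term_rules.items():
            if word == w:
                table[i][i + 1][lhs] = max(table[i][i + 1][lhs], deg)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(i + 1, i + span):
                for (lhs, b, c), deg in bin_rules.items():
                    d = min(deg, table[i][k][b], table[k][i + span][c])
                    table[i][i + span][lhs] = max(table[i][i + span][lhs], d)
    return table[0][n][start]

print(fuzzy_cyk(["time", "flies"]))  # 1.0 via S -> N V (S -> N N yields only 0.4)
```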
d14881317
Near-synonym sets represent groups of words with similar meaning, which are useful knowledge resources for many natural language applications such as query expansion for information retrieval (IR) and computer-assisted language learning. However, near-synonyms are not necessarily interchangeable in context due to their specific usage and syntactic constraints. Previous studies have developed various methods for near-synonym choice in English sentences. To the best of our knowledge, there is no such evaluation on Chinese sentences. Therefore, this paper implements two baseline systems widely used in previous work, pointwise mutual information (PMI) and a 5-gram language model, for Chinese near-synonym choice evaluation. Experimental results show that the 5-gram language model achieves higher accuracy than PMI.
A Baseline System for Chinese Near-Synonym Choice
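The PMI baseline is straightforward: choose the candidate whose summed pointwise mutual information with the context words is highest. A toy version with invented counts (N and the count tables are assumptions, not the paper's corpus statistics):

```python
# PMI baseline sketch for near-synonym choice (toy corpus counts).
import math

N = 10000                                          # corpus size (invented)
count = {"强烈": 300, "猛烈": 200, "地震": 150}      # unigram counts
cooc = {("强烈", "地震"): 90, ("猛烈", "地震"): 40}  # co-occurrence counts

def pmi(w, c):
    # PMI(w, c) = log( p(w, c) / (p(w) * p(c)) )
    joint = cooc.get((w, c), 0) or cooc.get((c, w), 0)
    if joint == 0:
        return 0.0
    return math.log((joint / N) / ((count[w] / N) * (count[c] / N)))

candidates, context = ["强烈", "猛烈"], ["地震"]
best = max(candidates, key=lambda w: sum(pmi(w, c) for c in context))
print(best)  # "强烈" collocates more strongly with "地震" in these toy counts
```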
d6621386
In this article, we present a method for extracting automatically from texts semantic relations in the medical domain using linguistic patterns. These patterns refer to three levels of information about words: inflected form, lemma and part-of-speech. The method we present consists first in identifying the entities that are part of the relations to extract, that is to say diseases, exams, treatments, drugs or symptoms. Thereafter, sentences that contain couples of entities are extracted and the presence of a semantic relation is validated by applying linguistic patterns. These patterns were previously learnt automatically from a manually annotated corpus by relying on an algorithm based on the edit distance. We first report the results of an evaluation of our medical entity tagger for the five types of entities we have mentioned above and then, more globally, the results of an evaluation of our extraction method for four relations between these entities. Both evaluations were done for French.
Learning patterns for building resources about semantic relations in the medical domain
d164407034
d14203516
In many natural language applications, there is a need to enrich syntactic parse trees. We present a statistical tree annotator that augments nodes with additional information. The annotator is generic and can be applied to a variety of applications. We report three such applications in this paper: predicting function tags, predicting null elements, and predicting whether a tree constituent is projectable in machine translation. Our function tag prediction system significantly outperforms previously published results.
A Statistical Tree Annotator and Its Applications
d478797
We cannot use non-local features with current major methods of sequence labeling such as CRFs due to concerns about complexity. We propose a new perceptron algorithm that can use non-local features. Our algorithm allows the use of all types of non-local features whose values are determined from the sequence and the labels. The weights of local and non-local features are learned together in the training process with guaranteed convergence. We present experimental results from the CoNLL 2003 named entity recognition (NER) task to demonstrate the performance of the proposed algorithm.
A New Perceptron Algorithm for Sequence Labeling with Non-local Features
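The key idea, features that may inspect the entire label sequence, can be shown with a tiny structured perceptron. Exhaustive decoding over label sequences keeps the sketch short; the paper's algorithm handles decoding far more efficiently, and the label-consistency feature below is an illustrative assumption:

```python
# Structured perceptron sketch with one non-local feature: whether repeated
# words receive inconsistent labels anywhere in the sequence.
from itertools import product

LABELS = ["O", "PER"]

def features(words, labels):
    feats = {}
    for w, y in zip(words, labels):                 # local word/label features
        key = f"w={w},y={y}"
        feats[key] = feats.get(key, 0) + 1
    for i in range(len(words)):                     # non-local feature: label
        for j in range(i + 1, len(words)):          # consistency of repeats
            if words[j] == words[i] and labels[j] != labels[i]:
                feats["inconsistent"] = feats.get("inconsistent", 0) + 1
    return feats

def decode(words, weights):
    # Exhaustive argmax over all label sequences (toy-sized only).
    return max(product(LABELS, repeat=len(words)),
               key=lambda ys: sum(weights.get(f, 0.0) * v
                                  for f, v in features(words, ys).items()))

def train(data, epochs=5):
    w = {}
    for _ in range(epochs):
        for words, gold in data:
            pred = decode(words, w)
            if pred != tuple(gold):                 # standard perceptron update
                for f, v in features(words, gold).items():
                    w[f] = w.get(f, 0.0) + v
                for f, v in features(words, pred).items():
                    w[f] = w.get(f, 0.0) - v
    return w

data = [(["john", "saw", "john"], ["PER", "O", "PER"])]
w = train(data)
print(decode(data[0][0], w))  # ('PER', 'O', 'PER')
```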
d196189733
Keywords: personality prediction, liking and reblogging, social networking service. Summary: It has been reported that a person's remarks and behaviors reflect the person's personality. Several recent studies have shown that textual information of user posts and user behaviors such as liking and reblogging specific posts are useful for predicting the personality of Social Networking Service (SNS) users. However, less attention has been paid to the textual information derived from user behaviors. In this paper, we investigate the effect of using textual information together with user behaviors for personality prediction. We focus on a personality diagnosis website and build a large dataset of SNS users and their personalities by collecting users who posted their personality diagnosis on Twitter. Using this dataset, we work on personality prediction as a set of binary classification tasks. Our experiments on the personality prediction of Twitter users show that the textual information of user behaviors is more useful than the co-occurrence information of the user behaviors, and that the performance of prediction is strongly affected by the number of user behaviors incorporated into the prediction. We also show that user behavior information is crucial for predicting the personality of users who do not post frequently.
Incorporating Textual Information on User Behavior for Personality Prediction
d218977352
d8744838
This paper presents the system submitted by KUNLPLab for SemEval-2014 Task 9, Subtask B: Message Polarity on Twitter data. Lexicon features and bag-of-words features are mainly used to represent the datasets. We trained a logistic regression classifier and achieved a 6% accuracy increase over the baseline feature representation. The effect of pre-processing on the classifier's accuracy is also discussed in this work.
KUNLPLab: Sentiment Analysis on Twitter Data
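A minimal sketch of the kind of system described above, assuming scikit-learn and a toy sentiment lexicon; both the lexicon and the training data here are illustrative stand-ins, not the team's actual code.

# Bag-of-words plus a lexicon score, fed to logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer
import numpy as np

LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "awful": -1.0}  # toy lexicon

def lexicon_score(texts):
    # one dense feature per text: the summed lexicon polarity of its tokens
    return np.array([[sum(LEXICON.get(w, 0.0) for w in t.lower().split())]
                     for t in texts])

model = Pipeline([
    ("features", FeatureUnion([
        ("bow", CountVectorizer()),
        ("lex", FunctionTransformer(lexicon_score, validate=False)),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

model.fit(["great movie", "awful service"], ["positive", "negative"])
print(model.predict(["really great stuff"]))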
d7112682
Finite-state methods are applied to the Russell-Wiener-Kamp notion of time (based on events) and developed into an account of interval relations and semi-intervals. Strings are formed and collected in regular languages and regular relations that are argued to embody temporal relations in their various underspecified guises. The regular relations include retractions that reduce computations by projecting strings down to an appropriate level of granularity, and notions of partiality within and across such levels.
Finite-state representations embodying temporal relations
d15709990
Comparative expressions (CEs) such as "bigger than" and "more oranges than" are highly ambiguous, and their meaning is context dependent. Thus, they pose problems for the semantic interpretation algorithms typically used in natural language database interfaces. We focus on the comparison attribute ambiguities that occur with CEs. To resolve these ambiguities our natural language interface interacts with the user, finding out which of the possible interpretations was intended. Our multi-level semantic processor facilitates this interaction by recognizing the occurrence of comparison attribute ambiguity and then calculating and presenting a list of candidate comparison attributes from which the user may choose.
THE LEXICAL SEMANTICS OF COMPARATIVE EXPRESSIONS IN A MULTI-LEVEL SEMANTIC PROCESSOR
d250390906
Recent advances in Natural Language Processing (NLP) have largely been driven by applying a self-attention mechanism to single or multiple modalities. While this approach has brought significant improvements in multiple downstream tasks, it fails to capture the interaction between different entities. We therefore propose MM-GATBT, a multimodal graph representation learning model that captures not only the relational semantics within one modality but also the interactions between different modalities. Specifically, the proposed method constructs image-based node embeddings that contain the relational semantics of entities. Our empirical results show that MM-GATBT achieves state-of-the-art results among all published papers on the MM-IMDb dataset.
MM-GATBT: Enriching Multimodal Representation Using Graph Attention Network
d252819430
Chinese text correction (CTC) focuses on detecting and correcting Chinese spelling errors and grammatical errors. Most existing datasets for Chinese spelling check (CSC) and Chinese grammatical error correction (GEC) focus on single sentences written by Chinese-as-a-second-language (CSL) learners. We find that errors made by native speakers differ significantly from those produced by non-native speakers. These differences make it inappropriate to use the existing test sets directly to evaluate text correction systems for native speakers. Some errors also require cross-sentence information to be identified and corrected. In this paper, we propose a cross-sentence Chinese text correction dataset for native speakers. Concretely, we manually annotated 1,500 texts written by native speakers. The dataset consists of 30,811 sentences and more than 1,000,000 Chinese characters. It contains four types of errors: spelling errors, redundant words, missing words, and word ordering errors. We also test some state-of-the-art models on the dataset. The experimental results show that even the best-performing model scores 20 points lower than humans, which indicates that there is still much room for improvement. We hope that the new dataset can fill the gap in cross-sentence text correction for native Chinese speakers.
CCTC: A Cross-Sentence Chinese Text Correction Dataset for Native Speakers
d235599154
d15513499
We present a novel algorithm for Japanese dependency analysis. The algorithm allows us to analyze the dependency structure of a sentence in linear time while keeping state-of-the-art accuracy. In this paper, we give a formal description of the algorithm and discuss it theoretically with respect to time complexity. In addition, we evaluate its efficiency and performance empirically against the Kyoto University Corpus. The proposed algorithm with improved dependency models yields the best accuracy among previously published results on the Kyoto University Corpus. We list here the constraints of Japanese dependency, including the ones mentioned above. C1. Each bunsetsu has only one head, except the rightmost one. C2. Each head bunsetsu is always placed to the right of its modifier. C3. Dependencies do not cross one another. These properties are basically shared with Korean and Mongolian. Typical steps of parsing Japanese: since Japanese has the properties above, the following steps are very common: 1. Break a sentence into morphemes (i.e. morphological analysis). 2. Chunk them into bunsetsus. 3. Analyze dependencies between these bunsetsus. 4. Label each dependency with a semantic role such as agent, object, location, etc. We focus on dependency analysis in Step 3.
Linear-Time Dependency Analysis for Japanese
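A toy linear-time scan consistent with constraints C1-C3 might look as follows. Here `attaches` stands in for a learned dependency decision, and the whole routine is a simplification for illustration, not the paper's exact procedure.

# Stack-based left-to-right pass over bunsetsus: heads are always to the
# right (C2), arcs never cross (C3), and leftovers attach to the final
# bunsetsu so every bunsetsu but the last gets one head (C1).
def parse(bunsetsus, attaches):
    head = [None] * len(bunsetsus)
    stack = [0]
    for j in range(1, len(bunsetsus)):
        # pop while the classifier says the stack top depends on bunsetsu j
        while stack and attaches(stack[-1], j):
            head[stack.pop()] = j
        stack.append(j)
    for i in stack[:-1]:   # simplification: undecided bunsetsus take the last as head
        head[i] = len(bunsetsus) - 1
    return head

# toy run: every bunsetsu headed by its right neighbour
print(parse(list("ABCD"), lambda i, j: j == i + 1))   # [1, 2, 3, None]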
d212886318
d40209469
We present a set of stand-off annotations for the ninety thousand sentences in the spoken section of the British National Corpus (BNC) which feature a progressive aspect verb group. These annotations may be matched to the original BNC text using the supplied document and sentence identifiers. The annotated features mostly relate to linguistic form: subject type, subject person and number, form of auxiliary verb, and clause type, tense and polarity. In addition, the sentences are classified for register, the formality of recording context: three levels of 'spontaneity' with genres such as sermons and scripted speech at the most formal level and casual conversation at the least formal. The resource has been designed so that it may easily be augmented with further stand-off annotations. Expert linguistic annotations of spoken data, such as these, are valuable for improving the performance of natural language processing tools in the spoken language domain and assist linguistic research in general.
Annotating progressive aspect constructions in the spoken section of the British National Corpus
d140117323
We present the results of feature engineering and post-processing experiments conducted on a temporal expression recognition task. The former explores the use of different kinds of tagging schemes and of exploiting a list of core temporal expressions during training. The latter is concerned with the use of this list for post-processing the output of a system based on conditional random fields. We find that the incorporation of knowledge sources both for training and post-processing improves recall, while the use of extended tagging schemes may help to offset the (mildly) negative impact on precision. Each of these approaches addresses a different aspect of the overall recognition performance. Taken separately, the impact on the overall performance is low, but by combining the approaches we achieve both high precision and high recall scores.
Feature Engineering and Post-Processing for Temporal Expression Recognition Using Conditional Random Fields
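One concrete example of an extended tagging scheme of the kind such experiments compare is a BIO-to-BILOU rewrite; the helper below is illustrative and not taken from the paper.

# Convert a BIO tag sequence to BILOU (assumes well-formed BIO input;
# B- becomes U- and I- becomes L- when no I- tag follows).
def bio_to_bilou(tags):
    out = []
    for k, tag in enumerate(tags):
        nxt = tags[k + 1] if k + 1 < len(tags) else "O"
        if tag.startswith("B-"):
            out.append(("B-" if nxt.startswith("I-") else "U-") + tag[2:])
        elif tag.startswith("I-"):
            out.append(("I-" if nxt.startswith("I-") else "L-") + tag[2:])
        else:
            out.append(tag)
    return out

print(bio_to_bilou(["B-TIMEX", "I-TIMEX", "O", "B-TIMEX"]))
# ['B-TIMEX', 'L-TIMEX', 'O', 'U-TIMEX']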
d27060
This paper deals with the topic-comment articulation of information structures conveyed by sentences. In Japanese, the topic marker WA is attached not only to an N(P) but also to a PP or a clause, forming the information structure of a sentence, where the topicalized part represents a restriction and the remaining part of the sentence represents a nuclear scope. I propose the type-raised category for WA which embodies flexible constituency to realize divergent topic-comment structures. With our categorial definition for the topic marker and the combinatory rules in Combinatory Categorial Grammar, which derive the tripartite representation of the information state of a sentence, our grammar architecture can dispense with an independent level representing the information structure.
Topic-Comment Articulation in Japanese: A Categorial Approach
d249204473
The CEFAT4Cities project aims at creating a multilingual semantic interoperability layer for smart cities that allows users from all EU member states to interact with public services in their own language. The CEFAT4Cities processing pipeline transforms natural-language administrative procedures into machine-readable data using various multilingual natural-language processing techniques, such as semantic networks and machine translation, thus allowing for the development of more sophisticated and more user-friendly public services applications.
Automatically extracting the semantic network out of public services to support cities becoming smart cities
d250390889
ISCAS participated in both sub-tasks in the SemEval-2022 Task 10: Structured Sentiment competition. We design an extraction-validation pipeline architecture to tackle both the monolingual and cross-lingual sub-tasks. Experimental results show the multilingual effectiveness and cross-lingual robustness of our system. Our system is openly released at: https://github.com/luxinyu1/SemEval2022-Task10/.
ISCAS at SemEval-2022 Task 10: An Extraction-Validation Pipeline for Structured Sentiment Analysis
d17570096
A hybrid approach to the development of dialogue systems directed by semantics. Emilio Sanchis, Isabel Galiano, Fernando García, Antonio …
d1063621
The ability to detect similarity in conjunct heads is potentially a useful tool in helping to disambiguate coordination structures -a difficult task for parsers. We propose a distributional measure of similarity designed for such a task. We then compare several different measures of word similarity by testing whether they can empirically detect similarity in the head nouns of noun phrase conjuncts in the Wall Street Journal (WSJ) treebank. We demonstrate that several measures of word similarity can successfully detect conjunct head similarity and suggest that the measure proposed in this paper is the most appropriate for this task.
Empirical Measurements of Lexical Similarity in Noun Phrase Conjuncts
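A minimal sketch of a distributional comparison of two candidate conjunct heads, assuming simple window-based co-occurrence counts and cosine similarity; the paper's proposed measure differs in its details.

# Represent each noun by co-occurrence counts and compare by cosine.
import math
from collections import Counter

def cooccurrence_vector(word, sentences, window=2):
    vec = Counter()
    for sent in sentences:
        toks = sent.split()
        for i, t in enumerate(toks):
            if t == word:
                for c in toks[max(0, i - window):i] + toks[i + 1:i + 1 + window]:
                    vec[c] += 1
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# a high cosine between two head-noun vectors is evidence that the
# nouns plausibly form coordinated conjuncts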
d19017081
Previous methods for extracting attributes (e.g., capital, population) of classes (Empires) from Web documents or search queries assume that relevant attributes occur verbatim in the source text. The extracted attributes are short phrases that correspond to quantifiable properties of various instances (ottoman empire, roman empire, mughal empire) of the class. This paper explores the extraction of noncontiguous class attributes (manner (it) claimed legitimacy of rule), from fact-seeking and explanation-seeking queries. The attributes cover properties that are not always likely to be extracted as short phrases from inherently-noisy queries. Contributions: The contributions of this paper are twofold. First, it introduces a method for the acquisition of noncontiguous class attributes, from fact or explanation-seeking Web search queries like "how long does olive oil last unopened" or "how does honey help in weight loss". The resulting attributes are more diverse than, and therefore subsume, the scope of attributes extracted by previous methods. Indeed, previous methods are unlikely to extract attributes as specific as length/duration (it) lasts unopened and manner (it) helps in weight loss, for the instances olive oil and honey of the class Food ingredients. Conversely, previously extracted attributes like nutritional value and solubility in water are roughly equivalent to the finer-grained nutritional value (it) has and reason (it) dissolves in water, extracted from the queries "what nutritional value does honey have" and "why does glucose dissolve in water" respectively. Second, the noncontiguous attributes can be simultaneously interpreted as binary relations pertaining to instances and classes. The relations (helps in weight loss) connect an instance (honey) or, more generally, a class (Food ingredients), on one hand; and a loosely-typed unknown argument (manner) whose value is of interest to Web users, on the other hand. Because
Acquisition of Noncontiguous Class Attributes from Web Search Queries
d52000158
We present our system description of input-level multimodal fusion of audio, video, and text for recognition of emotions and their intensities for the 2018 First Grand Challenge on Computational Modeling of Human Multimodal Language. Our proposed approach is based on input-level feature fusion with sequence learning from Bidirectional Long Short-Term Memory (BLSTM) deep neural networks (DNNs). We show that our fusion approach outperforms unimodal predictors. Our system performs 6-way simultaneous classification and regression, allowing for overlapping emotion labels in a video segment. This leads to an overall binary accuracy of 90%, an overall 4-class accuracy of 89.2%, and an overall mean absolute error (MAE) of 0.12. Our work shows that an early fusion technique can effectively predict the presence of multi-label emotions as well as their coarse-grained intensities. The presented multimodal approach creates a simple and robust baseline on this new Grand Challenge dataset. Furthermore, we provide a detailed analysis of emotion intensity distributions as output from our DNN, as well as a related discussion concerning the inherent difficulty of this task.
Recognizing Emotions in Video Using Multimodal DNN Feature Fusion
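A hedged PyTorch sketch of the input-level (early) fusion idea described above: per-timestep audio, video and text features are concatenated before a BLSTM, with separate presence and intensity heads. All dimensions and layer choices here are invented for illustration.

# Early fusion: concatenate modality features per timestep, run a BLSTM,
# then predict multi-label emotion logits and per-emotion intensities.
import torch
import torch.nn as nn

class EarlyFusionBLSTM(nn.Module):
    def __init__(self, d_audio=74, d_video=35, d_text=300, hidden=128, n_emotions=6):
        super().__init__()
        self.blstm = nn.LSTM(d_audio + d_video + d_text, hidden,
                             batch_first=True, bidirectional=True)
        self.presence = nn.Linear(2 * hidden, n_emotions)   # multi-label logits
        self.intensity = nn.Linear(2 * hidden, n_emotions)  # regression outputs

    def forward(self, audio, video, text):
        fused = torch.cat([audio, video, text], dim=-1)     # (B, T, d_total)
        _, (h, _) = self.blstm(fused)
        h = torch.cat([h[-2], h[-1]], dim=-1)               # final fwd + bwd states
        return self.presence(h), self.intensity(h)

model = EarlyFusionBLSTM()
a, v, t = torch.randn(2, 50, 74), torch.randn(2, 50, 35), torch.randn(2, 50, 300)
logits, intensities = model(a, v, t)   # each of shape (2, 6)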
d4898281
In the LOD era, the conceptual interoperability of language resources is established by using modular architectures like the Ontologies of Linguistic Annotations (Chiarcos, 2008a, OLiA). Available as part of the Linguistic Linked Open Data (LLOD) cloud, OLiA provides ontological representations of annotation schemes for over 70 languages, as well as their linking to a reference model. We successfully train an ontology-based POS tagger on corpora with different tag sets of divergent granularity and partially compatible annotations. Making use of OLiA, we achieve interoperability of annotation schemes and, despite sparse training data, we not only outperform state-of-the-art POS taggers in concept coverage, but also show how training on heterogeneously annotated data produces richer morphosyntactic annotation with no or only marginal loss of precision.
An Ontology-based Approach to Automatic Part-of-Speech Tagging Using Heterogeneously Annotated Corpora
d10685951
In this paper we investigate named entity transliteration based on a phonetic scoring method. The phonetic score is computed using phonetic features and carefully designed pseudo features. The proposed method is tested with four languages - Arabic, Chinese, Hindi and Korean - and one source language - English - using comparable corpora. The proposed method is developed from the phonetic method originally proposed in Tao et al. (2006). In contrast to the phonetic method in Tao et al. (2006), which was constructed on the basis of pure linguistic knowledge, the method in this study is trained using the Winnow machine learning algorithm. There is salient improvement in Hindi and Arabic compared to the previous study. Moreover, we demonstrate that the method can also achieve comparable results when it is trained on language data different from the target language. The method can be applied both with minimal data and without target-language data for various languages.
Multilingual Transliteration Using Feature based Phonetic Method
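For reference, a plain implementation of the Winnow update rule named in the abstract, shown over generic boolean features; the phonetic and pseudo feature extraction itself is outside the scope of this sketch.

# Winnow: multiplicative weight updates over active boolean features.
def winnow_train(examples, n_features, alpha=2.0, epochs=5):
    w = [1.0] * n_features
    theta = n_features / 2.0
    for _ in range(epochs):
        for x, y in examples:                 # x: set of active feature ids, y: bool
            pred = sum(w[i] for i in x) >= theta
            if pred and not y:                # false positive: demote active weights
                for i in x:
                    w[i] /= alpha
            elif y and not pred:              # false negative: promote active weights
                for i in x:
                    w[i] *= alpha
    return w, theta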
d11323789
Although disjunction has been used in several unification-based grammar formalisms, existing methods of unification have been unsatisfactory for descriptions containing large quantities of disjunction, because they require exponential time. This paper describes a method of unification by successive approximation, resulting in better average performance.
A Unification Method for Disjunctive Feature Descriptions
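To see why naive disjunctive unification is expensive, here is a sketch of the baseline behaviour the paper improves on: disjuncts expanded eagerly into a cross-product of ordinary unifications. This is illustrative code, not the paper's successive-approximation method.

# Plain unification of feature structures (nested dicts / atoms).
def unify(a, b):
    """Return the unified structure, or None on failure."""
    if a == b:
        return a
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for k, v in b.items():
            if k in out:
                sub = unify(out[k], v)
                if sub is None:
                    return None
                out[k] = sub
            else:
                out[k] = v
        return out
    return None

def unify_disjunctive(alts_a, alts_b):
    # naive cross-product over disjuncts: |A| * |B| unifications, which is
    # what makes heavy disjunction exponential across many conjoined terms
    return [u for a in alts_a for b in alts_b
            if (u := unify(a, b)) is not None]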
d1924036
GF is a grammar formalism that has a powerful type system and module system, permitting a high level of abstraction and division of labour in grammar writing. GF is suited both for expert linguists, who appreciate its capacity of generalizations and conciseness, and for beginners, who benefit from its static type checker and, in particular, the GF Resource Grammar Library, which currently covers 12 languages. GF has a notion of multilingual grammars, enabling code sharing, linguistic generalizations, rapid development of translation systems, and painless porting of applications to new languages.
Grammar Development in GF
d10927237
This paper presents a preliminary experiment in automatically suggesting significant terms for a predefined topic. The general method is to compare a topically focused sample created around the predefined topic with a larger and more general base sample. A set of statistical measures are used to identify significant word units in both samples. Identification of single-word terms is based on the notion of word intervals. Two-word terms are identified through the computation of mutual information, and an extension of mutual information assists in capturing multi-word terms. Once significant terms of all these three types are identified, a comparison algorithm is applied to differentiate terms across the two data samples. If significant changes in the values of certain statistical variables are detected, the associated terms are selected as topic-oriented and included in a suggested list. To check the quality of the suggested terms, we compare them against terms manually determined by a domain expert. Though overlaps vary, we find that the automatic suggestion provides more terms that are useful for describing the predefined topic.
Automatic Suggestion of Significant Terms for a Predefined Topic
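The two-word-term step can be realised with pointwise mutual information over bigram and unigram counts, as in this sketch; it is a standard formulation that may differ from the paper's exact variant.

# PMI of each adjacent word pair in a token list.
import math
from collections import Counter

def pmi_bigrams(tokens):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    return {bg: math.log2((c / (n - 1)) /
                          ((unigrams[bg[0]] / n) * (unigrams[bg[1]] / n)))
            for bg, c in bigrams.items()}

# bigrams whose PMI rises sharply in the topical sample relative to the
# base sample are candidate two-word terms
print(pmi_bigrams("the hot dog ran to the hot dog stand".split()))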
d219302019
d248780325
Modern Irish is a minority language lacking sufficient computational resources for the task of accurate automatic syntactic parsing of user-generated content such as tweets. Although language technology for the Irish language has been developing in recent years, these tools tend to perform poorly on user-generated content. As with other languages, the linguistic style observed in Irish tweets differs, in terms of orthography, lexicon, and syntax, from that of standard texts more commonly used for the development of language models and parsers. We release the first Universal Dependencies treebank of Irish tweets, facilitating natural language processing of user-generated content in Irish. In this paper, we explore the differences between Irish tweets and standard Irish text, and the challenges associated with dependency parsing of Irish tweets. We describe our bootstrapping method of treebank development and report on preliminary parsing experiments comparing multilingual BERT (mBERT) (Devlin et al., 2019) and WikiBERT (Pyysalo et al., 2021) at the task of dependency parsing for Irish. The treebank is available at https://github.com/UniversalDependencies/UD_Irish-IDT.
TwittIrish: A Universal Dependencies Treebank of Tweets in Modern Irish
d12925121
Background: I.D.S. stands for Integrated Dictionary Systems. Its distinguishing feature is the integration of the bulk of grammar (= morphology, instructions for syntactic analysis, transfer and generation) into the dictionary. Research on these lines, aimed at Japanese-to-English Machine Translation, started in the early 60's and found practical application as a tool for teaching monolingual English speakers to decode Japanese. Applications of this method to other language pairs have also taken place. The IDS approach to Japanese-to-English MT found sponsorship from the British Government and ICL from 1984 as part of ALVEY (IKBS project no. 25, carried out at the University of Sheffield, England in cooperation with ICL and Kobe University, Japan). When the Japanese-to-English part of the ALVEY project was successfully concluded in 1987, resulting in the creation of AIDTRANS, SHARP Corporation (Japan) concluded an agreement with the hitherto partners and took over further sponsorship of this research. This note is about the work carried out after that. We have since achieved a working prototype of the sentence-for-sentence component known as PROTRAN, and work now continues at Kobe University, under SHARP sponsorship, on the development of a textwide component (TWINTRAN) which could run on top of the existing model. Our main task in the last year of research has been to reformulate the sentence-for-sentence Japanese-to-English system in such a way as to make the complete linguistic information explicit as rules, which are executed by a separate processing system. The processing system is all programmed in Prolog and executes the linguistic rules by applying a function to each type of rule. This task has largely been achieved by now. The linguistic information resides in the following sets of rules: 1) the Japanese-to-English Automatic Dictionary (at present 32,000 entries), held in a relational database with seven fields for each entry (the combined key comprises the fields Entry_word, Translation, Word_class, Entry_code and Continuation; outside the key are the fields Priority and Semantic_category); 2) a prioritised list of permitted juxtapositional links. Morpholexical analysis is executed by a linear chart parser utilising the fields Entry_word, Entry_code, Continuation and Priority from the dictionary database and the prioritised list of permitted links. This yields a set of morphological word classes.
Japanese-to-English Project 2 Sentence-for-Sentence Component: PROTRAN 2.1 Linguistic Rules and the Processing System
d250390750
Patronizing and Condescending Language (PCL) towards vulnerable communities in general media has been shown to have potentially harmful effects. Due to its subtlety and the good intentions behind its use, the audience is often unaware of the language's toxicity. In this paper, we present our method for SemEval-2022 Task 4, "Patronizing and Condescending Language Detection". In Subtask A, a binary classification task, we introduce adversarial training based on the Fast Gradient Method (FGM) and employ a pre-trained model in a unified architecture. For Subtask B, framed as a multi-label classification problem, we utilize various improved multi-label cross-entropy loss functions and analyze the performance of our method. In the final evaluation, our system achieved official rankings of 17/79 and 16/49 on Subtask A and Subtask B, respectively. In addition, we explore the relationship between PCL and the emotional polarity and intensity it contains. Our code is available on GitHub.
GUTS at SemEval-2022 Task 4: Adversarial Training and Balancing Methods for Patronizing and Condescending Language Detection
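FGM, the adversarial training method named above, is commonly realised in PyTorch as a perturbation of the embedding weights along the gradient direction; the sketch below follows that common pattern, with generic parameter names rather than the team's code.

# Fast Gradient Method on embedding parameters.
import torch

class FGM:
    def __init__(self, model, epsilon=1.0, emb_name="embedding"):
        self.model, self.epsilon, self.emb_name = model, epsilon, emb_name
        self.backup = {}

    def attack(self):
        # perturb embedding weights by epsilon * g / ||g||
        for name, p in self.model.named_parameters():
            if p.requires_grad and self.emb_name in name and p.grad is not None:
                self.backup[name] = p.data.clone()
                norm = torch.norm(p.grad)
                if norm != 0:
                    p.data.add_(self.epsilon * p.grad / norm)

    def restore(self):
        # put the original embedding weights back
        for name, p in self.model.named_parameters():
            if name in self.backup:
                p.data = self.backup[name]
        self.backup = {}

# typical training step:
#   loss.backward(); fgm.attack(); loss_adv = compute_loss(model, batch)
#   loss_adv.backward(); fgm.restore(); optimizer.step(); optimizer.zero_grad()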
d10251288
The constraint-oriented approaches to language processing step back from the generative theory and make it possible, in theory, to deal with all types of linguistic relationships (e.g. dependency, linear precedence or immediate dominance) with the same importance when parsing an input utterance. Yet in practice, all implemented constraint-oriented parsing strategies still need to discriminate between "important" and "not-so-important" types of relations during the parsing process.In this paper we introduce a new constraint-oriented parsing strategy based on Property Grammars, which overcomes this drawback and grants the same importance to all types of relations.
Numbat: Abolishing Privileges when Licensing New Constituents in Constraint-oriented Parsing
d222090623
d220446081
d2776676
One of the difficulties in using Folksonomies in computational systems is tag ambiguity: tags with multiple meanings. This paper presents a novel method for building Folksonomy tag ontologies in which the nodes are disambiguated. Our method utilizes a clustering algorithm called DSCBC, which was originally developed in Natural Language Processing (NLP), to derive committees of tags, each of which corresponds to one meaning or domain. In this work, we use Wikipedia as the external knowledge source for the domains of the tags. Using the committees, an ambiguous tag is identified as one which belongs to more than one committee. Then we apply a hierarchical agglomerative clustering algorithm to build an ontology of tags. The nodes in the derived ontology are disambiguated in that an ambiguous tag appears in several nodes in the ontology, each of which corresponds to one meaning of the tag. We evaluate the derived ontology for its ontological density (how close similar tags are placed), and its usefulness in applications, in particular for a personalized tag retrieval task. The results showed marked improvements over other approaches.
Construction of Disambiguated Folksonomy Ontologies Using Wikipedia
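A sketch of the hierarchy-building step using off-the-shelf agglomerative clustering; DSCBC and the Wikipedia-derived features are outside this example, so the tag vectors are random stand-ins.

# Hierarchical agglomerative clustering over tag feature vectors.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
tags = ["apple", "python", "java", "fruit", "snake", "coffee"]
vectors = rng.random((len(tags), 10))        # stand-in tag feature vectors

Z = linkage(vectors, method="average", metric="cosine")   # the dendrogram
clusters = fcluster(Z, t=2, criterion="maxclust")         # cut into 2 groups
print(dict(zip(tags, clusters)))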
d31966685
This paper demonstrates how to generate natural language sentences from the pieces of data found in databases in the domain of flight tickets. By using NooJ to add context to specific customer data found in customer data sets, we are able to produce sentences that give a short textual summary of each customer, providing a list of possible suggestions on how to proceed. In addition, due to the rich morphology of Croatian, we pay special attention to matching gender, number and case information where appropriate. Thus, we are able to provide individualized and grammatically correct text regardless of the customer's gender or the number of tickets bought and inquiries made. We believe that such short NL overviews can help ticket sellers get a quicker assessment of the type of customer and allow for the exchange of information with more confidence and greater speed.
Language Generation from DB Query
d2391728
We present a quasi-synchronous dependency grammar (Smith and Eisner, 2006) for machine translation in which the leaves of the tree are phrases rather than words as in previous work (Gimpel and Smith, 2009). This formulation allows us to combine structural components of phrase-based and syntax-based MT in a single model. We describe a method of extracting phrase dependencies from parallel text using a target-side dependency parser. For decoding, we describe a coarse-to-fine approach based on lattice dependency parsing of phrase lattices. We demonstrate performance improvements for Chinese-English and Urdu-English translation over a phrase-based baseline. We also investigate the use of unsupervised dependency parsers, reporting encouraging preliminary results.
Quasi-Synchronous Phrase Dependency Grammars for Machine Translation
d231642815
This paper presents a web interface for wordnets named Hydra for Web, which is built on top of Hydra - an open-source tool for wordnet development - by means of modern web technologies. It is a Single Page Application with a simple but powerful and convenient GUI. It has two modes for visualising the language correspondences of searched (and found) wordnet synsets: single and parallel. Hydra for Web is available at: http://dcl.bas.bg/bulnet/.
Hydra for Web: A Browser for Easy Access to Wordnets