_id | text | title |
|---|---|---|
d227230611 | ||
d7165697 | Fluent dialogue requires that speakers successfully negotiate and signal turn-taking. While many cues to turn change have been proposed, especially in multi-modal frameworks, here we focus on the use of prosodic cues to these functions. In particular, we consider the use of prosodic cues in a tone language, Mandarin Chinese, where variations in pitch height and slope additionally serve to determine word meaning. Within a corpus of spontaneous Chinese dialogues, we find that turn-unit final syllables are significantly lower in average pitch and intensity than turn-unit initial syllables in both smooth turn changes and segments ended by speaker overlap. Interruptions are characterized by significant prosodic differences from smooth turn initiations. Furthermore, we demonstrate that these contrasts correspond to an overall lowering across all tones in final position, which largely preserves the relative heights of the lexical tones. In classification tasks, we contrast the use of text and prosodic features. Finally, we demonstrate that, on balanced training and test sets, we can distinguish turn-unit final words from other words at ≈93% accuracy and interruptions from smooth turn-unit initiations at 62% accuracy. | Turn-taking in Mandarin Dialogue: Interactions of Tone and Intonation |
d259376794 | Social media (SM) can provide valuable information about patients' experiences with multiple drugs during treatments. Although information extraction from SM has been well studied, the detection of drug switches and the reasons behind these switches in SM have not been studied yet. Therefore, in this paper, we present a new SM listening approach for analyzing online patient conversations that contain information about drug switching, drug effectiveness, side effects, and adverse drug reactions. We describe a deep learning-based approach for identifying instances of drug switching in SM posts, as well as a method for extracting the reasons behind these switches. To train and test our models, we used annotated SM data from an internal dataset that was automatically created using a rule-based method. We evaluated our models using the Text-to-Text Transfer Transformer (T5) and found that our SM listening approach can extract medication change information and reasons with high accuracy, achieving an F1-score of 98% and a ROUGE-1 score of 93%, respectively. Overall, our results suggest that our SM listening approach has the potential to provide valuable insights into patients' experiences with drug treatments, which can be used to improve patient outcomes and the effectiveness of drug treatments. | Exploring Drug Switching in Patients: A Deep Learning-based Approach to Extract Drug Changes and Reasons from Social Media |
d30303365 | If natural language understanding systems are ever to cope with the full range of English language forms, their designers will have to incorporate a number of features of the spoken vernacular language. This communication discusses such features as non-standard grammatical rules, hesitations and false starts due to self-correction, systematic errors due to mismatches between the grammar and sentence generator, and uncorrected true errors. | ON THE LINGUISTIC CHARACTER OF NON-STANDARD INPUT |
d11613407 | This paper discusses aspects of a computational model for the semantics of why-questions which are relevant to the implementation of an explanation component in a natural language dialogue system. After a brief survey of all of the explanation components which have been implemented to date, some of the distinguishing features of the explanation component designed and implemented by the author are listed. In the first part of the paper the major types of signals which, like the word why, can be used to set the explanation component into action are listed, and some ways of recognizing them automatically are considered. In addition to these linguistic signals, communicative and cognitive conditions which can have the same effect are discussed. In the second part the various schemata for argumentative dialogue sequences which can be handled by the explanation component in question are examined. Particular attention is paid to problems arising in connection with the iteration of why-questions and the verbalization of multiple justifications. Finally, schemata for metacommunicative why-questions and for why-questions asked by the user are investigated. | TOWARDS A COMPUTATIONAL MODEL FOR THE SEMANTICS OF WHY-QUESTIONS |
d6296389 | This work presents a first step towards a general implementation of the Semantic-Script Theory of Humor (SSTH). Within the scarce body of research on computational humor, none has focused on humor generation beyond simple puns and punning riddles. We propose an algorithm for mining simple humorous scripts from a semantic network (ConceptNet) by specifically searching for dual scripts that jointly maximize overlap and incongruity metrics, in line with Raskin's Semantic-Script Theory of Humor. Initial results show that a more relaxed constraint of this form is capable of generating humor of deeper semantic content than wordplay riddles. We evaluate the said metrics through user-assessed quality ratings of the generated two-liners. | Humor as Circuits in Semantic Networks |
d1951131 | The Twins corpus is a collection of utterances spoken in interactions with two virtual characters who serve as guides at the Museum of Science in Boston. The corpus contains about 200,000 spoken utterances from museum visitors (primarily children) as well as from trained handlers who work at the museum. In addition to speech recordings, the corpus contains the outputs of speech recognition performed at the time of utterance as well as the system interpretation of the utterances. Parts of the corpus have been manually transcribed and annotated for question interpretation. The corpus has been used for improving performance of the museum characters and for a variety of research projects, such as phonetic-based Natural Language Understanding, creation of conversational characters from text resources, dialogue policy learning, and research on patterns of user interaction. It has the potential to be used for research on children's speech and on language used when talking to a virtual human. | The Twins Corpus of Museum Visitor Questions |
d943698 | Due to the mobile Internet revolution, people tend to browse the Web while driving their car, which puts the driver's safety at risk. Therefore, an intuitive and non-distracting in-car speech interface to the Web needs to be developed. Before developing a new speech dialog system in a new domain, developers have to examine the users' preferred interaction style with such a system. This paper reports on a very recent driving simulation study, conducted in order to compare different speech dialog strategies, and its preliminary results. The use of command-based and conversational SDS prototypes while driving is evaluated for usability and driving performance. Different GUIs are designed in order to best support the respective dialog strategy and to evaluate the effect of the GUI on usability and driver distraction. The preliminary results show that the conversational speech dialog performs more efficiently than the command-based dialog. However, the conversational dialog distracts more from driving than the command-based one. Furthermore, the results indicate that an SDS supported by a GUI is more efficient and better accepted by the user than one without a GUI. | Evaluation of Speech Dialog Strategies for Internet Applications in the Car |
d6300286 | Linguistic annotation is the process of adding additional notations to raw linguistic data for descriptive or analytical purposes. In the tagging of complex Chinese and multilingual linguistic data with a sophisticated linguistic framework, immediate visualization of the complex multi-layered functional and discourse structures is crucial for both speeding up the tagging process and reducing errors. The need for large-scale linguistically annotated corpora has made collaborative annotation increasingly essential, and existing annotation tools are inadequate to the task of providing assistance to annotators when dealing with complex linguistic structural information. In this paper we describe the design and development of a collaborative tool to extend existing annotation tools. The tool improves annotation efficiency and addresses certain difficulties in representing complex linguistic relations. Here, we adopt annotation based on Systemic Functional Linguistics and Rhetorical Structure Theory to demonstrate the effectiveness of the interface built on such infrastructure. | Collaborative Annotation and Visualization of Functional and Discourse Structures |
d8064931 | In this paper, we propose a method for mediatory summarization, which is a novel technique for facilitating users' assessments of the credibility of information on the Web. A mediatory summary is generated by extracting a passage from Web documents; this summary is generated on the basis of its relevance to a given query, fairness, and density of keywords, which are features of the summaries constructed to determine the credibility of information on the Web. We demonstrate the effectiveness of the generated mediatory summary in comparison with the summaries of Web documents produced by Web search engines. | A Method for Automatically Generating a Mediatory Summary to Verify Credibility of Information on the Web |
d2940785 | We present recent work in the area of Cross-Domain Dialogue Act tagging. Our experiments investigate the use of a simple dialogue act classifier based on purely intra-utterance features, principally word n-gram cue phrases. We apply automatically extracted cues from one corpus to a new annotated data set, to determine the portability and generality of the cues we learn. We show that our automatically acquired cues are general enough to serve as a cross-domain classification mechanism. | Investigating the Portability of Corpus-Derived Cue Phrases for Dialogue Act Classification |
d12842273 | This paper presents the PolNet-Polish WordNet project, which aims at building a linguistically oriented ontology for Polish compatible with other WordNet projects such as Princeton WordNet, EuroWordNet, and other similarly organized ontologies. The main idea behind this kind of ontology is to use words related by synonymy to construct formal representations of concepts. In the paper we sketch the PolNet project methodology and implementation. We present the data obtained so far, as well as the WQuery tool for querying and maintaining PolNet. WQuery is a query language that makes use of data types based on synsets, word senses, and the various semantic relations which occur in wordnet-like lexical databases. The tool is particularly useful for complex querying tasks like searching for cycles in semantic relations, finding isolated synsets, or computing overall statistics. Both the data and the tools presented in this paper have been applied within POLINT-112-SMS, an advanced AI system with emulated natural language competence, where they are used in the understanding subsystem. | PolNet -Polish WordNet: Data and Tools |
d202777021 | Understanding text often requires identifying meaningful constituent spans such as noun phrases and verb phrases. In this work, we show that we can effectively recover these types of labels using the learned phrase vectors from deep inside-outside recursive autoencoders (DIORA). Specifically, we cluster span representations to induce span labels. Additionally, we improve the model's labeling accuracy by integrating latent code learning into the training procedure. We evaluate this approach empirically through unsupervised labeled constituency parsing. Our method outperforms ELMo and BERT on two versions of the Wall Street Journal (WSJ) dataset and is competitive to prior work that requires additional human annotations, improving over a previous state-of-the-art system that depends on ground-truth part-of-speech tags by 5 absolute F1 points (19% relative error reduction). | Unsupervised Labeled Parsing with Deep Inside-Outside Recursive Autoencoders |
d192546007 | This work examines the robustness of self-attentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that could mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. In addition, we provide theoretical explanations for their superior robustness to support our claims. | On the Robustness of Self-Attentive Models |
d235097289 | Large-scale language models (LMs) pretrained on massive corpora of text, such as GPT-2, are powerful open-domain text generators. However, as our systematic examination reveals, it is still challenging for such models to generate coherent long passages of text (e.g., 1000 tokens), especially when the models are fine-tuned to the target domain on a small corpus. Previous planning-then-generation methods also fall short of producing such long text in various domains. To overcome the limitations, we propose a simple but effective method of generating text in a progressive manner, inspired by generating images from low to high resolution. Our method first produces domain-specific content keywords and then progressively refines them into complete passages in multiple stages. The simple design allows our approach to take advantage of pretrained LMs at each stage and effectively adapt to any target domain given only a small set of examples. We conduct a comprehensive empirical study with a broad set of evaluation metrics, and show that our approach significantly improves upon the fine-tuned large LMs and various planning-then-generation methods in terms of quality and sample efficiency. Human evaluation also validates that our model generations are more coherent. | Progressive Generation of Long Text with Pretrained Language Models |
d195218673 | This paper presents a novel framework, MGNER, for Multi-Grained Named Entity Recognition, where multiple entities or entity mentions in a sentence could be non-overlapping or totally nested. Unlike traditional approaches that regard NER as a sequential labeling task and annotate entities consecutively, MGNER detects and recognizes entities on multiple granularities: it is able to recognize named entities without explicitly assuming non-overlapping or totally nested structures. MGNER consists of a Detector that examines all possible word segments and a Classifier that categorizes entities. In addition, contextual information and a self-attention mechanism are utilized throughout the framework to improve the NER performance. Experimental results show that MGNER outperforms current state-of-the-art baselines by up to 4.4% in terms of the F1 score on nested/non-overlapping NER tasks. | Multi-Grained Named Entity Recognition |
d14949320 | We describe the semi-automatic adaptation of a TimeML annotated corpus from English to Portuguese, a language for which TimeML annotated data was not available yet. In order to validate this adaptation, we use the obtained data to replicate some results in the literature that used the original English data. The fact that comparable results are obtained indicates that our approach can be used successfully to rapidly create semantically annotated resources for new languages. | Temporal information processing of a new language: fast porting with minimal resources |
d16455108 | This paper presents the first release of the KiezDeutsch Korpus (KiDKo), a new language resource with multiparty spoken dialogues of Kiezdeutsch, a newly emerging language variety spoken by adolescents from multiethnic urban areas in Germany. The first release of the corpus includes the transcriptions of the data as well as a normalisation layer and part-of-speech annotations. In the paper, we describe the main features of the new resource and then focus on automatic POS tagging of informal spoken language. Our tagger achieves an accuracy of nearly 97% on KiDKo. While we did not succeed in further improving the tagger using ensemble tagging, we present our approach to using the tagger ensembles for identifying error patterns in the automatically tagged data. | The KiezDeutsch Korpus (KiDKo) Release 1.0 |
d1076380 | Citations are a valuable resource for characterizing scientific publications that has already been used in applications such as summarization and information retrieval. These applications could be even better served by expanding citation information. We aim to achieve this by extracting and classifying citation information from the text, so that subsequent applications may make use of it. We make three contributions to the advancement of fine-grained citation classification. First, our work uses a standard classification scheme for citations that was developed independently of automatic classification and therefore is not bound to any particular citation application. Second, to address the lack of available annotated corpora and reproducible results for citation classification, we are making available a manually-annotated corpus as a benchmark for further citation classification research. Third, we introduce new features designed for citation classification and compare them experimentally with previously proposed citation features, showing that these new features improve classification accuracy. | Towards a Generic and Flexible Citation Classifier Based on a Faceted Classification Scheme |
d26296420 | Polarity lexicons are a basic resource for analyzing the sentiments and opinions expressed in texts in an automated way. This paper explores three methods to construct polarity lexicons: translating existing lexicons from other languages, extracting polarity lexicons from corpora, and annotating sentiments in Lexical Knowledge Bases. Each of these methods requires a different degree of human effort. We evaluate how much manual effort is needed and to what extent that effort pays off in terms of performance improvement. Our experimental setup includes generating lexicons for Basque and evaluating them against gold standard datasets in different domains. Results show that extracting polarity lexicons from corpora is the best solution for achieving good performance with reasonable human effort. | Polarity Lexicon Building: to what Extent Is the Manual Effort Worth? |
d11663885 | For resource-limited language pairs, coverage of the test set by the parallel corpus is an important factor that affects translation quality in two respects: 1) out-of-vocabulary words; 2) the same information in an input sentence can be expressed in different ways, while current phrase-based SMT systems cannot automatically select an alternative way to transfer the same information. Therefore, given limited data, in order to facilitate translation from the input side, this paper proposes a novel method to reduce the translation difficulty using source-side lattice-based paraphrases. We utilise the original phrases from the input sentence and the corresponding paraphrases to build a lattice with estimated weights for each edge to improve translation quality. Compared to the baseline system, our method achieves relative improvements of 7.07%, 6.78% and 3.63% in terms of BLEU score on small, medium and large-scale English-to-Chinese translation tasks respectively. The results show that the proposed method is effective not only for resource-limited language pairs, but also for resource-sufficient pairs to some extent. | Facilitating Translation Using Source Language Paraphrase Lattices |
d6575946 | In this paper we propose a probabilistic graphical model as an innovative framework for studying typological universals. We view language as a system and linguistic features as its components, whose relationships are encoded in a Directed Acyclic Graph (DAG). Taking the discovery of word order universals as a knowledge discovery task, we learn the graphical representation of a word order sub-system, which reveals a finer structure such as direct and indirect dependencies among word order features. Probabilistic inference then enables us to see the strength of such relationships: given the observed value of one feature (or combination of features), the probabilities of values of other features can be calculated. Our model is not restricted to using only two values of a feature. Using an imputation technique and the EM algorithm, it can handle missing values well. A model averaging technique addresses the problem of limited data. In addition, the incremental and divide-and-conquer method addresses areal and genetic effects simultaneously, instead of separately as in Daumé III and Campbell (2007). | Exploring Word Order Universals: a Probabilistic Graphical Model Approach |
d235097519 | ||
d253481040 | Task-oriented conversational agents are gaining immense popularity and success in a wide range of tasks, from flight ticket booking to online shopping. However, the existing systems presume that end-users will always have a pre-determined and servable task goal, which results in dialogue failure in hostile scenarios, such as goal unavailability. On the other hand, human agents accomplish users' tasks even in a large number of goal unavailability scenarios by persuading them towards a very similar and servable goal. Motivated by this limitation, we propose and build a novel end-to-end multi-modal persuasive dialogue system incorporating a goal controller and a goal persuader, aided by a personalized persuasive module. The goal controller recognizes goal conflict/unavailability scenarios and formulates a new goal, while the goal persuader persuades users using a personalized persuasive strategy identified through dialogue context. We also present a novel automatic evaluation metric called Persuasiveness Measurement Rate (PMeR) for quantifying the persuasive capability of a conversational agent. The obtained improvements (both quantitative and qualitative) firmly establish the superiority and need of the proposed context-guided, personalized persuasive virtual agent over existing traditional task-oriented virtual agents. Furthermore, we also curated a multi-modal persuasive conversational dialogue corpus annotated with intent, slot, sentiment, and dialogue act for the e-commerce domain. | Persona or Context? Towards Building Context adaptive Personalized Persuasive Virtual Sales Assistant |
d252847491 | Despite considerable advances in open-domain neural dialogue systems, their evaluation remains a bottleneck. Several automated metrics have been proposed to evaluate these systems; however, they mostly focus on a single notion of quality, or, when they do combine several sub-metrics, they are computationally expensive. This paper attempts to solve the latter problem: QualityAdapt leverages the Adapter framework for the task of Dialogue Quality Estimation. Using well-defined semi-supervised tasks, we train Adapters for different subqualities and score generated responses with AdapterFusion. This compositionality provides an easy-to-adapt metric for the task at hand that incorporates multiple subqualities. It also reduces computational costs, as individual predictions of all subqualities are obtained in a single forward pass. This approach achieves results comparable to state-of-the-art metrics on several datasets, whilst keeping the previously mentioned advantages. | QualityAdapt: an Automatic Dialogue Quality Estimation Framework |
d17083280 | Entropy Guided Transformation Learning (ETL) is a new machine learning strategy that combines the advantages of decision trees (DT) and Transformation Based Learning (TBL). In this work, we apply the ETL framework to four phrase chunking tasks: Portuguese noun phrase chunking, English base noun phrase chunking, English text chunking and Hindi text chunking. In all four tasks, ETL shows better results than decision trees and also than TBL with hand-crafted templates. ETL provides a new training strategy that accelerates transformation learning. For the English text chunking task this corresponds to a factor-of-five speedup. For Portuguese noun phrase chunking, ETL shows the best reported results for the task. For the other three linguistic tasks, ETL shows state-of-the-art competitive results and maintains the advantages of using a rule-based system. | Phrase Chunking using Entropy Guided Transformation Learning |
d16098584 | We introduce positive-only projection (PoP), a new algorithm for constructing semantic spaces and word embeddings. The PoP method employs random projections. Hence, it is highly scalable and computationally efficient. In contrast to previous methods that use random projection matrices R with the expected value of 0 (i.e., E(R) = 0), the proposed method uses R with E(R) > 0. We use Kendall's τ_b correlation to compute vector similarities in the resulting non-Gaussian spaces. Most importantly, since E(R) > 0, weighting methods such as positive pointwise mutual information (PPMI) can be applied to PoP-constructed spaces after their construction for efficiently transferring PoP embeddings onto spaces that are discriminative for semantic similarity assessments. Our PoP-constructed models, combined with PPMI, achieve an average score of 0.75 in the MEN relatedness test, which is comparable to results obtained by state-of-the-art algorithms. | Random Positive-Only Projections: PPMI-Enabled Incremental Semantic Space Construction |
d5216936 | The computation of meaning similarity as operationalized by vector-based models has found widespread use in many tasks ranging from the acquisition of synonyms and paraphrases to word sense disambiguation and textual entailment. Vector-based models are typically directed at representing words in isolation and thus best suited for measuring similarity out of context. In this paper we propose a probabilistic framework for measuring similarity in context. Central to our approach is the intuition that word meaning is represented as a probability distribution over a set of latent senses and is modulated by context. Experimental results on lexical substitution and word similarity show that our algorithm outperforms previously proposed models. | Measuring Distributional Similarity in Context |
d5544591 | We describe and experimentally evaluate a system, FeasPar, that learns to parse spontaneous speech. To train and run FeasPar (Feature Structure Parser), only limited hand-modeled knowledge is required. The FeasPar architecture consists of neural networks and a search. The networks split the incoming sentence into chunks, which are labeled with feature values and chunk relations. Then, the search finds the most probable and consistent feature structure. FeasPar is trained, tested and evaluated on the Spontaneous Scheduling Task, and compared with a hand-modeled LR-parser. The hand-modeling effort for FeasPar was 2 weeks. The hand-modeling effort for the LR-parser was 4 months. FeasPar performed better than the LR-parser in all six comparisons that were made. | FeasPar -A Feature Structure Parser Learning to Parse Spoken Language |
d36967686 | ||
d27611775 | We propose a novel pipeline for translation into morphologically rich languages which consists of two steps: initially, the source string is enriched with target morphological features and then fed into a translation model which takes care of reordering and lexical choice that matches the provided morphological features. As a proof of concept we first show improved translation performance for a phrase-based model translating source strings enriched with morphological features projected through the word alignments from target words to source words. Given this potential, we present a model for predicting target morphological features on the source string and its predicate-argument structure, and tackle two major technical challenges: (1) How to fit the morphological feature set to training data? and (2) How to integrate the morphology into the back-end phrase-based model such that it can also be trained on projected (rather than predicted) features for a more efficient pipeline? For the first challenge we present a latent variable model, and show that it learns a feature set with quality comparable to a manually selected set for German. For the second challenge we present results showing that it is possible to bridge the gap between a model trained on predicted features and another trained on a projected morphologically enriched parallel corpus. Finally, we present translation results showing promising improvement over the baseline phrase-based system. | Machine Translation with Source-Predicted Target Morphology |
d9932933 | Neural networks with attention have proven effective for many natural language processing tasks. In this paper, we develop attention mechanisms for uncertainty detection. In particular, we generalize standardly used attention mechanisms by introducing external attention and sequence-preserving attention. These novel architectures differ from standard approaches in that they use external resources to compute attention weights and preserve sequence information. We compare them to other configurations along different dimensions of attention. Our novel architectures set the new state of the art on a Wikipedia benchmark dataset and perform similar to the state-of-the-art model on a biomedical benchmark which uses a large set of linguistic features. | Exploring Different Dimensions of Attention for Uncertainty Detection |
d52156433 | User intent detection plays a critical role in question-answering and dialog systems. Most previous works treat intent detection as a classification problem where utterances are labeled with predefined intents. However, it is labor-intensive and time-consuming to label users' utterances, as intents are diversely expressed and novel intents will continually be involved. Instead, we study the zero-shot intent detection problem, which aims to detect emerging user intents where no labeled utterances are currently available. We propose two capsule-based architectures: INTENTCAPSNET, which extracts semantic features from utterances and aggregates them to discriminate existing intents, and INTENTCAPSNET-ZSL, which gives INTENTCAPSNET the zero-shot learning ability to discriminate emerging intents via knowledge transfer from existing intents. Experiments on two real-world datasets show that our model not only can better discriminate diversely expressed existing intents, but is also able to discriminate emerging intents when no labeled utterances are available. | Zero-shot User Intent Detection via Capsule Neural Networks |
d252624672 | This contribution reports on work in progress on project-specific software and digital infrastructure components used along with corpus curation workflows in the framework of the long-term language documentation project INEL. By bringing together scientists with different levels of technical affinity in a highly interdisciplinary working environment, the project is confronted with numerous workflow-related issues. Many of them result from collaborative (remote) work on digital corpora, which, among other things, includes annotation and glossing, but also quality and consistency control. In this context several steps were taken to bridge the gap between usability and the requirements of complex data curation workflows. Components such as a versioning system and semi-automated data validators on the one side must meet user demands for simplicity and minimalism on the other. By embedding a simple shell script in an interactive graphical user interface, we increase the efficacy of data versioning and the integration of Java-based quality control and validation tools. | Bringing Together Version Control and Quality Assurance of Language Data with LAMA |
d218973751 | ||
d18653017 | Conceptualising a domain has long been recognised as a prerequisite for understanding that domain and processing information about it. Ontologies are explicit specifications of conceptualisations which are now recognised as important components of information systems and information processing. In this paper, we describe a project in which ontologies are part of the reasoning process used for information management and for the presentation of information. Both accessing and presenting information are mediated via natural language, and the ontologies are coupled with the lexicon used in the natural language component. | Towards Ontology-Based Natural Language Processing |
d53622046 | In this paper, we have explored web-based evidence gathering and different linguistic features to automatically extract drug names from tweets and to classify such tweets according to whether they report Adverse Drug Events. We evaluated our proposed models on the datasets released by the SMM4H workshop for shared Task 1 and Task 3. Our evaluation shows that the proposed models achieved good results, with precision, recall and F-scores of 78.5%, 88% and 82.9% for Task 1, and 33.2%, 54.7% and 41.3% for Task 3, respectively. | Leveraging Web Based Evidence Gathering for Drug Information Identification from Tweets |
d170655644 | ||
d7856580 | Sociolinguists have long argued that social context influences language use in all manner of ways, resulting in lects. This paper explores a text classification problem we will call lect modeling, an example of what has been termed computational sociolinguistics. In particular, we use machine learning techniques to identify social power relationships between members of a social network, based purely on the content of their interpersonal communication. We rely on statistical methods, as opposed to language-specific engineering, to extract features which represent vocabulary and grammar usage indicative of social power lect. We then apply support vector machines to model the social power lects representing superior-subordinate communication in the Enron email corpus. Our results validate the treatment of lect modeling as a text classification problem, albeit a hard one, and constitute a case for future research in computational sociolinguistics. | Extracting Social Power Relationships from Natural Language |
d1879032 | In this paper, we present a novel method for the computation of compositionality within a distributional framework. The key idea is that compositionality is modeled as a multi-way interaction between latent factors, which are automatically constructed from corpus data. We use our method to model the composition of subject verb object triples. The method consists of two steps. First, we compute a latent factor model for nouns from standard co-occurrence data. Next, the latent factors are used to induce a latent model of three-way subject verb object interactions. Our model has been evaluated on a similarity task for transitive phrases, in which it exceeds the state of the art. | A Tensor-based Factorization Model of Semantic Compositionality |
d127453 | Numerous cross-lingual applications, including state-of-the-art machine translation systems, require parallel texts aligned at the sentence level. However, collections of such texts are often polluted by pairs of texts that are comparable but not parallel. Bitext maps can help to discriminate between parallel and comparable texts. Bitext mapping algorithms use a larger set of document features than competing approaches to this task, resulting in higher accuracy. In addition, good bitext mapping algorithms are not limited to documents with structural mark-up such as web pages. The task of filtering non-parallel text pairs represents a new application of bitext mapping algorithms. | An Automatic Filter for Non-Parallel Texts |
d252442502 | The scarcity of parallel data is a major limitation for Neural Machine Translation (NMT) systems, in particular for translation into morphologically rich languages (MRLs). An important way to overcome the lack of parallel data is to leverage target monolingual data, which is typically more abundant and easier to collect. We evaluate a number of techniques to achieve this, ranging from back-translation to random token masking, on the challenging task of translating English into four typologically diverse MRLs, under low-resource settings. Additionally, we introduce Inflection Pre-Training (or PT-Inflect), a novel pre-training objective whereby the NMT system is pre-trained on the task of re-inflecting lemmatized target sentences before being trained on standard source-to-target language translation. We conduct our evaluation on four typologically diverse target MRLs, and find that PT-Inflect surpasses NMT systems trained only on parallel data. While PT-Inflect is outperformed by back-translation overall, combining the two techniques leads to gains in some of the evaluated language pairs. | Evaluating Pre-training Objectives for Low-Resource Translation into Morphologically Rich Languages |
d3025862 | Spock is an open source tool for the easy deployment of time-aligned corpora. It is fully web-based, and has very limited server-side requirements. It allows the end-user to search the corpus in a text-driven manner, obtaining both the transcription and the corresponding sound fragment in the result page. Spock has an administration environment to help manage the sound files and their respective transcription files, and also provides statistical data about the files at hand. Spock uses a proprietary file format for storing the alignment data, but the integrated admin environment allows you to import files from a number of common file formats. Spock is not intended as a transcriber program: it is not meant as an alternative to programs such as ELAN, Wavesurfer, or Transcriber, but rather to make corpora created with these tools easily available online. For the end user, Spock provides a very easy way of accessing spoken corpora, without the need to install any special software, which might make time-aligned corpora accessible to a large group of users who might otherwise never look at them. | Spock -a Spoken Corpus Client |
d237055485 | ||
d10681027 | Research has shown that a number of factors, such as maturational constraints, previous language background, and attention, can have an effect on L2 acquisition. One related issue that remains to be explored is what factors make an individual word more easily learned. In this study we propose that word complexity, on both the phonetic and semantic levels, affects L2 vocabulary learning. Two studies showed that words with simple grapheme-to-phoneme ratios were easier to learn than more phonetically complex words, and that words with two or fewer word senses were easier to learn than those with three or more. | Effect of Word Complexity on L2 Vocabulary Learning |
d9370589 | Choosing an appropriate way for a spoken dialog system to initiate a conversation is a challenging problem, and, if done incorrectly, can negatively affect people's performance on other important tasks. We describe the results of a study in which participants play a game and are interrupted by spoken notifications in different styles. We compare people's perceptions of the notification styles, as well as their effect on task performance. The different notifications include manipulations of pre-notifications and information about the urgency of the task. We find that pre-notifications help people respond significantly faster to urgent tasks, and that 43% of people, more than in any other category, prefer a notification style in which the notification begins by stating the urgency of the task. | Initiations and Interruptions in a Spoken Dialog System |
d39363 | Relations between frames and constructions must be made explicit in FrameNet-style linguistic resources such as Berkeley FrameNet (Fillmore & Baker, 2010; Fillmore, Lee-Goldman & Rhomieux, 2012), Japanese FrameNet (Ohara, 2013), and the Swedish Constructicon (Lyngfelt et al., 2013). On the basis of analyses of Japanese constructions for the purpose of building a constructicon in the Japanese FrameNet project, this paper argues that constructions can be classified based on whether they evoke frames or not. By recognizing such a distinction among constructions, it becomes possible for FrameNet-style linguistic resources to have a proper division of labor between frame annotations and construction annotations. In addition to the three kinds of "meaningless" constructions which have been proposed already, this paper suggests there may be yet another subtype of constructions without meanings. Furthermore, the present paper adds support to the claim that there may be constructions without meanings (Fillmore, Lee-Goldman & Rhomieux, 2012) in a current debate concerning whether all constructions should be seen as meaning-bearing (Goldberg, 2006: 166-182). | Relating Frames and Constructions in Japanese FrameNet |
d2330566 | We present an extension of phrase-based statistical machine translation models that enables the straightforward integration of additional annotation at the word level, be it linguistic markup or automatically generated word classes. In a number of experiments we show that factored translation models lead to better translation performance, both in terms of automatic scores and in terms of grammatical coherence. | Factored Translation Models |
d15008922 | Because negation is common in natural language, negation focus plays a critical role in the deep understanding of context. However, existing studies of negation focus identification mainly rely on supervised learning, which is time-consuming and expensive due to the manual preparation of annotated corpora. To address this problem, we propose an unsupervised word-topic graph model to represent and measure the focus candidates from both lexical and topic perspectives. Moreover, we propose a document-sensitive biased PageRank algorithm to optimize the ranking scores of focus candidates. Evaluation on the *SEM 2012 shared task corpus shows that our proposed method outperforms the state of the art on negation focus identification. | Unsupervised Negation Focus Identification with Word-Topic Graph Model |
d10963629 | Motivated by a systematic representation of the Chinese aspect forms that explores their intrinsic semantics and temporal logical relations, we are constructing a Chinese aspect system network based on systemic functional grammar and implemented using the multilingual generator KPML. In this paper, we introduce the basic simple primary aspect forms and a set of secondary types of the unmarked-durative aspect in our Chinese aspect system, describe the semantic temporal relations of complex aspect in terms of temporal logic theories, and propose principled semantic conditions for aspect combination. Finally, we give a brief explanation of the system implementation. | The Chinese Aspect System and its Semantic Interpretation |
d10286459 | Tense, temporal adverbs, and temporal connectives provide information about when events described in English sentences occur. To extract this temporal information from a sentence, it must be parsed into a semantic representation which captures the meaning of tense, temporal adverbs, and temporal connectives. Representations were developed for the basic tenses, some temporal adverbs, as well as some of the temporal connectives. Five criteria were suggested for judging these representations, and based on these criteria the representations were judged. | Time and Tense in English |
d1517664 | Text-to-speech has long been centered on the production of an intelligible message of good quality. More recently, interest has shifted to the generation of more natural and expressive speech. A major issue of existing approaches is that they usually rely on a manual annotation in expressive styles, which tends to be rather subjective. A typical related issue is that the annotation is strongly influenced -and possibly biased -by the semantic content of the text (e.g. a shot or a fault may incite the annotator to tag that sequence as expressing a high degree of excitation, independently of its acoustic realization). This paper investigates the assumption that human annotation of basketball commentaries in excitation levels can be automatically improved on the basis of acoustic features. It presents two techniques for label correction exploiting a Gaussian mixture and a proportional-odds logistic regression. The automatically re-annotated corpus is then used to train HMM-based expressive speech synthesizers, the performance of which is assessed through subjective evaluations. The results indicate that the automatic correction of the annotation with Gaussian mixture helps to synthesize more contrasted excitation levels, while preserving naturalness. | Combining Manual and Automatic Prosodic Annotation for Expressive Speech Synthesis |
d232237815 | Morphological processes are generally computable with 1-way finite-state transducers. However, we show that 1-way transducers do not capture the strong generative capacity of certain morphological analyses for more complex processes, including mobile affixation, infixation, and partial reduplication. As diagnostics for strong generative capacity, we use origin semantics and order-preservation. These analyze the input-output correspondences generated by finite-state transducers and their corresponding logical transductions. For some linguistic analyses of these complex processes, their strong generative capacity is matched by more expressive grammars, such as non-order-preserving transductions and their corresponding 2-way finite-state transducers. | Strong generative capacity of morphological processes |
d17785755 | We propose a method to detect Japanese nasty comments in posts on bulletin board systems (BBS). Nasty comments can cause many social problems, because they express potentially harmful words and phrases. There are methods to recognize harmful words, but they are insufficient. Therefore, we present a method for detecting such comments on a BBS with many posts using an n-gram model. In addition, we compared our method with a support vector machine (SVM) that is based on nasty words. As a result, we detected nasty comments different from those detected by the SVM. We also observe higher detection accuracy when combining the two methods. | Detecting Nasty Comments from BBS Posts |
d250390662 | We investigated the influence of contradictory connotations of words or phrases occurring in sarcastic statements, causing those statements to convey the opposite of their literal meaning. Our approach was to perform a sentiment analysis in order to capture potential opposite sentiments within one sentence and use its results as additional information for a further classifier extracting general text features, testing this for a Convolutional Neural Network as well as for a Support Vector Machine classifier, respectively. We found that a more complex and sophisticated implementation of the sentiment analysis than just classifying the sentences as positive or negative is necessary, since our implementation showed worse performance in both approaches than the respective classifier without any sentiment analysis. | connotation_clashers at SemEval-2022 Task 6: The effect of sentiment analysis on sarcasm detection |
d6853677 | We propose a method for labelling prepositional phrases according to two different semantic role classifications, as contained in the Penn treebank and the CoNLL 2004 Semantic Role Labelling data set. Our results illustrate the difficulties in determining preposition semantics, but also demonstrate the potential for PP semantic role labelling to improve the performance of a holistic semantic role labelling system. | Semantic Role Labelling of Prepositional Phrases |
d895713 | Humor generation is a very hard problem. It is difficult to say exactly what makes a joke funny, and solving this problem algorithmically is assumed to require deep semantic understanding, as well as cultural and other contextual cues. We depart from previous work that tries to model this knowledge using ad-hoc manually created databases and labeled training examples. Instead we present a model that uses large amounts of unannotated data to generate I like my X like I like my Y, Z jokes, where X, Y, and Z are variables to be filled in. This is, to the best of our knowledge, the first fully unsupervised humor generation system. Our model significantly outperforms a competitive baseline and generates funny jokes 16% of the time, compared to 33% for human-generated jokes. | Unsupervised joke generation from big data |
d225063036 | Repetition of characters and words is a rich phenomenon in Chinese, and its usage and its interactions with other parts of the text deserve close attention in text-based sentiment analysis. This work discusses and analyzes the appearance, characteristics, and sentiment marking of repetition in text, with a focus on the repetition of words, the repetition of sentence structure, and the sentiment presentation given by these repetitions. Based on this analysis, we elaborate on the practical application of identifying and using repetitions in text to assist sentiment analysis. | A Study on Repetition in Text-based Sentiment Analysis |
d18952804 | The goal of this study is to evaluate an 'off-the-shelf' POS-tagger for modern German on historical data from the Early Modern period (1650-1800). With no specialised tagger available for this particular stage of the language, our findings will be of particular interest to smaller, humanities-based projects wishing to add POS annotations to their historical data but which lack the means or resources to train a POS tagger themselves. Our study assesses the effects of spelling variation on the performance of the tagger, and investigates to what extent tagger performance can be improved by using 'normalised' input, where spelling variants in the corpus are standardised to a modern form. Our findings show that adding such a normalisation layer improves tagger performance considerably. | Evaluating an 'off-the-shelf' POS-tagger on Early Modern German text |
d21919924 | The human genome knowledge base (GENOMA-KB) platform integrates a variety of linguistic resources in a unified environment, independent of the origin and nature of the data it retrieves, thereby offering the visitor a single access point to different heterogeneous sources of information. The GENOMA-KB platform has been developed by Serveis i Plataformes Orientades al Coneixement (SPOC) in order to carry out an initiative presented by the Institute for Applied Linguistics (IULA). One of the requirements of the solution was to minimize its impact on the normal working procedures of the linguist team; for this reason the data input processes have been left intact. GENOMA-KB's user interface, developed under the premise of ease of navigation and aesthetic quality, offers the visitor a unified search interface and transparent navigation between different information sources, achieved in an agile and simple fashion. | The GENOMA-KB platform: Queries over integrated linguistic resources |
d252186423 | Senior Technical Localization Program Manager. VMware is the leading provider of multi-cloud services for all apps, enabling digital innovation with enterprise control. 37,500+ employees. | Data Analytics Meet Machine Translation |
d15403887 | The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks. | OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing |
d48356558 | Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy. | Extracting Commonsense Properties from Embeddings with Limited Human Guidance |
d259376491 | Recently, the identification of free connective phrases as signals for discourse relations has received new attention with the introduction of statistical models for their automatic extraction. The limited amount of annotations makes it still challenging to develop well-performing models. In our work, we want to overcome this limitation with semi-supervised learning from unlabeled news texts. We implement a self-supervised sequence labeling approach and filter its predictions by a second model trained to disambiguate signal candidates. With our novel model design, we report state-of-the-art results and, in addition, achieve an average improvement of about 5% for both exactly and partially matched alternatively-lexicalized discourse signals due to weak supervision. | A Weakly-Supervised Learning Approach to the Identification of "Alternative Lexicalizations" in Shallow Discourse Parsing |
d12541191 | This paper describes the USAAR-CHRONOS participation in the Diachronic Text Evaluation task of SemEval-2015 to identify the time period of historical text snippets. We adapt a web crawler to retrieve the original source of the text snippets and determine the publication year of the retrieved texts from their URLs. We report a precision score of >90% in identifying the text epoch. Additionally, by crawling and cleaning the website that hosts the source of the text snippets, we present Daikon, a corpus that can be used for future work on epoch identification from a diachronic perspective. | USAAR-CHRONOS: Crawling the Web for Temporal Annotations |
d53081052 | Neural network models are oftentimes restricted by limited labeled instances and resort to advanced architectures and features for cutting-edge performance. We propose to build a recurrent neural network with multiple semantically heterogeneous embeddings within a self-training framework. Our framework makes use of labeled, unlabeled, and social media data, operates on basic features, and is scalable and generalizable. With this method, we establish the state-of-the-art result for both in- and cross-domain settings for a clinical temporal relation extraction task. | Self-training improves Recurrent Neural Networks performance for Temporal Relation Extraction |
d13070546 | The Verb Argument Browser is a linguistically relevant corpus query tool which can be used for investigating the argument structure of verbs. The original tool was developed for Hungarian corpora, but the methodology is claimed to be language-independent because of its dependency-grammar-based representation. This paper examines this language independence by applying the methodology to a language with a different structure, namely Danish. We will see that the methodology can be applied straightforwardly, and the resulting tool shows the same properties as the original version. The Verb Argument Browser for Danish is available at http://corpus.nytud.hu/vabd (username: nodalida, password: vabd). | Verb Argument Browser for Danish |
d13867055 | NeuralMonkey is an open-source toolkit for sequence-to-sequence learning. The focus of this paper is to present the current state of the toolkit to the intended audience, which includes students and researchers, both active in the deep learning community and newcomers. For each of these target groups, we describe the most relevant features of the toolkit, including the simple configuration scheme, methods of model inspection that promote useful intuitions, or a modular design for easy prototyping. We summarize relevant contributions to the research community which were made using this toolkit and discuss the characteristics of our toolkit with respect to other existing systems. We conclude with a set of proposals for future development. | Neural Monkey: The Current State and Beyond |
d17423946 | Different summarization requirements could make the writing of a good summary more difficult, or easier. Summary length and the characteristics of the input are such constraints influencing the quality of a potential summary. In this paper we report the results of a quantitative analysis on data from large-scale evaluations of multi-document summarization, empirically confirming this hypothesis. We further show that features measuring the cohesiveness of the input are highly correlated with eventual summary quality and that it is possible to use these as features to predict the difficulty of new, unseen, summarization inputs. | Can you summarize this? Identifying correlates of input difficulty for generic multi-document summarization |
d235258284 | ||
d259376920 | Nowadays, persuasive messages are increasingly frequent in social networks, which generates particular concern in several communities, given that persuasion seeks to guide others towards adopting ideas, attitudes, or actions that they consider to be beneficial to themselves. The efficient detection of news genre categories, detection of framing, and detection of persuasion techniques require several scientific disciplines, such as computational linguistics and sociology. Here we illustrate how we use lexical features given a news article to determine whether it is an opinion piece, aims to report factual news, or is satire. This paper presents a novel strategy for communication based on Lexical Weirdness. The results are part of our participation in Sub-Tasks 1 and 2 in SemEval 2023 Task 3. | UTB-NLP at SemEval-2023 Task 3: Weirdness, Lexical Features for Detecting Categorical Framings, and Persuasion in Online News |
d35514742 | A Radial Dictionary of Danish. A Radial Dictionary is a KWIC-concordance where, instead of key words, key letters are used. Any pair of letters can be used as a key to identify and find all words containing a specific substring of two or more letters. All words containing the substring in question will be found together in the dictionary, whether the substring is word-initial, medial, or word-final. Thus the radial dictionary can be used as a morpheme dictionary as well. For people interested in derivation and composition, which are very frequent phenomena in Danish, the dictionary gives easy access to examples and material difficult to find otherwise. I am glad for this opportunity to present the results of my work on the Danish Radial Dictionary at the Nordic computational linguistics days in Reykjavik. In 1986, at Säby Säteri, I was able to present plans for the compilation of the radial dictionary to a smaller Nordic forum of colleagues. The circle is thus closed with this report. |
d8476273 | We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1's strong assumptions and Model 2's overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast_align. | A Simple, Fast, and Effective Reparameterization of IBM Model 2
d14046508 | Many reordering approaches have been proposed for statistical machine translation (SMT) systems. However, information about the type of the source sentence was ignored in previous work. In this paper, we propose a group of novel reordering models based on the source sentence type for Chinese-to-English translation. In our approach, an SVM-based classifier is employed to classify the given Chinese sentences into three types: special interrogative sentences, other interrogative sentences, and non-question sentences. Different reordering models are developed for the different sentence types. Our experiments show that the novel reordering models obtain an improvement of more than 2.65% in BLEU for a phrase-based spoken language translation system. | Sentence Type Based Reordering Model for Statistical Machine Translation
d227231867 | Common sense for natural language processing methods has been attracting wide research interest recently. Automatically estimating whether a sentence makes sense or not is considered an essential question. Task 4 in the International Workshop SemEval 2020 provided three subtasks (A, B, and C) that challenge the participants to build systems for distinguishing common sense statements from those that do not make sense. This paper describes TeamJUST's approach for participating in subtask A to differentiate between two sentences in English and classify them into two classes: common sense and uncommon sense statements. Our approach depends on ensembling four different state-of-the-art pre-trained models (BERT, ALBERT, RoBERTa, and XLNet). Our baseline model, which used only the pre-trained BERT model, scored 89.1, while the TeamJUST model outperformed the baseline with an accuracy score of 96.2. We improved the results in the post-evaluation period to achieve our best result, which would have ranked 4th in the competition if we had had the chance to use our latest experiment. | TeamJUST at SemEval-2020 Task 4: Commonsense Validation and Explanation Using Ensembling Techniques
d252624590 | The OntoLex-Lemon model provides a vocabulary to enrich ontologies with linguistic information that can be exploited by Natural Language Processing applications. The increasing uptake of Lemon illustrates the growing interest in combining linguistic information and Semantic Web technologies. In this paper, we present Fuzzy Lemon, an extension of Lemon that allows assigning an uncertainty degree to lexical semantic relations. Our approach is based on an OWL ontology that defines a hierarchy of data properties encoding different types of uncertainty. We also illustrate the usefulness of Fuzzy Lemon by showing that it can be used to represent the confidence degrees of automatically discovered translations between pairs of bilingual dictionaries from the Apertium family. | Fuzzy Lemon: Making Lexical Semantic Relations More Juicy
d14038100 | Transition-based dependency parsers are often forced to make attachment decisions at a point when only partial information about the relevant graph configuration is available. In this paper, we describe a model that takes into account complete structures as they become available to rescore the elements of a beam, combining the advantages of transition-based and graph-based approaches. We also propose an efficient implementation that allows for the use of sophisticated features and show that the completion model leads to a substantial increase in accuracy. We apply the new transition-based parser on typologically different languages such as English, Chinese, Czech, and German and report competitive labeled and unlabeled attachment scores. | The Best of Both Worlds -A Graph-based Completion Model for Transition-based Parsers |
d38019405 | This article presents a resource-based method for automatically recognizing fixed verbal sequences in French (e.g., casser sa pipe, briser la glace, prendre en compte) in a text. The resource describes each sequence in terms of its transformational possibilities and restrictions. Fixed sequences are not totally fixed, and an exhaustive description is necessary so as not to extract only canonical forms. We first describe some traditional approaches for extracting phraseological sequences. We then explain how the resource is built and how it is used to automatically recognize fixed sequences in corpora. | Pour un étiquetage automatique des séquences verbales figées : état de l'art et approche transformationnelle
d18754073 | For data-to-text tasks in Natural Language Generation (NLG), researchers are often faced with choices about the right words to express phenomena seen in the data. One common phenomenon centers around the description of trends between two data points and selecting the appropriate verb to express both the direction and intensity of movement. Our research shows that rather than simply selecting the same verbs again and again, variation and naturalness can be achieved by quantifying writers' patterns of usage around verbs. | When to Plummet and When to Soar: Corpus Based Verb Selection for Natural Language Generation |
d1275545 | PREVIOUS DISAMBIGUATION ALGORITHMS. The problem of lexical category ambiguity has been little examined in the literature of computational linguistics and artificial intelligence, though it pervades English to an astonishing degree. About 11.5% of types (vocabulary), and over 40% of tokens (running words) in English prose are categorically ambiguous (as measured via the Brown Corpus). | GRAMMATICAL CATEGORY DISAMBIGUATION BY STATISTICAL OPTIMIZATION
d21706033 | In this paper, we present a new large manually-annotated multi-dialect dataset of Arabic tweets that is publicly available. The Dialectal ARabic Tweets (DART) dataset has about 25K tweets that are annotated via crowdsourcing, and it is well-balanced over five main groups of Arabic dialects: Egyptian, Maghrebi, Levantine, Gulf, and Iraqi. The paper outlines the pipeline of constructing the dataset, from crawling tweets that match a list of dialect phrases to annotating the tweets by the crowd. We also touch on some challenges that we faced during the process. We evaluate the quality of the dataset from two perspectives: the inter-annotator agreement and the accuracy of the final labels. Results show that both measures were substantially high for the Egyptian, Gulf, and Levantine dialect groups, but lower for the Iraqi and Maghrebi dialects, which indicates the difficulty of identifying those two dialects manually and hence automatically. | DART: A Large Dataset of Dialectal Arabic Tweets
d15568760 | In this paper, we present how the principles of universal dependencies and morphology have been adapted to Hungarian. We report the most challenging grammatical phenomena and our solutions to those. On the basis of the adapted guidelines, we have converted and manually corrected 1,800 sentences from the Szeged Treebank to universal dependency format. We also introduce experiments on this manually annotated corpus for evaluating automatic conversion and the added value of language-specific, i.e. non-universal, annotations. Our results reveal that converting to universal dependencies is not necessarily trivial; moreover, using language-specific morphological features may have an impact on overall performance. | Universal Dependencies and Morphology for Hungarian -and on the Price of Universality
d10134254 | In this paper we address the issue of the encoding of information on metaphors in a WordNet-like database, i.e. the Italian wordnet in EuroWordNet (ItalWordNet). When analysing corpus data we find a huge number of metaphoric expressions which can hardly be dealt with using ItalWordNet as the reference database. In particular, we have compared information contained both in dictionaries of Italian and in ItalWordNet with actual uses of words found in a corpus. We thus put forward proposals to enrich a resource like ItalWordNet with relevant information. | Encoding information on metaphoric expressions in WordNet-like resources
d130713 | Probabilistic knowledge bases are commonly used in areas such as large-scale information extraction, data integration, and knowledge capture, to name but a few. Inference in probabilistic knowledge bases is a computationally challenging problem. With this contribution, we present our vision of a distributed inference algorithm based on conflict graph construction and hypergraph sampling. Early empirical results show that the approach efficiently and accurately computes a-posteriori probabilities of a knowledge base derived from a well-known information extraction system. | Towards Distributed MCMC Inference in Probabilistic Knowledge Bases |
d17326709 | We present a novel approach for (written) dialect identification based on the discriminative potential of entire words. We generate Swiss German dialect words from a Standard German lexicon with the help of hand-crafted phonetic/graphemic rules that are associated with occurrence maps extracted from a linguistic atlas created through extensive empirical fieldwork. In comparison with a character n-gram approach to dialect identification, our model is more robust to individual spelling differences, which are frequently encountered in non-standardized dialect writing. Moreover, it covers the whole Swiss German dialect continuum, which trained models struggle to achieve due to sparsity of training data. | Word-based dialect identification with georeferenced rules
d3673957 | We introduce SmartReader, an English reading tool for non-native English readers to overcome language-related hindrances while reading a text. It makes extensive use of widely-available NLP tools and resources. SmartReader is a web-based application that can be accessed from standard browsers running on PCs or tablets. A user can choose a text document they want to read from the system's library or can upload a new document of their own, and the system will display an interactive version of that text, which provides the reader with intelligent e-book functionality. | An English Reading Tool as a NLP Showcase
d2486284 | This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines Semantic Relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged on-line in the course of search sessions, and accessed through natural hand-gesture, in a simple and intuitive way. | DISCO: A System Leveraging Semantic Search in Document Review |
d218627041 | The spread of biased news and its consumption by readers has become a considerable issue. Researchers from multiple domains, including social science and media studies, have made efforts to mitigate this media bias issue. Specifically, various techniques ranging from natural language processing to machine learning have been used to help determine news bias automatically. However, due to the lack of publicly available datasets in this field, especially ones containing labels concerning bias on a fine-grained level (e.g., on sentence level), it is still challenging to develop methods for effectively identifying bias embedded in news articles. In this paper, we propose a novel news bias dataset which facilitates the development and evaluation of approaches for detecting subtle bias in news articles and for understanding the characteristics of biased sentences. Our dataset consists of 966 sentences from 46 English-language news articles covering 4 different events and contains labels concerning bias on the sentence level. For scalability reasons, the labels were obtained via crowdsourcing. Our dataset can be used for analyzing news bias, as well as for developing and evaluating methods for news bias detection. It can also serve as a resource for related research, including work focusing on fake news detection. | Annotating and Analyzing Biased Sentences in News Articles using Crowdsourcing
d5428818 | This article describes the principles and mechanism of an integrative effort in machine translation (MT) evaluation. Building upon previous standardization initiatives, above all ISO/IEC 9126, 14598 and EAGLES, we attempt to classify into a coherent taxonomy most of the characteristics, attributes and metrics that have been proposed for MT evaluation. The main articulation of this flexible framework is the link between a taxonomy that helps evaluators define a context of use for the evaluated software, and a taxonomy of the quality characteristics and associated metrics. The article explains the theoretical grounds of this articulation, along with an overview of the taxonomies in their present state, and a perspective on ongoing work in MT evaluation standardization. | Computer-Aided Specification of Quality Models for Machine Translation Evaluation |
d9531504 | This paper is concerned with building CCG-grounded, semantics-oriented deep dependency structures with a data-driven factorization model. Three types of factorization together with different higher-order features are designed to capture different syntacto-semantic properties of functor-argument dependencies. Integrating heterogeneous factorizations results in intractability in decoding. We propose a principled method to obtain optimal graphs based on dual decomposition. Our parser obtains an unlabeled f-score of 93.23 on the CCGBank data, resulting in an error reduction of 6.5% over the best published result. Our implementation is available at http://www.icst.pku.edu.cn/lcwm/grass. | A Data-Driven, Factorization Parser for CCG Dependency Structures
d771053 | Previous comparisons of document and query translation suffered difficulty due to differing quality of machine translation in these two opposite directions. We avoid this difficulty by training identical statistical translation models for both translation directions using the same training data. We investigate information retrieval between English and French, incorporating both translation directions into both document translation and query translation-based information retrieval, as well as into hybrid systems. We find that hybrids of document and query translation-based systems outperform query translation systems, even human-quality query translation systems. | Should we Translate the Documents or the Queries in Cross-language Information Retrieval?
d9832697 | In this paper we introduce Translation Difficulty Index (TDI), a measure of difficulty in text translation. We first define and quantify translation difficulty in terms of TDI. We realize that any measure of TDI based on direct input by translators is fraught with subjectivity and adhocism. We, rather, rely on cognitive evidence from eye tracking. TDI is measured as the sum of fixation (gaze) and saccade (rapid eye movement) times of the eye. We then establish that TDI is correlated with three properties of the input sentence, viz. length (L), degree of polysemy (DP) and structural complexity (SC). We train a Support Vector Regression (SVR) system to predict TDIs for new sentences using these features as input. The prediction done by our framework is well correlated with the empirical gold standard data, which is a repository of <L, DP, SC> and TDI pairs for a set of sentences. The primary use of our work is a way of "binning" sentences (to be translated) into "easy", "medium" and "hard" categories as per their predicted TDI. This can decide pricing of any translation task, especially useful in a scenario where parallel corpora for Machine Translation are built through translation crowdsourcing/outsourcing. This can also provide a way of monitoring progress of second language learners. | Automatically Predicting Sentence Translation Difficulty
d16695859 | ||
d195064147 | Coreference resolution is the task of grouping together references to the same discourse entity. Resolving coreference in literary texts could benefit a number of Digital Humanities (DH) tasks, such as analyzing the depiction of characters and/or their relations. Domain-dependent training data has been shown to improve coreference resolution for many domains, e.g. the biomedical domain, as its properties differ significantly from news text or dialogue, on which automatic systems are typically trained. This also holds for literary texts. We therefore analyze the specific properties of coreference-related phenomena on a number of texts and give directions for the adaptation of annotation guidelines. As some of the adaptations have profound impact, we also present a new annotation tool for coreference, with a focus on enabling annotation of long texts with many discourse entities. | Towards Coreference for Literary Text: Analyzing Domain-Specific Phenomena
d18734417 | This paper presents preliminary work on identification of argumentation schemes, i.e., identifying premises, conclusion and name of argumentation scheme, in arguments for scientific claims in genetics research articles. The goal is to develop annotation guidelines for creating corpora for argumentation mining research. This paper gives the specification of ten semantically distinct argumentation schemes based on analysis of argumentation in several journal articles. In addition, it presents an empirical study on readers' ability to recognize some of the argumentation schemes. Premise: Certain properties P were observed in an individual. Premise: There is a potential chain of events linking a condition G to observation of P. Conclusion: G may be the cause of P in that individual. Example: See Figure 1. | Identifying Argumentation Schemes in Genetics Research Articles
d11577980 | The MultiLing 2013 Workshop of ACL 2013 posed a multi-lingual, multidocument summarization task to the summarization community, aiming to quantify and measure the performance of multi-lingual, multi-document summarization systems across languages. The task was to create a 240-250 word summary from 10 news articles, describing a given topic. The texts of each topic were provided in 10 languages (Arabic, Chinese, Czech, English, French, Greek, Hebrew, Hindi, Romanian, Spanish) and each participant generated summaries for at least 2 languages. The evaluation of the summaries was performed using automatic and manual processes. The participating systems submitted over 15 runs, some providing summaries across all languages. An automatic evaluation task was also added to this year's set of tasks. The evaluation task meant to determine whether automatic measures of evaluation can function well in the multi-lingual domain. This paper provides a brief description related to the data of both tasks, the evaluation methodology, as well as an overview of participation and corresponding results. | Multi-document multilingual summarization and evaluation tracks in ACL 2013 MultiLing Workshop |
d237366184 | Metaphor is a special phenomenon in human languages. Detecting metaphors is fundamental and crucial for many NLP tasks. For metaphor detection in Chinese, we propose SaGE (Syntax-aware GCN with ELECTRA), which is inspired by linguistics. SaGE utilizes ELECTRA and a Transformer encoder to extract the semantic features of a sentence, and extracts syntactic features through a GCN that operates on a graph constructed from the dependency parsing result. The model concatenates the two features to detect metaphors. SaGE obtains a substantial improvement over the best reported score on the CCL 2018 Chinese Metaphor Detection Task dataset, with an 85.22% macro-F1 score. This demonstrates the importance of incorporating both semantics and syntax in metaphor detection. | SaGE: Syntax-aware GCN with ELECTRA for Chinese Metaphor Detection
d232021915 | In this paper we discuss how Walenty is using PLWORDNET to represent semantic information. We decided to use PLWORDNET lexical units and synsets to describe both the predicate meaning and the semantic fields of its arguments. The original design decision required some further refinement caused by the structure of PLWORDNET and complex relations between arguments. | Connections between the semantic layer of Walenty valency dictionary and PLWORDNET
d245855950 | This paper describes the SEBAMAT contribution to the 2021 WMT Similar Language Translation shared task. Using the Marian neural machine translation toolkit, translation systems based on Google's transformer architecture were built in both directions of Catalan-Spanish and Portuguese-Spanish. The systems were trained in two contrastive parameter settings (different vocabulary sizes for byte pair encoding) using only the parallel but not the comparable corpora provided by the shared task organizers. According to their official evaluation results, the SEBAMAT system turned out to be competitive with rankings among the top teams and BLEU scores between 38 and 47 for the language pairs involving Portuguese and between 76 and 80 for the language pairs involving Catalan. | |
d236477882 |