Columns: _id (string, length 4-10) · text (string, length 0-18.4k) · title (string, length 0-8.56k)
d247939641
Prompting methods have recently achieved impressive success in few-shot learning. These methods modify input samples with prompt sentence pieces and decode label tokens to map samples to their corresponding labels. However, this paradigm is very inefficient for the task of slot tagging. Since slot tagging samples are multiple c...
Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging
d16515522
Dialogue topic tracking is a sequential labelling problem of recognizing the topic state at each time step in given dialogue sequences. This paper presents various artificial neural network models for dialogue topic tracking, including convolutional neural networks to account for semantics at each individual utterance,...
Exploring Convolutional and Recurrent Neural Networks in Sequential Labelling for Dialogue Topic Tracking
d805379
Computing semantic relatedness of natural language texts is a key component of tasks such as information retrieval and summarization, and often depends on knowledge of a broad range of real-world concepts and relationships. We address this knowledge integration issue by computing semantic relatedness using personalized...
WikiWalk: Random walks on Wikipedia for Semantic Relatedness
d17387902
When speech understanding systems are used in real applications, they encounter incidental noise generated by the speaker and the environment. Such noises can cause serious problems for speech recognizers not designed to cope with them. We attempt to model these noises by training HMM "noise words" to match classes of ...
Modelling Non-verbal Sounds for Speech Recognition
d53079566
Deep neural networks reach state-of-the-art performance for a wide range of natural language processing, computer vision and speech applications. Yet, one of the biggest challenges is running these complex networks on devices such as mobile phones or smart watches with a tiny memory footprint and low computational capacity...
Self-Governing Neural Networks for On-Device Short Text Classification
d216079471
Topical blog post retrieval is the task of ranking blog posts with respect to their relevance for a given topic. To improve topical blog post retrieval we incorporate textual credibility indicators in the retrieval process. We consider two groups of indicators: post level (determined using information about individual ...
Credibility Improves Topical Blog Post Retrieval
d236428800
Generative neural conversational systems are generally trained with the objective of minimizing the entropy loss between the training "hard" targets and the predicted logits. Often, performance gains and improved generalization can be achieved by using regularization techniques like label smoothing, which converts the ...
Similarity Based Label Smoothing For Dialogue Generation
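The abstract above mentions standard label smoothing, which converts one-hot "hard" targets into soft distributions. A minimal sketch of that standard technique (not the paper's similarity-based variant; vocabulary size and epsilon are illustrative assumptions):

```python
# Standard label smoothing: mix a one-hot target over the vocabulary
# with a uniform distribution, controlled by epsilon.

def smooth_labels(target_index, vocab_size, epsilon=0.1):
    """Return a smoothed target distribution as a list of floats."""
    uniform = epsilon / vocab_size          # mass spread over all classes
    dist = [uniform] * vocab_size
    dist[target_index] += 1.0 - epsilon     # remaining mass on the true class
    return dist

dist = smooth_labels(target_index=2, vocab_size=4, epsilon=0.1)
print([round(p, 4) for p in dist])  # → [0.025, 0.025, 0.925, 0.025]
```

The true class keeps most of the probability mass while every other class receives a small uniform share, which is what regularizes the entropy loss described above.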
d256461325
We report the results of the WMT 2022 shared task on Quality Estimation, in which the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels, without access to reference translations. This edition introduces a few novel aspects and extensions that aim to ...
Findings of the WMT 2022 Shared Task on Quality Estimation
d202749964
In a pipeline speech translation system, the automatic speech recognition (ASR) system transmits its recognition errors to the downstream machine translation (MT) system. A standard machine translation system is usually trained on a parallel corpus composed of clean text and will perform poorly on text with recognition no...
Breaking the Data Barrier: Towards Robust Speech Translation via Adversarial Stability Training
d9423000
Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words a...
Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis
d252846223
Closed-book question answering (QA) requires a model to directly answer an open-domain question without access to any external knowledge. Prior work on closed-book QA either directly finetunes or prompts a pretrained language model (LM) to leverage the stored knowledge. However, they do not fully exploit the parameteri...
Context Generation Improves Open Domain Question Answering
d926075
Entity linking refers to the task of assigning mentions in documents to their corresponding knowledge base entities. Entity linking is a central step in knowledge base population. Current entity linking systems do not explicitly model the discourse context in which the communication occurs. Nevertheless, the notion of ...
A Context-Aware Approach to Entity Linking
d237503423
We address the problem of temporal sentence localization in videos (TSLV). Traditional methods follow a top-down framework which localizes the target segment with predefined segment proposals. Although they have achieved decent performance, the proposals are handcrafted and redundant. Recently, bottom-up framework attr...
Adaptive Proposal Generation Network for Temporal Sentence Localization in Videos
d252439131
The zero-shot cross-lingual ability of models pretrained on multilingual and even monolingual corpora has spurred many hypotheses to explain this intriguing empirical result. However, due to the costs of pretraining, most research uses public models whose pretraining methodology, such as the choice of tokenization, cor...
MonoByte: A Pool of Monolingual Byte-level Language Models
d245218561
In-context learning is a recent paradigm in natural language understanding, where a large pretrained language model (LM) observes a test instance and a few training examples as its input, and directly decodes the output without any update to its parameters. However, performance has been shown to strongly depend on the ...
Learning To Retrieve Prompts for In-Context Learning
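The abstract above notes that in-context learning performance depends heavily on which training examples are supplied as the prompt. The paper learns a retriever; as a hedged baseline sketch, example selection is often done by nearest-neighbour similarity over vectors (the bag-of-words vectors and pool entries below are made-up illustrations, not the paper's method):

```python
# Similarity-based example retrieval for in-context learning:
# rank a pool of training examples by cosine similarity to the
# test instance and keep the top k.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_examples(test_vec, pool, k=2):
    """Return the texts of the k pool examples most similar to the test instance."""
    ranked = sorted(pool, key=lambda ex: cosine(test_vec, ex["vec"]), reverse=True)
    return [ex["text"] for ex in ranked[:k]]

pool = [
    {"text": "book a flight", "vec": [1, 1, 0, 0]},
    {"text": "play some jazz", "vec": [0, 0, 1, 1]},
    {"text": "book a hotel",  "vec": [1, 0, 0, 1]},
]
print(retrieve_examples([1, 1, 0, 0], pool, k=2))  # → ['book a flight', 'book a hotel']
```

In practice the vectors would come from a sentence encoder; the point is only that retrieved neighbours, not random examples, form the prompt.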
d211678011
The Arabic language is a morphologically rich language with relatively few resources and a less explored syntax compared to English. Given these limitations, Arabic Natural Language Processing (NLP) tasks like Sentiment Analysis (SA), Named Entity Recognition (NER), and Question Answering (QA), have proven to be very c...
AraBERT: Transformer-based Model for Arabic Language Understanding
d7639057
We propose a method for extracting semantic orientations of phrases (pairs of an adjective and a noun): positive, negative, or neutral. Given an adjective, the semantic orientation classification of phrases can be reduced to the classification of words. We construct a lexical network by connecting similar/related words...
Extracting Semantic Orientations of Phrases from Dictionary
d254017982
Information Extraction from scientific literature can be challenging due to the highly specialised nature of such text. We describe our entity recognition methods developed as part of the DEAL (Detecting Entities in the Astrophysics Literature) shared task. The aim of the task is to build a system that can identify Nam...
Detecting Entities in the Astrophysics Literature: A Comparison of Word-based and Span-based Entity Recognition Methods
d68144498
Previous work in Word Sense Disambiguation (WSD), like many tasks in natural language processing, has been predominantly focused on English. While there has been some work on other languages, including Uralic languages, up until this point no work has been published providing a contrastive evaluation of WSD for Finnish...
A Contrastive Evaluation of Word Sense Disambiguation Systems for Finnish
d202539551
Recent progress in pretraining language models on large textual corpora led to a surge of improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cl...
Language Models as Knowledge Bases?
d253098606
Given a long untrimmed video and natural language queries, video grounding (VG) aims to temporally localize the semantically-aligned video segments. Almost all existing VG work holds two simple but unrealistic assumptions: 1) All query sentences can be grounded in the corresponding video. 2) All query sentences for the...
Weakly-Supervised Temporal Article Grounding
d235254260
Differentiable architecture search (DARTS) is successfully applied in many vision tasks. However, directly using DARTS for Transformers is memory-intensive, which renders the search process infeasible. To this end, we propose a multi-split reversible network and combine it with DARTS. Specifically, we devise a backprop...
Memory-Efficient Differentiable Transformer Architecture Search
d237440234
Using prompts to utilize language models to perform various downstream tasks, also known as prompt-based learning or prompt-learning, has lately gained significant success in comparison to the pre-train and fine-tune paradigm. Nonetheless, virtually all prompt-based methods are token-level, such as PET based on mask lang...
NSP-BERT: A Prompt-based Few-Shot Learner Through an Original Pre-training Task -- Next Sentence Prediction
d195348807
This paper addresses the problem of compound word translation and proposes approaches to acquiring translations. The proposed approaches focus on exploring web data and utilizing English translations to link words of the source language with their correspondences in the target language. The paper uses Japanese-Chinese
Acquiring Compound Word Translations Both Automatically and Dynamically
d139100939
Recent years have seen increasingly complex question-answering on knowledge bases (KBQA) involving logical, quantitative, and comparative reasoning over KB subgraphs. Neural Program Induction (NPI) is a pragmatic approach toward modularizing the reasoning process by translating a complex natural language query into a m...
Complex Program Induction for Querying Knowledge Bases in the Absence of Gold Programs
d8775253
This paper reports our work on annotating Chinese texts with information structures derived from HowNet. An information structure consists of two components: HowNet definitions and dependency relations. It is the unit of representation of the meaning of texts. This work is part of a multi-sentential approach to Chine...
Annotating information structures in Chinese texts using HowNet
d232168881
Biomedical entity linking is the task of identifying mentions of biomedical concepts in text documents and mapping them to canonical entities in a target thesaurus. Recent advancements in entity linking using BERT-based models follow a retrieve and rerank paradigm, where the candidate entities are first selected using a...
Fast and Effective Biomedical Entity Linking Using a Dual Encoder
d237490315
State-of-the-art Named Entity Recognition (NER) models rely heavily on large amounts of fully annotated training data. However, accessible data are often incompletely annotated since the annotators usually lack comprehensive knowledge in the target domain. Normally the unannotated tokens are regarded as non-entities by...
AdaK-NER: An Adaptive Top-K Approach for Named Entity Recognition with Incomplete Annotations
d53079791
Part-of-Speech (POS) tagging for Twitter has received considerable attention in recent years. Because most POS tagging methods are based on supervised models, they usually require a large amount of labeled data for training. However, the existing labeled datasets for Twitter are much smaller than those for newswire tex...
Transferring from Formal Newswire Domain with Hypernet for Twitter POS Tagging
d249204465
This paper presents the ongoing European Language Equality (ELE) project, an 18-month action funded by the European Commission. The primary goal of the ELE project is to prepare the ELE programme, in the form of a strategic research, innovation and implementation agenda and roadmap for achieving full digital language eq...
Overview of the ELE Project
d250390460
Multi-triple extraction is a challenging task due to the existence of informative inter-triple correlations, and consequently rich interactions across the constituent entities and relations. While existing works only explore entity representations, we propose to explicitly introduce relation representation, jointly rep...
EmRel: Joint Representation of Entities and Embedded Relations for Multi-triple Extraction
d258947531
Attribute-controlled translation (ACT) is a subtask of machine translation that involves controlling stylistic or linguistic attributes (like formality and gender) of translation outputs. While ACT has garnered attention in recent years due to its usefulness in real-world applications, progress in the task is currently...
RAMP: Retrieval and Attribute-Marking Enhanced Prompting for Attribute-Controlled Translation
d261342127
Ellipsis is a linguistic phenomenon that occurs in various languages, including Chinese. Although humans can generally understand text with omissions correctly, it can have an impact on machine understanding in terms of syntax and semantics. Therefore, the automatic recovery of omitted elements is of significant import...
Overview of CCL23-Eval Task 5: Sentence Level Multi-domain Chinese Ellipsis Resolution
d49559981
Last Words Festina Lente: A Farewell from the Editor
d258378248
Financial information is generated and distributed across the world, resulting in a vast amount of domain-specific multilingual data. Multilingual models adapted to the financial domain would ease deployment when an organization needs to work with multiple languages on a regular basis. For the development and evaluatio...
MULTIFIN: A Dataset for Multilingual Financial NLP
d248780559
Character-based neural machine translation models have become the reference models for cognate prediction, a historical linguistics task. So far, all linguistic interpretations about latent information captured by such models have been based on external analysis (accuracy, raw results, errors). In this paper, we invest...
Probing Multilingual Cognate Prediction Models
d102352781
Recurrent Variational Autoencoders have been widely used for language modeling and text generation tasks. These models often face a difficult optimization problem, also known as the Kullback-Leibler (KL) term vanishing issue, where the posterior easily collapses to the prior and the model ignores latent codes in gen...
Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling
d313753
Automatic Machine Translation metrics, such as BLEU, are widely used in empirical evaluation as a substitute for human assessment. Subsequently, the performance of a given metric is measured by its strength of correlation with human judgment. When a newly proposed metric achieves a stronger correlation over that of a b...
Achieving Accurate Conclusions in Evaluation of Automatic Machine Translation Metrics
d233364938
Neural encoders of biomedical names are typically considered robust if representations can be effectively exploited for various downstream NLP tasks. To achieve this, encoders need to model domain-specific biomedical semantics while rivaling the universal applicability of pretrained self-supervised representations. Pre...
Integrating Higher-Level Semantics into Robust Biomedical Name Representations
d232185260
Code summarization and generation empower conversion between programming language (PL) and natural language (NL), while code translation avails the migration of legacy code from one PL to another. This paper introduces PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language u...
Unified Pre-training for Program Understanding and Generation
d256461171
In this study, we evaluated a series of code generation models based on CodeGen and GPTNeo to compare metric-based performance with human evaluation. For a deeper analysis of human perception within the evaluation procedure, we implemented a 5-level Likert scale assessment of the model output using a ...
d52126041
Vlogs provide a rich public source of data in a novel setting. This paper examined the continuous sentiment styles employed in 27,333 vlogs using a dynamic intra-textual approach to sentiment analysis. Using unsupervised clustering, we identified seven distinct continuous sentiment trajectories characterized by fluctua...
Identifying the sentiment styles of YouTube's vloggers
d258463979
The study of corrective feedback (CF) has been gaining increasing attention in the field of Second Language Acquisition (SLA). Theorists, researchers, and educators have been investigating which forms of CF are effective. The study determined the use of metalinguistic corrective feedback and students' re...
Metalinguistic Corrective Feedback and Students' Response to Feedback in L2 Writing
d235421784
Classifiers commonly make use of preannotated datasets, wherein a model is evaluated by pre-defined metrics on a held-out test set typically made of human-annotated labels. Metrics used in these evaluations are tied to the availability of well-defined ground truth labels, and these metrics typically do not allow for in...
Posthoc Verification and the Fallibility of the Ground Truth
d224814113
In language generation models conditioned on structured data, classical training via maximum likelihood almost always leads models to pick up on dataset divergences (i.e., hallucinations or omissions) and to incorporate them erroneously in their own generations at inference. In this work, we build on top of previous...
PARENTing via Model-Agnostic Reinforcement Learning to Correct Pathological Behaviors in Data-to-Text Generation
d184483192
In this paper, we present the model submitted to the SemEval-2019 Task 3 competition: contextual emotion detection in text "EmoContext". We propose a model that hybridizes automatically extracted features and human engineered features to capture the representation of a textual conversation from different perspectives. ...
KSU at SemEval-2019 Task 3: Hybrid Features for Emotion Recognition in Textual Conversation
d235421642
Previous work on review summarization focused on measuring the sentiment toward the main aspects of the reviewed product or business, or on creating a textual summary. These approaches provide only a partial view of the data: aspect-based sentiment summaries lack sufficient explanation or justification for the aspect r...
Every Bite Is an Experience: Key Point Analysis of Business Reviews
d235422049
In this paper, we investigate few-shot joint learning for dialogue language understanding. Most existing few-shot models learn a single task each time with only a few examples. However, dialogue language understanding contains two closely related tasks, i.e., intent detection and slot filling, and often benefits from j...
Learning to Bridge Metric Spaces: Few-shot Joint Learning of Intent Detection and Slot Filling
d12262608
We propose the first code-switch language model for mixed-language speech recognition that incorporates syntactic constraints via a code-switch boundary prediction model, a code-switch translation model, and a reconstruction model. A WFST-based decoder then recognizes speech by combining an acoustic model, a pronunci...
Code-switch Language Model with Inversion Constraints for Mixed Language Speech Recognition
d12462181
In machine translation, we must consider the difference in expression between languages. For example, the active/passive voice may change in Japanese-English translation. The same verb in Japanese may be translated into different voices at each translation because the voice of a generated sentence cannot be determined ...
Controlling the Voice of a Sentence in Japanese-to-English Neural Machine Translation
d196185038
We present a simple yet powerful data augmentation method for boosting Neural Machine Translation (NMT) performance by leveraging information retrieved from a Translation Memory (TM). We propose and test two methods for augmenting NMT training data with fuzzy TM matches. Tests on the DGT-TM data set for two language pa...
Neural Fuzzy Repair: Integrating Fuzzy Matches into Neural Machine Translation
d196196376
We present an unsupervised method to generate Word2Sense word embeddings that are interpretable: each dimension of the embedding space corresponds to a fine-grained sense, and the non-negative value of the embedding along the j-th dimension represents the relevance of the j-th sense to the word. The underlying LDA-base...
Word2Sense : Sparse Interpretable Word Embeddings
d196192573
In this paper, we focus on the task of fine-grained text sentiment transfer (FGST). This task aims to revise an input sequence to satisfy a given sentiment intensity while preserving the original semantic content. Different from the conventional sentiment transfer task that only reverses the sentiment polarity (positive/ne...
Towards Fine-grained Text Sentiment Transfer
d52012533
This study proposes a new neural machine translation (NMT) model based on the encoder-decoder model that incorporates named entity (NE) tags of source-language sentences. Conventional NMT models have two problems, enumerated as follows: (i) they tend to have difficulty in translating words with multiple meanings because ...
Neural Machine Translation Incorporating Named Entity
d201669103
Hashing is promising for large-scale information retrieval tasks thanks to the efficiency of distance evaluation between binary codes. Generative hashing is often used to generate hashing codes in an unsupervised way. However, existing generative hashing methods only considered the use of simple priors, like Gaussian a...
Document Hashing with Mixture-Prior Generative Models
d4859466
Virtual agents are becoming a prominent channel of interaction in customer service. Not all customer interactions are smooth, however, and some can become almost comically bad. In such instances, a human agent might need to step in and salvage the conversation. Detecting bad conversations is important since disappointi...
Detecting Egregious Conversations between Customers and Virtual Agents
d221865854
Pre-trained language models (PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated. In the pursuit of advancing fluid human-AI communication, we propose a n...
RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms
d236986938
Graph-based methods, which decompose the score of a dependency tree into scores of dependency arcs, have been popular in dependency parsing for decades. Recently, Yang and Tu (2022) proposed a headed-span-based method that decomposes the score of a dependency tree into scores of headed spans. They show improvement over firs...
Combining (Second-Order) Graph-Based and Headed-Span-Based Projective Dependency Parsing
d220831033
This paper describes our system for SemEval-2020 Task 4: Commonsense Validation and Explanation (Wang et al., 2020). We propose a novel Knowledge-enhanced Graph Attention Network (KEGAT) architecture for this task, leveraging heterogeneous knowledge from both the structured knowledge base (i.e. ConceptNet) and unstructu...
ECNU-SenseMaker at SemEval-2020 Task 4: Leveraging Heterogeneous Knowledge Resources for Commonsense Validation and Explanation
d235358808
Humor is an important social phenomenon, serving complex social and psychological functions. However, despite being studied for millennia, humor is computationally not well understood and is often considered an AI-complete problem. In this work, we introduce a novel setting in humor mining: automatically detecting funny and un...
How Did This Get Funded?! Automatically Identifying Quirky Scientific Achievements
d259370497
Information extraction systems often produce hundreds to thousands of strings on a specific topic. We present a method that facilitates better consumption of these strings, in an exploratory setting in which a user wants to both get a broad overview of what's available, and a chance to dive deeper on some aspects. The ...
Hierarchy Builder: Organizing Textual Spans into a Hierarchy to Facilitate Navigation
d259370557
Ontological knowledge, which comprises classes and properties and their relationships, is integral to world knowledge. It is significant to explore whether Pretrained Language Models (PLMs) know and understand such knowledge. However, existing PLM-probing studies focus mainly on factual knowledge, lacking a systematic ...
Do PLMs Know and Understand Ontological Knowledge?
d243831213
Building neural machine translation systems to perform well on a specific target domain is a well-studied problem. Optimizing system performance for multiple, diverse target domains however remains a challenge. We study this problem in an adaptation setting where the goal is to preserve the existing system quality whil...
Improving the Quality Trade-Off for Neural Machine Translation Multi-Domain Adaptation
d2177536
Interactive story systems often involve dialogue with virtual dramatic characters. However, to date most character dialogue is written by hand. One way to ease the authoring process is to (semi-)automatically generate dialogue based on film characters. We extract features from dialogue of film characters in leading rol...
An Annotated Corpus of Film Dialogue for Learning and Characterizing Character Style
d12569557
Amazon's Mechanical Turk service has been successfully applied to many natural language processing tasks. However, the task of named entity recognition presents unique challenges. In a large annotation task involving over 20,000 emails, we demonstrate that a competitive bonus system and interannotator agreement can b...
Annotating Large Email Datasets for Named Entity Recognition with Mechanical Turk
d43965072
Automatic speech recognition and spoken dialogue systems have made great advances through the use of deep machine learning methods. This is partly due to greater computing power but also through the large amount of data available in common languages, such as English. Conversely, research in minority languages, includin...
Transfer Learning for British Sign Language Modelling
d51867669
This paper describes the COSTA scheme for coding structures and actions in conversation. Informed by Conversation Analysis, the scheme introduces an innovative method for marking the multi-layer structural organization of conversation and a structure-informed taxonomy of actions. In addition, we create a corpus of naturally...
Coding Structures and Actions with the COSTA Scheme in Medical Conversations
d17864960
The sentiment classification performance relies on high-quality sentiment resources. However, these resources are imbalanced in different languages. Cross-language sentiment classification (CLSC) can leverage the rich resources in one language (source language) for sentiment classification in a resource-scarce language...
Learning Bilingual Sentiment Word Embeddings for Cross-language Sentiment Classification
d13419043
Neural machine translation (NMT) has emerged recently as a promising statistical machine translation approach. In NMT, neural networks (NN) are directly used to produce translations, without relying on a pre-existing translation framework. In this work, we take a step towards bridging the gap between conventional word ...
Alignment-Based Neural Machine Translation
d235125510
Datasets with induced emotion labels are scarce but of utmost importance for many NLP tasks. We present a new, automated method for collecting texts along with their induced reaction labels. The method exploits the online use of reaction GIFs, which capture complex affective states. We show how to augment the data with...
Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter
d259370892
Data augmentation is an effective solution to improve model performance and robustness for low-resource named entity recognition (NER). However, synthetic data often suffer from poor diversity, which leads to performance limitations. In this paper, we propose a novel Graph Propagated Data Augmentation (GPDA) framework ...
Improving Low-resource Named Entity Recognition with Graph Propagated Data Augmentation
d3042025
We propose a novel hierarchical Recurrent Neural Network (RNN) for learning sequences of Dialogue Acts (DAs). The input in this task is a sequence of utterances (i.e., conversational contributions) comprising a sequence of tokens, and the output is a sequence of DA labels (one label per utterance). Our model leverages ...
A Hierarchical Neural Model for Learning Sequences of Dialogue Acts
d259370529
Semantic proto-role labeling (SPRL) assigns properties to arguments based on a series of binary labels. While multiple studies have evaluated various approaches to SPRL, it has only been studied in-depth as a standalone task using gold predicate/argument pairs. How do SPRL systems perform as part of an information extr...
Joint End-to-End Semantic Proto-role Labeling
d259370787
In recent years, deep neural networks (DNNs) have achieved state-of-the-art performance on a wide range of tasks. However, limitations in interpretability have hindered their applications in the real world. This work proposes to interpret neural networks by linear decomposition and finds that the ReLU-activated Transfo...
Local Interpretation of Transformer Based on Linear Decomposition
d259370790
Multimodal Emotion Recognition in Multi-party Conversations (MERMC) has recently attracted considerable attention. Due to the complexity of visual scenes in multi-party conversations, most previous MERMC studies mainly focus on text and audio modalities while ignoring visual information. Recently, several works proposed t...
A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations
d220768717
We propose a novel method that enables us to determine words that deserve to be emphasized from written text in visual media, relying only on the information from the self-attention distributions of pre-trained language models (PLMs). With extensive experiments and analyses, we show that 1) our zero-shot approach is su...
IDS at SemEval-2020 Task 10: Does Pre-trained Language Model Know What to Emphasize?
d255998364
Dialogue State Tracking (DST) is core research in dialogue systems and has received much attention. In addition, it is necessary to define a new problem that can deal with dialogue between users as a step toward the conversational AI that extracts and recommends information from the dialogue between users. So, we intro...
KILDST: Effective Knowledge-Integrated Learning for Dialogue State Tracking using Gazetteer and Speaker Information
d257050547
Existing language and vision models achieve impressive performance in image-text understanding. Yet, it is an open question to what extent they can be used for language understanding in 3D environments and whether they implicitly acquire 3D object knowledge, e.g. about different views of an object. In this paper, we in...
Paparazzi: A Deep Dive into the Capabilities of Language and Vision Models for Grounding Viewpoint Descriptions
d785805
A common use of language is to refer to visually present objects. Modelling it in computers requires modelling the link between language and perception. The "words as classifiers" model of grounded semantics views words as classifiers of perceptual contexts, and composes the meaning of a phrase through composition of t...
Resolving References to Objects in Photographs using the Words-As-Classifiers Model
d53079875
Pronouns are frequently omitted in pro-drop languages, such as Chinese, generally leading to significant challenges with respect to the production of complete translations. Recently, Wang et al. (2018) proposed a novel reconstruction-based approach to alleviating dropped pronoun (DP) translation problems for neural machi...
Learning to Jointly Translate and Predict Dropped Pronouns with a Shared Reconstruction Mechanism
d256461133
We here describe our neural machine translation system for the general machine translation shared task in WMT 2022. Our systems are based on the Transformer (Vaswani et al., 2017) with base settings. We explore the high-efficiency model training strategies, aimed to train a model with high-accuracy by using a small mod...
d243925639
This paper details experiments we performed on the Universal Dependencies 2.7 corpora in order to investigate the dominant word order in the available languages. For this purpose, we used a graph rewriting tool, GREW, which allowed us to go beyond the surface annotations and identify the implicit subjects. We first measu...
Investigating Dominant Word Order on Universal Dependencies with Graph Rewriting
d218487293
This position paper describes and critiques the Pretraining-Agnostic Identically Distributed (PAID) evaluation paradigm, which has become a central tool for measuring progress in natural language understanding. This paradigm consists of three stages: (1) pretraining of a word prediction model on a corpus of arbitrary s...
How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
d202785312
Words in different languages rarely cover the exact same semantic space. This work characterizes differences in meaning between words across languages using semantic relations that have been used to relate the meaning of English words. However, because of translation ambiguity, semantic relations are not always preserv...
Weakly Supervised Cross-lingual Semantic Relation Classification via Knowledge Distillation
d202787685
This paper provides a detailed comparison of a data programming approach with (i) off-the-shelf, state-of-the-art deep learning architectures that optimize their representations (BERT) and (ii) handcrafted-feature approaches previously used in the discourse analysis literature. We compare these approaches on the task o...
Weak Supervision for Learning Discourse Structure
d253107409
Despite the great success of spoken language understanding (SLU) in high-resource languages, it remains challenging in low-resource languages mainly due to the lack of labeled training data. The recent multilingual codeswitching approach achieves better alignments of model representations across languages by constructi...
Label-aware Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding
d252519660
Prompting methods are regarded as one of the crucial advances in few-shot natural language processing. Recent research on prompting moves from discrete token-based "hard prompts" to continuous "soft prompts", which employ learnable vectors as pseudo prompt tokens and achieve better performance. Though showing promising ...
MetaPrompting: Learning to Learn Better Prompts
d234777804
The proliferation of fake news, i.e., news intentionally spread for misinformation, poses a threat to individuals and society. Despite various fact-checking websites such as PolitiFact, robust detection techniques are required to deal with the increase in fake news. Several deep learning models show promising results f...
Explainable Tsetlin Machine Framework for Fake News Detection with Credibility Score Assessment
d53236219
This paper described NiuTrans neural machine translation systems for the WMT 2019 news translation tasks. We participated in 13 translation directions, including 11 supervised tasks, namely EN↔{ZH, DE, RU, KK, LT}, GU→EN and the unsupervised DE↔CS subtrack. Our systems were built on deep Transformer and several back-tr...
The NiuTrans Machine Translation Systems for WMT19
d253080885
Recent advances in vision-and-language modeling have seen the development of Transformer architectures that achieve remarkable performance on multimodal reasoning tasks. Yet, the exact capabilities of these black-box models are still poorly understood. While much of previous work has focused on studying their ability t...
Do Vision-and-Language Transformers Learn Grounded Predicate-Noun Dependencies?
d253097973
Can Visual Context Improve Automatic Speech Recognition for an Embodied Agent?
d261341821
d232417310
This paper proposes a generative language model called AfriKI. Our approach is based on an LSTM architecture trained on a small corpus of contemporary fiction. With the aim of promoting human creativity, we use the model as an authoring tool to explore machine-in-the-loop Afrikaans poetry generation. To our knowledge, t...
AfriKI: Machine-in-the-Loop Afrikaans Poetry Generation
d248986794
This paper presents a linguistically driven proof of concept for finding potentially euphemistic terms, or PETs. Acknowledging that PETs tend to be commonly used expressions for a certain range of sensitive topics, we make use of distributional similarities to select and filter phrase candidates from a sentence and ran...
Searching for PETs: Using Distributional and Sentiment-Based Methods to Find Potentially Euphemistic Terms
d219682425
The Natural Language Understanding (NLU) component in task oriented dialog systems processes a user's request and converts it into structured information that can be consumed by downstream components such as the Dialog State Tracker (DST). This information is typically represented as a semantic frame that captures the ...
Recursive Template-based Frame Generation for Task Oriented Dialog
d17770905
We present a dependency-driven parser that parses both dependency structures and constituent structures. Constituency representations are automatically transformed into dependency representations with complex arc labels, which makes it possible to recover the constituent structure with both constituent labels and gramm...
A Dependency-Driven Parser for German Dependency and Constituency Representations
d258378191
Keyphrase Extraction (KE) is a critical component in Natural Language Processing (NLP) systems for selecting a set of phrases from the document that could summarize the important information discussed in the document. Typically, a keyphrase extraction system can significantly accelerate the speed of information retriev...
A Survey on Recent Advances in Keyphrase Extraction from Pre-trained Language Models
d258378221
Extreme Multi-label Text Classification (XMTC) has been a tough challenge in machine learning research and applications due to the sheer sizes of the label spaces and the severe data scarcity problem associated with the long tail of rare labels in highly skewed distributions. This paper addresses the challenge of tail ...
Long-tailed Extreme Multi-label Text Classification by the Retrieval of Generated Pseudo Label Descriptions
d258378244
Unsupervised out-of-domain (OOD) detection is a task aimed at discriminating whether given samples are in-domain or not, without the categorical labels of in-domain instances. Unlike supervised OOD, as there are no labels for training a classifier, previous works on unsupervised OOD detection adopted the one-c...
Improving Unsupervised Out-of-domain Detection through Pseudo Labeling and Learning
d44232537
This paper presents a newly funded international project for machine translation and automated analysis of ancient cuneiform languages where NLP specialists and Assyriologists collaborate to create an information retrieval system for Sumerian. This research is conceived in response to the need to translate large nu...
Machine Translation and Automated Analysis of the Sumerian Language