doc_id: string (lengths 4 to 10)
revision_depth: int64 (values 1 to 4)
before_revision: string (lengths 135 to 9.03k)
after_revision: string (lengths 144 to 8.89k)
edit_actions: list
sents_char_pos: list
domain: string (3 classes)
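The records below repeat these seven fields in order, one value per line in the original viewer dump. As a minimal sketch of how such records could be consumed, assuming the rows are exported as JSON Lines (the file name here is hypothetical; nothing above names the source file):

    import json

    # Hypothetical file name; the schema above does not name the source file.
    PATH = "revisions.jsonl"

    with open(PATH, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            # One record per revision step of a document.
            print(rec["doc_id"], rec["revision_depth"], rec["domain"])
            # edit_actions holds the span edits that turn
            # before_revision into after_revision.
            print(len(rec["edit_actions"]), "edit actions")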
doc_id: 1912.05372
revision_depth: 1
before_revision: Language models have become a key step to achieve state-of-the-art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task,...
after_revision: Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task,...
edit_actions: [ { "type": "R", "before": "state-of-the-art", "after": "state-of-the art", "start_char_pos": 50, "end_char_pos": 66, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "are shared with", "after": ...
sents_char_pos: [ 0, 133, 378, 568, 681, 811, 1011 ]
domain: arxiv
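Each entry of edit_actions describes one span edit over before_revision: type "R" replaces the span from start_char_pos to end_char_pos with the text in "after", "A" inserts "after" at start_char_pos (with "before" null), and "D" deletes the span. A minimal sketch of replaying such a list, assuming half-open character spans indexed into before_revision:

    def apply_edits(before: str, edit_actions: list) -> str:
        # Splice from the end of the string so the earlier offsets,
        # which all refer to `before`, stay valid while we edit.
        text = before
        for act in sorted(edit_actions,
                          key=lambda a: a["start_char_pos"], reverse=True):
            start, end = act["start_char_pos"], act["end_char_pos"]
            if act["type"] == "R":    # replace [start, end) with `after`
                text = text[:start] + act["after"] + text[end:]
            elif act["type"] == "A":  # insert `after` at `start`
                text = text[:start] + act["after"] + text[start:]
            elif act["type"] == "D":  # delete [start, end)
                text = text[:start] + text[end:]
        return text

In the record above, the first action replaces characters 50 to 66 of before_revision ("state-of-the-art") with "state-of-the art", which matches the shown after_revision.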
doc_id: 1912.05372
revision_depth: 2
before_revision: Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task,...
after_revision: Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task,...
edit_actions: [ { "type": "R", "before": "word representations such as OpenAI GPT (Radford", "after": "representations (Dai and Le, 2015; Peters", "start_char_pos": 446, "end_char_pos": 494, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "mean...
sents_char_pos: [ 0, 133, 378, 572, 685, 815, 1017 ]
domain: arxiv
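sents_char_pos appears to give sentence-boundary character offsets into before_revision (in the first record, offset 133 falls at the end of its first sentence). A sketch that recovers sentence spans under that assumption:

    def split_sentences(text: str, sents_char_pos: list) -> list:
        # Assumes each offset marks where a sentence span begins;
        # the final span runs to the end of the text.
        bounds = sents_char_pos + [len(text)]
        return [text[s:e].strip() for s, e in zip(bounds, bounds[1:])]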
doc_id: 1912.10514
revision_depth: 2
before_revision: An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of back-translations of the target-side monolingual data. The method was not able to utilize the available huge amount of monolingual data because of the inability of models ...
after_revision: An effective method to generate a large number of parallel sentences for training improved neural machine translation (NMT) systems is the use of the back-translations of the target-side monolingual data. The standard back-translation method has been shown to be unable to efficiently utilize the available huge amount o...
edit_actions: [ { "type": "A", "before": null, "after": "the", "start_char_pos": 146, "end_char_pos": 146, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "method was not able to", "after": "standard back-tra...
sents_char_pos: [ 0, 201, 389, 674, 807, 930, 1227 ]
domain: arxiv

doc_id: 1912.10616
revision_depth: 1
before_revision: Authorship attribution is the process of identifying the author of a text. Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set. While deep learning methods have been applie...
after_revision: Authorship attribution is the process of identifying the author of a text. Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set. While deep learning methods have been applie...
edit_actions: [ { "type": "R", "before": "current", "after": "applications to", "start_char_pos": 358, "end_char_pos": 365, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "applications have be...
sents_char_pos: [ 0, 74, 275, 433, 583 ]
domain: arxiv

doc_id: 1912.10616
revision_depth: 2
before_revision: Authorship attribution is the process of identifying the author of a text. Classification-based approaches work well for small numbers of candidate authors, but only similarity-based methods are applicable for larger numbers of authors or for authors beyond the training set . While deep learning methodshave been appli...
after_revision: Authorship attribution is the process of identifying the author of a text. Approaches to tackling it have been conventionally divided into classification-based ones, which work well for small numbers of candidate authors, and similarity-based methods , which are applicable for larger numbers of authors or for authors b...
edit_actions: [ { "type": "R", "before": "Classification-based approaches", "after": "Approaches to tackling it have been conventionally divided into classification-based ones, which", "start_char_pos": 75, "end_char_pos": 106, "major_intent": "coherence", "raw_intents": [ "clarity", "cohere...
sents_char_pos: [ 0, 74, 277, 500, 656, 958 ]
domain: arxiv

doc_id: 1912.11602
revision_depth: 1
before_revision: Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information . We propose that the le...
after_revision: Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information in general . We propose t...
edit_actions: [ { "type": "A", "before": null, "after": "in general", "start_char_pos": 295, "end_char_pos": 295, "major_intent": "clarity", "raw_intents": [ "clarity", "meaning-changed", "clarity" ] }, { "type": "A", "before": null, "after": "our favor in", "st...
sents_char_pos: [ 0, 131, 297, 535, 706, 787 ]
domain: null

doc_id: 1912.11602
revision_depth: 2
before_revision: Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information . While many algorithms exploit this fact in summary generation , it has a detrimental effect on teaching the model to discriminate and extract important information in general. We propose ...
after_revision: A typical journalistic convention in news articles is to deliver the most salient information in the beginning, also known as the lead bias. While this phenomenon can be exploited in generating a summary , it has a detrimental effect on teaching a model to discriminate and extract important information in general. We p...
edit_actions: [ { "type": "R", "before": "Lead bias is a common phenomenon in news summarization, where early parts of an article often contain", "after": "A typical journalistic convention in news articles is to deliver", "start_char_pos": 0, "end_char_pos": 101, "major_intent": "clarity", "raw_intents...
sents_char_pos: [ 0, 132, 308, 551, 650, 772, 998, 1112 ]
domain: arxiv

doc_id: 1912.13318
revision_depth: 1
before_revision: Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the wide spread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In thi...
after_revision: Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this...
edit_actions: [ { "type": "R", "before": "wide spread", "after": "widespread", "start_char_pos": 111, "end_char_pos": 122, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "textbf", "after": null, "start_c...
sents_char_pos: [ 0, 98, 313, 599, 694 ]
domain: arxiv

doc_id: 1912.13318
revision_depth: 2
before_revision: Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this...
after_revision: Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread of pre-training models for NLP applications, they almost focused on text-level manipulation, while neglecting the layout and style information that is vital for document image understanding. In this...
edit_actions: [ { "type": "A", "before": null, "after": "form understanding (from 70.72 to 79.27),", "start_char_pos": 936, "end_char_pos": 936, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", ...
sents_char_pos: [ 0, 98, 312, 595, 706, 855, 1037 ]
domain: arxiv

doc_id: 2001.00059
revision_depth: 2
before_revision: Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques developed for natural languages. A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings, such as BERT, which ca...
after_revision: Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques developed for natural languages. A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings, such as BERT, which ca...
edit_actions: [ { "type": "R", "before": "6M", "after": "7.4M", "start_char_pos": 721, "end_char_pos": 723, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "from...
sents_char_pos: [ 0, 169, 435, 605, 655, 830, 1017, 1326 ]
domain: arxiv
doc_id: 2001.01037
revision_depth: 1
before_revision: This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning with attention. The result provides simultaneously pix...
after_revision: This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation, tailored to image captioning models with attention mechanisms. The explanations pro...
edit_actions: [ { "type": "R", "before": "with attention. The result provides", "after": "models with attention mechanisms. The explanations provide", "start_char_pos": 266, "end_char_pos": 301, "major_intent": "clarity", "raw_intents": [ "clarity", "coherence", "clarity" ] }, ...
sents_char_pos: [ 0, 125, 281, 403, 550, 704, 961, 1089 ]
domain: arxiv

doc_id: 2001.01037
revision_depth: 2
before_revision: This paper explains predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance backpropagation (LRP) and gradient backpropagation , tailored to image captioning models with attention mechanisms. The explanations pr...
after_revision: This paper interprets the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. In this paper, we develop variants of layer-wise relevance propagation (LRP) and gradient-based explanation methods , tailored to image captioning models with attention mechanisms. We comp...
edit_actions: [ { "type": "R", "before": "explains", "after": "interprets the", "start_char_pos": 11, "end_char_pos": 19, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "backpropagation", "after": "propagati...
sents_char_pos: [ 0, 125, 300, 427, 583, 738, 995, 1118 ]
domain: arxiv

doc_id: 2001.04063
revision_depth: 1
before_revision: In this paper, we present a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence mo...
after_revision: In this paper, we present a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of the optimization of one-step ahead prediction in traditional sequence-to-sequence mo...
edit_actions: [ { "type": "R", "before": "Experimental results show ProphetNet achieves the best performance on both", "after": "Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for", "start_char_pos": 731, "end_char_pos": 805, "major_intent": "meaning-changed", "raw_inte...
sents_char_pos: [ 0, 224, 479, 624, 730, 932 ]
domain: arxiv

doc_id: 2001.05272
revision_depth: 1
before_revision: Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph information , which is often overlooked. We propose the FGN , Fusion Glyph Network for Chinese NER. This method may offer glyph informationfor fusion representation learning with BERT . The major innovations of FGN include: (1) a...
after_revision: Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph infor-mation , which is often overlooked. In this paper, we propose the FGN , Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mech...
edit_actions: [ { "type": "R", "before": "information", "after": "infor-mation", "start_char_pos": 91, "end_char_pos": 102, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "We", "after": "In this paper, we", ...
sents_char_pos: [ 0, 34, 131, 190, 314, 758 ]
domain: arxiv

doc_id: 2001.05272
revision_depth: 2
before_revision: Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph infor-mation , which is often overlooked. In this paper, we propose the FGN, Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive infor-mation with the fusion mecha...
after_revision: Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph information , which is often overlooked. In this paper, we propose the FGN, Fusion Glyph Network for Chinese NER. Except for adding glyph information, this method may also add extra interactive information with the fusion mechani...
edit_actions: [ { "type": "R", "before": "infor-mation", "after": "information", "start_char_pos": 91, "end_char_pos": 103, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "infor-mation", "after": "informatio...
sents_char_pos: [ 0, 34, 132, 205, 325, 524, 740, 890 ]
domain: arxiv

doc_id: 2001.05687
revision_depth: 3
before_revision: Although over 95 million people worldwide speak the Vietnamese language , there are not many research studies on Vietnamese machine reading comprehension (MRC), the task of understanding a text and answering questions about it. One of the reasons is because of the lack of high-quality benchmark datasets for this task. ...
after_revision: Although Vietnamese is the 17th most popular native-speaker language in the world , there are not many research studies on Vietnamese machine reading comprehension (MRC), the task of understanding a text and answering questions about it. One of the reasons is because of the lack of high-quality benchmark datasets for t...
edit_actions: [ { "type": "R", "before": "over 95 million people worldwide speak the Vietnamese language", "after": "Vietnamese is the 17th most popular native-speaker language in the world", "start_char_pos": 9, "end_char_pos": 71, "major_intent": "meaning-changed", "raw_intents": [ "meaning-chan...
sents_char_pos: [ 0, 227, 319, 454, 547, 737, 856, 962, 1082, 1149 ]
domain: arxiv

doc_id: 2001.07676
revision_depth: 2
before_revision: Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exp...
after_revision: Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exp...
edit_actions: [ { "type": "R", "before": "regular", "after": "standard", "start_char_pos": 583, "end_char_pos": 590, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "both", "after": null, "start_char_pos"...
sents_char_pos: [ 0, 176, 485, 573, 654 ]
domain: arxiv

doc_id: 2001.08604
revision_depth: 1
before_revision: Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. In this work, we extend this approach to the task of dialogue state tracking for goal-oriented dialogues, in which the data natura...
after_revision: Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Since, goal-oriented dialogs...
edit_actions: [ { "type": "R", "before": "dialogue", "after": "dialog", "start_char_pos": 243, "end_char_pos": 251, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "dialogues, in which the data naturally exhibits...
sents_char_pos: [ 0, 189, 399, 543, 723 ]
domain: arxiv
doc_id: 2001.08604
revision_depth: 2
before_revision: Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models are used to augment the training dataset, benefit certain NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Since, goal-oriented dialogs...
after_revision: Recent works have shown that generative data augmentation, where synthetic samples generated from deep generative models complement the training dataset, benefit NLP tasks. In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. Due to the inherent hierarchical structure of...
edit_actions: [ { "type": "R", "before": "are used to augment", "after": "complement", "start_char_pos": 121, "end_char_pos": 140, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "D", "before": "certain", "after": null, ...
sents_char_pos: [ 0, 189, 291, 522, 744, 844, 1095 ]
domain: arxiv

doc_id: 2001.11453
revision_depth: 1
before_revision: Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? In this work, we propose a Bayesian genera...
after_revision: Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task--language combinations with available data to low-resource ones? In this work, we propose a Bayesian gener...
edit_actions: [ { "type": "R", "before": "task-language", "after": "task--language", "start_char_pos": 209, "end_char_pos": 222, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "task-language", "after": "task-...
sents_char_pos: [ 0, 143, 277, 366, 465, 598, 679, 870, 1112 ]
domain: null

doc_id: 2001.11453
revision_depth: 2
before_revision: Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task--language combinations with available data to low-resource ones? In this work, we propose a Bayesian gener...
after_revision: Most combinations of NLP tasks and language varieties lack in-domain examples for supervised training because of the paucity of annotated data. How can neural models make sample-efficient generalizations from task-language combinations with available data to low-resource ones? In this work, we propose a Bayesian genera...
edit_actions: [ { "type": "R", "before": "task--language", "after": "task-language", "start_char_pos": 209, "end_char_pos": 223, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "others" ] }, { "type": "R", "before": "task--language", "after": "task...
sents_char_pos: [ 0, 143, 278, 367, 466, 600, 681, 872, 1113, 1275 ]
domain: null

doc_id: 2002.06353
revision_depth: 1
before_revision: We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by the recent success of BERT based pre-training technique for NLP and image-language tasks, VideoBERT and CBT are proposed to exploit BERT model for video and language pre-training using narrated ...
after_revision: With the recent success of pre-training technique for NLP and image-linguistic tasks, there are still few works on video-linguistic pre-training . Besides, most of the existing multimodal models are pre-trained for understanding task, which leads to a pretrain-finetune discrepency for generation tasks. In this paper, w...
edit_actions: [ { "type": "R", "before": "We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by", "after": "With", "start_char_pos": 0, "end_char_pos": 125, "major_intent": "coherence", "raw_intents": [ "coherence", "cla...
sents_char_pos: [ 0, 112, 341, 510, 644, 779, 940 ]
domain: arxiv

doc_id: 2002.09253
revision_depth: 1
before_revision: Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In...
after_revision: Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In...
edit_actions: [ { "type": "R", "before": "model", "after": "encoder", "start_char_pos": 586, "end_char_pos": 591, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": ", using an algorithm grounded ...
sents_char_pos: [ 0, 174, 317, 520, 631, 971, 1129 ]
domain: arxiv

doc_id: 2002.09253
revision_depth: 2
before_revision: Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how to achieve them. As children do the same, they benefit from exposure to language, using it to formulate goals and imagine new ones as they learn their meaning. In...
after_revision: Developmental machine learning studies how artificial agents can model the way children learn open-ended repertoires of skills. Such agents need to create and represent goals, select which ones to pursue and learn to achieve them. Recent approaches have considered goal spaces that were either fixed and hand-defined or ...
edit_actions: [ { "type": "R", "before": "Autonomous reinforcement learning agents must be intrinsically motivated to explore their environment, discover potential goals, represent them and learn how", "after": "Developmental machine learning studies how artificial agents can model the way children learn open-ended rep...
sents_char_pos: [ 0, 174, 317, 520, 829, 1060, 1218 ]
domain: arxiv

doc_id: 2002.09616
revision_depth: 1
before_revision: Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one re...
after_revision: Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i.e., they use several consistent messages for readability instead of a long sentence to express their question. This creates a predicament faced b...
edit_actions: [ { "type": "R", "before": "Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a...
sents_char_pos: [ 0, 111, 259, 355, 507, 619, 734, 916, 980, 1157, 1244, 1392 ]
domain: arxiv

doc_id: 2002.09616
revision_depth: 2
before_revision: Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent messages for readability instead of a long sentence to express their question. This creates a predicament faced ...
after_revision: Producing natural and accurate responses like human beings is the ultimate goal of intelligent dialogue agents. So far, most of the past works concentrate on selecting or generating one pertinent and fluent response according to current query and its context. These models work on a one-to-one environment, making one re...
edit_actions: [ { "type": "R", "before": "Different people have different habits of describing their intents in conversations. Some people may tend to deliberate their full intents in several successive utterances, i. e., they use several consistent", "after": "Producing natural and accurate responses like human beings...
sents_char_pos: [ 0, 84, 286, 546, 682, 815, 931, 1045, 1179, 1375, 1563, 1683 ]
domain: arxiv
doc_id: 2002.10107
revision_depth: 2
before_revision: Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality. These systems mainly rely on community reports for assessing contents, which has serious problems such as the slow handling of violations, the loss of normal and exper...
after_revision: Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality. These systems mainly rely on community reports for assessing contents, which has serious problems such as the slow handling of violations, the loss of normal and exper...
edit_actions: [ { "type": "A", "before": null, "after": "a", "start_char_pos": 709, "end_char_pos": 709, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "the", "start_char_pos": 769, "e...
sents_char_pos: [ 0, 152, 412, 618, 759, 925 ]
domain: arxiv

doc_id: 2003.02645
revision_depth: 1
before_revision: We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data ? have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key bar...
after_revision: We introduce sentenceMIM, a probabilistic auto-encoder for language modelling, trained with Mutual Information Machine (MIM) learning. Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key barri...
edit_actions: [ { "type": "D", "before": "?", "after": null, "start_char_pos": 206, "end_char_pos": 207, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "?", "after": null, "start_char_pos": 335, "end...
sents_char_pos: [ 0, 134, 380, 541, 637, 878, 1029 ]
domain: arxiv

doc_id: 2003.02645
revision_depth: 2
before_revision: We introduce sentenceMIM, a probabilistic auto-encoder for language modelling , trained with Mutual Information Machine (MIM) learning . Previous attempts to learn variational auto-encoders for language data have had mixed success, with empirical performance well below state-of-the-art auto-regressive models, a key ba...
after_revision: SentenceMIM is a probabilistic auto-encoder for language data , trained with Mutual Information Machine (MIM) learning to provide a fixed length representation of variable length language observations (ie, similar to VAE) . Previous attempts to learn VAEs for language data faced challenges due to posterior collapse. MI...
edit_actions: [ { "type": "R", "before": "We introduce sentenceMIM,", "after": "SentenceMIM is", "start_char_pos": 0, "end_char_pos": 25, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "modelling", "after": ...
sents_char_pos: [ 0, 378, 539, 635, 876, 1029 ]
domain: arxiv

doc_id: 2004.12316
revision_depth: 1
before_revision: Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in e...
after_revision: Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empath...
edit_actions: [ { "type": "R", "before": "conversational models", "after": "dialogue systems", "start_char_pos": 11, "end_char_pos": 32, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "conversations", "after...
sents_char_pos: [ 0, 116, 228, 345, 517, 634, 769, 875 ]
domain: arxiv

doc_id: 2004.12316
revision_depth: 2
before_revision: Empathetic dialogue systems have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in empath...
after_revision: Empathetic conversational models have been shown to improve user satisfaction and task outcomes in numerous domains. In Psychology, persona has been shown to be highly correlated to personality, which in turn influences empathy. In addition, our empirical analysis also suggests that persona plays an important role in e...
edit_actions: [ { "type": "R", "before": "dialogue systems", "after": "conversational models", "start_char_pos": 11, "end_char_pos": 27, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "dialogues", "after": "co...
sents_char_pos: [ 0, 111, 223, 336, 512, 625, 760, 866 ]
domain: arxiv

doc_id: 2004.12765
revision_depth: 1
before_revision: Automatic humor detection has interesting use cases in modern technologies, such as chatbots and personal assistants. In this paper, we describe a novel approach for detecting humor in short texts using BERT sentence embedding. Our proposed model uses BERT to generate tokens and sentence embedding for texts. It sends ...
after_revision: Automatic humor detection has interesting use cases in modern technologies, such as chatbots and virtual assistants. Based on the general linguistic structure of humor, in this paper, we propose a novel approach for detecting humor in short texts by using BERT sentence embedding. Our proposed method uses BERT to genera...
edit_actions: [ { "type": "R", "before": "personal assistants. In", "after": "virtual assistants. Based on the general linguistic structure of humor, in", "start_char_pos": 97, "end_char_pos": 120, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning...
sents_char_pos: [ 0, 117, 228, 310, 409, 543, 739 ]
domain: arxiv

doc_id: 2004.14519
revision_depth: 2
before_revision: Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER) , Part-of-Speech tagging (POS), Argument Role Labeling (ARL), and Relation Extraction (RE). A few multilingual pre-trained models have been proposed and show good performance f...
after_revision: Multilingual pre-trained Transformers, such as mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020a), have been shown to enable the effective cross-lingual zero-shot transfer. However, their performance on Arabic information extraction (IE) tasks is not very well studied . In this paper , we pre-train a c...
edit_actions: [ { "type": "R", "before": "Arabic is a morphological rich language, posing many challenges for information extraction (IE) tasks, including Named Entity Recognition (NER) , Part-of-Speech tagging (POS), Argument Role Labeling (ARL), and Relation Extraction (RE). A few multilingual", "after": "Multilingua...
sents_char_pos: [ 0, 235, 488, 614, 792 ]
domain: arxiv

doc_id: 2004.14601
revision_depth: 1
before_revision: We propose a novel methodology for analyzing the encoding of grammatical structure in neural language models through transfer learning. We test how a language model can leverage its internal representations to transfer knowledge across languages and symbol systems. We train LSTMs on non-linguistic , structured data and...
after_revision: We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models . We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language...
edit_actions: [ { "type": "R", "before": "a novel methodology", "after": "transfer learning as a method", "start_char_pos": 11, "end_char_pos": 30, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "through transfer ...
sents_char_pos: [ 0, 135, 265, 463, 746, 971, 1153 ]
domain: arxiv
doc_id: 2004.14601
revision_depth: 2
before_revision: We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models. We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language....
after_revision: We propose transfer learning as a method for analyzing the encoding of grammatical structure in neural language models. We train LSTMs on non-linguistic data and evaluate their performance on natural language to assess which kinds of data induce generalizable structural features that LSTMs can use for natural language....
edit_actions: [ { "type": "R", "before": "Training on artificial languages containing recursion (hierarchical structure) also improves performance on natural language, again with no vocabulary overlap", "after": "To pinpoint the kinds of abstract structure that models may be encoding to lead to this improvement, we run...
sents_char_pos: [ 0, 119, 320, 510, 866, 1205 ]
domain: arxiv

doc_id: 2004.14623
revision_depth: 2
before_revision: In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard behavioral tasks? We argue that an essential factor is modular internal structure. Our central contribution is a new experiment...
after_revision: We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation methods of (1) challenge test sets and (2) systematic generalization tasks, and the structural evaluation methods of (3) p...
edit_actions: [ { "type": "R", "before": "In adversarial testing, we pose hard generalization tasks in order to gain insights into the solutions found by our models. What properties must a system have in order to succeed at these hard behavioral tasks? We argue that an essential factor is modular internal structure. Our ce...
sents_char_pos: [ 0, 123, 210, 275, 523, 695 ]
domain: arxiv

doc_id: 2004.14974
revision_depth: 1
before_revision: We introduce the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute the claim. In addition, it must provide rationales for its predictions in the form of evidentiary sentences from the retri...
after_revision: We introduce scientific claim verification, a new task to select abstracts from the research literature containing evidence that supports or refutes a given scientific claim, and to identify rationales justifying each decision. To study this task, we construct SciFact, a dataset of 1.4K expert-written scientific claims...
edit_actions: [ { "type": "R", "before": "the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute the claim. In addition, it must provide rationales for its predictions in the form of evidentiary...
sents_char_pos: [ 0, 50, 208, 335, 509, 576, 792, 923 ]
domain: arxiv

doc_id: 2004.15003
revision_depth: 1
before_revision: One key principle for assessing semantic similarity between texts is to measure the degree of semantic overlap of them by considering word-by-word alignment. However, alignment-based approaches are inferior to the generic sentence vectorsin terms of performance. We hypothesize that the reason for the inferiority of ali...
after_revision: One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are both intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentenc...
edit_actions: [ { "type": "R", "before": "semantic similarity between texts is to measure", "after": "textual similarity is measuring", "start_char_pos": 32, "end_char_pos": 79, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", ...
sents_char_pos: [ 0, 157, 262, 422, 640, 764, 946, 1101 ]
domain: arxiv

doc_id: 2004.15003
revision_depth: 2
before_revision: One key principle for assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are both intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentenc...
after_revision: A key principle in assessing textual similarity is measuring the degree of semantic overlap between two texts by considering the word alignment. Such alignment-based approaches are intuitive and interpretable; however, they are empirically inferior to the simple cosine similarity between general-purpose sentence vector...
edit_actions: [ { "type": "R", "before": "One key principle for", "after": "A key principle in", "start_char_pos": 0, "end_char_pos": 21, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "fluency" ] }, { "type": "D", "before": "both", "after": null,...
sents_char_pos: [ 0, 147, 217, 330, 495, 652, 880, 980, 1141 ]
domain: arxiv

doc_id: 2004.15011
revision_depth: 1
before_revision: We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression requiring expert background knowledge and complex language understanding. To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs. Furthermore, we introduce a novel annotation ...
after_revision: We introduce TLDR generation for scientific papers, a new automatic summarization task with high source compression , requiring expert background knowledge and complex language understanding. To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs. Furthermore, we introduce a novel annotation...
edit_actions: [ { "type": "A", "before": null, "after": ",", "start_char_pos": 116, "end_char_pos": 116, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "tasks of extreme summarization and", "after": "task of...
sents_char_pos: [ 0, 190, 274, 411, 594 ]
domain: arxiv

doc_id: 2004.15011
revision_depth: 2
before_revision: We introduce TLDR generation for scientific papers , a new automatic summarizationtask with high source compression , requiring expert background knowledge and complex language understanding . To facilitate research on this task, we introduce SciTLDR, a dataset of 3.9K TLDRs . Furthermore, we introduce a novel annotati...
after_revision: We introduce TLDR generation , a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language . To facilitate study on this task, we introduce SciTLDR, a new multi-target dataset ...
edit_actions: [ { "type": "D", "before": "for scientific papers", "after": null, "start_char_pos": 29, "end_char_pos": 50, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "automatic summarizationtask with", "...
sents_char_pos: [ 0, 192, 414, 597 ]
domain: arxiv

doc_id: 2005.00192
revision_depth: 2
before_revision: In the automatic evaluation of generative question answering (GenQA) systems, it is difficult to assess the correctness of generated answers due to the free-form of the answer. Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. To study a better metr...
after_revision: In the automatic evaluation of generative question answering (GenQA) systems, it is difficult to assess the correctness of generated answers due to the free-form of the answer. Especially, widely used n-gram similarity metrics often fail to discriminate the incorrect answers since they equally consider all of the token...
edit_actions: [ { "type": "R", "before": "Moreover, there is a lack of benchmark datasets to evaluate the suitability of existing metrics in terms of correctness. To study a better metric for GenQA, we first create high-quality human judgments of correctness on two standard GenQA datasets. Using our human-evaluation datase...
sents_char_pos: [ 0, 176, 297, 425, 553, 646, 843 ]
domain: arxiv
doc_id: 2005.00782
revision_depth: 1
before_revision: Pre-trained language models (PTLM) have greatly improved performance on commonsense inference benchmarks, however, it remains unclear whether they share a human's ability to consistently make correct inferences under perturbations. Prior studies of PTLMs have found inference deficits, but have failed to provide a syste...
after_revision: Pre-trained language models (PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledge...
edit_actions: [ { "type": "R", "before": "greatly improved", "after": "impressive", "start_char_pos": 40, "end_char_pos": 56, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "however, it remains unclear whether the...
sents_char_pos: [ 0, 231, 437, 589, 853, 1147 ]
domain: null

doc_id: 2005.00782
revision_depth: 2
before_revision: Pre-trained language models ( PTLM) have impressive performance on commonsense inference benchmarks, but their ability to practically employ commonsense to communicate with humans is fiercely debated. Prior evaluations of PTLMs have focused on factual world knowledge or the ability to reason when the necessary knowledg...
after_revision: Pre-trained language models ( PTLMs) have achieved impressive performance on commonsense inference benchmarks, but their ability to employ commonsense to make robust inferences, which is crucial for effective communications with humans, is debated . In the pursuit of advancing fluid human-AI communication, we propose a...
edit_actions: [ { "type": "R", "before": "PTLM) have", "after": "PTLMs) have achieved", "start_char_pos": 30, "end_char_pos": 40, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "D", "before": "practically", "after": null, "s...
sents_char_pos: [ 0, 200, 345, 492, 714, 821, 1041 ]
domain: null

doc_id: 2005.01795
revision_depth: 2
before_revision: Following each patient visit, physicians must draft a detailed clinical summary called a SOAP note. Moreover, with electronic health records, these notes must be digitized. Despite the benefits of this documentation, their creation remains an onerous process, contributing to increasing physician burnout. In this paper,...
after_revision: Following each patient visit, physicians draft long semi-structured clinical summaries called SOAP notes. While invaluable to clinicians and researchers, creating digital SOAP notes is burdensome, contributing to physician burnout. In this paper, we introduce the first complete pipelines to leverage deep summarization ...
edit_actions: [ { "type": "R", "before": "must draft a detailed clinical summary called a SOAP note. Moreover, with electronic health records, these notes must be digitized. Despite the benefits of this documentation, their creation remains an onerous process, contributing to increasing", "after": "draft long semi-stru...
sents_char_pos: [ 0, 99, 172, 305, 484, 652, 809, 1001, 1140, 1199, 1343 ]
domain: arxiv

doc_id: 2005.03954
revision_depth: 1
before_revision: We focus on the study of conversational recommendation in the context of multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account user's interests and feedback. To facilitate the study of this task, w...
after_revision: We focus on the study of conversational recommendation in the context of multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into account user's interests and feedback. To facilitate the study of this task, w...
edit_actions: [ { "type": "R", "before": "where there are", "after": "which contains", "start_char_pos": 417, "end_char_pos": 432, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "a", "after": "every", "sta...
sents_char_pos: [ 0, 280, 530, 707, 885, 956 ]
domain: arxiv

doc_id: 2005.03975
revision_depth: 1
before_revision: To address the need for refined information in COVID-19 pandemic, we propose a deep learning-based system that uses state-of-the-art natural language processing (NLP) question answering (QA) techniques combined with summarization for mining the available scientific literature. Our system leverages the Information Retr...
after_revision: The outbreak of COVID-19 raises attention from the researchers from various communities. While many scientific articles have been published, a system that can provide reliable information to COVID-19 related questions from the latest academic resources is crucial, especially for the medical community in the current tim...
edit_actions: [ { "type": "R", "before": "To address the need for refined information in", "after": "The outbreak of", "start_char_pos": 0, "end_char_pos": 46, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "pan...
sents_char_pos: [ 0, 278, 424, 515, 619 ]
domain: arxiv

doc_id: 2005.03975
revision_depth: 2
before_revision: The outbreak of COVID-19 raises attention from the researchers from various communities. While many scientific articles have been published , a system that can provide reliable information to COVID-19 related questions from the latest academic resources is crucial, especially for the medical community in the current ti...
after_revision: We present CAiRE-COVID, a real-time question answering (QA) and multi-document summarization system, which won one of the 10 tasks in the Kaggle COVID-19 Open Research Dataset Challenge, judged by medical experts. Our system aims to tackle the recent challenge of mining the numerous scientific articles being published ...
edit_actions: [ { "type": "R", "before": "The outbreak of", "after": "We present CAiRE-COVID, a real-time question answering (QA) and multi-document summarization system, which won one of the 10 tasks in the Kaggle", "start_char_pos": 0, "end_char_pos": 15, "major_intent": "meaning-changed", "raw_intent...
sents_char_pos: [ 0, 88, 388, 607, 741, 832, 920, 979 ]
domain: arxiv

doc_id: 2005.05298
revision_depth: 1
before_revision: This paper presents a new method SOLOIST, which uses transfer learning to efficiently build task-oriented dialog systems at scale. We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog mod-ules (e.g., state tracker, dialog policy, responsegenerator ) i...
after_revision: This paper presents a new method SOLOIST, which uses transfer learning to efficiently build task-oriented dialog systems at scale. We parameterize a dialog system using a Transformer-based auto-regressive language model, which subsumes different dialog modules (e.g., state tracker, dialog policy, response generator ) i...
edit_actions: [ { "type": "R", "before": "mod-ules", "after": "modules", "start_char_pos": 253, "end_char_pos": 261, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "responsegenerator", "after": "response gen...
sents_char_pos: [ 0, 130, 346, 536, 679, 1024 ]
domain: arxiv

doc_id: 2005.06012
revision_depth: 1
before_revision: We describe Mega-COV, a billion-scale dataset from Twitter for studying COVID-19. The dataset is diverse (covers 234 countries), longitudinal (goes as back as 2007), multilingual (comes in 65 languages), and has a significant number of location-tagged tweets ( ~32M tweets). We release tweet IDs from the dataset , hopin...
after_revision: We describe Mega-COV, a billion-scale dataset from Twitter for studying COVID-19. The dataset is diverse (covers 268 countries), longitudinal (goes as back as 2007), multilingual (comes in 100+ languages), and has a significant number of location-tagged tweets ( 169M tweets). We release tweet IDs from the dataset . We ...
edit_actions: [ { "type": "R", "before": "234", "after": "268", "start_char_pos": 113, "end_char_pos": 116, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": "65", "after": "100+...
sents_char_pos: [ 0, 81, 274 ]
domain: arxiv
doc_id: 2005.06377
revision_depth: 1
before_revision: ROUGE is the de facto criterion for summarization research. However, its two major drawbackslimit the research and application of automated summarization systems . First, ROUGE favors lexical similarity instead of semantic similarity , making it especially unfit for abstractive summarization . Second, ROUGE cannot func...
after_revision: Canonical automatic summary evaluation metrics, such as ROUGE, suffer from two drawbacks . First, semantic similarity and linguistic quality are not captured well . Second, a reference summary, which is expensive or impossible to obtain in many cases , is needed. Existing efforts to address the two drawbacks are done s...
edit_actions: [ { "type": "R", "before": "ROUGE is the de facto criterion for summarization research. However, its two major drawbackslimit the research and application of automated summarization systems", "after": "Canonical automatic summary evaluation metrics, such as ROUGE, suffer from two drawbacks", "start_ch...
sents_char_pos: [ 0, 59, 163, 294, 412, 583, 718, 865, 999 ]
domain: arxiv

doc_id: 2005.07202
revision_depth: 1
before_revision: Bidirectional Encoder Representations from Transformers (BERT) models for biomedical specialties such as BioBERT and clinicalBERT have significantly improved in biomedical text-mining tasks and enabled us to extract valuable information from biomedical literature . However, we benefitted only in English because of the...
after_revision: Bidirectional Encoder Representations from Transformers (BERT) models for medical specialties, such as BioBERT and clinicalBERT , have significantly improved in performing biomedical text mining tasks and have enabled extracting valuable information from biomedical literature ; however, only English speakers benefit du...
edit_actions: [ { "type": "R", "before": "biomedical specialties", "after": "medical specialties,", "start_char_pos": 74, "end_char_pos": 96, "major_intent": "style", "raw_intents": [ "style", "style", "style" ] }, { "type": "A", "before": null, "after": ",", "s...
sents_char_pos: [ 0, 266, 410, 510, 810, 941, 1137 ]
domain: arxiv

doc_id: 2005.07202
revision_depth: 2
before_revision: Bidirectional Encoder Representations from Transformers (BERT)models for medical specialties , such as BioBERT and clinicalBERT, have significantly improved in performing biomedical text mining tasks and have enabled extracting valuable information from biomedical literature ; however, only English speakers benefit due...
after_revision: Pre-training large-scale neural language models on raw texts has made a significant contribution to improving transfer learning in natural language processing (NLP). With the introduction of transformer-based language models , such as bidirectional encoder representations from transformers (BERT), the performance of in...
edit_actions: [ { "type": "R", "before": "Bidirectional Encoder Representations from Transformers (BERT)models for medical specialties", "after": "Pre-training large-scale neural language models on raw texts has made a significant contribution to improving transfer learning in natural language processing (NLP). With th...
sents_char_pos: [ 0, 277, 417, 510, 810, 945, 1127, 1263 ]
domain: arxiv

doc_id: 2005.07456
revision_depth: 1
before_revision: Word embeddings represent words in a numeric space in such a way that semantic relations between words are encoded as distances and directions in the vector space. Cross-lingual word embeddings map words from one language to the vector space of another language, or words from multiple languages to the same vector space...
after_revision: Word embeddings represent words in a numeric space so that semantic relations between words are represented as distances and directions in the vector space. Cross-lingual word embeddings transform vector spaces of different languages so that similar words are aligned. This is done by constructing a mapping between vect...
edit_actions: [ { "type": "R", "before": "in such a way", "after": "so", "start_char_pos": 51, "end_char_pos": 64, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before": "encoded", "after": "represented", "start...
sents_char_pos: [ 0, 163, 353, 519, 647, 873 ]
domain: arxiv

doc_id: 2005.12086
revision_depth: 1
before_revision: Text style transfer is the task that generates a sentence by preserving the content of the input sentence and transferring the style. Most existing studies are progressing on non-parallel datasets because parallel datasets are limited and hard to construct. In this work, we introduce a method that follows two stages in...
after_revision: Text style transfer is the task that generates a sentence by preserving the content of the input sentence and transferring the style. Most existing studies are progressing on non-parallel datasets because parallel datasets are limited and hard to construct. In this work, we introduce a method that follows two stages in...
edit_actions: [ { "type": "R", "before": "the", "after": "a", "start_char_pos": 422, "end_char_pos": 425, "major_intent": "fluency", "raw_intents": [ "fluency", "others", "fluency" ] }, { "type": "R", "before": "the", "after": "a", "start_char_pos": 470, "en...
sents_char_pos: [ 0, 133, 257, 343, 437, 548, 596, 683, 746, 835, 954 ]
domain: arxiv

doc_id: 2005.12766
revision_depth: 1
before_revision: Pretrained language models such as BERT, GPT have shown great effectiveness in language understanding. The auxiliary predictive tasks in existing pretraining approaches are mostly defined on tokens, thus may not be able to capture sentence-level semantics very well. To address this issue, we propose CERT: Contrastive s...
after_revision: Pretrained language models such as BERT, GPT have shown great effectiveness in language understanding. The auxiliary predictive tasks in existing pretraining approaches are mostly defined on tokens, thus may not be able to capture sentence-level semantics very well. To address this issue, we propose CERT: Contrastive s...
edit_actions: [ { "type": "R", "before": "three", "after": "11 natural", "start_char_pos": 821, "end_char_pos": 826, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "before": ": CoLA, RTE, a...
sents_char_pos: [ 0, 102, 266, 490, 563, 704, 800 ]
domain: arxiv

doc_id: 2005.12889
revision_depth: 1
before_revision: Few resources represent implicit roles for natural language understanding , and existing studies in NLP only make coarse distinctions between categories of arguments omitted from linguistic form. In this paper , we design a typology for fine-grained implicit argument annotation on top of Universal Conceptual Cognitive ...
after_revision: Predicate-argument structure analysis is a central component in meaning representations of text. The fact that some arguments are not explicitly mentioned in a sentence gives rise to ambiguity in language understanding, and renders it difficult for machines to interpret text correctly. However, only few resources repre...
edit_actions: [ { "type": "R", "before": "Few", "after": "Predicate-argument structure analysis is a central component in meaning representations of text. The fact that some arguments are not explicitly mentioned in a sentence gives rise to ambiguity in language understanding, and renders it difficult for machines to i...
sents_char_pos: [ 0, 195, 380, 491, 646, 806 ]
domain: arxiv

doc_id: 2005.13837
revision_depth: 1
before_revision: One of the most crucial challenges in questionanswering (QA) is the scarcity of labeled data, since it is costly to obtain question-answer(QA) pairs for a target text domain with human annotation. An alternative approach totackle the problem is to use automatically generated QA pairs from either the problem context or ...
after_revision: One of the most crucial challenges in question answering (QA) is the scarcity of labeled data, since it is costly to obtain question-answer(QA) pairs for a target text domain with human annotation. An alternative approach to tackle the problem is to use automatically generated QA pairs from either the problem context o...
edit_actions: [ { "type": "R", "before": "questionanswering", "after": "question answering", "start_char_pos": 38, "end_char_pos": 55, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "totackle", "after": "to ...
sents_char_pos: [ 0, 196, 376, 615, 988 ]
domain: arxiv
2005.14672
1
Transfer learning, particularly approaches that combine multi-task learning with pre-trained contextualized embeddings and fine-tuning, have advanced the field of Natural Language Processing tremendously in recent years. In this paper we present MaChAmp, a toolkit for easy use of fine-tuning BERT-like models in multi-t...
Transfer learning, particularly approaches that combine multi-task learning with pre-trained contextualized embeddings and fine-tuning, have advanced the field of Natural Language Processing tremendously in recent years. In this paper we present MaChAmp, a toolkit for easy fine-tuning of contextualized embeddings in mu...
[ { "type": "D", "before": "use of", "after": null, "start_char_pos": 274, "end_char_pos": 280, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "BERT-like models", "after": "of contextuali...
[ 0, 220, 333 ]
arxiv
2005.14716
1
The average predictability (aka informativity) of a word in context has been shown to condition word duration (Seyfarth, 2014). All else being equal, words that tend to occur in more predictable environments are shorter than words that tend to occur in less predictable environments. One account of the informativity eff...
The average predictability (aka informativity) of a word in context has been shown to condition word duration (Seyfarth, 2014). All else being equal, words that tend to occur in more predictable environments are shorter than words that tend to occur in less predictable environments. One account of the informativity eff...
[ { "type": "R", "before": "word", "after": "probabilistic", "start_char_pos": 368, "end_char_pos": 372, "major_intent": "meaning-changed", "raw_intents": [ "style", "meaning-changed", "meaning-changed" ] }, { "type": "A", "before": null, "after": "men...
[ 0, 127, 283, 430, 533, 682, 902, 1205, 1322, 1461, 1686 ]
arxiv
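The sents_char_pos rows appear to hold sentence boundary offsets into before_revision, starting at 0 (127 above falls right at the end of the first sentence of the Seyfarth abstract). A sketch under that assumption; split_sentences is a hypothetical helper, not dataset code.

def split_sentences(text: str, sents_char_pos: list) -> list:
    """Slice text at the recorded boundaries; drop empty trailing pieces."""
    bounds = list(sents_char_pos)
    if not bounds or bounds[-1] < len(text):
        bounds.append(len(text))
    pieces = (text[s:e].strip() for s, e in zip(bounds, bounds[1:]))
    return [p for p in pieces if p]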
2006.00119
1
We present ExplainIt, a review summarization system centered around opinion explainability: the simple notion of high-level opinions (e.g. "noisy room") being explainable by lower-level ones (e.g., "loud fridge"). ExplainIt utilizes a combination of supervised and unsupervised components to mine the opinion phrasesfro...
The Web is a major resource of both factual and subjective information. While there are significant efforts to organize factual information into knowledge bases, there is much less work on organizing opinions, which are abundant in subjective data, into a structured format. We present ExplainIt, a system that extracts organi...
[ { "type": "A", "before": null, "after": "The Web is a major resource of both factual and subjective information. While there are significant efforts URLanize factual information into knowledge bases, there is much less work URLanizing opinions, which are abundant in subjective data, into a structured fo...
[ 0, 214, 454, 665, 833, 1034 ]
arxiv
2006.00575
1
In this survey, we provide a comprehensive description of recent neural entity linking (EL) systems . We distill their generic architecture that includes candidate generation , entity ranking , and unlinkable mention prediction components. For each of them, we summarize the prominent methods and models, including appro...
In this survey, we provide a comprehensive description of recent neural entity linking (EL) systems developed since 2015 as a result of the "deep learning revolution" in NLP. Our goal is to systemize design features of neural entity linking systems and compare their performances to the best classic methods on the commo...
[ { "type": "R", "before": ". We distill their generic architecture that includes candidate generation , entity ranking , and unlinkable mention prediction components. For", "after": "developed since 2015 as a result of the \"deep learning revolution\" in NLP. Our goal is to systemize design features of n...
[ 0, 101, 239, 387, 561, 813, 909 ]
arxiv
2006.00885
1
As the COVID-19 virus quickly spreads around the world, unfortunately, misinformation related to COVID-19 also gets created and spreads like wild fire. Such misinformation has caused confusion among people, disruptions in society, and even deadly consequences in health problems. To be able to understand, detect, and mi...
As the COVID-19 virus quickly spreads around the world, unfortunately, misinformation related to COVID-19 also gets created and spreads like wild fire. Such misinformation has caused confusion among people, disruptions in society, and even deadly consequences in health problems. To be able to understand, detect, and mi...
[ { "type": "R", "before": "1,896 news, 183,564", "after": "3,235 news, 294,692", "start_char_pos": 742, "end_char_pos": 761, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "b...
[ 0, 151, 279, 437, 726, 854 ]
arxiv
2006.00885
2
As the COVID-19 virus quickly spreads around the world, unfortunately, misinformation related to COVID-19 also gets created and spreads like wild fire. Such misinformation has caused confusion among people, disruptions in society, and even deadly consequences in health problems. To be able to understand, detect, and mi...
As the COVID-19 virus quickly spreads around the world, unfortunately, misinformation related to COVID-19 also gets created and spreads like wild fire. Such misinformation has caused confusion among people, disruptions in society, and even deadly consequences in health problems. To be able to understand, detect, and mi...
[ { "type": "R", "before": "3,235 news, 294,692", "after": "4,251 news, 296,000", "start_char_pos": 742, "end_char_pos": 761, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "meaning-changed", "meaning-changed" ] }, { "type": "R", "b...
[ 0, 151, 279, 437, 726, 854 ]
arxiv
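doc_id 2006.00885 appears twice, at revision_depth 1 and 2, and depth 1's after_revision lines up with depth 2's before_revision, so consecutive rows form a revision history for one abstract. A sketch of grouping records into such chains, assuming each row has been parsed into a dict keyed by the column names; build_revision_chains is our name.

from collections import defaultdict

def build_revision_chains(records: list) -> dict:
    """Group rows by doc_id and order each group by revision_depth."""
    chains = defaultdict(list)
    for rec in records:
        chains[rec["doc_id"]].append(rec)
    for doc_id in chains:
        # Depth k's after_revision should match depth k+1's before_revision;
        # that invariant can be asserted here when full texts are available.
        chains[doc_id].sort(key=lambda r: int(r["revision_depth"]))
    return dict(chains)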
2006.00995
1
A growing body of work makes use of probing in order to investigate the working of neural models, often considered black boxes. Recently, an ongoing debate emerged surrounding the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results, and offe...
A growing body of work makes use of probing in order to investigate the working of neural models, often considered black boxes. Recently, an ongoing debate emerged surrounding the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results, and offe...
[ { "type": "R", "before": "is focused", "after": "focuses", "start_char_pos": 350, "end_char_pos": 360, "major_intent": "style", "raw_intents": [ "clarity", "style", "style" ] }, { "type": "D", "before": "now", "after": null, "start_char_pos": 697...
[ 0, 127, 216, 442, 651, 811, 887 ]
arxiv
2006.00995
2
A growing body of work makes use of probing in order to investigate the working of neural models, often considered black boxes. Recently, an ongoing debate emerged surrounding the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results , and off...
A growing body of work makes use of probing to investigate the working of neural models, often considered black boxes. Recently, an ongoing debate emerged surrounding the limitations of the probing paradigm. In this work, we point out the inability to infer behavioral conclusions from probing results and offer an alter...
[ { "type": "D", "before": "in order", "after": null, "start_char_pos": 44, "end_char_pos": 52, "major_intent": "coherence", "raw_intents": [ "coherence", "coherence", "clarity" ] }, { "type": "D", "before": ",", "after": null, "start_char_pos": 31...
[ 0, 127, 216, 440, 649, 805, 881 ]
arxiv
2006.01095
1
Artificial neural networks ( ANNS have shown much empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNS and neural populati...
Artificial neural networks ( ANNs) have shown much empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNs and neural populat...
[ { "type": "R", "before": "ANNS", "after": "ANNs)", "start_char_pos": 29, "end_char_pos": 33, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "meaning-changed" ] }, { "type": "R", "before": "ANNS", "after": "ANNs", "start_char_po...
[ 0, 132, 337, 605, 837, 1077, 1271 ]
arxiv
2006.01095
2
Artificial neural networks ( ANNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representation extracted from task-optimized ANNs and neural popula...
Deep neural networks ( DNNs ) have shown much empirical success in solving perceptual tasks across various cognitive modalities. While they are only loosely inspired by the biological brain, recent studies report considerable similarities between representations extracted from task-optimized DNNs and neural populations...
[ { "type": "R", "before": "Artificial", "after": "Deep", "start_char_pos": 0, "end_char_pos": 10, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "others", "meaning-changed" ] }, { "type": "R", "before": "ANNs", "after": "DNNs",...
[ 0, 134, 339, 608, 842, 1082, 1277 ]
arxiv
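In every record shown, major_intent coincides with the majority label among the three raw_intents annotations (e.g., ["clarity", "style", "style"] yields "style"). A sketch of that aggregation, assuming a strict majority is intended; how the dataset itself breaks ties, if any occur, is not visible here.

from collections import Counter
from typing import Optional

def majority_intent(raw_intents: list) -> Optional[str]:
    """Return the label chosen by more than half the annotators, else None."""
    label, count = Counter(raw_intents).most_common(1)[0]
    return label if count > len(raw_intents) / 2 else None

assert majority_intent(["clarity", "style", "style"]) == "style"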
2006.02163
1
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply these principles differently. This work introduces another component to this framework : Multi-Agent Cross-translated Diversification (MACD)...
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train th...
[ { "type": "R", "before": "these principles differently. This work introduces another component to this framework : Multi-Agent Cross-translated Diversification (MACD). The method trains multiple UMT agents and then translates monolingual data back and forth using non-duplicative agents to acquire synthetic ...
[ 0, 209, 321, 545, 654, 844, 948 ]
arxiv
2006.02163
2
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train th...
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train th...
[ { "type": "R", "before": "it boosts the performance of the standard UMT methods by 1.5-2.0 BLEU. In particular, in", "after": "CBD achieves the state of the art in the", "start_char_pos": 693, "end_char_pos": 781, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed...
[ 0, 197, 334, 413, 622, 672, 763, 948, 1040 ]
arxiv
2006.02876
1
Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The quality of the backward system - which is trained on the available parallel data and used for t...
Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The target-side side monolingual data has been used in the back-translation approach to improve the...
[ { "type": "R", "before": "quality of the backward system - which is trained on the available parallel data and used for the", "after": "target-side side monolingual data has been used in the", "start_char_pos": 225, "end_char_pos": 322, "major_intent": "clarity", "raw_intents": [ "...
[ 0, 220, 422, 621, 783, 961 ]
null
2006.02876
2
Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The target-side side monolingual data has been used in the back-translation approach to improve the...
Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The quality of the backward system - which is trained on the available parallel data and used for t...
[ { "type": "R", "before": "target-side side monolingual data has been used in the", "after": "quality of the backward system - which is trained on the available parallel data and used for the", "start_char_pos": 225, "end_char_pos": 279, "major_intent": "meaning-changed", "raw_intents": [...
[ 0, 220, 356, 554, 674, 799, 940, 1102 ]
null
2006.03644
1
Stance detection on social media is an emerging opinion mining paradigm for various social and political applications wheresentiment analysis might be seen sub-optimal. This paper surveys the work on stance detection and situates its usage withincurrent opinion mining techniques in social media. An exhaustive review of...
Stance detection on social media is an emerging opinion mining paradigm for various social and political applications where sentiment analysis might be sub-optimal. This paper surveys the work on stance detection and situates its usage within current opinion mining techniques in social media. An exhaustive review of st...
[ { "type": "R", "before": "wheresentiment", "after": "where sentiment", "start_char_pos": 118, "end_char_pos": 132, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "D", "before": "seen", "after": null, "s...
[ 0, 168, 296, 535, 683, 804 ]
arxiv
2006.03654
1
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa(Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techni...
Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper wepropose a new model architecture DeBERTa(Decoding-enhanced BERT with dis-entangled attention) that improves the BERT and RoBERTa models using two novel techni...
[ { "type": "R", "before": "we propose", "after": "wepropose", "start_char_pos": 160, "end_char_pos": 170, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "R", "before": "disentangled", "after": "dis-entangled", ...
[ 0, 145, 325, 598, 728, 858, 1143 ]
arxiv
2006.04315
1
Visual Question Answering (VQA ) models tend to rely on the language bias and thus fail to learn the reasoning from visual knowledge , which is however the original intention of VQA . In this paper, we propose a novel cause-effect look at the language bias , where the bias is formulated as the direct effect of questio...
Recent VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language . In this paper, we investigate how to capture and mitigate language bias in VQA. Motivated by causal effects, we proposed a novel counterfactual inference framew...
[ { "type": "R", "before": "Visual Question Answering (VQA ) models", "after": "Recent VQA models may", "start_char_pos": 0, "end_char_pos": 39, "major_intent": "clarity", "raw_intents": [ "clarity", "style", "clarity" ] }, { "type": "R", "before": "the la...
[ 0, 183, 366, 472 ]
arxiv
2006.04315
2
Recent VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language. In this paper, we investigate how to capture and mitigate language bias in VQA. Motivated by causal effects, we proposed a novel counterfactual inference framew...
VQA models may tend to rely on language bias as a shortcut and thus fail to sufficiently learn the multi-modal knowledge from both vision and language. Recent debiasing methods proposed to exclude the language prior during inference. However, they fail to disentangle the "good" language context and "bad" language bias...
[ { "type": "D", "before": "Recent", "after": null, "start_char_pos": 0, "end_char_pos": 6, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "Recent debiasing methods proposed to e...
[ 0, 158, 239, 523 ]
arxiv
2006.06814
1
Designing task-oriented dialogue systems is a challenging research topic, since it needs not only to generate utterances fulfilling user requests but also to guarantee the comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL), however, the bias in annotated system utteranc...
Designing task-oriented dialogue systems is a challenging research topic, since it needs not only to generate utterances fulfilling user requests but also to guarantee the comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL), however, the bias in annotated system utteranc...
[ { "type": "R", "before": "our", "after": "o gur", "start_char_pos": 671, "end_char_pos": 674, "major_intent": "others", "raw_intents": [ "others", "others", "others" ] }, { "type": "A", "before": null, "after": ", where the latent dialogue act is app...
[ 0, 190, 347, 487, 667, 835, 1117, 1251 ]
arxiv
2006.06814
2
Designing task-oriented dialogue systems is a challenging research topic, since it needs not only to generate utterances fulfilling user requests but also to guarantee the comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL), however, the bias in annotated system utteranc...
Designing task-oriented dialogue systems is a challenging research topic, since it needs not only to generate utterances fulfilling user requests but also to guarantee the comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL), however, the bias in annotated system utteranc...
[ { "type": "R", "before": "o gur", "after": "our", "start_char_pos": 671, "end_char_pos": 676, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "R", "before": "ability of explanation", "after": "explanability ...
[ 0, 190, 347, 487, 667, 934, 1155, 1289, 1552 ]
arxiv
2006.10598
1
We present Shapeshifter Networks (SSNs), a flexible neural network framework that improves performance and reduces memory requirements on a diverse set of scenarios over standard neural networks. Our approach is based on the observation that many neural networks are severely overparameterized, resulting in significant ...
Fitting a model into GPU memory during training is an increasing concern as models continue to grow. To address this issue, we present Shapeshifter Networks (SSNs), a flexible neural network framework that decouples layers from model weights, enabling us to implement any neural network with an arbitrary number of param...
[ { "type": "R", "before": "We", "after": "Fitting a model into GPU memory during training is an increasing concern as models continue to grow. To address this issue, we", "start_char_pos": 0, "end_char_pos": 2, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", ...
[ 0, 195, 397, 561, 677, 800, 964, 1196 ]
arxiv
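Where edit_actions are truncated, actions in the same schema can be approximated from a before/after pair with a character-level diff; difflib's opcode tags map onto the types used here (replace -> R, insert -> A, delete -> D). A sketch, with the caveat that the recovered offsets are character-aligned and may not match the annotated spans, which look word-aligned.

import difflib

def diff_edit_actions(before: str, after: str) -> list:
    """Recover R/A/D actions with char offsets from a before/after pair."""
    type_map = {"replace": "R", "insert": "A", "delete": "D"}
    actions = []
    matcher = difflib.SequenceMatcher(a=before, b=after, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            continue
        actions.append({
            "type": type_map[tag],
            "before": before[i1:i2] or None,  # null for pure insertions (A)
            "after": after[j1:j2] or None,    # null for pure deletions (D)
            "start_char_pos": i1,
            "end_char_pos": i2,
        })
    return actions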
2006.10598
2
Fitting a model into GPU memory during training is an increasing concern as models continue to grow. To address this issue, we present Shapeshifter Networks (SSNs), a flexible neural network framework that decouples layers from model weights, enabling us to implement any neural network with an arbitrary number of param...
Fitting a model into GPU memory during training is an increasing concern as models continue to grow. Parameter sharing can reduce memory requirements, but existing methods only share parameters between identical layers, limiting their impact. This paper removes these restrictions with a novel task called Neural Paramet...
[ { "type": "R", "before": "To address this issue, we present Shapeshifter Networks (SSNs), a flexible neural network framework that decouples layers from model weights, enabling us to implement any neural network with an arbitrary number of parameters . In SSNseach layer obtains weights from a parameter stor...
[ 0, 100, 327, 446, 567, 676, 934 ]
arxiv
2006.11477
1
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a qu...
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a qu...
[ { "type": "R", "before": "We set a new state of the art on both the 100 hour subset of Librispeech as well as on TIMIT phoneme recognition", "after": "Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/noisy test sets", "start_char_pos": 388, "end_char_pos": 500, ...
[ 0, 387, 502, 672, 834, 929, 1028 ]
arxiv
2006.11477
2
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a qu...
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a qu...
[ { "type": "R", "before": "noisy", "after": "other", "start_char_pos": 472, "end_char_pos": 477, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "style" ] }, { "type": "R", "before": "5.2", "after": "4.8", "start_char_pos": 763, ...
[ 0, 387, 488, 660, 821 ]
arxiv
2006.15595
1
How to explicitly encode positional information into neural networks is an important problem in natural language processing. In the Transformer model , the positional information is simply encoded as embedding vectors, which are used in the input layer, or encoded as a bias term in the self-attention module. In this wo...
How to explicitly encode positional information into neural networks is important in learning the representation of natural languages, such as BERT. Based on the Transformer architecture , the positional information is simply encoded as embedding vectors, which are used in the input layer, or encoded as a bias term in ...
[ { "type": "R", "before": "an important problem in natural language processing. In the Transformer model", "after": "important in learning the representation of natural languages, such as BERT. Based on the Transformer architecture", "start_char_pos": 72, "end_char_pos": 149, "major_intent": ...
[ 0, 124, 309, 493, 569, 730, 913, 1070, 1274, 1390 ]
arxiv
2007.00576
1
To combat COVID-19, clinicians and scientists all need to digest the vast amount of relevant biomedical knowledge in literature to understand the disease mechanism and the related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG, which leverages novel semantic r...
To combat COVID-19, both clinicians and scientists need to digest the vast amount of relevant biomedical knowledge in literature to understand the disease mechanism and the related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG to extract fine-grained multimedi...
[ { "type": "A", "before": null, "after": "both", "start_char_pos": 20, "end_char_pos": 20, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "D", "before": "all", "after": null, "start_char_pos": 47, ...
[ 0, 202, 555, 688, 795 ]
arxiv
2007.00576
2
To combat COVID-19, both clinicians and scientists need to digest the vast amount of relevant biomedical knowledge in literature to understand the disease mechanism and the related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG to extract fine-grained m...
To combat COVID-19, both clinicians and scientists need to digest vast amounts of relevant biomedical knowledge in scientific literature to understand the disease mechanism and related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG to extract fine-grained multi...
[ { "type": "R", "before": "the vast amount", "after": "vast amounts", "start_char_pos": 66, "end_char_pos": 81, "major_intent": "fluency", "raw_intents": [ "fluency", "fluency", "fluency" ] }, { "type": "A", "before": null, "after": "scientific", ...
[ 0, 203, 411, 563, 672 ]
arxiv
2007.04508
1
Using the presence or frequency of keywords is a classic approach in the formal analysis of text, but has the drawback of glossing over the relationality of word meanings. Word embedding models overcome this problem by constructing a standardized meaning space where words are assigned a location based on relations of s...
Using the frequency of keywords is a classic approach in the formal analysis of text, but has the drawback of glossing over the relationality of word meanings. Word embedding models overcome this problem by constructing a standardized and continuous "meaning-space" where words are assigned a location based on relations...
[ { "type": "D", "before": "presence or", "after": null, "start_char_pos": 10, "end_char_pos": 21, "major_intent": "coherence", "raw_intents": [ "coherence", "clarity", "coherence" ] }, { "type": "R", "before": "meaning space", "after": "and continuous...
[ 0, 171, 423, 522, 694, 877 ]
null
2007.06225
1
Motivation: NLP continues improving substantially through auto-regressive and auto-encoding Language Models . These LMsrequire expensive computing resources for self-supervised or un-supervised learning from huge unlabelled text corpora. The information learned is transferred through so-called embeddings to downstream ...
Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs . Here, we trained two auto-regressive language models (Transformer-XL , XLNet) ...
[ { "type": "R", "before": "Motivation: NLP continues improving substantially through auto-regressive and auto-encoding Language Models . These LMsrequire expensive computing resources for self-supervised or un-supervised learning from huge unlabelled text corpora. The information learned is transferred throu...
[ 0, 109, 237, 337, 571, 693, 835, 1124, 1230, 1501, 1768 ]
arxiv
2008.01766
1
Machines show an increasingly broad set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Many algorithms stem from past computational work in psychology, raising the question of whether they understand words as people do. In this paper, we compare how humans and machines repre...
Machines show an increasingly broad set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Many algorithms stem from past computational work in psychology, raising the question of whether they understand words as people do. In this paper, we compare how humans and machines repre...
[ { "type": "R", "before": "use words in order to express", "after": "express through words", "start_char_pos": 630, "end_char_pos": 659, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "D", "before": ",", "afte...
[ 0, 131, 264, 346, 476, 661, 796, 907 ]
arxiv
2008.01766
2
Machines show an increasingly broad set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Many algorithms stem from past computational work in psychology, raising the question of whether they understand words as people do . In this paper , we compare how humans and machines rep...
Machines have achieved a broad and growing set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP). Psychologists have shown increasing interest in such models, comparing their output to psychological judgments such as similarity, association, priming, and comprehension, raising t...
[ { "type": "R", "before": "show an increasingly broad", "after": "have achieved a broad and growing", "start_char_pos": 9, "end_char_pos": 35, "major_intent": "clarity", "raw_intents": [ "meaning-changed", "clarity", "clarity" ] }, { "type": "R", "before"...
[ 0, 131, 265, 348, 478, 654, 788, 900 ]
arxiv
2008.07905
1
Non-autoregressive neural machine translation achieves remarkable inference acceleration compared to autoregressive models. However, current non-autoregressive models still fall behind their autoregressive counterparts in prediction accuracy. We attribute the accuracy gaps to two disadvantages of non-autoregressive mod...
Although non-autoregressive models with one-iteration generation achieve remarkable inference speed-up, they still fall behind their autoregressive counterparts in prediction accuracy. The non-autoregressive models with the best accuracy currently rely on multiple decoding iterations, which largely sacrifice the infere...
[ { "type": "R", "before": "Non-autoregressive neural machine translation achieves remarkable inference acceleration compared to autoregressive models. However, current", "after": "Although", "start_char_pos": 0, "end_char_pos": 140, "major_intent": "coherence", "raw_intents": [ "coh...
[ 0, 123, 242, 422, 468, 902 ]
arxiv
2008.07905
2
Although non-autoregressive models with one-iteration generation achieve remarkable inference speed-up, they still fall behind their autoregressive counterparts in prediction accuracy. The non-autoregressive models with the best accuracy currently rely on multiple decoding iterations, which largely sacrifice the infere...
Recent work on non-autoregressive neural machine translation (NAT) aims at improving the efficiency by parallel decoding without sacrificing the quality. However, existing NAT methods are either inferior to Transformer or require multiple decoding passes, leading to reduced speedup. We propose the Glancing Language Mod...
[ { "type": "R", "before": "Although", "after": "Recent work on", "start_char_pos": 0, "end_char_pos": 8, "major_intent": "clarity", "raw_intents": [ "style", "clarity", "clarity" ] }, { "type": "R", "before": "models with one-iteration generation achieve ...
[ 0, 184, 359, 759 ]
arxiv
2008.11015
1
It is common for people to create different types of charts to explore a multi-dimensional dataset (table). However, to build an intelligent assistant that recommends commonly composed charts, the fundamental problems of "multi-dialect" unification , imbalanced data and open vocabulary exist . In this paper, we propose...
It is common for people to create different types of charts to explore a multi-dimensional dataset (table). However, to build a real-world intelligent assistant that recommends commonly composed charts, it should take the challenges of efficiency , imbalanced data hungry and table context into consideration . In this p...
[ { "type": "R", "before": "an", "after": "a real-world", "start_char_pos": 126, "end_char_pos": 128, "major_intent": "clarity", "raw_intents": [ "coherence", "clarity", "clarity" ] }, { "type": "R", "before": "the fundamental problems of \"multi-dialect\"...
[ 0, 107, 294, 418, 585, 801 ]
arxiv
2008.11015
2
It is common for people to create different types of charts to explore a multi-dimensional dataset (table). However, to build a real-world intelligent assistant that recommends commonly composed charts , it should take the challenges of efficiency, imbalanced data hungry and table context into consideration. In this pa...
It is common for people to create different types of charts to explore a multi-dimensional dataset (table). However, to recommend commonly composed charts in real world, one should take the challenges of efficiency, imbalanced data and table context into consideration. In this paper, we propose Table2Charts framework w...
[ { "type": "R", "before": "build a real-world intelligent assistant that recommends", "after": "recommend", "start_char_pos": 120, "end_char_pos": 176, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "R", "before...
[ 0, 107, 309, 433, 600, 817 ]
arxiv
2008.11608
1
Transformer-based language models have taken many fields in NLP by storm. BERT and its derivatives dominate most of the existing evaluation benchmarks, including those for Word Sense Disambiguation (WSD), thanks to their ability in capturing context-sensitive semantic nuances. However, there is still little knowledge a...
Transformer-based language models have taken many fields in NLP by storm. BERT and its derivatives dominate most of the existing evaluation benchmarks, including those for Word Sense Disambiguation (WSD), thanks to their ability in capturing context-sensitive semantic nuances. However, there is still little knowledge a...
[ { "type": "R", "before": "performs a decent job in capturing", "after": "captures", "start_char_pos": 610, "end_char_pos": 644, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after": "...
[ 0, 73, 277, 410, 552, 750, 958, 1099 ]
arxiv
2008.11608
2
Transformer-based language models have taken many fields in NLP by storm. BERT and its derivatives dominate most of the existing evaluation benchmarks, including those for Word Sense Disambiguation (WSD), thanks to their ability in capturing context-sensitive semantic nuances. However, there is still little knowledge a...
Transformer-based language models have taken many fields in NLP by storm. BERT and its derivatives dominate most of the existing evaluation benchmarks, including those for Word Sense Disambiguation (WSD), thanks to their ability in capturing context-sensitive semantic nuances. However, there is still little knowledge a...
[ { "type": "R", "before": "for", "after": "in", "start_char_pos": 370, "end_char_pos": 373, "major_intent": "fluency", "raw_intents": [ "fluency", "clarity", "fluency" ] }, { "type": "R", "before": "captures", "after": "can accurately capture", "s...
[ 0, 73, 277, 410, 552, 734, 942, 1083, 1351 ]
arxiv
2009.03996
1
Our goal is to construct mathematical operations that combine non-determinism measured from quantum randomness with computational determinism so that non-mechanistic behavior is preserved in the computation. Formally, some results about operations applied to computably enumerable (c.e.) and bi-immune sets are proven he...
Our goal is to construct mathematical operations that combine indeterminism measured from quantum randomness with computational determinism so that non-mechanistic behavior is preserved in the computation. Formally, some results about operations applied to computably enumerable (c.e.) and bi-immune sets are proven here...
[ { "type": "R", "before": "non-determinism", "after": "indeterminism", "start_char_pos": 62, "end_char_pos": 77, "major_intent": "fluency", "raw_intents": [ "clarity", "fluency", "fluency" ] }, { "type": "R", "before": "objective is for the operations to"...
[ 0, 207, 390, 585 ]
arxiv
2009.03996
2
Our goal is to construct mathematical operations that combine indeterminism measured from quantum randomness with computational determinism so that non-mechanistic behavior is preserved in the computation. Formally, some results about operations applied to computably enumerable (c.e.) and bi-immune sets are proven here...
Our goal is to construct mathematical operations that combine indeterminism measured from quantum randomness with computational determinism so that non-mechanistic behavior is preserved in the computation. Formally, some results about operations applied to computably enumerable (c.e.) and bi-immune sets are proven here...
[ { "type": "R", "before": "operations", "after": "objective is for the operations to", "start_char_pos": 332, "end_char_pos": 342, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "clarity" ] }, { "type": "A", "before": null, "after":...
[ 0, 205, 364, 704 ]
null
2009.05166
1
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic...
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic...
[ { "type": "R", "before": "(77.0 on average) on the", "after": "on two", "start_char_pos": 1520, "end_char_pos": 1544, "major_intent": "meaning-changed", "raw_intents": [ "meaning-changed", "clarity", "meaning-changed" ] }, { "type": "R", "before": "bench...
[ 0, 150, 414, 533, 836, 973, 1099, 1240, 1443 ]
arxiv
2009.05166
2
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic...
Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and XLM, have achieved great success in cross-lingual representation learning. However, when applied to zero-shot cross-lingual transfer tasks, most existing methods use only single-language input for LM finetuning, without leveraging the intrinsic...
[ { "type": "R", "before": "is", "after": "proves", "start_char_pos": 378, "end_char_pos": 380, "major_intent": "clarity", "raw_intents": [ "clarity", "clarity", "coherence" ] }, { "type": "R", "before": "cross-lingual", "after": "cross-language", ...
[ 0, 150, 414, 533, 836, 973, 1099, 1240, 1444 ]
arxiv
2009.05169
1
We propose a novel method to sparsify attention in the Transformer model by learning to select the most-informative token representations , thus leveraging the model's information bottleneck with twofold strength. A careful analysis shows that the contextualization of encoded representations in our model is significant...
We propose a novel method to sparsify attention in the Transformer model by learning to select the most-informative token representations during the training process, thus focusing on task-specific parts of the input. A reduction of quadratic time and memory complexity to sublinear was achieved due to a robust differen...
[ { "type": "R", "before": ", thus leveraging the model's information bottleneck with twofold strength. A careful analysis shows that the contextualization of encoded representations in our model is significantly more effective than in the original Transformer. We achieve a notable reduction in memory usage d...
[ 0, 213, 371 ]
arxiv