Dataset Viewer: ImanAndrea/citation_annotations (auto-converted to Parquet)
Schema (6 columns):

file  : string, 58 distinct values (the source paper, e.g. paper_11.txt)
start : int64, 14 to 8.27k (span start; presumably a character offset into the source file)
end   : int64, 49 to 8.32k (span end; presumably a character offset into the source file)
label : string, 4 distinct values (Coherence, Unsupported claim, Lacks synthesis, Format)
user  : string, 4 distinct values (the annotator; this preview shows Ed and Ekaterina)
text  : string, 2 to 4.16k characters (the annotated span)

Each preview row below is rendered as "file | start–end | label | user", followed by the annotated text. The text is quoted verbatim from the source papers, so typos, truncations, and citation glitches inside it are part of the data (often the very thing the Format label marks), not errors in this page.
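Since the dataset is auto-converted to Parquet, the records can be pulled down with the `datasets` library. A minimal sketch; the repo id is taken from this page's title, and the split name "train" is an assumption, since the preview does not show split names:

```python
from datasets import load_dataset

# Repo id from this page; split name "train" is an assumption.
ds = load_dataset("ImanAndrea/citation_annotations", split="train")

print(ds.features)  # file, start, end, label, user, text
print(ds[0]["file"], ds[0]["label"], ds[0]["user"])
print(ds[0]["text"][:80])
```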

paper_11.txt | 78–562 | Coherence | Ed
Cross-lingual information retrieval (CLIR) (Braschler et al., 1999;Shakery and Zhai, 2013;Jiang et al., 2020;Asai et al., 2021a), for example, can find relevant text in a high-resource language such as English even when the query is posed in a different, possibly low-resource, language. In this work, we develop useful CLIR models for this constrained, yet important, setting where a retrieval corpus is available only in a single high-resource language (English in our experiments).

paper_11.txt | 797–960 | Unsupported claim | Ed
alternative end-to-end approach that can tackle the problem purely cross-lingually, i.e., without involving MT, would clearly be more efficient and cost-effective

paper_11.txt | 2,235–2,456 | Lacks synthesis | Ed
KD (Hinton et al., 2014) is a powerful supervision technique typically used to distill the knowledge of a large teacher model about some task into a smaller student model (Mukherjee and Awadallah, 2020;Turc et al., 2020)

paper_13.txt | 14–444 | Lacks synthesis | Ed
Few-shot learning is the problem of learning classifiers with only a few training examples. Zero-shot learning (Larochelle et al., 2008), also known as dataless classification (Chang et al., 2008), is the extreme case, in which no labeled data is used. For text data, this is usually accomplished by representing the labels of the task in a textual form, which can either be the name of the label or a concise textual description.

paper_13.txt | 1,321–2,033 | Lacks synthesis | Ed
These models embed both input and label texts into a common vector space. The similarity of the two items can then be computed using a similarity function such as the dot product. The advantage is that input and label text are encoded independently, which means that the label embeddings can be pre-computed. Therefore, at inference time, only a single call to the model per input is needed. In contrast, the models typically applied in the entailment approach are Cross Attention (CA) models which need to be executed for every combination of text and label. On the other hand, they allow for interaction between the tokens of label and input, so that in theory they should be superior in classification accurac

paper_13.txt | 1,713–1,880 | Unsupported claim | Ed
In contrast, the models typically applied in the entailment approach are Cross Attention (CA) models which need to be executed for every combination of text and label.

paper_13.txt | 3,365–3,395 | Unsupported claim | Ed
In contrast to most prior work

paper_14.txt | 182–310 | Unsupported claim | Ed
Unfortunately, for many languages, and especially low-resource languages, such taskspecific labelled data is often not available

paper_14.txt | 2,549–2,644 | Unsupported claim | Ed
as this is the only task for which high-quality data is available in a large number of language

paper_14.txt | 2,833–2,980 | Unsupported claim | Ed
a base understanding of syntactic structure in both the source and target language is necessary for any meaningful natural language processing task

paper_15.txt | 897–902 | Format | Ed
2021)

paper_16.txt | 14–414 | Coherence | Ed
To facilitate the study of text summarization, earlier datasets are mostly in the news domain with relatively short input passages, such as NYT (Sandhaus, 2008), Gigaword (Napoles et al., 2012), CNN/Daily Mail (Hermann et al., 2015), NEWSROOM (Grusky et al., 2018) and XSUM (Narayan et al., 2018). Datasets for long docu-ments include Sharma et al. (2019), Cohan et al. (2018), andFisas et al. (2016)

paper_16.txt | 14–414 | Lacks synthesis | Ed
To facilitate the study of text summarization, earlier datasets are mostly in the news domain with relatively short input passages, such as NYT (Sandhaus, 2008), Gigaword (Napoles et al., 2012), CNN/Daily Mail (Hermann et al., 2015), NEWSROOM (Grusky et al., 2018) and XSUM (Narayan et al., 2018). Datasets for long docu-ments include Sharma et al. (2019), Cohan et al. (2018), andFisas et al. (2016)

paper_16.txt | 713–1,264 | Lacks synthesis | Ed
Researchers recently explore the peer review domain data for a few tasks, such as PeerRead (Kang et al., 2018) for paper decision predictions, AM-PERE for proposition classification in reviews, and RR (Cheng et al., 2020) for paired-argument extraction from review-rebuttal pairs. Additionally, a meta-review dataset is introduced by Bhatia et al. (2020) without any annotation. There are also some explorations on research articles (Teufel et al., 1999;Liakata et al., 2010;Lauscher et al., 2018), which differ in nature from the peer review domain.

paper_17.txt | 14–1,286 | Lacks synthesis | Ed
Fully supervised event extraction. Event extraction has been studied for over a decade (Ahn, 2006;Ji and Grishman, 2008) and most traditional event extraction works follow the fully supervised setting (Nguyen et al., 2016;Sha et al., 2018;Nguyen and Nguyen, 2019;Yang et al., 2019;Lin et al., 2020;Li et al., 2020). Many of them use classification-based models and use pipeline-style frameworks to extract events (Nguyen et al., 2016;Yang et al., 2019;Wadden et al., 2019). To better leverage shared knowledge in event triggers and arguments, some works propose to incorporate global features to jointly decide triggers and arguments (Lin et al., 2020;Li et al., 2013;Yang and Mitchell, 2016). Recently, few generation-based event extraction models have been proposed. TANL (Paolini et al., 2021) treats event extraction as translation tasks between augmented natural languages. Their predicted targetaugmented language embed labels into the input passage via using brackets and vertical bar symbols, hindering the model from fully leveraging label semantics. BART-Gen is also a generation-based model focusing on documentlevel event argument extraction. Yet, similar to TANL, they solve event extraction with a pipeline, which prevents knowledge sharing across subtasks.

paper_17.txt | 1,726–2,044 | Lacks synthesis | Ed
Liu et al. (2020) uses a machine reading comprehension formulation to conduct event extraction in a low-resource regime. Text2Event (Lu et al., 2021), a sequence-to-structure generation paradigm, first presents events in a linearized format, and then trains a generative model to generate the linearized event sequence

paper_17.txt | 1,726–2,044 | Coherence | Ed
Liu et al. (2020) uses a machine reading comprehension formulation to conduct event extraction in a low-resource regime. Text2Event (Lu et al., 2021), a sequence-to-structure generation paradigm, first presents events in a linearized format, and then trains a generative model to generate the linearized event sequence

paper_18.txt | 952–970 | Format | Ed
(Li et al., 2020a;

paper_18.txt | 3,495–3,574 | Unsupported claim | Ed
three large-scale benchmark datasets (OntoNotes V4.0, OntoNotes V5.0, and MSRA)

paper_19.txt | 1,566–1,654 | Unsupported claim | Ed
shown promising for AL in NLP due to its good qualitative and computational performance

paper_19.txt | 1,801–1,824 | Format | Ed
Shelmanov et al. (2021

paper_20.txt | 252–631 | Coherence | Ed
Following Chen et al. (2020c), other works adopt PLMs for few-shot D2T generation (Chang et al., 2021b;Su et al., 2021a). Kale and Rastogi (2020b) and Ribeiro et al. (2020) showed that PLMs using linearized representations of data can outperform graph neural networks on graph-to-text datasets, recently surpassed again by graph-based models (Ke et al., 2021;Chen et al., 2020a)

paper_20.txt | 1,781–1,797 | Format | Ed
Recently, have

paper_20.txt | 3,514–3,533 | Format | Ed
Jiang et al., 2020)

paper_37.txt | 1,213–1,280 | Format | Ed
Radford et al., 2021;Schick and Schütze, 2020a,b;Brown et al., 2020

paper_37.txt | 1,544–1,572 | Format | Ed
Schick and Schütze, 2020a,b)

paper_37.txt | 992–1,071 | Unsupported claim | Ed
they are impractical to use in real-world applications due to their model sizes

paper_37.txt | 1,100–1,692 | Lacks synthesis | Ed
Providing prompts or task descriptions play an vital role in improving pre-trained language models in many tasks Radford et al., 2021;Schick and Schütze, 2020a,b;Brown et al., 2020). Among them, GPT models (Radford et al., 2019;Brown et al., 2020) achieved great success in prompting or task demonstrations in NLP tasks. In light of this direction, prompt-based approaches improve small pre-trained models in few-shot text classification tasks Schick and Schütze, 2020a,b). CLIP (Radford et al., 2021) also explores prompt templates for image classification which affect zero-shot performance

paper_37.txt | 49–932 | Lacks synthesis | Ed
Recently, several few-shot learners on vision-language tasks were proposed including GPT (Radford et al., 2019;Brown et al., 2020), Frozen (Tsimpoukelli et al., 2021), PICa , and SimVLM . Frozen (Tsimpoukelli et al., 2021) is a large language model based on GPT-2 (Radford et al., 2019), and is transformed into a multimodal few-shot learner by extending the soft prompting to incorporate a set of images and text. Their approach shows the fewshot capability on visual question answering and image classification tasks. Similarly, PICa uses GPT-3 (Brown et al., 2020) to solve VQA tasks in a few-shot manner by providing a few in-context VQA examples. It converts images into textual descriptions so that GPT-3 can understand the images. SimVLM is trained with prefix language modeling on weakly-supervised datasets. It demonstrates its effectiveness on a zero-shot captioning task

paper_38.txt | 24–794 | Lacks synthesis | Ed
pre-training a transformer model on a large corpus with language modeling tasks and finetuning it on different downstream tasks has become the main transfer learning paradigm in natural language processing (Devlin et al., 2019). Notably, this paradigm requires updating and storing all the model parameters for every downstream task. As the model size proliferates (e.g., 330M parameters for BERT (Devlin et al., 2019) and 175B for GPT-3 (Brown et al., 2020)), it becomes computationally expensive and challenging to fine-tune the entire pre-trained language model (LM). Thus, it is natural to ask the question of whether we can transfer the knowledge of a pre-trained LM into downstream tasks by tuning only a small portion of its parameters with most of them freezing.

paper_38.txt | 872–1,390 | Lacks synthesis | Ed
One line of research (Li and Liang, 2021) suggests to augment the model with a few small trainable mod-ules and freeze the original transformer weight. Take Adapter (Houlsby et al., 2019;Pfeiffer et al., 2020a,b) and Compacter (Mahabadi et al., 2021) for example, both of them insert a small set of additional modules between each transformer layer. During fine-tuning, only these additional and taskspecific modules are trained, reducing the trainable parameters to ∼ 1-3% of the original transformer model per task.

paper_38.txt | 1,434–1,921 | Lacks synthesis | Ed
The GPT-3 models (Brown et al., 2020;Schick and Schütze, 2020) find that with proper manual prompts, a pre-trained LM can successfully match the fine-tuning performance of BERT models. LM-BFF (Gao et al., 2020), EFL (Wang et al., 2021), and AutoPrompt (Shin et al., 2020) further this direction by insert prompts in the input embedding layer. However, these methods rely on grid-search for a natural language-based prompt from a large search space, resulting in difficulties to optimize.

paper_38.txt | 2,461–2,602 | Unsupported claim | Ed
all existing prompt-tuning methods have thus far focused on task-specific prompts, making them incompatible with the traditional LM objective

paper_38.txt | 2,617–2,711 | Unsupported claim | Ed
it is unlikely to see many different sentences with the same prefix in the pre-training corpus

paper_39.txt | 129–213 | Format | Ed
(Eric et al., 2017;Wu et al., 2019; and collections of largescale annotation corpora

paper_39.txt | 355–377 | Format | Ed
(El Asri et al., 2017

paper_39.txt | 530–533 | Unsupported claim | Ed
SGD

paper_39.txt | 874–909 | Format | Ed
Quan et al., 2020;Lin et al., 2021)

paper_39.txt | 998–1,151 | Unsupported claim | Ed
vast majority of existing multilingual ToD datasets do not consider the real use cases when using a ToD system to search for local entities in a country.

paper_40.txt | 396–437 | Format | Ed
[Levy et al., 2017, Elsahar et al., 2018

paper_41.txt | 1,890–2,370 | Lacks synthesis | Ed
Previous work has shown that SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) have annotation artifacts (e.g., negation is a strong indicator of contradictions) (Gururangan et al., 2018). The literature has also shown that simple adversarial attacks including negation cues are very effective (Naik et al., 2018;Wallace et al., 2019). Kovatchev et al. (2019) analyze 11 paraphrasing systems and show that they obtain substantially worse results when negation is present

paper_41.txt | 3,201–3,272 | Format | Ed
Bar-Haim et al., 2006;Giampiccolo et al., 2007;Bentivogli et al., 2009)

paper_43.txt | 465–524 | Unsupported claim | Ed
Machine Translation (MT) is the mainstream approach for GEC

paper_43.txt | 918–966 | Unsupported claim | Ed
recent powerful Transformer-based Seq2Seq model

paper_100.txt | 594–825 | Coherence | Ekaterina
Zhang et al. (2019) improves an LSTM- based encoder-decoder model with online vocabulary adaptation. For abbreviated pinyin, CoCAT (Huang et al., 2015) uses machine translation technology to reduce the number of the typing letters.

paper_100.txt | 968–1,179 | Coherence | Ekaterina
Our work differs from existing works in that we are the first one to exploit GPT and verify the pros and cons of GPT in different situations. In addition, there are some works handling pinyin with typing errors.

paper_100.txt | 1,763–2,087 | Coherence | Ekaterina
Sun et al. (2021) propose a general-purpose Chinese BERT with new embedding layers to inject pinyin and glyph information of characters. There are also task-specific BERT models, especially for the task of grammatical error correction since an important type of error is caused by characters pronounced with the same pinyin.

paper_11.txt | 1,025–1,042 | Unsupported claim | Ekaterina
multilingual BERT

paper_11.txt | 1,485–1,487 | Format | Ekaterina
1

paper_13.txt | 14–105 | Unsupported claim | Ekaterina
Few-shot learning is the problem of learning classifiers with only a few training examples.

paper_13.txt | 267–444 | Unsupported claim | Ekaterina
For text data, this is usually accomplished by representing the labels of the task in a textual form, which can either be the name of the label or a concise textual description.

paper_13.txt | 1,713–1,880 | Unsupported claim | Ekaterina
In contrast, the models typically applied in the entailment approach are Cross Attention (CA) models which need to be executed for every combination of text and label.

paper_13.txt | 1,881–2,119 | Unsupported claim | Ekaterina
On the other hand, they allow for interaction between the tokens of label and input, so that in theory they should be superior in classification accuracy. However, in this work we show that in practice, the difference in quality is small.

paper_13.txt | 2,121–2,383 | Unsupported claim | Ekaterina
Both CA and SNs also support the few-shot learning setup by fine-tuning the models on a small number of labeled examples. This is usually done by updating all parameters of the model, which in turn makes it impossible to share the models between different tasks.

paper_13.txt | 3,365–3,483 | Unsupported claim | Ekaterina
In contrast to most prior work, we also show that these results can also be achieved for languages other than English.

paper_14.txt | 14–181 | Unsupported claim | Ekaterina
At present, for a large majority of natural language processing tasks, the most successful approach is fine-tuning pre-trained models with task-specific labelled data.

paper_15.txt | 649–715 | Unsupported claim | Ekaterina
Researchers also realize that the vision modality maybe redundant.

paper_15.txt | 897–902 | Format | Ekaterina
2021)

paper_15.txt | 864–956 | Coherence | Ekaterina
Encouraging results appeared in 2021) proposed a cross-lingual visual pretraining approach.

paper_15.txt | 14–1,184 | Lacks synthesis | Ekaterina
Multimodal machine translation is a cross-domain task in the filed of machine translation. Early attempts mainly focused on enhancing the MMT model by better incorporation of the vision features (Calixto and Liu, 2017;Elliott and Kádár, 2017;Delbrouck and Dupont, 2017). However, directly encoding the whole image feature brings additional noise to the text (Yao and Wan, 2020;Liu et al., 2021a). To address the above issue, Yao and Wan (2020) proposed a multimodal self-attention to consider the relative difference of information between two modalities. Similarly, Liu et al. (2021a) used a Gumbel Softmax to achieve the same goal. Researchers also realize that the vision modality maybe redundant. Irrelevant images have little impact on the translation quality, and no significant BLEU drop is observed even the image is absent (Elliott, 2018). Encouraging results appeared in 2021) proposed a cross-lingual visual pretraining approach. In this work, we make a systematic study on whether stronger vision features are helpful. We also extend the research to enhanced features, such as object-detection and image captioning, which is complementary to previous work.

paper_16.txt | 856–906 | Unsupported claim | Ekaterina
AM-PERE for proposition classification in reviews

paper_16.txt | 1,093–1,621 | Coherence | Ekaterina
There are also some explorations on research articles (Teufel et al., 1999;Liakata et al., 2010;Lauscher et al., 2018), which differ in nature from the peer review domain. A wide range of control perspectives has been explored in controllable generation, including style control (e.g., sentiments (Duan et al., 2020), politeness (Madaan et al., 2020), formality , domains (Takeno et al., 2017) and persona ) and content control (e.g., length (Duan et al., 2020), entities (Fan et al., 2018a), and keywords (Tang et al., 2019)).

paper_17.txt | 1,074–1,169 | Unsupported claim | Ekaterina
BART-Gen is also a generation-based model focusing on documentlevel event argument extraction.

paper_17.txt | 1,397–1,586 | Unsupported claim | Ekaterina
However, their designs are not specific for low-resource scenarios, hence, these models can not enjoy all the benefits that DEGREE obtains for low-resource event extraction at the same time

paper_17.txt | 1,726–2,044 | Coherence | Ekaterina
Liu et al. (2020) uses a machine reading comprehension formulation to conduct event extraction in a low-resource regime. Text2Event (Lu et al., 2021), a sequence-to-structure generation paradigm, first presents events in a linearized format, and then trains a generative model to generate the linearized event sequence

paper_17.txt | 2,261–2,403 | Coherence | Ekaterina
Another thread of works are using meta-learning to deal with the less label challenge (Deng et al., 2020;Shen et al., 2021;Cong et al., 2021).

paper_17.txt | 1,619–2,259 | Lacks synthesis | Ekaterina
Low-resource event extraction. It has been a rising interest in event extraction under less data scenario. Liu et al. (2020) uses a machine reading comprehension formulation to conduct event extraction in a low-resource regime. Text2Event (Lu et al., 2021), a sequence-to-structure generation paradigm, first presents events in a linearized format, and then trains a generative model to generate the linearized event sequence. Text2Event's unnatural output format hinders the model from fully leveraging pre-trained knowledge. Hence, their model falls short on the cases with only extremely low data being available (as shown in Section 3).

paper_18.txt | 952–970 | Format | Ekaterina
(Li et al., 2020a;

paper_18.txt | 1,391–1,496 | Unsupported claim | Ekaterina
Nevertheless, building the lexicon is time-consuming and the quality of the lexicon may not be satisfied.

paper_18.txt | 2,131–2,253 | Unsupported claim | Ekaterina
However, too immersed regularity leads to unfavorable boundary detection of entities and disturbing character composition.

paper_18.txt | 3,533–3,547 | Unsupported claim | Ekaterina
OntoNotes V4.0

paper_18.txt | 3,549–3,563 | Unsupported claim | Ekaterina
OntoNotes V5.0

paper_18.txt | 3,569–3,573 | Unsupported claim | Ekaterina
MSRA

paper_18.txt | 3,576–3,729 | Unsupported claim | Ekaterina
The results show that RICON achieves considerable improvements compared to the state-of-the-art models, even outperforming existing lexicon-based models.

paper_18.txt | 3,757–3,792 | Unsupported claim | Ekaterina
a practical medical dataset (CBLUE)

paper_19.txt | 14–174 | Unsupported claim | Ekaterina
Deep learning, to a large extent, has freed data scientists from doing feature engineering, which has been one of the essential obstacles to annotation with AL.

paper_19.txt | 723–900 | Unsupported claim | Ekaterina
In our work, we take MNLP as a query strategy for experiments on sequence tagging tasks since it has demonstrated a good trade-off between quality and computational performance.

paper_19.txt | 1,464–1,655 | Unsupported claim | Ekaterina
We continue this line of works by relying on pre-trained Transformers since this architecture has been shown promising for AL in NLP due to its good qualitative and computational performance.

paper_19.txt | 1,802–1,824 | Format | Ekaterina
Shelmanov et al. (2021

paper_19.txt | 2,873–3,303 | Lacks synthesis | Ekaterina
Recently proposed alternatives to uncertaintybased query strategies leverage reinforcement learning and imitation learning (Fang et al., 2017;Liu et al., 2018;Vu et al., 2019;Brantley et al., 2020). This series of works aims at constructing trainable policy-based query strategies. However, this requires an excessive amount of computation while the transferability of learned policies across domains and tasks is underresearched.

paper_19.txt | 3,692–3,717 | Format | Ekaterina
(Shelmanov et al., 2021).

paper_19.txt | 3,422–3,433 | Unsupported claim | Ekaterina
ASM problem

paper_20.txt | 1,003–1,049 | Format | Ekaterina
(Heidari et al., 2021;Kale and Rastogi, 2020a;

paper_20.txt | 791–1,331 | Coherence | Ekaterina
Generation Using simple handcrafted templates for individual keys or predicates is an efficient way of introducing domain knowledge while preventing text-to-text models from overfitting to a specific data format (Heidari et al., 2021;Kale and Rastogi, 2020a;. Transforming individual triples to text is also used in Laha et al. (2020) whose work is the most similar to ours. They also build a three-step pipeline for zero-shot D2T generation, but they use handcrafted rules for producing the output text and do not address content planning.

paper_20.txt | 14–760 | Lacks synthesis | Ekaterina
D2T Generation with PLMs Large neural language models pretrained on self-supervised tasks (Lewis et al., 2020;Liu et al., 2019;Devlin et al., 2019) have recently gained a lot of traction in D2T generation research (Ferreira et al., 2020). Following Chen et al. (2020c), other works adopt PLMs for few-shot D2T generation (Chang et al., 2021b;Su et al., 2021a). Kale and Rastogi (2020b) and Ribeiro et al. (2020) showed that PLMs using linearized representations of data can outperform graph neural networks on graph-to-text datasets, recently surpassed again by graph-based models (Ke et al., 2021;Chen et al., 2020a). Although the models make use of general-domain pretraining tasks, all of them are eventually finetuned on domain-specific data.

paper_20.txt | 1,583–1,870 | Coherence | Ekaterina
As previously demonstrated, using a content plan in neural D2T generation has important impact on the overall text quality (Moryossef et al., 2019a,b;Puduppully et al., 2019;Trisedya et al., 2020). Recently, have shown that using a content plan leads to improved quality of PLM outputs.

paper_20.txt | 2,366–2,388 | Format | Ekaterina
Li and Jurafsky, 2017)

paper_20.txt | 3,514–3,533 | Format | Ekaterina
Jiang et al., 2020)

paper_20.txt | 4,026–4,046 | Format | Ekaterina
(Botha et al., 2018;

paper_20.txt | 3,615–4,047 | Coherence | Ekaterina
In contrast to sentence fusion (Geva et al., 2019;Barzilay and McKeown, 2005) or sentence compression (Filippova and Altun, 2013), we operate in the context of multiple sentences in a paragraph. The task is the central focus of our WIKIFLUENT corpus ( §4), which we synthesize using a model for the reverse task, split-andrephrase, i.e. splitting a complex sentence into simpler ones while preserving semantics (Botha et al., 2018;.

paper_53.txt | 225–268 | Format | Ekaterina
(Davison et al., 2019;Petroni et al., 2019;

paper_53.txt | 364–385 | Format | Ekaterina
(Trinh and Le, 2018;)

paper_53.txt | 3,472–3,609 | Unsupported claim | Ekaterina
DynaGen uses pretrained commonsense models to generate implications of a question and expands the inference input with these generations

paper_54.txt | 441–509 | Format | Ekaterina
(Zhang et al., 2020;Aghajanyan et al., 2020Aghajanyan et al., , 2021

paper_54.txt | 1,943–2,069 | Coherence | Ekaterina
inspired by Li et al. (2020), to train a machine learning model to smooth out such noise and to speed up the inference process

paper_54.txt | 2,071–2,213 | Lacks synthesis | Ekaterina
Different from Li et al. (2020), we propose to utilize non-autoregressive text generators, which generate all tokens in the output in parallel

paper_54.txt | 3,248–3,521 | Unsupported claim | Ekaterina
Regarding inference efficiency, our NAUS with truncating is 1000 times more efficient than the search approach; even with dynamic programming for length control, NAUS is still 100 times more efficient than search and several times more efficient than autoregressive models.

paper_55.txt | 2,878–3,084 | Unsupported claim | Ekaterina
Lastly, our dataset yields a novel extractive summarization dataset, providing a benchmark for studying domain transfer in summarization and enabling QA models to provide concise answers to complex queries.

paper_56.txt | 1,389–1,551 | Unsupported claim | Ekaterina
On the other hand, neural conversational models are not necessarily designed to generate faithful outputs, but to mimic the distributional properties of the data.

paper_56.txt | 2,734–2,863 | Unsupported claim | Ekaterina
we annotate responses generated by several state-of-the-art models, including ones that are designed to alleviate hallucinations.
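
To sanity-check the preview against the schema (4 label classes, 4 annotators, start/end span offsets), a quick tabulation sketch, assuming the dataset loads as in the snippet at the top of this page:

```python
from datasets import load_dataset

# Convert to a pandas DataFrame for quick aggregation
# (requires pandas to be installed).
df = load_dataset("ImanAndrea/citation_annotations", split="train").to_pandas()

# Label distribution; the preview shows Coherence, Unsupported claim,
# Lacks synthesis, and Format.
print(df["label"].value_counts())

# Annotator distribution; the preview shows Ed and Ekaterina,
# while the schema reports 4 user values in total.
print(df["user"].value_counts())

# start/end are presumably character offsets into the source paper,
# so span length should roughly track the length of `text`.
print((df["end"] - df["start"]).describe())
```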