| id (string, 7-12 chars) | sentence1 (string, 6-1.27k chars) | sentence2 (string, 6-926 chars) | label (4 classes) |
|---|---|---|---|
| train_94100 | To amortize both costs and annotators' effort at answering questions, each HIT requires the participants to annotate three stories after answering demographic questions. | my favorite set was all the art that was made out of stylish trash . | neutral |
| train_94101 | S α : the flowers were in full bloom. | if annotators are asked to write their own text given a reference sentence, they may simply produce arbitrarily paraphrased output which does not exhibit a stylistic diversity. | neutral |
| train_94102 | The TA shows that the annotators rarely indicate the same scenes, even if they are asked to annotate an event in the screenplay that is described by a specific synopsis sentence. | it still remains competitive against the baselines, indicating that tracking TPs in screenplays fully automatically is feasible. | neutral |
| train_94103 | Traditional suicide risk assessment like Suicide Probability Scale (Bagge and Osman, 1998), Adult Suicide Ideation Questionnaire (Fu et al., 2007), Suicidal Affect-Behavior-Cognition Scale (Harris et al., 2015), etc. | the experimental result showed that by combining all of three emotion traits together, the proposed model could generate more discriminative suicidal prediction performance. | neutral |
| train_94104 | Although highly beneficial, those models' use may not be sufficient for the case of dialogue as response generation for goal-oriented dialogue from extremely limited data requires specialized tools. | the base optimization objective is as follows: where x usr is user's query, x sys is the system's response, c is the dialogue context, and F e and F d are respectively hierarchical encoder and decoder. | neutral |
| train_94105 | The model (as well as its variants listed above) is implemented in PyTorch (Paszke et al., 2017), and the code is openly available 1 . | we use it as-is (ZSDG) as well its variation as follows. | neutral |
| train_94106 | Given the target domain, we first train LAED model(s) on the MetaLWOz data -here we exclude from training every domain that might overlap with the target one. | for the NLU ZSDG setup, we annotated all available SMD data and randomly selected a subset of 1000 utterances from each source domain, and 200 utterances from the target domain. | neutral |
| train_94107 | We first describe the task we are addressing in this paper, and the corresponding base model. | we perform an exhaustive ablation study of DiKTNet by comparing it to all its variations mentioned above: HRED, HRED+ELMo, and HRED+LAED. | neutral |
| train_94108 | In addition, with the recently adopted technique of training dialogue systems end-to-end data-efficiency of such systems becomes the key question in their adoption in practical applications. | we have a set of dialogues in source domains and just a few seed dialogues in the target domain. | neutral |
| train_94109 | All bold-face results are statistically significant to p < 0.01. that MGT results in more general representations of language, thereby facilitating better transfer. | across both base architectures, MGT outperforms ensembling. | neutral |
| train_94110 | In such case, the personal information about "School" may be fake. | if n crt < n wrg , the worker's decision will be "Fraud" and the manager's decision will be "Fraud" too. | neutral |
| train_94111 | Then, based on the KG, we present structured dialogue management (Section 3) to explore the optimal dialogue strategy with reinforcement learning. | we find that, compared with Full-S and HP-S, MP-S is unable to learn any useful dialogue policy during training. | neutral |
| train_94112 | It means that using the hierarchical policy to unfold dialogues is necessary for our task. | the system actions in our task are selecting nodes in the KG to generate questions. | neutral |
| train_94113 | Given the same or similar dialog context, there may exist many valid expressions for the responses with the same dialog act a, each corresponding to a certain configuration of z. | this is because that taking average metrics may cause that the safety responses get higher scores than meaningful and diverse responses if most of these valid responses are not related to the ground-truth. | neutral |
| train_94114 | In this Corpus, the dialog act categories are {Inform, Question, Directive, Commissive}. | in our experiment, we also find that HAE-CT still faces serious safe response problems. | neutral |
| train_94115 | To tackle the challenges, we propose to take advantage of the hierarchical nature of response generation. | cNN models have been shown to be efficient for NLP and have achieved excellent results in sentence modeling and classification. | neutral |
| train_94116 | This method needs the preorganized topic/theme annotations for each conversation, which are prohibitively expensive to obtain. | note that if annotators rate different options, this triplet will be counted as "tie". | neutral |
| train_94117 | For example, when "The Shining" is mentioned, some of the top biased words are "creepy", "gory" and "scary", which are consistent with the style of the horror movie, and "stephen", who is the original creator of the movie. | then it combines the information with its hidden representation to form its updated representation at the next layer. | neutral |
| train_94118 | By further introducing the recommender system's knowledge of the items that have appeared in dialog, we guide the dialog system to generate responses that are more consistent with the user's interests. | kBRD (D) stands for only incorporating the dialog contents. | neutral |
| train_94119 | These actions can be regarded as users' feedbacks that reflect users' interest. | an ideal recommender dialog system is an endto-end framework that can effectively integrate the two systems so that they can bring mutual benefits to one another. | neutral |
| train_94120 | The word embedding is also 1000 dimension, trained from random initialization. | while moving towards z AE of an arXiv-style sentence "This would be an interesting viewpoint", the responses gradu-context Do you want to play a game? | neutral |
| train_94121 | Table 6 presents the generated examples of our models and baselines, our model can extract the keywords from the context which is helpful to generate an informative response, but the HRED model often generates safe responses like "Metoo" or "Y es". | we first rewrite the contexts in the test set with CRN, and then retrieve 10 candidates with the rewritten context from the index 6 . | neutral |
| train_94122 | Numbers in parentheses indicate the gains or losses after adding the persona conditions. | the gain achieved by the IMN ctx model is limited. | neutral |
| train_94123 | For the relevance score, the retrievalindependent Seq2Seq-MMI establishes a strong baseline. | relevance Fluency ours 2.69 3.11 3.20 Seq2Seq-MMI 2.64 3.02 3.14 Table 4: Ablation study on the ranker. | neutral |
| train_94124 | First, their skeleton extractor is pre-trained by the lexical overlap between the retrieved response and the golden response. | for a fair comparison, the lengths of the skeletons (the number of preserving words) generated by PMI and Keywords are kept as the same with the one generated by our skeleton extractor. | neutral |
| train_94125 | DSTRead (Gao et al., 2019) formulate the dialogue state tracking task as a reading comprehension problem by asking slot specified questions to the BERT model and find the answer span in the dialogue history for each of the pre-defined combined slot. | then for each s in S, the third CMRD generates the value sequence V d,s based on the corresponding h s,d . | neutral |
| train_94126 | When the size of paired data is small, the basic Seq2Seq model tends to generate more generic responses. | different from existing work, we assume that n is small (e.g., a few hundred thousands) and further assume that there is another set with T i a piece of plain text sharing the same characteristics with {Y i } n i=1 (e.g., both are questions) and N > n. Our goal is to learn a generation probability P (Y \|X) with both d P and d U . | neutral |
| train_94127 | Indeed, existing big conversation data mix various intentions, styles, emotions, personas, and so on. | suppose that we have a dataset D is a pair of message-response, and n represents the number of pairs in D P . | neutral |
| train_94128 | We conduct our experiments on a short-text conversation benchmark dataset (Shang et al., 2015) which contains about 4 million post-response pairs from the Sina Weibo 2 , a Chinese social platforms. | α denotes the attention weight. | neutral |
| train_94129 | This is consistent with our motivation that open-domain short-text conversation covers a wide range of topics and areas, and the top frequent words are not enough to capture the content of most training pairs. | many enhanced encoder-decoder approaches have been proposed to improve the quality of generated responses. | neutral |
| train_94130 | (1) The model tends to select more gain from a lower Kullback-Leibler (KL) divergence during training, which encourages the approximate posterior close to Gaussian prior, rendering the latent space of the former unused. | our work is supported by National Natural Science Foundation of China (61976154), Tianjin Natural Science Foundation (18JCY-BJC15500), National Natural Science Foundation of China (61771333), Tianjin Municipal Science and Technology Project (18ZXZNGX00330), and the Foundation of State Key Laboratory of Cognitive Intelligence, iFLYTEK(CIoS-20190001). | neutral |
| train_94131 | (2) The speakers in a dyadic conversation have different linguistic characteristics, sentiments and personalities. | (3) Compared with the Gau-Generators in Generator KL, the vMF-Generators (i.e., SVN, SSVN Gau−E and SSVN) have the much higher KL values, indicating that the vMF is a better selection than Gaussian to solve the KL-vanishing problem. | neutral |
| train_94132 | We compare our models with Information Retrieval (IR) based models and recommendation-only models. | additionally, the recommendations are not grounded in real observed movie preferences, which may make trained models less consistent with actual users. | neutral |
| train_94133 | First, our response generation task only includes turns where the system's dialogue act is CONFORM SQL. | after annotation and reviewing, 3,007 of them were finished and kept in the final dataset. | neutral |
| train_94134 | The difficulty of constructing dialogue-based NLIDB systems stems from these requirements. | the majority of previous work focus on converting a single, complex question into its corresponding SQL query. | neutral |
| train_94135 | In task-oriented dialog systems, the user simulator's task is to complete a pre-defined goal by interacting with the system. | we used policy gradient method to train dialog systems (williams, 1992). | neutral |
| train_94136 | When the dialog policy is bad, even if the NLG module can generate natural system responses, the users would still think the system is unnatural. | this is because SL-template's dialog manager is learned with supervised learning, less rigid than the agenda-based dialog policy, which further leads to a less rigid behaviour of the trained dialog system. | neutral |
| train_94137 | The performance of the simulator directly impacts the RL policy. | another common automatic metrics used to assess the language model is BLEU, but since this is a user simulator study and we don't have ground truth, BLEU score is not available. | neutral |
| train_94138 | One of the main drawbacks of HOCA is the generation of high-order tensor, the size of the high-order tensor will increases exponentially with the number of modalities as n i=1 t i , resulting in a lot of computation. | space (G) Training Time (s) HOCA-U 4.1 16418 HOCA-UBT 9.7 37040 L-HOCA-UBT 5.6 24615 The theoretical complexity of different attention mechanisms is illustrated in section 4.3. | neutral |
| train_94139 | We simply compare the descriptions generated by HOCA-U and L-HOCA-UBT, respectively. | we implement the multimodal correlation between the different modalities in a more efficient way with lowrank approximation which has been widely used in the community of vision and language (Lei et al., 2015;Liu et al., 2018;. | neutral |
| train_94140 | The goal of our work is to deal with unpaired image-caption data for image captioning. | baseline (10%) : a group of sheep standing next to each other. | neutral |
| train_94141 | To compute the cross entropy loss, we use the pseudo-label assignment for the Unlabeled-COCO images. | these pseudo-labels are not noisefree, thus treating them equally with the paired ones is detrimental. | neutral |
| train_94142 | After learning unimodal (H) and multimodal (Ĥ) representations of context, we use a Memory Fusion Network (MFN) (Zadeh et al., 2018a) to model the punchline ( , -∈/,$ , -∈/,0 , -∈/,1 2 . | a full understanding of humor requires analyzing the context of the punchline. | neutral |
| train_94143 | Execution of the command then requires mapping the command into the physical visual space, after which the appropriate action can be taken. | talk2Car contains images in realistic settings accompanied by free language in contrast to curated datasets such as MS-COCO. | neutral |
| train_94144 | We use only the articles from trustworthy media sources, according to our MBFC labels. | testing on Snopes + Reuters data. | neutral |
| train_94145 | Table 1: Accuracy and Average Precision for individual feature types, calculated using 10-fold cross-validation using the Snopes dataset (S), and the Snopes+Reuters dataset (S+R). | there are cases when an image is completely legitimate, but it is published alongside some text that does not reflect its content accurately. | neutral |
| train_94146 | Our new model generalizes the planning space of VPN to reason about intermediate goals and obstacles, and includes recurrent action generation for trajectories with multiple goals. | both metrics fail to reflect this. | neutral |
| train_94147 | During fine-turning, we continuously generate additional examples using model failures. | without the implicit reasoning discriminator (-Implicit discriminator), the additional examples make learning too difficult, and do not help. | neutral |
| train_94148 | E (I, B, R, T ) is a noncontextualized representation for each token and of its position in text, but also of the content and position of the bounding boxes. | vCR has much longer questions and answers compared to other popular visual Question Answering (vQA) datasets, such as vQA v1 (Antol et al., 2015), vQA v2 (Goyal et al., 2017) and GQA (Hudson and Manning, 2019), requiring more modeling capacity for language understanding. | neutral |
| train_94149 | GQA (Hudson and Manning, 2019) uses real scenes from Visual Genome, but the language is artificially generated. | we found this to be surprisingly competitive on VCR compared to published baselines, perhaps due to our choice of powerful pretrained models. | neutral |
| train_94150 | input sig- nals that can help to differentiate domains apart have has no impact on the task itself. | they are prone to fail when there is a severe change in the marginal distribution that is task irrelevant. | neutral |
| train_94151 | The overall distribution of α values at convergence can be seen in Figure 3. | the decoder self and context attention only learn to distribute these parameters in a single mode. | neutral |
| train_94152 | Encoder Architecture: Another way to encourage this robustness is through inductive bias in the encoder architecture. | in this work we present a probabilistic latent variable model capable of disentangling stylistic features of fonts from the underlying structure of each character. | neutral |
| train_94153 | Since our model attempts to learn a smooth manifold over the latent style, we can also perform interpolation between the inferred font representations, something which is not directly possible using either of the baselines. | we propose a deep probabilistic approach that combines aspects of both these lines of past work. | neutral |
| train_94154 | These are broken down into training, dev, and test splits of 7649, 1473, and 1560 fonts respectively. | we use a convolutional neural network which takes in a batch of characters from a single font, concatenated with their respective character type embedding. | neutral |
| train_94155 | MacCartney and Manning (2009); Angeli and Manning (2014); and others incorporated knowledge of verb signatures within a natural logic framework (MacCart-ney, 2009;Sánchez Valencia, 1991) in order to perform natural language inference. | in this paper, we revisit the question of whether neural models of natural language inference-which are not explicitly endowed with knowledge of verbs' lexical semantic categorieslearn to make inferences about veridicality consistent with those made by humans. | neutral |
| train_94156 | None of the traditional criteria for basic color terms hold up robustly. | the sequence gamma can have larger magnitude if, despite poor sifting, the order of the basic terms is correct. | neutral |
| train_94157 | The results indicate that the BiLSTM-CRF framework is a better fit for encoding order information and long-range context dependency for such sequence labeling task. | in sentence S1, the negative focus is the propositional clause until the market close, yielding the interpretation that mutual fund trades take effect, but not until the market close. | neutral |
| train_94158 | Although methods like (Elsner et al., 2007;Li and Jurafsky, 2017) attempt to combine these aspects of coherence, to our knowledge, no methods consider all the three aspects together in a single framework. | our bi-LSTM sentence encoder gives a representation h i ∈ R 2p for each sentence s i in the document. | neutral |
| train_94159 | We observed that the naive pairwise ranking loss that uses a fixed margin unfairly penalizes the locally positive sentences during training. | the main differences between our implementation and the implementation referred in their paper are that we used a bi-LStM (as opposed to simple RNN) for sentence encoding and trained the network with the Adam optimizer (as opposed to AdaGrad). | neutral |
| train_94160 | As shown in Table 6, the addition of the global model and LM loss to the local model improves performance on the standard discrimination task by 1.34%. | this section presents details of our experiment procedures and results. | neutral |
| train_94161 | Subsequent studies extended the basic entity grid model. | we show that most of these models often fail on harder tasks with more realistic application scenarios. | neutral |
| train_94162 | Here, except the random model, all of the baselines are based on neural networks, which are typically more competitive than traditional approaches (e.g., utilizing handcraft features). | the primary goal of sentence set ordering is to put an unordered set of sentences into a coherent paragraph in the correct order. | neutral |
| train_94163 | This is related to the global dependencies of an entire paragraph. | we discuss the reason for our highest performance among previous models by classifying the factors affecting performance into four categories: sentence set modeling, topic latent vectors, permutation invariance, and attention at decoder. | neutral |
| train_94164 | In this section, we present TGCM as a novel topic-guided coherence modeling for SSO. | in Table 2-3, the several values of some models are directly taken from (Cui et al., 2018), while we implemented the rest of models using their public code with the same experimental setup. | neutral |
| train_94165 | Although TGCM-S do not utilize topical context features at encoder and decoder (such as t p and t s ), TGCM-S also employs the transformer-based attentive pointer decoder. | the pairwise model only learns the relative order from sentence-pair interactions. | neutral |
| train_94166 | The topic latent vector t s i captures how topical a sentence s i is in itself (local context), whereas the topic latent vector t p representes the overall theme of a paragraph p (global context). | other parameters were initialized randomly based on He et al. | neutral |
| train_94167 | Since our decoder directly receives the set of sentence vectors regardless of their input order via attention, our encoder is free from the constraint that all sentences must be compressed into a single fixedlength vector as a paragraph representation. | these models can capture global dependencies regardless of the order of input sentences. | neutral |
| train_94168 | The attention layer repeatedly decides the next sentence from sentence-pair interactions between the current predicted sentence and each paragraph-independent candidate {s i } 5 1 . | with the multiple attention modules, TGCM draws global dependencies among sentences by attending over the topic-sensitive sentence vectors repeatedly. | neutral |
| train_94169 | Its results were only slightly less good than those of stand alone GEN. To further investigate comparisons between different architectures for solving the attachment problem, we compared various local models extended with the MS-DAG decoding algorithm discussed in Section 5, giving the global results shown in the righthand columns of Tables 3 and 4. | we include LFs are expert-composed functions that make an attachment prediction for a given candidate: each LF returns a 1, a 0 or a -1 ("attached"/"do not know"/"not attached") for each candidate. | neutral |
| train_94170 | To generate a large number of discourse structures via distant supervision from sentiment, we propose a four-step approach, shown in Figures 2 and 3. | although in principle new corpora could be created, the annotation process is expensive and time consuming. | neutral |
| train_94171 | We study this question on the ACL and EMNLP paper collections and present an analysis on how well deep learning techniques can infer the authors of a paper. | a separate convolutional layer is dedicated to learning from part-of-speech sequences. | neutral |
| train_94172 | For instance, in Figure 1, with identified key elements, one can easily categorize the incident in dimensions of "age of harasser" (adult), "single/multiple harasser(s)" (single), "type of harasser" (unspecified), "type of location" (park) , "time of day" (day time). | karlekar and Bansal (2018) were the first group to our knowledge that applied NLP to analyze large amount ( ∼10,000) of sexual harassment stories. | neutral |
| train_94173 | This could explain why the regular BiLSTM model got lower performance than the CNN model. | the missing information is hence marked as "unspecified". | neutral |
| train_94174 | In this study, we manually annotated those stories with labels in the dimensions of location, time, and harassers' characteristics, and marked the key elements related to these dimensions. | when training with key element extractions, it put almost all attention on the harasser "young man" (S2), which helped the model make correct prediction of "young harasser". | neutral |
| train_94175 | The volume of social media data is huge, and the more we can extract from these data, the more powerful we can be as part of the efforts to build a safer and more inclusive communities. | the position of each word in this context sequence is crucial information for the CNN model to make the correct predictions (Nguyen and Grishman, 2015). | neutral |
| train_94176 | The results show that our approach in general performs better in the Twitter dataset. | to the best of our knowledge, our work is the first that investigate VAEs as a tool for training data expansion, so as to enhance machine learning performance with limited amount of labeled data. | neutral |
| train_94177 | We design a hierarchical seq2tree model that can better capture information of the AST of the equation. | these studies lost sight of important information of the math equation ASTs (e.g., parent and siblings of each node), despite of their promising results. | neutral |
| train_94178 | One point to note is that our training procedure is based on finding shortest paths in a complete KB, and we use this same procedure on the incomplete KB setting. | the classifiers used for retrieval in the different iterations are identical. | neutral |
| train_94179 | Our implementation shows comparable performance on the WikiMovies (MetaQA 1-hop) dataset, as shown in Table 3. | both models are limited by the number of facts and text that can fit into memory. | neutral |
| train_94180 | As we discussed, different from the traditional RC task, the answer string will appear multiple times in different paragraphs in OpenQA. | besides, in the distantly supervised setup (Mintz et al., 2009), it is postulated that the paragraphs that contain the answer string are ground truths, while even some of the positive paragraphs are wrong-labeled as they do not concern the question, e.g. | neutral |
| train_94181 | These paragraphs are inherently relevant to each other and provide enhanced evidence for the correct answer. | this indicates that SearchQA contains more positive paragraphs than other datasets. | neutral |
| train_94182 | The scoring function of ComplEx is given by where e s , e o , w r ∈ C n are the embeddings of s, o, and r, respectively. | a matrix is block circulant if it can be written in the form 1b) . | neutral |
| train_94183 | 20162.96 Elsahar et al. | in this paper, we strive toward the above two issues via incorporating diversified contexts and answer-aware loss. | neutral |
| train_94184 | Comparison with NAQANet The major difference between our model and NAQANet is that NAQANet does not have the reasoning module, i.e., M 0 is simply set as M P . | our NumNet model outperforms strong baselines with a large margin on the DRoP dataset. | neutral |
| train_94185 | Hence, the learned model behaves more like a question language model with some loose context constraint, while it is unaware of the strong requirements that it should be closely grounded by the context and should be answered by the given answer. | these works either reduced the ground-truth data size or simplified the span-prediction QA task to answer sentence selection. | neutral |
| train_94186 | In terms of syntactic correctness, KEAG and MHPGM both perform well thanks to their architectures of composing answer text and integrating knowledge. | an overview of the neural architecture of KEaG is depicted in Figure 1 3.2 Sequence-to-sequence model KEaG is built upon an extension of the sequenceto-sequence attentional model (Bahdanau et al., 2015;Nallapati et al., 2016;See et al., 2017). | neutral |
| train_94187 | Each tuple (question, answer, and document) is fed into the BERT model as: " [CLS] As a remark, we want to highlight that both the DRD and ARD discriminators have been trained on different datasets than the final multiple choice QA model, which is trained and evaluated on the ARC Easy and Challenge datasets. | as shown in Figure 2, each supporting document is associated with a list of scores computed by the discriminators. | neutral |
| train_94188 | This decision step is accomplished by a simple feed-forward network (Figure 3). | replacing GrU with Long-Short Term Memory (LSTM) cells (Hochreiter and Schmidhuber, 1997) gives similar performance but the training procedure is computationally more expensive. | neutral |
| train_94189 | Humans capture that information by learning, by experiencing and from commonsense knowledge. | the weights in dictate to what extent the network should attend each document. | neutral |
| train_94190 | In this work we use linguistic annotations as a basis for a Discourse-Aware Semantic Self-Attention encoder that we employ for reading comprehension on narrative texts. | ), a h i is the output of head h i . | neutral |
| train_94191 | 2018for QA span prediction modeling. | to be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. | neutral |
| train_94192 | Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements on representative learning (Peters et al., 2018;Radford et al., 2018;Devlin et al., 2018). | this gives the final term-based retrieval set for HOtPOtQA. | neutral |
| train_94193 | The task has gained popularity with its natural combination of information retrieval (IR) and machine comprehension (MC). | to validate the system design decisions in Sec 3 and reveal the importance of semantic retrieval towards downstream, we conducted a series of ablation and analysis experiments on all the modules. | neutral |
| train_94194 | We add a "maybe" choice in PubMedQA to cover uncertain instances. | under reasoning-free setting where the annotator can see the conclusions, a single human achieves 90.4% accuracy and 84.2% macro-F1. | neutral |
| train_94195 | To which organ system do the esophagus, liver, pancreas, small intestine, and colon belong? | although various neural Qa methods have achieved high performance on some of these datasets Trivedi et al., 2019;Tymoshenko et al., 2017;Seo et al., 2016;Wang and Jiang, 2016;De Cao et al., 2018;Back et al., 2018), we argue that more effort must be dedicated to explaining their inference process. | neutral |
| train_94196 | Unsurprisingly, training and testing in the same domain (e.g., Fiction) leads to the best performance on sentence selection. | in MultiRC, AutoROCC outperforms our baseline of BERT + entire passage (row 10 vs 22) by 8.3% EM0, indicating that AutoROCC can filter out irrelevant content. | neutral |
| train_94197 | Note that some of the works discussed here transfer knowledge from external datasets into the QA task they address (Chung et al., 2017;Pan et al., 2019;Min et al., 2017;Qiu et al., 2018;Chen et al., 2017). | the improvements are not large, e.g., the maximum improvement is 1.6% (when k = 4), which indicates that ROCC is robust to a certain extent to lexical variation. | neutral |
| train_94198 | Open-domain question answering (QA) Inspired by the series of TREC QA competitions, 2 Chen et al. | centered around entity names, this model risks missing purely descriptive clues in the question. | neutral |
| train_94199 | Because off-the-shelf IR systems generally optimize for shallow lexical similarity between query and candidate documents in favor of efficiency, a good proxy for this overlap is locating spans of text that have high lexical overlap with the intended supporting documents. | it is computationally inefficient, and has high variance especially for the second reasoning step and forward, because the context depends greatly on what queries have been chosen previously and their search results. | neutral |