inputs (string, lengths 231–11.4k) | targets (string, lengths 1–1.64k) | _template_idx (int64, 0–9) | _task_source (1 value) | _task_name (1 value) | _template_type (2 values)
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 . We also train a bidirectional Long Short-Term-Memory (BiLSTM) model, which we adapted from the sentiment analysis system of sentimentSystem,rasooli2018cross and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, (3) an average pooling layer of input features. The concatenation of the LSTM's and average pool layer is passed through a dense layer and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM.
Question: What models are used in the experiment?
linear SVM bidirectional Long Short-Term-Memory (BiLSTM) Convolutional Neural Network (CNN)
Our data set contains $3,209$ reviews about 553 different cars from 49 different car manufacturers.
Question: How big is dataset of car-speak language?
$3,209$ reviews
We evaluated baselines and our model using accuracy as the metric on the ROCStories dataset, and summarized these results in Table 2 .
Question: Which metrics are they evaluating with?
accuracy
| 0 | NIv2 | task460_qasper_answer_generation | fs_opt |
Detailed Instructions: In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
See one example below:
Problem: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective.
Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.
Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.
Question: how was the dataset built?
Solution: Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no"
| 4 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example Input: Last, we evaluate our approaches in 9 commonly used text classification datasets. We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare.
Question: What NLP tasks do they consider?
Example Output: text classification for themes including sentiment, web-page, science, medical and healthcare
Example Input: We conducted the human evaluation using Amazon Mechanical Turk to assess subjective quality. We recruit master level workers (who have good prior approval rates) to perform a human comparison between generated responses from two systems (which are randomly sampled from comparison systems). The workers are required to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness. Informativeness indicates the extent to which generated utterance contains all the information specified in the dialog act. Naturalness denotes whether the utterance is as natural as a human does. To reduce judgement bias, we distribute each question to three different workers. Finally, we collected in total of 5800 judges.
Question: What was the criteria for human evaluation?
Example Output: to judge each utterance from 1 (bad) to 3 (good) in terms of informativeness and naturalness
Example Input: Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small.
Question: Why is big data not appropriate for this task?
Example Output: Training embeddings from small corpora can increase the performance of some tasks
| 3 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
One example is below.
Q: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
A: No
Rationale: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: Table TABREF19 displays the performance of the 4 baselines on the ReviewQA's test set. These results are the performance achieved by our own implementation of these 4 models.
Question: What tasks were evaluated?
A: ReviewQA's test set
| 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
One example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution is here: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Now, solve this: First, we analyze the types of structures in code-mixed puns and classify them into two categories namely intra-sequential and intra-word.
Question: What are the categories of code-mixed puns?
Solution: intra-sequential and intra-word
| 6 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Q: As explained in Section SECREF15 , the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. In contrast, Doc2VecC manages to clamp down the representation of words frequently appear in the training set, but are uninformative, such as symbols and stop words.
Question: How do they determine which words are informative?
A: Informative are those that will not be suppressed by regularization performed.
****
Q: In order to verify the effectiveness of our method (i.e., Multi-linear attention) replacing multi-head attention in Transformer, we carry out two NLP tasks named language modeling (LM) and neural machine translation (NMT). Then, we test different model configurations on the PTB BIBREF25 , WikiText-103 BIBREF26 and One-Billion Word benchmark BIBREF27 datasets and report the results in Table 1 and Table 2 . In this task, we have trained the Transformer model BIBREF2 on WMT 2016 English-German dataset BIBREF36 .
Question: What datasets or tasks do they conduct experiments on?
A: Language Modeling (LM) PTB BIBREF25 , WikiText-103 BIBREF26 and One-Billion Word benchmark BIBREF27 datasets neural machine translation (NMT) WMT 2016 English-German dataset
****
Q: The input document/summary that may have unordered sentences is processed so that it will have sentences clustered together. To test our approach, we jumble the ordering of sentences in a document, process the unordered document and compare the similarity of the output document with the original document. To structure an unordered document is an essential task in many applications. It is a post-requisite for applications like multiple document extractive text summarization where we have to present a summary of multiple documents. It is a prerequisite for applications like question answering from multiple documents where we have to present an answer by processing multiple documents.
Question: What is an unordered text document, do these arise in real-world corpora?
A: An unordered text document is one where the sentences in the document are disordered or jumbled. It doesn't appear that unordered text documents appear in corpora; rather, they are introduced as part of the processing pipeline.
****
| 4 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example solution: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: To this goal, we select a diverse set of eight source languages from different language families – Basque, French, German, Hungarian, Italian, Navajo, Turkish, and Quechua – and three target languages – English, Spanish and Zulu.
Question: What are the three target languages studied in the paper?
Solution: English, Spanish and Zulu
| 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example solution: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions.
Question: Who were the experts used for annotation?
Solution: Individuals with legal training
| 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example solution: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: The second algorithm incorporates recent ideas of data selection in machine translation BIBREF7 . We develop a novel re-weighting method to resolve these problems, using ideas inspired by data selection in machine translation BIBREF26 , BIBREF7 .
Question: What is the data selection paper in machine translation
Solution: BIBREF7 BIBREF26
| 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Q: News Articles
Our dataset source of news articles is described in BIBREF2. This dataset was built from two different sources, for the trusted news (real news) they sampled news articles from the English Gigaword corpus. For the false news, they collected articles from seven different unreliable news sites. Twitter
For this dataset, we rely on a list of several Twitter accounts for each type of false information from BIBREF6. This list was created based on public resources that annotated suspicious Twitter accounts. The authors in BIBREF6 have built a dataset by collecting tweets from these accounts and they made it available.
Question: What datasets did they use?
A: News Articles Twitter
****
Q: We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely.
Question: What datasets are used for experiments?
A: the WMT'14 English-French (En-Fr) and English-German (En-De) datasets.
****
Q: We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development). For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\le $ 50 tokens (yelp50) and $\le $200 tokens (yelp200).
Question: What datasets do they use?
A: three datasets based on IMDB reviews and Yelp reviews
****
| 4 | NIv2 | task460_qasper_answer_generation | fs_opt |
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task.
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution: No
Why? Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
New input: We construct three datasets based on IMDB reviews and Yelp reviews. The IMDB dataset is binarised and split into a training and test set, each with 25K reviews (2K reviews from the training set are reserved for development). For Yelp, we binarise the ratings, and create 2 datasets, where we keep only reviews with $\le $ 50 tokens (yelp50) and $\le $200 tokens (yelp200).
Question: What datasets do they use?
Solution: three datasets based on IMDB reviews and Yelp reviews
| 0 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Let me give you an example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
The answer to this example can be: No
Here is why: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
OK. solve this:
In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms.
Question: How are the topics embedded in the #MeToo tweets extracted?
Answer: Using Latent Dirichlet Allocation on the TF-IDF-transformed corpus
| 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Let me give you an example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
The answer to this example can be: No
Here is why: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
OK. solve this:
Results ::: CoinCollector
In this setting, we compare the number of actions played in the environment (frames) and the score achieved by the agent (i.e. +1 reward if the coin is collected). In Go-Explore we also count the actions used to restore the environment to a selected cell, i.e. to bring the agent to the state represented in the selected cell. This allows a one-to-one comparison of the exploration efficiency between Go-Explore and algorithms that use a count-based reward in text-based games. Importantly, BIBREF8 showed that DQN and DRQN, without such counting rewards, could never find a successful trajectory in hard games such as the ones used in our experiments. Figure FIGREF17 shows the number of interactions with the environment (frames) versus the maximum score obtained, averaged over 10 games of the same difficulty. As shown by BIBREF8, DRQN++ finds a trajectory with the maximum score faster than DQN++. On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively. In CookingWorld, we compared models in the three settings mentioned earlier, namely, single, joint, and zero-shot. In all experiments, we measured the sum of the final scores of all the games and their trajectory length (number of steps). Table TABREF26 summarizes the results in these three settings. Phase 1 of Go-Explore on single games achieves a total score of 19,530 (sum over all games), which is very close to the maximum possible points (i.e. 19,882), with 47,562 steps. A winning trajectory was found in 4,279 out of the total of 4,440 games. This result confirms again that the exploration strategy of Go-Explore is effective in text-based games.
Next, we evaluate the effectiveness and the generalization ability of the simple imitation learning policy trained using the extracted trajectories in phase 1 of Go-Explore in the three settings mentioned above. In this setting, each model is trained from scratch in each of the 4,440 games based on the trajectory found in phase 1 of Go-Explore (previous step). As shown in Table TABREF26, the LSTM-DQN BIBREF7, BIBREF8 approach without the use of admissible actions performs poorly. One explanation for this could be that it is difficult for this model to explore both language and game strategy at the same time; it is hard for the model to find a reward signal before it has learned to model language, since almost none of its actions will be admissible, and those reward signals are what is necessary in order to learn the language model. As we see in Table TABREF26, however, by using the admissible actions in the $\epsilon $-greedy step the score achieved by the LSTM-DQN increases dramatically (+ADM row in Table TABREF26). DRRN BIBREF10 achieves a very high score, since it explicitly learns how to rank admissible actions (i.e. a much simpler task than generating text). Finally, our approach of using a Seq2Seq model trained on the single trajectory provided by phase 1 of Go-Explore achieves the highest score among all the methods, even though we do not use admissible actions in this phase. However, in this experiment the Seq2Seq model cannot perfectly replicate the provided trajectory and the total score that it achieves is in fact 9.4% lower compared to the total score achieved by phase 1 of Go-Explore. Figure FIGREF61 (in Appendix SECREF60) shows the score breakdown for each level and model, where we can see that the gap between our model and other methods increases as the games become harder in terms of skills needed. In this setting the 4,440 games are split into training, validation, and test games. 
The split is done randomly but in a way that the different difficulty levels (recipes 1, 2 and 3) are represented with equal ratios in all 3 splits, i.e. stratified by difficulty. As shown in Table TABREF26, the zero-shot performance of the RL baselines is poor, which could be attributed to the same reasons why RL baselines under-perform in the Joint case. Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model, even though the DRRN model has access to the admissible actions at test time, while the Seq2Seq model (as well as the LSTM-DQN model) has to construct actions token-by-token from the entire vocabulary of 20,000 tokens. On the other hand, Go-Explore Seq2Seq shows promising results by solving almost half of the unseen games. Figure FIGREF62 (in Appendix SECREF60) shows that most of the lost games are in the hardest set, where a very long sequence of actions is required for winning the game. These results demonstrate the relative effectiveness of training a Seq2Seq model on Go-Explore trajectories, but they also indicate that additional effort is needed to design reinforcement learning algorithms that generalize effectively to unseen games.
Question: How better does new approach behave than existing solutions?
Answer: On the other hand, phase 1 of Go-Explore finds an optimal trajectory with approximately half the interactions with the environment. Moreover, the trajectory length found by Go-Explore is always optimal (i.e. 30 steps) whereas both DQN++ and DRQN++ have an average length of 38 and 42 respectively. Especially interesting is that the performance of DRRN is substantially lower than that of the Go-Explore Seq2Seq model.
| 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
PROBLEM: We collect three years of online news articles from June 2016 to June 2019.
Question: What unlabeled corpus did they use?
SOLUTION: three years of online news articles from June 2016 to June 2019
PROBLEM: For training data, 3 million webpages from this corpus were processed with a CCG parser to produce logical forms BIBREF10 . We also used the test set created by Krishnamurthy and Mitchell, which contains 220 queries generated in the same fashion as the training data from a separate section of ClueWeb. However, as they did not release a development set with their data, we used this set as a development set. This final test set contains 307 queries.
Question: How big is their dataset?
SOLUTION: 3 million webpages processed with a CCG parser for training, 220 queries for development, and 307 queries for testing
PROBLEM: We compare our method to the following baselines: (1) Single-task CNN: training a CNN model for each task individually; (2) Single-task FastText: training one FastText model BIBREF23 with fixed embeddings for each individual task; (3) Fine-tuned the holistic MTL-CNN: a standard transfer-learning approach, which trains one MTL-CNN model on all the training tasks offline, then fine-tunes the classifier layer (i.e. $\mathrm {M}^{(cls)}$ Figure 1 (a)) on each target task; (4) Matching Network: a metric-learning based few-shot learning model trained on all training tasks; (5) Prototypical Network: a variation of matching network with different prediction function as Eq. 9 ; (6) Convex combining all single-task models: training one CNN classifier on each meta-training task individually and taking the encoder, then for each target task training a linear combination of all the above single-task encoders with Eq. ( 24 ).
Question: Do they compare with the MAML algorithm?
SOLUTION: No
| 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression") or evidence of depression (e.g., “depressed over disappointment"). If a tweet is annotated evidence of depression, then it is further annotated with one or more depressive symptoms, for example, depressed mood (e.g., “feeling down in the dumps"), disturbed sleep (e.g., “another restless night"), or fatigue or loss of energy (e.g., “the fatigue is unbearable") BIBREF10 .
Question: How is the dataset annotated?
no evidence of depression depressed mood disturbed sleep fatigue or loss of energy
We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges).
Question: Did they use a relation extraction method to construct the edges in the graph?
No
To better analyze model generalization to an unseen, new domain as well as model leveraging the out-of-domain sources, we propose a new architecture which is an extension of the ARED model. In order to better select, aggregate and control the semantic information, a Refinement Adjustment LSTM-based component (RALSTM) is introduced to the decoder side. The proposed model can learn from unaligned data by jointly training the sentence planning and surface realization to produce natural language sentences.
Question: What is the difference of the proposed model with a standard RNN encoder-decoder?
Introduce a "Refinement Adjustment LSTM-based component" to the decoder
| 0 | NIv2 | task460_qasper_answer_generation | fs_opt |
instruction:
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
question:
The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module. We use a triplet network BIBREF41 , BIBREF42 in our representation module. The triplet network consists of three sub-parts: target, supporting, and contrasting networks. All three networks share the same parameters. The Mixture module brings the image and caption embeddings to a joint feature embedding space.
Question: How do the authors define a differential network?
answer:
The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module.
question:
We draw on a recently released corpus of state speeches delivered during the annual UN General Debate that provides the first dataset of textual output from states that is recorded at regular time-series intervals and includes a sample of all countries that deliver speeches BIBREF11 .
Question: Which dataset do they use?
answer:
corpus of state speeches delivered during the annual UN General Debate
question:
We compare our proposed discrete CVAE (DCVAE) with the two-stage sampling approach to three categories of response generation models:
Baselines: Seq2seq, the basic encoder-decoder model with soft attention mechanism BIBREF30 used in decoding and beam search used in testing; MMI-bidi BIBREF5, which uses the MMI to re-rank results from beam search.
CVAE BIBREF14: We adjust the original work which is for multi-round conversation for our single-round setting. For a fair comparison, we utilize the same keywords used in our network pre-training as the knowledge-guided features in this model.
Other enhanced encoder-decoder models: Hierarchical Gated Fusion Unit (HGFU) BIBREF12, which incorporates a cue word extracted using pointwise mutual information (PMI) into the decoder to generate meaningful responses; Mechanism-Aware Neural Machine (MANM) BIBREF13, which introduces latent embeddings to allow for multiple diverse response generation.
Question: What other kinds of generation models are used in experiments?
answer:
| Seq2seq CVAE Hierarchical Gated Fusion Unit (HGFU) Mechanism-Aware Neural Machine (MANM)
| 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Ex Input:
Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc.
Question: What is NER?
Ex Output:
Named Entity Recognition
Ex Input:
SVM: We define 3 sets of features to characterize each question. The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable. Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines. Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline.
Question: Were other baselines tested to compare with the neural baseline?
Ex Output:
SVM No-Answer Baseline (NA) Word Count Baseline Human Performance
Ex Input:
Specifically, PlEWi supplied 550,755 [error, correction] pairs, from which 298,715 were unique.
Question: How is PIEWi annotated?
Ex Output:
| [error, correction] pairs
| 1 | NIv2 | task460_qasper_answer_generation | fs_opt |
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
PROBLEM: We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports. The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists.
Question: How was annotation performed?
SOLUTION: Experienced medical doctors used a linguistic annotation tool to annotate entities.
PROBLEM: Hence, the continuous relaxation to top-k-argmax operation can be simply implemented by iteratively using the max operation which is continuous and allows for gradient flow during backpropagation.
Question: Which loss metrics do they try in their new training procedure evaluated on the output of beam search?
SOLUTION: continuous relaxation to top-k-argmax
PROBLEM: Among all available parameters to tune the D2V algorithm released by Gensim, six of them were selected for optimisation BIBREF14. The window_size parameter affects the size of the sliding window used to parse texts. The alpha parameter represents the learning rate of the network. The sample setting allows the model to reduce the importance given to high-frequency words. The dm parameter defines the training used architecture (PV-DM or PV-DBOW).
Question: What Doc2Vec architectures other than PV-DBOW have been tried?
SOLUTION: | PV-DM
| 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example solution: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: To better analyze model generalization to an unseen, new domain as well as model leveraging the out-of-domain sources, we propose a new architecture which is an extension of the ARED model. In order to better select, aggregate and control the semantic information, a Refinement Adjustment LSTM-based component (RALSTM) is introduced to the decoder side. The proposed model can learn from unaligned data by jointly training the sentence planning and surface realization to produce natural language sentences.
Question: What is the difference of the proposed model with a standard RNN encoder-decoder?
| Solution: Introduce a "Refinement Adjustment LSTM-based component" to the decoder | 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example solution: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: In the multiple-choice setting, which is the variety of question-answering (QA) that we focus on in this paper, there is also pragmatic reasoning involved in selecting optimal answer choices (e.g., while greenhouse effect might in some other context be a reasonable answer to the second question in Figure FIGREF1, global warming is a preferable candidate).
Question: Do they focus on Reading Comprehension or multiple choice question answering?
| Solution: MULTIPLE CHOICE QUESTION ANSWERING | 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
instruction:
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
question:
In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from vanBenthemEssays86. Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT BIBREF20, including for compositional data augmentation, i.e, re-generating entailed versions of examples in our training sets. We perform two experiments to test MonaLog. We first use MonaLog to solve the problems in a commonly used natural language inference dataset, SICK BIBREF1, comparing our results with previous systems. Second, we test the quality of the data generated by MonaLog. To do this, we generate more training data (sentence pairs) from the SICK training data using our system, and performe fine-tuning on BERT BIBREF20, a language model based on the transformer architecture BIBREF23, with the expanded dataset.
Question: How do they combine MonaLog with BERT?
answer:
They use Monalog for data-augmentation to fine-tune BERT on this task
question:
This model uses a particular RNN cell in order to store just relevant information about the given question. Dynamic Memory stores information of entities present in $T$ . The addition of the $s_t^T q$ term in the gating function is our main contribution.
Question: How does the model recognize entities and their relation to answers at inference time when answers are not accessible?
answer:
gating function Dynamic Memory
question:
In particular, we perform an extensive set of experiments on standard benchmarks for bilingual dictionary induction and monolingual and cross-lingual word similarity, as well as on an extrinsic task: cross-lingual hypernym discovery. For both the monolingual and cross-lingual settings, we can notice that our models generally outperform the corresponding baselines. As can be seen in Table 1 , our refinement method consistently improves over the baselines (i.e., VecMap and MUSE) on all language pairs and metrics. First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations.
Question: What are the tasks that this method has shown improvements?
answer:
| bilingual dictionary induction, monolingual and cross-lingual word similarity, and cross-lingual hypernym discovery
| 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
Part 1. Definition
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Part 2. Example
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Answer: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Part 3. Exercise
The text and image encodings were combined by concatenation, which resulted in a feature vector of 4,864 dimensions. This multimodal representation was afterward fed as input into a multi-layer perceptron (MLP) with two hidden layer of 100 neurons with a ReLU activation function.
Question: Is the dataset multimodal?
Answer: | Yes | 7 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example input: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example output: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: In Section SECREF16, we first provide more details about the experimental setting that we followed. As explained in Section SECREF3, we used BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37 and WiBi BIBREF38, making it suitable to test our hypothesis in a general setting. To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. To tune the prior probability $\lambda _A$ for these categories, we hold out 10% from the training set as a validation set.
Question: What experiments they perform to demonstrate that their approach leads more accurate region based representations?
A: | To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. | 3 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Input: Consider Input: This step entails counting occurrences of all words in the training corpus and sorting them in order of decreasing occurrence. As mentioned, the vocabulary is taken to be the INLINEFORM0 most frequently occurring words, that occur at least some number INLINEFORM1 times. It is implemented in Spark as a straight-forward map-reduce job.
Question: Do they perform any morphological tokenization?
Output: No
Input: Consider Input: We collected tweets related to five different DDoS attacks on three different American banks. For each attack, all the tweets containing the bank's name posted from one week before the attack until the attack day were collected. There are in total 35214 tweets in the dataset. Only the tweets from the Bank of America attack on 09/19/2012 were used in this experiment. In this subsection we evaluate how good the model generalizes. To achieve that, the dataset is divided into two groups, one is about the attacks on Bank of America and the other group is about PNC and Wells Fargo. The only difference between this experiment and the experiment in section 4.4 is the dataset. In this experiment setting $D_a$ contains only the tweets collected on the days of attack on PNC and Wells Fargo. $D_b$ only contains the tweets collected before the Bank of America attack.
Question: What is the training and test data used?
Output: Tweets related to a Bank of America DDos attack were used as training data. The test datasets contain tweets related to attacks to Bank of America, PNC and Wells Fargo.
Input: Consider Input: We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences.
Question: What is the most common error type?
| Output: all annotators that a triple extraction was incorrect
| 2 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example input: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example output: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: In CBOW architecture the task is predicting the word given its context and in SG the task in predicting the context given the word. We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 .
Question: What is specific about the specific embeddings?
A: | predicting the word given its context | 3 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Ex Input:
We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports. The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists.
Question: How was annotation performed?
Ex Output:
Experienced medical doctors used a linguistic annotation tool to annotate entities.
Ex Input:
For the Russian language, with its rich morphology, lemmatizing the training and testing data for ELMo representations yields small but consistent improvements in the WSD task.
Question: What other examples of morphologically-rich languages do the authors give?
Ex Output:
Russian
Ex Input:
We extract data from the WMT'14 English-French (En-Fr) and English-German (En-De) datasets. To create a larger discrepancy between the tasks, so that there is a clear dataset size imbalance, the En-De data is artificially restricted to only 1 million parallel sentences, while the full En-Fr dataset, comprising almost 40 million parallel sentences, is used entirely.
Question: What datasets are used for experiments?
Ex Output:
| the WMT'14 English-French (En-Fr) and English-German (En-De) datasets.
| 1 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Input: Consider Input: In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy efficient way BIBREF8.
Question: How much training data from the non-English language is used by the system?
Output: No data. Pretrained model is used.
Input: Consider Input: Training and testing are done in alternating steps: In each epoch, for training, we first present to an LSTM network 1000 samples in a given language, which are generated according to a certain discrete probability distribution supported on a closed finite interval. We then freeze all the weights in our model, exhaustively enumerate all the sequences in the language by their lengths, and determine the first $k$ shortest sequences whose outputs the model produces inaccurately. experimented with 1, 2, 3, and 36 hidden units for $a^n b^n$ ; 2, 3, 4, and 36 hidden units for $a^n b^n c^n$ ; and 3, 4, 5, and 36 hidden units for $a^n b^n c^n d^n$ . Following the traditional approach adopted by BIBREF7 , BIBREF12 , BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\mathcal {V}^{(i)}$ of size $d$ , we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$ -dimensional binary vectors. The output values are $(d+1)$ -dimensional though, since they may further contain the termination symbol $\dashv $ , in addition to the symbols in $\mathcal {V}^{(i)}$ . The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence, therefore we instead use a $k$ -hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions.
Question: What training settings did they try?
Output: Training and testing are done in alternating steps: In each epoch, for training, we first present to an LSTM network 1000 samples in a given language, which are generated according to a certain discrete probability distribution supported on a closed finite interval. We then freeze all the weights in our model, exhaustively enumerate all the sequences in the language by their lengths, and determine the first $k$ shortest sequences whose outputs the model produces inaccurately. experimented with 1, 2, 3, and 36 hidden units for $a^n b^n$ ; 2, 3, 4, and 36 hidden units for $a^n b^n c^n$ ; and 3, 4, 5, and 36 hidden units for $a^n b^n c^n d^n$ . Following the traditional approach adopted by BIBREF7 , BIBREF12 , BIBREF9 and many other studies, we train our neural network as follows. At each time step, we present one input character to our model and then ask it to predict the set of next possible characters, based on the current character and the prior hidden states. Given a vocabulary $\mathcal {V}^{(i)}$ of size $d$ , we use a one-hot representation to encode the input values; therefore, all the input vectors are $d$ -dimensional binary vectors. The output values are $(d+1)$ -dimensional though, since they may further contain the termination symbol $\dashv $ , in addition to the symbols in $\mathcal {V}^{(i)}$ . The output values are not always one-hot encoded, because there can be multiple possibilities for the next character in the sequence, therefore we instead use a $k$ -hot representation to encode the output values. Our objective is to minimize the mean-squared error (MSE) of the sequence predictions.
Input: Consider Input: To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM)
Question: What models do they propose?
| Output: Feature Concatenation Model (FCM) Spatial Concatenation Model (SCM) Textual Kernels Model (TKM)
| 2 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
One example is below.
Q: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
A: No
Rationale: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: In this section, we describe our data collection for these 3 components. We first describe a series of pilot studies that we conducted in order to collect commonsense inference questions (Section SECREF4 ). In Section SECREF5 , we discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk (henceforth MTurk). Section SECREF17 gives information about some necessary postprocessing steps and the dataset validation.
Question: how was the data collected?
A: | The data was collected using 3 components: describe a series of pilot studies that were conducted to collect commonsense inference questions, then discuss the resulting data collection of questions, texts and answers via crowdsourcing on Amazon Mechanical Turk and gives information about some necessary postprocessing steps and the dataset validation. | 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Let me give you an example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
The answer to this example can be: No
Here is why: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
OK. solve this:
In this section we detail the discussions we use to test our metric and how we determine the ground truth (i.e. if the discussion is controversial or not). We use thirty different discussions that took place between March 2015 and June 2019, half of them with controversy and half without it. We considered discussions in four different languages: English, Portuguese, Spanish and French, occurring in five regions over the world: South and North America, Western Europe, Central and Southern Asia. We also studied these discussions taking first 140 characters and then 280 from each tweet to analyze the difference in performance and computing time wrt the length of the posts.
Question: How many languages do they experiment with?
Answer: | four different languages: English, Portuguese, Spanish and French | 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Input: Consider Input: Based on annotated corpora and token-based features, studies used machine learning approaches to build word segmentation systems with accuracy about 94%-97%.
Question: How successful are the approaches used to solve word segmentation in Vietnamese?
Output: Their accuracy in word segmentation is about 94%-97%.
Input: Consider Input: The best baseline model is NO-MOVE, reaching an accuracy of 30.3% on single sentences and 0.3 on complete paragraphs.
Question: How well did the baseline perform?
Output: accuracy of 30.3% on single sentences and 0.3 on complete paragraphs
Input: Consider Input: We assessed the proposed models on four different NLG domains: finding a restaurant, finding a hotel, buying a laptop, and buying a television. The Restaurant and Hotel were collected in BIBREF4 , while the Laptop and TV datasets have been released by BIBREF22 with a much larger input space but only one training example for each DA so that the system must learn partial realization of concepts and be able to recombine and apply them to unseen DAs.
Question: Does the model evaluated on NLG datasets or dialog datasets?
| Output: NLG datasets
| 2 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Q: We build on the state-of-the-art publicly available question answering system by docqa. The system extends BiDAF BIBREF4 with self-attention and performs well on document-level QA.
Question: What is the underlying question answering algorithm?
A: The system extends BiDAF BIBREF4 with self-attention
****
Q: We collect utterances from the $\mathbf {C}$hinese $\mathbf {A}$rtificial $\mathbf {I}$ntelligence $\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split by the distribution of intents, where detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, intent labels are partial to the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field.
Question: What is the source of the CAIS dataset?
A: the $\mathbf {C}$hinese $\mathbf {A}$rtificial $\mathbf {I}$ntelligence $\mathbf {S}$peakers (CAIS)
****
Q: We compare our model with the following state-of-the-art joint entity and relation extraction models:
(1) SPTree BIBREF4: This is an end-to-end neural entity and relation extraction model using sequence LSTM and Tree LSTM. (2) Tagging BIBREF5: This is a neural sequence tagging model which jointly extracts the entities and relations using an LSTM encoder and an LSTM decoder. (3) CopyR BIBREF6: This model uses an encoder-decoder approach for joint extraction of entities and relations. (4) HRL BIBREF11: This model uses a reinforcement learning (RL) algorithm with two levels of hierarchy for tuple extraction. (5) GraphR BIBREF14: This model considers each token in a sentence as a node in a graph, and edges connecting the nodes as relations between them. (6) N-gram Attention BIBREF9: This model uses an encoder-decoder approach with N-gram attention mechanism for knowledge-base completion using distantly supervised data.
Question: What is previous work authors reffer to?
A: | SPTree Tagging CopyR HRL GraphR N-gram Attention
****
| 4 | NIv2 | task460_qasper_answer_generation | fs_opt |
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
PROBLEM: For the Russian language, with its rich morphology, lemmatizing the training and testing data for ELMo representations yields small but consistent improvements in the WSD task.
Question: What other examples of morphologically-rich languages do the authors give?
SOLUTION: Russian
PROBLEM: Fig. FIGREF4 shows the distributions of seven question types grouped deterministically from the lexicons.
Question: How many question types do they find in the datasets analyzed?
SOLUTION: seven
PROBLEM: We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27.
Question: What benchmark datasets are used for the link prediction task?
SOLUTION: | WN18RR FB15k-237 YAGO3-10
| 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
instruction:
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
question:
We defined the reward as being 1 for successfully completing the task, and 0 otherwise. A discount of $0.95$ was used to incentivize the system to complete dialogs faster rather than slower, yielding return 0 for failed dialogs, and $G = 0.95^{T-1}$ for successful dialogs, where $T$ is the number of system turns in the dialog.
Question: What is the reward model for the reinforcement learning approach?
answer:
reward 1 for successfully completing the task, with a discount by the number of turns, and reward 0 when fail
question:
We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges).
Question: Did they use a relation extraction method to construct the edges in the graph?
answer:
No
question:
The training dataset contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; we further have 761 testing examples.
Question: what datasets were used?
answer:
| training dataset contains 2,815 examples 761 testing examples
| 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Let me give you an example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
The answer to this example can be: No
Here is why: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
OK. solve this:
We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33.
Question: What datasets were used for this work?
Answer: | TB-Dense MATRES | 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example Input: We make copies of the monolingual model for each language and add additional crosslingual latent variables (CLVs) to couple the monolingual models, capturing crosslingual semantic role patterns. Concretely, when training on parallel sentences, whenever the head words of the arguments are aligned, we add a CLV as a parent of the two corresponding role variables.
Question: Do they add one latent variable for each language pair in their Bayesian model?
Example Output: Yes
Example Input: Based on annotated corpora and token-based features, studies used machine learning approaches to build word segmentation systems with accuracy about 94%-97%.
Question: How successful are the approaches used to solve word segmentation in Vietnamese?
Example Output: Their accuracy in word segmentation is about 94%-97%.
Example Input: We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33.
Question: What datasets were used for this work?
Example Output: | TB-Dense MATRES
| 3 | NIv2 | task460_qasper_answer_generation | fs_opt |
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task.
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution: No
Why? Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
New input: We plotted the distribution of harassment incidents in each categorization dimension (Figure FIGREF19). It displays statistics that provide important evidence as to the scale of harassment and that can serve as the basis for more effective interventions to be developed by authorities ranging from advocacy organizations to policy makers. It provides evidence to support some commonly assumed factors about harassment: First, we demonstrate that harassment occurred more frequently during the night time than the day time. Second, it shows that besides unspecified strangers (not shown in the figure), conductors and drivers are top the list of identified types of harassers, followed by friends and relatives. Furthermore, we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between the single/multiple harasser(s) and location, and between age and single/multiple harasser(s) (Figure FIGREF20). We also found that the majority of young perpetrators engaged in harassment behaviors on the streets. These findings suggest that interventions with young men and boys, who are readily influenced by peers, might be most effective when education is done peer-to-peer. It also points to the locations where such efforts could be made, including both in schools and on the streets. In contrast, we found that adult perpetrators of sexual harassment are more likely to act alone. Most of the adult harassers engaged in harassment on public transportation. These differences in adult harassment activities and locations, mean that interventions should be responsive to these factors. For example, increasing the security measures on transit at key times and locations. In addition, we also found that the correlations between the forms of harassment with the age, single/multiple harasser, type of harasser, and location (Figure FIGREF21). 
For example, young harassers are more likely to engage in behaviors of verbal harassment, rather than physical harassment as compared to adults. It was a single perpetrator that engaged in touching or groping more often, rather than groups of perpetrators. In contrast, commenting happened more frequently when harassers were in groups. Last but not least, public transportation is where people got indecently touched most frequently both by fellow passengers and by conductors and drivers. The nature and location of the harassment are particularly significant in developing strategies for those who are harassed or who witness the harassment to respond and manage the everyday threat of harassment. For example, some strategies will work best on public transport, a particular closed, shared space setting, while other strategies might be more effective on the open space of the street.
Question: What patterns were discovered from the stories?
Solution: | we demonstrate that harassment occurred more frequently during the night time than the day time it shows that besides unspecified strangers (not shown in the figure), conductors and drivers are top the list of identified types of harassers, followed by friends and relatives we uncovered that there exist strong correlations between the age of perpetrators and the location of harassment, between the single/multiple harasser(s) and location, and between age and single/multiple harasser(s) We also found that the majority of young perpetrators engaged in harassment behaviors on the streets we found that adult perpetrators of sexual harassment are more likely to act alone we also found that the correlations between the forms of harassment with the age, single/multiple harasser, type of harasser, and location commenting happened more frequently when harassers were in groups. Last but not least, public transportation is where people got indecently touched most frequently both by fellow passengers and by conductors and drivers. | 0 | NIv2 | task460_qasper_answer_generation | fs_opt |
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task.
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution: No
Why? Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
New input: In test batch 4, our system (called FACTOIDS) achieved highest recall score of ‘0.7033’ but low precision of 0.1119, leaving open the question of how could we have better balanced the two measures.
Question: What was their highest recall score?
Solution: | 0.7033 | 0 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
One example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution is here: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Now, solve this: To collect a diverse training dataset, we have randomly sampled 1000 posts each from the subreddits politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage.
Question: what are the topics pulled from Reddit?
Solution: | politics, business, science, and AskReddit, and 1000 additional posts from the Reddit frontpage. | 6 | NIv2 | task460_qasper_answer_generation | fs_opt |
Detailed Instructions: In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
See one example below:
Problem: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP.
Question: What are method improvements of F1 for paraphrase identification?
Solution: | Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP | 4 | NIv2 | task460_qasper_answer_generation | fs_opt |
Part 1. Definition
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Part 2. Example
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Answer: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Part 3. Exercise
One-Vs-The-Rest strategy was adopted for the task of multi-class classification and I reported F-score, micro-F, macro-F and weighted-F scores using 10-fold cross-validation.
Question: What metrics are considered?
Answer: | F-score micro-F macro-F weighted-F | 7 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Let me give you an example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
The answer to this example can be: No
Here is why: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
OK. solve this:
Using sequence-to-sequence networks, the approach here is jointly training annotated utterance-level intents and slots/intent keywords by adding / tokens to the beginning/end of each utterance, with utterance-level intent-type as labels of such tokens. Our approach is an extension of BIBREF2 , in which only an term is added with intent-type tags associated to this sentence final token, both for LSTM and Bi-LSTM cases. However, we experimented with adding both and terms as Bi-LSTMs will be used for seq2seq learning, and we observed that slightly better results can be achieved by doing so. The idea behind is that, since this is a seq2seq learning problem, at the last time step (i.e., prediction at ) the reverse pass in Bi-LSTM would be incomplete (refer to Fig. FIGREF24 (a) to observe the last Bi-LSTM cell). Therefore, adding token and leveraging the backward LSTM output at first time step (i.e., prediction at ) would potentially help for joint seq2seq learning. An overall network architecture can be found in Fig. FIGREF30 for our joint models. We will report the experimental results on two variations (with and without intent keywords) as follows:
Joint-1: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots)
Joint-2: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots & intent keywords)
Question: What is shared in the joint model?
Answer: | jointly trained with slots | 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example solution: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender).
Question: What aspect of NLP research is examined?
| Solution: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender) | 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
--------
Question: The UNGA speeches dataset, compiled by Baturo et al. UNGAspeeches, contains the text from 7,507 speeches given between 1970-2015 inclusive.
Question: how many speeches are in the dataset?
Answer: 7,507
Question: Twitter data: We used the Twitter API to scrap tweets with hashtags. All non-English tweets were filtered out by the API.
Question: Do they evaluate only on English datasets?
Answer: Yes
Question: A dialog turn from one speaker may not only be a direct response to the other speaker's query, but also likely to be a continuation of his own previous statement. Thus, when modeling turn $k$ in a dialog, we propose to connect the last RNN state of turn $k-2$ directly to the starting RNN state of turn $k$ , instead of letting it to propagate through the RNN for turn $k-1$ .
Question: How long of dialog history is captured?
Answer: | two previous turns
| 7 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
--------
Question: We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions.
Question: Did they propose other metrics?
Answer: Yes
Question: To better analyze model generalization to an unseen, new domain as well as model leveraging the out-of-domain sources, we propose a new architecture which is an extension of the ARED model. In order to better select, aggregate and control the semantic information, a Refinement Adjustment LSTM-based component (RALSTM) is introduced to the decoder side. The proposed model can learn from unaligned data by jointly training the sentence planning and surface realization to produce natural language sentences.
Question: What is the difference of the proposed model with a standard RNN encoder-decoder?
Answer: Introduce a "Refinement Adjustment LSTM-based component" to the decoder
Question: Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question.
Question: Does the model incorporate coreference and entailment?
Answer: | As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution.
| 7 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
--------
Question: 45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier. These features can be grouped into three categories (See Table TABREF5 for complete list of features):
Sociodemographics: gender, age, marital status, etc.
Past medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc.
Information from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc.
The Current Admission feature group has the most number of features, with 29 features included in this group alone. These features can be further stratified into two groups: `structured' clinical features and `unstructured' clinical features.
Feature Extraction ::: Structured Features
Structure features are features that were identified on the EHR using regular expression matching and include rating scores that have been reported in the psychiatric literature as correlated with increased readmission risk, such as Global Assessment of Functioning, Insight and Compliance:
Global Assessment of Functioning (GAF): The psychosocial functioning of the patient ranging from 100 (extremely high functioning) to 1 (severely impaired) BIBREF13.
Insight: The degree to which the patient recognizes and accepts his/her illness (either Good, Fair or Poor).
Compliance: The ability of the patient to comply with medication and to follow medical advice (either Yes, Partial, or None).
These features are widely-used in clinical practice and evaluate the general state and prognosis of the patient during the patient's evaluation.
Feature Extraction ::: Unstructured Features
Unstructured features aim to capture the state of the patient in relation to seven risk factor domains (Appearance, Thought Process, Thought Content, Interpersonal, Substance Use, Occupation, and Mood) from the free-text narratives on the EHR. These seven domains have been identified as associated with readmission risk in prior work BIBREF14.
These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e. sentiment scores that evaluate the patient’s psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domain.
Question: What features are used?
Answer: Sociodemographics: gender, age, marital status, etc. Past medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc. Information from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc.
Question: For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants and available online. To assess the correctness of the answers given both by participants in the DQA experiments, and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given.
Question: Do they test performance of their approaches using human judgements?
Answer: Yes
Question: Active learning methods can generally be described into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training of classification problems. The selection engine is based on the sampling strategy, which chooses samples that need to be relabeled by annotators from unlabeled data. Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, a CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively.
Question: How does the active learning model work?
Answer: | Active learning methods has a learning engine (mainly used for training of classification problems) and the selection engine (which chooses samples that need to be relabeled by annotators from unlabeled data). Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving the accuracy of the classifier. In this paper, CRF-based segmenter and a scoring model are employed as learning engine and selection engine, respectively.
| 7 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
[Q]: In fact, realizing the necessity of large text corpus for Sindhi, we started this research by collecting raw corpus from multiple web resources using the web-scrappy framework for extraction of news columns of daily Kawish and Awami Awaz Sindhi newspapers, Wikipedia dumps, short stories and sports news from Wichaar social blog, news from Focus Word press blog, historical writings, novels, stories, books from Sindh Salamat literary websites, novels, history and religious books from Sindhi Adabi Board and tweets regarding news and sports are collected from twitter.
Question: How is the data collected, which web resources were used?
[A]: daily Kawish and Awami Awaz Sindhi newspapers Wikipedia dumps short stories and sports news from Wichaar social blog news from Focus Word press blog historical writings, novels, stories, books from Sindh Salamat literary website novels, history and religious books from Sindhi Adabi Board tweets regarding news and sports are collected from twitter
[Q]: Building on these results, we take a somewhat more systematic approach to looking for interpretable hidden state dimensions, by using decision trees to predict individual hidden state dimensions (Figure 2 ). We visualize the overall dynamics of the hidden states by coloring the training data with the k-means clusters on the state vectors (Figures 3 , 3 ). In Figures 3 and 3 , we color-code the training data with the 10 HMM states. In Figures 3 and 3 , we apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters.
Question: Which methods do the authors use to reach the conclusion that LSTMs and HMMs learn complementary information?
[A]: decision trees to predict individual hidden state dimensions apply k-means clustering to the LSTM state vectors, and color-code the training data with the clusters
[Q]: Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories.
Question: How was the dataset collected?
[A]: | Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. Goal Generation: a multi-domain goal generator was designed based on the database. The relation across domains is captured in two ways. One is to constrain two targets that locate near each other. The other is to use a taxi or metro to commute between two targets in HAR domains mentioned in the context. Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states. Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories.
| 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example solution: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: Logistic regression: To produce the representation of the input, we concatenate the Bag-Of-Words representation of the document with the Bag-Of-Words representation of the question. LSTM: We start with a concatenation of the sequence of indexes of the document with the sequence of indexes of the question. Them we feed an LSTM network with this vector and use the final state as the representation of the input. Finally, we apply a logistic regression over this representation to produce the final decision. End-to-end memory networks: This architecture is based on two different memory cells (input and output) that contain a representation of the document. Deep projective reader: This is a model of our own design, largely inspired by the efficient R-net reader BIBREF12 .
Question: What baselines are presented?
| Solution: Logistic regression LSTM End-to-end memory networks Deep projective reader | 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
Teacher: In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Teacher: Now, understand the problem? If you are still confused, see the following example:
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution: No
Reason: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Now, solve this instance: The CRF module achieved the best result on the Thai sentence segmentation task BIBREF8 ; therefore, we adopt the Bi-LSTM-CRF model as our baseline.
Question: Which deep learning architecture do they use for sentence segmentation?
Student: | Bi-LSTM-CRF | 2 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
[EX Q]: The only remaining question is what makes two environments similar enough to infer the existence of a common category. There is, again, a large literature on this question (including the aforementioned language modeling, unsupervised parsing, and alignment work), but in the current work we will make use of a very simple criterion: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same.
Question: How do they determine similar environments for fragments in their data augmentation scheme?
[EX A]: fragments are interchangeable if they occur in at least one lexical environment that is exactly the same
[EX Q]: Because of this, we do not claim that this dataset can be considered a ground truth.
Question: How is the ground truth for fake news established?
[EX A]: Ground truth is not established in the paper
[EX Q]: To understand the reason behind the drop in performance, first we need to understand how BERT processes input text data. BERT uses WordPiece tokenizer to tokenize the text. WordPiece tokenizer tokenizes utterances based on the longest prefix matching algorithm to generate tokens. The tokens thus obtained are fed as input of the BERT model. When it comes to tokenizing noisy data, we see a very interesting behaviour from WordPiece tokenizer. Owing to the spelling mistakes, these words are not directly found in BERT’s dictionary. Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance.
Question: What is the reason behind the drop in performance using BERT for some popular task?
[EX A]: Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance.
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example Input: In Section SECREF16, we first provide more details about the experimental setting that we followed. As explained in Section SECREF3, we used BabelNet BIBREF29 as our reference taxonomy. BabelNet is a large-scale full-fledged taxonomy consisting of heterogeneous sources such as WordNet BIBREF36, Wikidata BIBREF37 and WiBi BIBREF38, making it suitable to test our hypothesis in a general setting. To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing. To tune the prior probability $\lambda _A$ for these categories, we hold out 10% from the training set as a validation set.
Question: What experiments they perform to demonstrate that their approach leads more accurate region based representations?
Example Output: To test our proposed category induction model, we consider all BabelNet categories with fewer than 50 known instances. This is motivated by the view that conceptual neighborhood is mostly useful in cases where the number of known instances is small. For each of these categories, we split the set of known instances into 90% for training and 10% for testing.
Example Input: The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.
Question: What is the size of this dataset?
Example Output: 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words
Example Input: The first dataset is an expanded version of the annotated Wikipedia conversations dataset from BIBREF9. This dataset uses carefully-controlled crowdsourced labels, strictly filtered to ensure the conversations are civil up to the moment of a personal attack. This is a useful property for the purposes of model analysis, and hence we focus on this as our primary dataset. However, we are conscious of the possibility that these strict labels may not fully capture the kind of behavior that moderators care about in practice. We therefore introduce a secondary dataset, constructed from the subreddit ChangeMyView (CMV) that does not use post-hoc annotations. Instead, the prediction task is to forecast whether the conversation will be subject to moderator action in the future. Wikipedia data. BIBREF9's `Conversations Gone Awry' dataset consists of 1,270 conversations that took place between Wikipedia editors on publicly accessible talk pages. The conversations are sourced from the WikiConv dataset BIBREF59 and labeled by crowdworkers as either containing a personal attack from within (i.e., hostile behavior by one user in the conversation directed towards another) or remaining civil throughout. Reddit CMV data. The CMV dataset is constructed from conversations collected via the Reddit API. In contrast to the Wikipedia-based dataset, we explicitly avoid the use of post-hoc annotation. Instead, we use as our label whether a conversation eventually had a comment removed by a moderator for violation of Rule 2: “Don't be rude or hostile to other users”.
Question: What labels for antisocial events are available in datasets?
Example Output: The Conversations Gone Awry dataset is labelled as either containing a personal attack from within (i.e. hostile behavior by one user in the conversation directed towards another) or remaining civil throughout. The Reddit Change My View dataset is labelled with whether or not a conversation eventually had a comment removed by a moderator for violation of Rule 2: "Don't be rude or hostile to other users."
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example solution: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer.
Question: What human evaluation metrics were used in the paper?
Solution: rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example Input: We use Gaussian Processes as this probabilistic kernelised framework avoids the need for expensive cross-validation for hyperparameter selection
Question: Why is a Gaussian process an especially appropriate method for this classification problem?
Example Output: avoids the need for expensive cross-validation for hyperparameter selection
Example Input: More than 2,100 texts were paired with 15 questions each, resulting in a total number of approx. 32,000 annotated questions. For 13% of the questions, the workers did not agree on one of the 4 categories with a 3 out of 5 majority, so we did not include these questions in our dataset.
The distribution of category labels on the remaining 87% is shown in Table TABREF10 . 14,074 (52%) questions could be answered. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). After removing 135 questions during the validation, the final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%). This ratio was manually verified based on a random sample of questions.
Question: what dataset statistics are provided?
Example Output: More than 2,100 texts were paired with 15 questions each, resulting in a total number of approx. 32,000 annotated questions. 13% of the questions are not answerable. Out of the answerable questions, 10,160 could be answered from the text directly (text-based) and 3,914 questions required the use of commonsense knowledge (script-based). The final dataset comprises 13,939 questions, 3,827 of which require commonsense knowledge (i.e. 27.4%).
Example Input: For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer.
Question: What human evaluation metrics were used in the paper?
Example Output: rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Input: Consider Input: Motivated by this, we introduce resolution mode variables $\Pi = \lbrace \pi _1, \ldots , \pi _n\rbrace $ , where for each mention $j$ the variable $\pi _j \in \lbrace str, prec, attr\rbrace $ indicates in which mode the mention should be resolved. In our model, we define three resolution modes — string-matching (str), precise-construct (prec), and attribute-matching (attr) — and $\Pi $ is deterministic when $D$ is given (i.e. $P(\Pi |D)$ is a point distribution). We determine $\pi _j$ for each mention $m_j$ in the following way:
$\pi _j = str$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the String Match sieve, the Relaxed String Match sieve, or the Strict Head Match A sieve in the Stanford multi-sieve system BIBREF1 .
$\pi _j = prec$ , if there exists a mention $m_i, i < j$ such that the two mentions satisfy the Speaker Identification sieve, or the Precise Constructs sieve.
$\pi _j = attr$ , if no mention $m_i, i < j$ satisfies the above two conditions.
Question: Are resolution mode variables hand crafted?
Output: No
Input: Consider Input: Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Given a news Twitter account, we read its tweets from the account's timeline. Then we sort the tweets by the posting date in ascending way and we split them into $N$ chunks. Each chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account.
Question: How is a "chunk of posts" defined in this work?
Output: chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account
Input: Consider Input: The average classification accuracy results are summarised in Table TABREF9.
Question: What evaluation metric is used?
Output: average classification accuracy
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example input: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example output: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: The average classification accuracy results are summarised in Table TABREF9.
Question: What evaluation metric is used?
A: average classification accuracy
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example input: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example output: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: Sentiment: For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets.
Question: How is sentiment polarity measured?
A: For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Example Input: Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT.
Question: How is their model different from BERT?
Example Output: overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer.
Example Input: Contexts are either ground-truth contexts from that dataset, or they are Wikipedia passages retrieved using TF-IDF BIBREF24 based on a HotpotQA question. A new non-overlapping set of contexts was again constructed from Wikipedia via HotpotQA using the same method as Round 1. In addition to contexts from Wikipedia for Round 3, we also included contexts from the following domains: News (extracted from Common Crawl), fiction (extracted from BIBREF27, and BIBREF28), formal spoken text (excerpted from court and presidential debate transcripts in the Manually Annotated Sub-Corpus (MASC) of the Open American National Corpus), and causal or procedural text, which describes sequences of events or actions, extracted from WikiHow. Finally, we also collected annotations using the longer contexts present in the GLUE RTE training data, which came from the RTE5 dataset BIBREF29.
Question: What data sources do they use for creating their dataset?
Example Output: Wikipedia (of 250-600 characters) from the manually curated HotpotQA training set Manually Annotated Sub-Corpus (MASC) of the Open American National Corpus) RTE5
Example Input: In this part, we discuss three significant limitations of BLEU and ROUGE. These metrics can assign: High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries.
Challenges with BLEU and ROUGE ::: High score, opposite meanings
Suppose that we have a reference summary s1. By adding a few negation terms to s1, one can create a summary s2 which is semantically opposite to s1 but yet has a high BLEU/ROUGE score.
Challenges with BLEU and ROUGE ::: Low score, similar meanings
In addition to not being sensitive to negation, BLEU and ROUGE scores can give low scores to sentences with equivalent meaning. If s2 is a paraphrase of s1, the meaning will be the same; however, the overlap between words in s1 and s2 will not necessarily be significant.
Challenges with BLEU and ROUGE ::: High score, unintelligible sentences
A third weakness of BLEU and ROUGE is that in their simplest implementations, they are insensitive to word permutation and can give very high scores to unintelligible sentences. Let s1 be "On a morning, I saw a man running in the street." and s2 be “On morning a, I saw the running a man street”. s2 is not an intelligible sentence. The unigram version of ROUGE and BLEU will give these 2 sentences a score of 1.
Question: What are the three limitations?
Example Output: High scores to semantically opposite translations/summaries, Low scores to semantically related translations/summaries and High scores to unintelligible translations/summaries.
Part 1. Definition
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Part 2. Example
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Answer: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Part 3. Exercise
The two popular segmentation methods are morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5. After word segmentation, we additionally add a specific symbol behind each separated subword unit, which aims to assist the NMT model to identify the morpheme boundaries and capture the semantic information effectively. We utilize the Zemberek with a morphological disambiguation tool to segment the Turkish words into morpheme units, and utilize the morphology analysis tool BIBREF12 to segment the Uyghur words into morpheme units.
Question: How does the word segmentation method work?
Answer: morpheme segmentation BIBREF4 and Byte Pair Encoding (BPE) BIBREF5 Zemberek BIBREF12
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
Let me give you an example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
The answer to this example can be: No
Here is why: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
OK. solve this:
The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.
Question: What is the size of this dataset?
Answer: 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
PROBLEM: To evaluate the usefulness of our corpus for SMT purposes, we used it to train an automatic translator with Moses BIBREF8 .
Question: What SMT models did they look at?
SOLUTION: automatic translator with Moses
PROBLEM: We collect three years of online news articles from June 2016 to June 2019.
Question: What unlabeled corpus did they use?
SOLUTION: three years of online news articles from June 2016 to June 2019
PROBLEM: We used state-of-the-art features that have been shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words).
Question: What text-based features are used?
SOLUTION: language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words)
Detailed Instructions: In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
See one example below:
Problem: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: Based on the resulting 1.8 million lists of about 169,000 distinct user ids, we compute a topic model with INLINEFORM0 topics using Latent Dirichlet Allocation BIBREF3 . For each of the user ids, we extract the most probable topic from the inferred user id-topic distribution as cluster id. This results in a thematic cluster id for most of the user ids in our background corpus grouping together accounts such as American or German political actors, musicians, media websites or sports clubs (see Table TABREF17 ).
Question: What topic clusters are identified by LDA?
Solution: Clusters of Twitter user ids from accounts of American or German political actors, musicians, media websites or sports clubs
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
PROBLEM: In this paper, we first highlight the importance of TV and radio broadcast as a source of data for ASR, and the potential impact it can have. We then perform a statistical analysis of gender representation in a data set composed of four state-of-the-art corpora of French broadcast, widely used within the speech community. Finally we question the impact of such a representation on the systems developed on this data, through the perspective of an ASR system.
Question: What tasks did they use to evaluate performance for male and female speakers?
SOLUTION: ASR
PROBLEM: Our model differs by learning the subword vectors and resulting representation jointly as weighted factorization of a word-context co-occurrence matrix is performed.
Question: Which matrix factorization methods do they use?
SOLUTION: weighted factorization of a word-context co-occurrence matrix
PROBLEM: The input of our model are the words in the input text $x[1], ... , x[n]$ and query $q[1], ... , q[n]$ . We concatenate pre-trained word embeddings from GloVe BIBREF40 and character embeddings trained by CharCNN BIBREF41 to represent input words. The $2d$ -dimension embedding vectors of input text $x_1, ... , x_n$ and query $q_1, ... , q_n$ are then fed into a Highway Layer BIBREF42 to improve the capability of word embeddings and character embeddings as
$$\begin{split} g_t &= {\rm sigmoid}(W_gx_t+b_g) \\ s_t &= {\rm relu } (W_xx_t+b_x) \\ u_t &= g_t \odot s_t + (1 - g_t) \odot x_t~. \end{split}$$ (Eq. 18) The same Highway Layer is applied to $q_t$ and produces $v_t$ . Next, $u_t$ and $v_t$ are fed into a Bi-Directional Long Short-Term Memory Network (BiLSTM) BIBREF44 respectively in order to model the temporal interactions between sequence words: Then we feed $\mathbf {U}$ and $\mathbf {V}$ into the attention flow layer BIBREF27 to model the interactions between the input text and query. Therefore, we introduce Self-Matching Layer BIBREF29 in our model as
$$\begin{split} o_t &= {\rm BiLSTM}(o_{t-1}, [h_t, c_t]) \\ s_j^t &= w^T {\rm tanh}(W_hh_j+\tilde{W_h}h_t)\\ \alpha _i^t &= {\rm exp}(s_i^t)/\Sigma _{j=1}^n{\rm exp}(s_j^t)\\ c_t &= \Sigma _{i=1}^n\alpha _i^th_i ~. \end{split}$$ (Eq. 20) Finally we feed the embeddings $\mathbf {O} = [o_1, ... , o_n]$ into a Pointer Network BIBREF39 to decode the answer sequence as
$$\begin{split} p_t &= {\rm LSTM}(p_{t-1}, c_t) \\ s_j^t &= w^T {\rm tanh}(W_oo_j+W_pp_{t-1})\\ \beta _i^t &= {\rm exp}(s_i^t)/\Sigma _{j=1}^n{\rm exp}(s_j^t)\\ c_t &= \Sigma _{i=1}^n\beta _i^to_i~. \end{split}$$ (Eq. 21) Therefore, the probability of generating the answer sequence $\textbf {a}$ is as follows
$${\rm P}(\textbf {a}|\mathbf {O}) = \prod _t {\rm P}(a^t | a^1, ... , a^{t-1}, \mathbf {O})~.$$ (Eq. 23)
Question: What QA models were used?
SOLUTION: A pointer network decodes the answer from a bidirectional LSTM with attention flow layer and self-matching layer, whose inputs come from word and character embeddings of the query and input text fed through a highway layer.
instruction:
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
question:
SVM: We define 3 sets of features to characterize each question. The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable. Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines. Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline.
Question: Were other baselines tested to compare with the neural baseline?
answer:
SVM No-Answer Baseline (NA) Word Count Baseline Human Performance
question:
We used state-of-the-art features that have been shown to be useful in ID: some of them are language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) while others are language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words).
Question: What text-based features are used?
answer:
language-independent (e.g., punctuation marks, positive and negative emoticons, quotations, personal pronouns, tweet's length, named entities) language-dependent relying on dedicated lexicons (e.g., negation, opinion lexicons, opposition words)
question:
We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. iMRC: Making MRC Interactive ::: Evaluation Metric
Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance .
Question: What are the models evaluated on?
answer:
They evaluate F1 score and agent's test performance on their own built interactive datasets (iSQuAD and iNewsQA)
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
One example is below.
Q: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
A: No
Rationale: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'.
Question: How do they extract causality from text?
A: They identify documents that contain the unigrams 'caused', 'causing', or 'causes'
In this task, you will be presented with a context from an academic paper and a question separated with a newline. You have to answer the question based on the context.
[Q]: We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports. The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists.
Question: How was annotation performed?
[A]: Experienced medical doctors used a linguistic annotation tool to annotate entities.
[Q]: Above all, we introduce two common baselines. The first one just selects the leading sentences to form a summary. It is often used as an official baseline of DUC, and we name it “LEAD”. The other system is called “QUERY_SIM”, which directly ranks sentences according to its TF-IDF cosine similarity to the query. In addition, we implement two popular extractive query-focused summarization methods, called MultiMR BIBREF2 and SVR BIBREF20 . Since our model is totally data-driven, we introduce a recent summarization system DocEmb BIBREF9 that also just use deep neural network features to rank sentences. To verify the effectiveness of the joint model, we design a baseline called ISOLATION, which performs saliency ranking and relevance ranking in isolation.
Question: What models do they compare to?
[A]: LEAD QUERY_SIM MultiMR SVR DocEmb ISOLATION
[Q]: Causal documents were chosen to contain one occurrence only of the exact unigrams: `caused', `causing', or `causes'.
Question: How do they extract causality from text?
[A]: | They identify documents that contain the unigrams 'caused', 'causing', or 'causes'
| 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
Given the task definition, example input & output, solve the new input case.
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Output: No
Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
New input case for you: We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 .
Question: what are the evaluation datasets?
Output: | CoNLL 2003 CoNLL 2000 | 1 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example Input: While the performances of a purely content-based model naturally stays stable, the performance of the other systems decrease notably – they perform worse than the content-based model.
Question: How do they demonstrate the robustness of their results?
Example Output: performances of a purely content-based model naturally stays stable
Example Input: We carried out a reliability study for the proposed scheme using two pairs of expert annotators, P1 and P2.
Question: do they use a crowdsourcing platform?
Example Output: No
Example Input: The evaluation results are quite favorable for both targets and particularly higher for Target-1, considering the fact that they are the initial experiments on the data set.
Question: Which SVM approach resulted in the best performance?
Example Output: | Target-1
| 3 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Let me give you an example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
The answer to this example can be: No
Here is why: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
OK. solve this:
We test the character and word-level variants by predicting hashtags for a held-out test set of posts.
Question: What other tasks do they test their method on?
Answer: | None | 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Ex Input:
This step consists of generating a query out of the claim and querying a search engine (here, we experiment with Google and Bing) in order to retrieve supporting documents. Rather than querying the search engine with the full claim (as on average, a claim is two sentences long), we generate a shorter query following the lessons highlighted in BIBREF0 . We rank the words by means of tf-idf. We compute the idf values on a 2015 Wikipedia dump and the English Gigaword. BIBREF0 suggested that a good way to perform high-quality search is to only consider the verbs, the nouns and the adjectives in the claim; thus, we exclude all words in the claim that belong to other parts of speech. Moreover, claims often contain named entities (e.g., names of persons, locations, and organizations); hence, we augment the initial query with all the named entities from the claim's text. We use IBM's AlchemyAPI to identify named entities. Ultimately, we generate queries of 5–10 tokens, which we execute against a search engine. We then collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable.
Question: How are the potentially relevant text fragments identified?
Ex Output:
Generate a query out of the claim and querying a search engine, rank the words by means of TF-IDF, use IBM's AlchemyAPI to identify named entities, generate queries of 5–10 tokens, which execute against a search engine, and collect the snippets and the URLs in the results, skipping any result that points to a domain that is considered unreliable.
Ex Input:
In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user.
Question: Do the authors equate drunk tweeting with drunk texting?
Ex Output:
Yes
Ex Input:
We obtain word vectors of size 300 from the learned word embeddings. To represent a Twitter profile, we retrieve word vectors for all the words that appear in a particular profile including the words appear in tweets, profile description, words extracted from emoji, cover and profile images converted to textual formats, and words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline. Those word vectors are combined to compute the final feature vector for the Twitter profile.
Question: How is YouTube content translated into a vector format?
Ex Output:
| words extracted from YouTube video comments and descriptions for all YouTube videos shared in the user's timeline
| 1 | NIv2 | task460_qasper_answer_generation | fs_opt |
instruction:
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
question:
Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”). If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences.
Question: How was the dataset annotated?
answer:
True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable) why they are unsure from two choices (“Not stated in the article” or “Other”) The “summary” text boxes
question:
EmotionLines BIBREF6 is a dialogue dataset composed of two subsets, Friends and EmotionPush, according to the source of the dialogues.
Question: what datasets were used?
answer:
Friends EmotionPush
question:
For the task-oriented system, although there are some objective evaluation metrics, such as the number of turns in a dialogue, the ratio of task completion, etc., there is no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue.
Question: What problems are found with the evaluation scheme?
answer:
| no gold standard for automatically evaluating two (or more) dialogue systems when considering the satisfaction of the human and the fluency of the generated dialogue
| 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
PROBLEM: Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work.
Question: Do the authors mention any confounds to their study?
SOLUTION: No
PROBLEM: (2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedure, such as Gibbs sampling used in LEM and DPEMM, AEM could extract the events more efficiently due to the CUDA acceleration;
Question: What alternative to Gibbs sampling is used?
SOLUTION: generator network to capture the event-related patterns
PROBLEM: We test the character and word-level variants by predicting hashtags for a held-out test set of posts.
Question: What other tasks do they test their method on?
SOLUTION: | None
| 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
PROBLEM: The second Turkish dataset is the Twitter corpus which is formed of tweets about Turkish mobile network operators. Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive. These tweets are manually annotated by two humans, where the labels are either positive or negative.
Question: What details are given about the Twitter dataset?
SOLUTION: Those tweets are mostly much noisier and shorter compared to the reviews in the movie corpus. In total, there are 1,716 tweets. 973 of them are negative and 743 of them are positive.
PROBLEM: The ratio of correct `translations' (matches) was used as an evaluation measure.
Question: What evaluation metric do they use?
SOLUTION: Accuracy
PROBLEM: Despite the focus on sharing datasets and source codes on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups.
Question: Are datasets publicly available?
SOLUTION: | Yes
| 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
One example is below.
Q: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
A: No
Rationale: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: mcdonald:11 established that, when no treebank annotations are available in the target language, training on multiple source languages outperforms training on one (i.e., multi-source model transfer outperforms single-source model transfer). Following guo:16, for each target language, we train the parser on six other languages in the Google universal dependency treebanks version 2.0 (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse POS tags.
Question: How does the model work if no treebank is available?
A: | train the parser on six other languages in the Google universal dependency treebanks version 2.0 (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse POS tags | 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
PROBLEM: Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”). If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences.
Question: How was the dataset annotated?
SOLUTION: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable) why they are unsure from two choices (“Not stated in the article” or “Other”) The “summary” text boxes
PROBLEM: To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level. We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it. Fused Bifocal Attention Mechanism
Intuitively, when a human writes a description from a table she keeps track of information at two levels. At the macro level, it is important to decide which is the appropriate field to attend to next and at a micro level (i.e., within a field) it is important to know which values to attend to next. To capture this behavior, we use a bifocal attention mechanism as described below. Fused Attention: Intuitively, the attention weights assigned to a field should have an influence on all the values belonging to the particular field. To ensure this, we reweigh the micro level attention weights based on the corresponding macro level attention weights. In other words, we fuse the attention weights at the two levels as: DISPLAYFORM0
where INLINEFORM0 is the field corresponding to the INLINEFORM1 -th value, INLINEFORM2 is the macro level context vector.
Question: What is a bifocal attention mechanism?
SOLUTION: At the macro level, it is important to decide which is the appropriate field to attend to next micro level (i.e., within a field) it is important to know which values to attend to next fuse the attention weights at the two levels
PROBLEM: We adopt the annotation machinery of BIBREF5 implemented using Amazon's Mechanical Turk, and annotate each predicate by 2 trained workers independently, while a third consolidates their annotations into a final set of roles and arguments.
Question: How was the previous dataset annotated?
SOLUTION: | the annotation machinery of BIBREF5
| 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
Teacher: In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Teacher: Now, understand the problem? If you are still confused, see the following example:
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution: No
Reason: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Now, solve this instance: Two-talker overlapped speech is artificially generated by mixing these waveform segments. To maximize the speech overlap, we developed a procedure to mix similarly sized segments at around 0dB. First, we sort the speech segments by length. Then, we take segments in pairs, zero-padding the shorter segment so both have the same length. These pairs are then mixed together to create the overlapped speech data.
Question: How are the two datasets artificially overlapped?
Student: | we sort the speech segments by length we take segments in pairs, zero-padding the shorter segment so both have the same length These pairs are then mixed together | 2 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
One example is below.
Q: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
A: No
Rationale: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.
Hasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set using distances in the visual feature space.
Impatient Reader BIBREF19 is a simple neural model that takes its name from the fact that it repeatedly computes attention over the recipe after observing each image in the query.
BiDAF BIBREF14 is a strong reading comprehension model that employs a bi-directional attention flow mechanism to obtain a question-aware representation and bases its predictions on this representation. Originally, it is a span-selection model from the input context. BiDAF w/ static memory is an extended version of the BiDAF model which resembles our proposed PRN model in that it includes a memory unit for the entities.
Question: What are previously reported models?
A: | Hasty Student Impatient Reader BiDAF BiDAF w/ static memory | 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
Given the task definition, example input & output, solve the new input case.
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Output: No
Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
New input case for you: GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other. Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive.
Question: How do they define robustness of a model?
Output: | ability to accurately classify texts even when the amount of prior knowledge for different classes is unbalanced, and when the class distribution of the dataset is unbalanced | 1 | NIv2 | task460_qasper_answer_generation | fs_opt |
TASK DEFINITION: In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
PROBLEM: We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users.
Question: What is the source of the user interaction data?
SOLUTION: Sociability from ego-network on Twitter
PROBLEM: To better analyze model generalization to an unseen, new domain as well as model leveraging the out-of-domain sources, we propose a new architecture which is an extension of the ARED model. In order to better select, aggregate and control the semantic information, a Refinement Adjustment LSTM-based component (RALSTM) is introduced to the decoder side. The proposed model can learn from unaligned data by jointly training the sentence planning and surface realization to produce natural language sentences.
Question: What is the difference of the proposed model with a standard RNN encoder-decoder?
SOLUTION: Introduce a "Refinement Adjustment LSTM-based component" to the decoder
PROBLEM: GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other. Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive.
Question: How do they define robustness of a model?
SOLUTION: | ability to accurately classify texts even when the amount of prior knowledge for different classes is unbalanced, and when the class distribution of the dataset is unbalanced
| 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example input: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example output: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: As shown in Figure FIGREF7 , KAR is an end-to-end MRC model consisting of five layers:
Lexicon Embedding Layer. This layer maps the words to the lexicon embeddings. Context Embedding Layer. This layer maps the lexicon embeddings to the context embeddings. Coarse Memory Layer. This layer maps the context embeddings to the coarse memories. Refined Memory Layer. This layer maps the coarse memories to the refined memories. Answer Span Prediction Layer. This layer predicts the answer start position and the answer end position based on the above layers.
Question: What type of model is KAR?
A: | Lexicon Embedding Layer Context Embedding Layer Coarse Memory Layer Refined Memory Layer Answer Span Prediction Layer | 3 | NIv2 | task460_qasper_answer_generation | fs_opt |
Given the task definition, example input & output, solve the new input case.
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Output: No
Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
New input case for you: The word intrusion test is expensive to apply since it requires manual evaluations by human observers separately for each embedding dimension. Furthermore, the word intrusion test does not quantify the interpretability levels of the embedding dimensions, instead it yields a binary decision as to whether a dimension is interpretable or not. However, using continuous values is more adequate than making binary evaluations since interpretability levels may vary gradually across dimensions.
Question: What advantages does their proposed method of quantifying interpretability have over the human-in-the-loop evaluation they compare to?
Output: | it is less expensive and quantifies interpretability using continuous values rather than binary evaluations | 1 | NIv2 | task460_qasper_answer_generation | fs_opt |
Teacher: In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Teacher: Now, understand the problem? If you are still confused, see the following example:
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution: No
Reason: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Now, solve this instance: Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency
We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs.
Question: How is fluency automatically evaluated?
Student: | fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs | 2 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example solution: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Problem: Drawing on the concept of variance in mathematics, local variance loss is defined as the reciprocal of its variance expecting the attention model to be able to focus on more salient parts. The standard variance calculation is based on the mean of the distribution. However, as previous work BIBREF15, BIBREF16 mentioned that the median value is more robust to outliers than the mean value, we use the median value to calculate the variance of the attention distribution. Thus, local variance loss can be calculated as:
where $\hat{\cdot }$ is a median operator and $\epsilon $ is utilized to avoid zero in the denominator.
Question: How do they define local variance?
| Solution: The reciprocal of the variance of the attention distribution | 5 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Input: Consider Input: Three different datasets have been used to train our models: the Toronto book corpus, Wikipedia sentences and tweets. Our Sent2Vec models also on average outperform or are at par with the C-PHRASE model, despite significantly lagging behind on the STS 2014 WordNet and News subtasks. This observation can be attributed to the fact that a big chunk of the data that the C-PHRASE model is trained on comes from English Wikipedia, helping it to perform well on datasets involving definition and news items.
Question: Do they report results only on English data?
Output: Yes
Input: Consider Input: Our method is purely text-based, and ignores the publication date and the source of the article. It combines task-specific embeddings, produced by a two-level attention-based deep neural network model, with manually crafted features (stylometric, lexical, grammatical, and semantic), into a kernel-based SVM classifier.
Question: what types of features were used?
Output: stylometric, lexical, grammatical, and semantic
Input: Consider Input: We have implemented the following interfaces for Macaw:
[leftmargin=*]
File IO: This interface is designed for experimental purposes, such as evaluating the performance of a conversational search technique on a dataset with multiple queries. This is not an interactive interface.
Standard IO: This interactive command line interface is designed for development purposes to interact with the system, see the logs, and debug or improve the system.
Telegram: This interactive interface is designed for interaction with real users (see FIGREF4). Telegram is a popular instant messaging service whose client-side code is open-source. We have implemented a Telegram bot that can be used with different devices (personal computers, tablets, and mobile phones) and different operating systems (Android, iOS, Linux, Mac OS, and Windows). This interface allows multi-modal interactions (text, speech, click, image). It can be also used for speech-only interactions. For speech recognition and generation, Macaw relies on online APIs, e.g., the services provided by Google Cloud and Microsoft Azure. In addition, there exist multiple popular groups and channels in Telegram, which allows further integration of social networks with conversational systems. For example, see the Naseri and Zamani's study on news popularity in Telegram BIBREF12.
Question: What interface does Macaw currently have?
| Output: File IO Standard IO Telegram
| 2 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Input: Consider Input: KGR10, also known as plWordNet Corpus 10.0 (PLWNC 10.0), is the result of the work on the toolchain to automatic acquisition and extraction of the website content, called CorpoGrabber BIBREF19 . It is a pipeline of tools to get the most relevant content of the website, including all subsites (up to the user-defined depth). The proposed toolchain can be used to build a big Web corpus of text documents. It requires the list of the root websites as the input. Tools composing CorpoGrabber are adapted to Polish, but most subtasks are language independent. The whole process can be run in parallel on a single machine and includes the following tasks: download of the HTML subpages of each input page URL with HTTrack, extraction of plain text from each subpage by removing boilerplate content (such as navigation links, headers, footers, advertisements from HTML pages) BIBREF20 , deduplication of plain text BIBREF20 , bad quality documents removal utilising Morphological Analysis Converter and Aggregator (MACA) BIBREF21 , documents tagging using Wrocław CRF Tagger (WCRFT) BIBREF22 . Last two steps are available only for Polish.
Question: How was the KGR10 corpus created?
Output: most relevant content of the website, including all subsites
Input: Consider Input: Bi-LSTM BIBREF11 is a baseline for neural models. Bi-LSTM$_{+ att. + LEX + POS}$ BIBREF10 is a multi-task learning framework for WSD, POS tagging, and LEX with self-attention mechanism, which converts WSD to a sequence learning task. GAS$_{ext}$ BIBREF12 is a variant of GAS which is a gloss-augmented variant of the memory network by extending gloss knowledge. CAN$^s$ and HCAN BIBREF13 are sentence-level and hierarchical co-attention neural network models which leverage gloss knowledge.
Question: How does the neural network architecture accomodate an unknown amount of senses per word?
Output: converts WSD to a sequence learning task leverage gloss knowledge by extending gloss knowledge
Input: Consider Input: From the description of music genres provided above emerges that there is a limited number of super-genres and derivation lines BIBREF19, BIBREF20, as shown in figure FIGREF1.
From a computational perspective, genres are classes and, although can be treated by machine learning algorithms, they do not include information about the relations between them. In order to formalize the relations between genres for computing purposes, we define a continuous genre scale from the most experimental and introverted super-genre to the most euphoric and inclusive one. We selected from Wikipedia the 77 genres that we mentioned in bold in the previous paragraph and asked to two independent raters to read the Wikipedia pages of the genres, listen to samples or artists of the genres (if they did not know already) and then annotate the following dimensions:
Question: How many genres did they collect from?
| Output: 77 genres
| 2 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
One example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution is here: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Now, solve this: Since it is infeasible to manually annotate all of the comments, we process this dataset with the goal of extracting threads that involve suspected trolling attempts and the direct responses to them. We had two human annotators who were trained on snippets (i.e., (suspected trolling attempt, responses) pairs) taken from 200 conversations and were allowed to discuss their findings. After this training stage, we asked them to independently label the four aspects for each snippet.
Question: Do they use a crowdsourcing platform for annotation?
Solution: | No | 6 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Experimental Setup
Question: what english datasets were used?
Answer with content missing: (Data section) Penn Treebank (PTB)
We conduct experiments on our self-collected CAIS to evaluate the generalizability in different language. We apply two baseline models for comparison, one is the popular BiLSTMs + CRF architecture BIBREF36 for sequence labeling task, and the other one is the more powerful sententce-state LSTM BIBREF21. The results listed in Table TABREF50 demonstrate the generalizability and effectiveness of our CM-Net when handling various domains and different languages.
Question: What were the baselines models?
BiLSTMs + CRF architecture BIBREF36 sententce-state LSTM BIBREF21
The SnapCaptions dataset is composed of 10K user-generated image (snap) and textual caption pairs where named entities in captions are manually labeled by expert human annotators (entity types: PER, LOC, ORG, MISC).
Question: Which types of named entities do they recognize?
| PER, LOC, ORG, MISC
| 0 | NIv2 | task460_qasper_answer_generation | fs_opt |
instruction:
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
question:
This paper used the real-time method to randomly collect 10% of publicly available English tweets using several pre-defined DDEO-related queries (Table TABREF6 ) within a specific time frame.
Question: Do they evaluate only on English data?
answer:
Yes
question:
We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences.
Question: What is the most common error type?
answer:
all annotators that a triple extraction was incorrect
question:
Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 .
Question: What is te core component for KBQA?
answer:
| answer questions by obtaining information from KB tuples
| 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Let me give you an example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
The answer to this example can be: No
Here is why: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
OK. solve this:
Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 .
Question: What is te core component for KBQA?
Answer: | answer questions by obtaining information from KB tuples | 8 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
One example: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Solution is here: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Now, solve this: Our experiments on a goal-oriented dialog corpus, the personalized bAbI dialog dataset, show that leveraging personal information can significantly improve the performance of dialog systems.
Question: What datasets did they use?
Solution: | the personalized bAbI dialog dataset | 6 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example Input: As we can see that, all variants of our CRU model could give substantial improvements over the traditional GRU model, where a maximum gain of 2.7%, 1.0%, and 1.9% can be observed in three datasets, respectively.
Question: Do experiment results show consistent significant improvement of new approach over traditional CNN and RNN models?
Example Output: Yes
Example Input: As a first corpus, we compiled INLINEFORM0 that represents a collection of 80 excerpts from scientific works including papers, dissertations, book chapters and technical reports, which we have chosen from the well-known Digital Bibliography & Library Project (DBLP) platform. As a second corpus, we compiled INLINEFORM0 , which represents a collection of 1,645 chat conversations of 550 sex offenders crawled from the Perverted-Justice portal. As a third corpus, we compiled INLINEFORM0 , which is a collection of 200 aggregated postings crawled from the Reddit platform.
Question: What size are the corpora?
Example Output: 80 excerpts from scientific works collection of 1,645 chat conversations collection of 200 aggregated postings
Example Input: We leveraged an existing, annotated Twitter dataset that was constructed based on a hierarchical model of depression-related symptoms BIBREF12 , BIBREF13 . The dataset contains 9,473 annotations for 9,300 tweets. Each tweet is annotated as no evidence of depression (e.g., “Citizens fear an economic depression") or evidence of depression (e.g., “depressed over disappointment").
Question: Do they evaluate only on English datasets?
Example Output: | Yes
| 3 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
One example is below.
Q: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
A: No
Rationale: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines.
Question: what were the evaluation metrics?
A: | INLINEFORM0 scores | 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example input: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Example output: No
Example explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: We evaluate a character-level variant of our proposed language model over a preprocessed version of the Penn Treebank (PTB) and Text8 datasets. The unsupervised constituency parsing task compares hte tree structure inferred by the model with those annotated by human experts. The experiment is performed on WSJ10 dataset.
Question: Which dataset do they experiment with?
A: | Penn Treebank Text8 WSJ10 | 3 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Q: In case of polysemous words, only the first word sense (usually the most common) is taken into account.
Question: How do they handle polysemous words in their entity library?
A: only the first word sense (usually the most common) is taken into account
****
Q: The annotator carried out all annotation.
Question: How many annotators tagged each tweet?
A: One
****
Q: In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash.
Question: What are the differences in the use of images between gang member and the rest of the Twitter population?
A: | user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash
****
| 4 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Q: Despite this, we found that the training of our embeddings was not considerably slower than the training of order-2 equivalents such as SGNS. Explicitly, our GPU trained CBOW vectors (using the experimental settings found below) in 3568 seconds, whereas training CP-S and JCP-S took 6786 and 8686 seconds respectively.
Question: Do they measure computation time of their factorizations compared to other word embeddings?
A: Yes
****
Q: To assess the consistency of annotations and also eliminate coincidental annotations, we used agreement rates, which is calculated by dividing the number of senses under each category where the annotators annotate consistently by the total number of each kind of sense. And considering the potential impact of unbalanced distribution of senses, we also used the Kappa value.
Question: Which inter-annotator metric do they use?
A: agreement rates Kappa value
****
Q: The dataset at HASOC 2019 were given in three languages: Hindi, English, and German. Dataset in Hindi and English had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all the three languages. The training dataset of Hindi dataset was more balanced than English or German dataset. Hence, the results were around 0.78. As the dataset in German language was highly imbalanced, the results drops to 0.62.
Question: What are the languages used to test the model?
A: | Hindi, English and German (German task won)
****
| 4 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
One example is below.
Q: We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
A: No
Rationale: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Q: For the sake of interpretability, we will also project back coefficients from the embeddings as well as PCA models into the dummy variable space.
Question: How do their interpret the coefficients?
A: | The coefficients are projected back to the dummy variable space. | 9 | NIv2 | task460_qasper_answer_generation | fs_opt |
Part 1. Definition
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Part 2. Example
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Answer: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Part 3. Exercise
We used the Twitter definition of hateful conduct in the first survey. This definition was presented at the beginning, and again above every tweet.
Question: What definition was one of the groups was shown?
Answer: | Twitter definition of hateful conduct | 7 | NIv2 | task460_qasper_answer_generation | fs_opt |
Part 1. Definition
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Part 2. Example
We evaluate the proposed approach on the Chinese social media text summarization task, based on the sequence-to-sequence model. Large-Scale Chinese Short Text Summarization Dataset (LCSTS) is constructed by BIBREF1 . The dataset consists of more than 2.4 million text-summary pairs in total, constructed from a famous Chinese social media microblogging service Weibo.
Question: Are results reported only for English data?
Answer: No
Explanation: Based on the context, the dataset is constructed from a famous Chinese social media microblogging service Weibo.
Part 3. Exercise
We work with four individual datasets. The datasets contain addition, subtraction, multiplication, and division word problems.
AI2 BIBREF2. AI2 is a collection of 395 addition and subtraction problems, containing numeric values, where some may not be relevant to the question.
CC BIBREF19. The Common Core dataset contains 600 2-step questions. The Cognitive Computation Group at the University of Pennsylvania gathered these questions.
IL BIBREF4. The Illinois dataset contains 562 1-step algebra word questions. The Cognitive Computation Group compiled these questions also.
MAWPS BIBREF20. MAWPS is a relatively large collection, primarily from other MWP datasets. We use 2,373 of 3,915 MWPs from this set.
Question: What datasets do they use?
Answer: | AI2 BIBREF2 CC BIBREF19 IL BIBREF4 MAWPS BIBREF20 | 7 | NIv2 | task460_qasper_answer_generation | fs_opt |
In this task, you will be presented with a context from an academic paper and a question separated with a
. You have to answer the question based on the context.
Example Input: Validated transcripts were sent to professional translators. In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint.
Question: How is the quality of the data empirically evaluated?
Example Output: Validated transcripts were sent to professional translators. various sanity checks to the translations sanity check the overlaps of train, development and test sets
Example Input: The second, Continuous Bag-of-Words (CBOW), the model is given the sequence of words without the middle one and attempts to predict this omitted word.
Question: Which word embeddings are analysed?
Example Output: Continuous Bag-of-Words (CBOW)
Example Input: We work with four individual datasets. The datasets contain addition, subtraction, multiplication, and division word problems.
AI2 BIBREF2. AI2 is a collection of 395 addition and subtraction problems, containing numeric values, where some may not be relevant to the question.
CC BIBREF19. The Common Core dataset contains 600 2-step questions. The Cognitive Computation Group at the University of Pennsylvania gathered these questions.
IL BIBREF4. The Illinois dataset contains 562 1-step algebra word questions. The Cognitive Computation Group compiled these questions also.
MAWPS BIBREF20. MAWPS is a relatively large collection, primarily from other MWP datasets. We use 2,373 of 3,915 MWPs from this set.
Question: What datasets do they use?
Example Output: | AI2 BIBREF2 CC BIBREF19 IL BIBREF4 MAWPS BIBREF20
| 3 | NIv2 | task460_qasper_answer_generation | fs_opt |
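Every row above stores its context and question together in a single `inputs` string, with the question introduced by the literal prefix `Question:`. A minimal sketch of pulling the two apart when working with this dump — the helper name `split_context_question` is hypothetical, and the sketch assumes the `Question:` prefix convention visible in the rows above:

```python
def split_context_question(inputs_text: str):
    """Split a task460 `inputs` string into (context, question).

    Assumes the question is introduced by the literal prefix
    "Question:", as in the preview rows above. Uses rfind so that
    any "Question:" occurring inside the context itself is ignored.
    """
    marker = "Question:"
    idx = inputs_text.rfind(marker)
    if idx == -1:
        raise ValueError("no 'Question:' marker found in inputs")
    context = inputs_text[:idx].strip()
    question = inputs_text[idx + len(marker):].strip()
    return context, question


# Example taken from one of the rows shown in this preview:
example = (
    "Our data set contains $3,209$ reviews about 553 different cars.\n"
    "Question: How big is dataset of car-speak language?"
)
ctx, q = split_context_question(example)
```

Because `rfind` scans from the right, a context paragraph that happens to contain the word "Question:" earlier on still splits at the final marker, which is where the actual question sits in these rows.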