| question_id (string, 40 chars) | question (string, 4–171 chars) | answer (list) | evidence (list) |
|---|---|---|---|
ec91b87c3f45df050e4e16018d2bf5b62e4ca298 | What is the baseline used? | [
"Unanswerable"
] | [
[]
] |
f129c97a81d81d32633c94111018880a7ffe16d1 | Which attention mechanisms do they compare? | [
"Soft attention, Hard Stochastic attention, Local Attention"
] | [
[
"We evaluate three models of the image attention mechanism INLINEFORM0 of equation EQREF11 . They have in common the fact that at each time step INLINEFORM1 of the decoding phase, all approaches first take as input the annotation sequence INLINEFORM2 to derive a time-dependent context vector that contain rele... |
100cf8b72d46da39fedfe77ec939fb44f25de77f | Which paired corpora did they use in the other experiment? | [
"dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the documents (articles or comments) INLINEFORM1",
"Chinese dataset BIBREF0"
] | [
[
"In addition to the unsupervised training, we explore a semi-supervised training framework to combine the proposed unsupervised model and the supervised model. In this scenario we have a paired dataset that contains article-comment parallel contents INLINEFORM0 , and an unpaired dataset that contains the docu... |
8cc56fc44136498471754186cfa04056017b4e54 | By how much does their system outperform the lexicon-based models? | [
"Under the retrieval evaluation setting, their proposed model + IR2 had better MRR than NVDM by 0.3769, better MR by 4.6, and better Recall@10 by 20 . \nUnder the generative evaluation setting the proposed model + IR2 had better BLEU by 0.044 , better CIDEr by 0.033, better ROUGE by 0.032, and better METEOR by 0.0... | [
[
"NVDM (Lexical, Neural) is a VAE-based approach for document modeling BIBREF10 . We compare our model with this baseline to demonstrate the effect of modeling topic.",
"Table TABREF31 shows the performance of our models and the baselines in retrieval evaluation. We first compare our proposed model with ot... |
5fa431b14732b3c47ab6eec373f51f2bca04f614 | Which lexicon-based models did they compare with? | [
"TF-IDF, NVDM"
] | [
[
"TF-IDF (Lexical, Non-Neural) is an important unsupervised baseline. We use the concatenation of the title and the body as the query to retrieve the candidate comment set by means of the similarity of the tf-idf value. The model is trained on unpaired articles and comments, which is the same as our proposed m... |
33ccbc401b224a48fba4b167e86019ffad1787fb | How many comments were used? | [
"from 50K to 4.8M"
] | [
[
"We analyze the performance of the proposed method under the semi-supervised setting. We train the supervised IR model with different numbers of paired data. Figure FIGREF39 shows the curve (blue) of the recall1 score. As expected, the performance grows as the paired dataset becomes larger. We further combine... |
cca74448ab0c518edd5fc53454affd67ac1a201c | How many articles did they have? | [
"198,112"
] | [
[
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each... |
b69ffec1c607bfe5aa4d39254e0770a3433a191b | What news comment dataset was used? | [
"Chinese dataset BIBREF0"
] | [
[
"We select a large-scale Chinese dataset BIBREF0 with millions of real comments and a human-annotated test set to evaluate our model. The dataset is collected from Tencent News, which is one of the most popular Chinese websites for news and opinion articles. The dataset consists of 198,112 news articles. Each... |
f5cf8738e8d211095bb89350ed05ee7f9997eb19 | By how much do they outperform standard BERT? | [
"up to four percentage points in accuracy"
] | [
[
"In this paper we presented a way of enriching BERT with knowledge graph embeddings and additional metadata. Exploiting the linked knowledge that underlies Wikidata improves performance for our task of document classification. With this approach we improve the standard BERT models by up to four percentage poi... |
bed527bcb0dd5424e69563fba4ae7e6ea1fca26a | What dataset do they use? | [
"2019 GermEval shared task on hierarchical text classification",
"GermEval 2019 shared task"
] | [
[
"In this paper, we work with the dataset of the 2019 GermEval shared task on hierarchical text classification BIBREF0 and use the predefined set of labels to evaluate our approach to this classification task."
],
[
"Our experiments are modelled on the GermEval 2019 shared task and deal with the classi... |
aeab5797b541850e692f11e79167928db80de1ea | How do they combine text representations with the knowledge graph embeddings? | [
"all three representations are concatenated and passed into a MLP"
] | [
[
"The BERT architecture uses 12 hidden layers, each layer consists of 768 units. To derive contextualized representations from textual features, the book title and blurb are concatenated and then fed through BERT. To minimize the GPU memory consumption, we limit the input length to 300 tokens (which is shorter... |
bfa3776c30cb30e0088e185a5908e5172df79236 | What is the algorithm used for the classification tasks? | [
"Random Forest Ensemble classifiers"
] | [
[
"To test whether topic models can be used for dating poetry or attributing authorship, we perform supervised classification experiments with Random Forest Ensemble classifiers. We find that we obtain better results by training and testing on stanzas instead of full poems, as we have more data available. Also,... |
a2a66726a5dca53af58aafd8494c4de833a06f14 | Is the outcome of the LDA analysis evaluated in any way? | [
"Yes"
] | [
[
"The Style baseline achieves an Accuracy of 83%, LDA features 89% and a combination of the two gets 90%. However, training on full poems reduces this to 42—52%."
]
] |
ee87608419e4807b9b566681631a8cd72197a71a | What is the corpus used in the study? | [
"TextGrid Repository",
"The Digital Library in the TextGrid Repository"
] | [
[
"The Digital Library in the TextGrid Repository represents an extensive collection of German texts in digital form BIBREF3. It was mined from http://zeno.org and covers a time period from the mid 16th century up to the first decades of the 20th century. It contains many important texts that can be considered ... |
cda4612b4bda3538d19f4b43dde7bc30c1eda4e5 | What are the traditional methods to identifying important attributes? | [
"automated attribute-value extraction, score the attributes using the Bayes model, evaluate their importance with several different frequency metrics, aggregate the weights from different sources into one consistent typicality score using a Ranking SVM model, OntoRank algorithm",
"TextRank, Word2vec BIBREF19, Glo... | [
[
"Many proposed approaches formulate the entity attribute ranking problem as a post processing step of automated attribute-value extraction. In BIBREF0 , BIBREF1 , BIBREF2 , Pasca et al. firstly extract potential class-attribute pairs using linguistically motivated patterns from unstructured text including que... |
e12674f0466f8c0da109b6076d9939b30952c7da | What do you use to calculate word/sub-word embeddings | [
"FastText"
] | [
[
"Evaluating FastText, GloVe and word2vec, we show that compared to other word representation learning algorithms, the FastText performs best. We sample and analyze the category attributes and find that many self-filled attributes contain misspellings. The FastText algorithm represents words by a sum of its ch... |
9fe6339c7027a1a0caffa613adabe8b5bb6a7d4a | What user generated text data do you use? | [
"Unanswerable"
] | [
[]
] |
b5c3787ab3784214fc35f230ac4926fe184d86ba | Did they propose other metrics? | [
"Yes"
] | [
[
"We introduce our proposed diversity, density, and homogeneity metrics with their detailed formulations and key intuitions."
]
] |
9174aded45bc36915f2e2adb6f352f3c7d9ada8b | Which real-world datasets did they use? | [
"SST-2 (Stanford Sentiment Treebank, version 2), Snips",
"SST-2, Snips"
] | [
[
"In the first task, we use the SST-2 (Stanford Sentiment Treebank, version 2) dataset BIBREF25 to conduct sentiment analysis experiments. SST-2 is a sentence binary classification dataset with train/dev/test splits provided and two types of sentence labels, i.e., positive and negative.",
"The second task ... |
a8f1029f6766bffee38a627477f61457b2d6ed5c | How did they obtain human intuitions? | [
"Unanswerable"
] | [
[]
] |
a2103e7fe613549a9db5e65008f33cf2ee0403bd | What are the country-specific drivers of international development rhetoric? | [
"wealth , democracy , population, levels of ODA, conflict "
] | [
[
"Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the gl... |
13b36644357870008d70e5601f394ec3c6c07048 | Is the dataset multilingual? | [
"No",
"No"
] | [
[
"We use a new dataset of GD statements from 1970 to 2016, the UN General Debate Corpus (UNGDC), to examine the international development agenda in the UN BIBREF3 . Our application of NLP to these statements focuses in particular on structural topic models (STMs) BIBREF4 . The paper makes two contributions usi... |
e4a19b91b57c006a9086ae07f2d6d6471a8cf0ce | How are the main international development topics that states raise identified? | [
" They focus on exclusivity and semantic coherence measures: Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. They select select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclu... | [
[
"We assess the optimal number of topics that need to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. BIBREF5 propose semantic coherence measure, which is closely related to point-wise mutual information measur... |
fd0ef5a7b6f62d07776bf672579a99c67e61a568 | What experiments do the authors present to validate their system? | [
" we measure our system's performance for datasets across various domains, evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs"
] | [
[
"QnAMaker is not domain-specific and can be used for any type of data. To support this claim, we measure our system's performance for datasets across various domains. The evaluations are done by managed judges who understands the knowledge base and then judge user queries relevance to the QA pairs (binary lab... |
071bcb4b054215054f17db64bfd21f17fd9e1a80 | How does the conversation layer work? | [
"Unanswerable"
] | [
[]
] |
f399d5a8dbeec777a858f81dc4dd33a83ba341a2 | What components is the QnAMaker composed of? | [
"QnAMaker Portal, QnaMaker Management APIs, Azure Search Index, QnaMaker WebApp, Bot",
"QnAMaker Portal, QnaMaker Management APIs, Azure Search Index, QnaMaker WebApp, Bot"
] | [
[
"System description ::: Architecture",
"As shown in Figure FIGREF4, humans can have two different kinds of roles in the system: Bot-Developers who want to create a bot using the data they have, and End-Users who will chat with the bot(s) created by bot-developers. The components involved in the process ar... |
d28260b5565d9246831e8dbe594d4f6211b60237 | How they measure robustness in experiments? | [
"We empirically provide a formula to measure the richness in the scenario of machine translation.",
"boost the training BLEU very greatly, the over-fitting problem of the Plackett-Luce models PL($k$) is alleviated with moderately large $k$"
] | [
[
"The log-likelihood of a Plackett-Luce model is not a strict upper bound of the BLEU score, however, it correlates with BLEU well in the case of rich features. The concept of “rich” is actually qualitative, and obscure to define in different applications. We empirically provide a formula to measure the richne... |
8670989ca39214eda6c1d1d272457a3f3a92818b | Is new method inferior in terms of robustness to MIRAs in experiments? | [
"Unanswerable"
] | [
[]
] |
923b12c0a50b0ee22237929559fad0903a098b7b | What experiments with large-scale features are performed? | [
"Plackett-Luce Model for SMT Reranking"
] | [
[
"Evaluation ::: Plackett-Luce Model for SMT Reranking",
"After being de-duplicated, the N-best list has an average size of around 300, and with 7491 features. Refer to Formula DISPLAY_FORM9, this is ideal to use the Plackett-Luce model. Results are shown in Figure FIGREF12. We observe some interesting phe... |
67131c15aceeb51ae1d3b2b8241c8750a19cca8e | Which ASR system(s) is used in this work? | [
"Oracle "
] | [
[
"The preliminary architecture is shown in Fig. FIGREF4. For a given transcribed utterance, it is firstly encoded with Byte Pair Encoding (BPE) BIBREF14, a compression algorithm splitting words to fundamental subword units (pairs of bytes or BPs) and reducing the embedded vocabulary size. Then we use a BiLSTM ... |
579a0603ec56fc2b4aa8566810041dbb0cd7b5e7 | What are the series of simple models? | [
"perform experiments to utilize ASR $n$-best hypotheses during evaluation"
] | [
[
"Besides the Baseline and Oracle, where only ASR 1-best hypothesis is considered, we also perform experiments to utilize ASR $n$-best hypotheses during evaluation. The models evaluating with $n$-bests and a BM (pre-trained on transcription) are called Direct Models (in Fig. FIGREF7):"
]
] |
c9c85eee41556c6993f40e428fa607af4abe80a9 | Over which datasets/corpora is this work evaluated? | [
"$\\sim $ 8.7M annotated anonymised user utterances",
"on $\\sim $ 8.7M annotated anonymised user utterances"
] | [
[
"We conduct our experiments on $\\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains."
],
[
"We conduct our experiments on $\\sim $ 8.7M annotated anonymised user utterances. They are annotated and derived from requests across 23 domains."
... |
f8281eb49be3e8ea0af735ad3bec955a5dedf5b3 | Is the semantic hierarchy representation used for any task? | [
"Yes, Open IE",
"Yes"
] | [
[
"An extrinsic evaluation was carried out on the task of Open IE BIBREF7. It revealed that when applying DisSim as a preprocessing step, the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall, i.e. leading to a lower information loss and a higher accura... |
a5ee9b40a90a6deb154803bef0c71c2628acb571 | What are the corpora used for the task? | [
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains, The evaluation of the German version is in progress."
] | [
[
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence split... |
e286860c41a4f704a3a08e45183cb8b14fa2ad2f | Is the model evaluated? | [
"the English version is evaluated. The German version evaluation is in progress "
] | [
[
"For the English version, we performed both a thorough manual analysis and automatic evaluation across three commonly used TS datasets from two different domains in order to assess the performance of our framework with regard to the sentence splitting subtask. The results show that our proposed sentence split... |
982979cb3c71770d8d7d2d1be8f92b66223dec85 | What new metrics are suggested to track progress? | [
" For example, one metric could consist in checking whether for any given word, all words that are known to belong to the same class are closer than any words belonging to different classes, independently of the actual cosine"
] | [
[
"It is now clear that a key aspect for future work will be developing additional performance metrics based on topological properties. We are in line with recent work BIBREF16 , proposing to shift evaluation from absolute values to more exploratory evaluations focusing on weaknesses and strengths of the embedd... |
5ba6f7f235d0f5d1d01fd97dd5e4d5b0544fd212 | What intrinsic evaluation metrics are used? | [
"Class Membership Tests, Class Distinction Test, Word Equivalence Test",
"coverage metric, being distinct (cosine INLINEFORM0 0.7 or 0.8), belonging to the same class (cosine INLINEFORM1 0.7 or 0.8), being equivalent (cosine INLINEFORM2 0.85 or 0.95)"
] | [
[
"Tests and Gold-Standard Data for Intrinsic Evaluation",
"Using the gold standard data (described below), we performed three types of tests:",
"Class Membership Tests: embeddings corresponding two member of the same semantic class (e.g. “Months of the Year\", “Portuguese Cities\", “Smileys\") should b... |
7ce7edd06925a943e32b59f3e7b5159ccb7acaf6 | What experimental results suggest that using less than 50% of the available training examples might result in overfitting? | [
"consistent increase in the validation loss after about 15 epochs"
] | [
[
"On the right side of Figure FIGREF28 we show how the number of training (and validation) examples affects the loss. For a fixed INLINEFORM0 = 32768 we varied the amount of data used for training from 25% to 100%. Three trends are apparent. As we train with more data, we obtain better validation losses. This ... |
a883bb41449794e0a63b716d9766faea034eb359 | What multimodality is available in the dataset? | [
"context is a procedural text, the question and the multiple choice answers are composed of images",
"images and text"
] | [
[
"In the following, we explain our Procedural Reasoning Networks model. Its architecture is based on a bi-directional attention flow (BiDAF) model BIBREF6, but also equipped with an explicit reasoning module that acts on entity-specific relational memory units. Fig. FIGREF4 shows an overview of the network arc... |
5d83b073635f5fd8cd1bdb1895d3f13406583fbd | What are previously reported models? | [
"Hasty Student, Impatient Reader, BiDAF, BiDAF w/ static memory"
] | [
[
"We compare our model with several baseline models as described below. We note that the results of the first two are previously reported in BIBREF2.",
"Hasty Student BIBREF2 is a heuristics-based simple model which ignores the recipe and gives an answer by examining only the question and the answer set us... |
171ebfdc9b3a98e4cdee8f8715003285caeb2f39 | How better is accuracy of new model compared to previously reported models? | [
"Average accuracy of proposed model vs best prevous result:\nSingle-task Training: 57.57 vs 55.06\nMulti-task Training: 50.17 vs 50.59"
] | [
[
"Table TABREF29 presents the quantitative results for the visual reasoning tasks in RecipeQA. In single-task training setting, PRN gives state-of-the-art results compared to other neural models. Moreover, it achieves the best performance on average. These results demonstrate the importance of having a dynamic... |
3c3cb51093b5fd163e87a773a857496a4ae71f03 | How does the scoring model work? | [
"First, mapping the segmented sentence to a sequence of candidate word embeddings. Then, the scoring model takes the word embedding sequence as input, scoring over each individual candidate word",
" the scoring model takes the word embedding sequence as input, scoring over each individual candidate word from two ... | [
[
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First... |
53a0763eff99a8148585ac642705637874be69d4 | How does the active learning model work? | [
"Active learning methods has a learning engine (mainly used for training of classification problems) and the selection engine (which chooses samples that need to be relabeled by annotators from unlabeled data). Then, relabeled samples are added to training set for classifier to re-train, thus continuously improving... | [
[
"Active learning methods can generally be described into two parts: a learning engine and a selection engine BIBREF28 . The learning engine is essentially a classifier, which is mainly used for training of classification problems. The selection engine is based on the sampling strategy, which chooses samples t... |
0bfed6f9cfe93617c5195c848583e3945f2002ff | Which neural network architectures are employed? | [
"gated neural network "
] | [
[
"To select the most appropriate sentences in a large number of unlabeled corpora, we propose a scoring model based on information entropy and neural network as the sampling strategy of active learning, which is inspired by Cai and Zhao BIBREF32 . The score of a segmented sentence is computed as follows. First... |
352c081c93800df9654315e13a880d6387b91919 | What are the key points in the role of script knowledge that can be studied? | [
"Unanswerable"
] | [
[]
] |
18fbf9c08075e3b696237d22473c463237d153f5 | Did the annotators agreed and how much? | [
"For event types and participant types, there was a moderate to substantial level of agreement using the Fleiss' Kappa. For coreference chain annotation, there was average agreement of 90.5%.",
"Moderate agreement of 0.64-0.68 Fleiss’ Kappa over event type labels, 0.77 Fleiss’ Kappa over participant labels, and g... | [
[
"In order to calculate inter-annotator agreement, a total of 30 stories from 6 scenarios were randomly chosen for parallel annotation by all 4 annotators after the first annotation phase. We checked the agreement on these data using Fleiss' Kappa BIBREF4 . The results are shown in Figure 4 and indicate modera... |
a37ef83ab6bcc6faff3c70a481f26174ccd40489 | How many subjects have been used to create the annotations? | [
" four different annotators"
] | [
[
"We used the WebAnno annotation tool BIBREF2 for our project. The stories from each scenario were distributed among four different annotators. In a calibration phase, annotators were presented with some sample texts for test annotations; the results were discussed with the authors. Throughout the whole annota... |
bc9c31b3ce8126d1d148b1025c66f270581fde10 | What datasets are used to evaluate this approach? | [
" Kinship and Nations knowledge graphs, YAGO3-10 and WN18KGs knowledge graphs ",
"WN18 and YAGO3-10"
] | [
[],
[
"Since the setting is quite different from traditional adversarial attacks, search for link prediction adversaries brings up unique challenges. To find these minimal changes for a target link, we need to identify the fact that, when added into or removed from the graph, will have the biggest impact on t... |
185841e979373808d99dccdade5272af02b98774 | How is this approach used to detect incorrect facts? | [
"if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. "
] | [
[
"Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, t... |
d427e3d41c4c9391192e249493be23926fc5d2e9 | Can this adversarial approach be used to directly improve model accuracy? | [
"Yes"
] | [
[
"To evaluate this application, we inject random triples into the graph, and measure the ability of to detect the errors using our optimization. We consider two types of incorrect triples: 1) incorrect triples in the form of $\\langle s^{\\prime }, r, o\\rangle $ where $s^{\\prime }$ is chosen randomly from al... |
330f2cdeab689670b68583fc4125f5c0b26615a8 | what are the advantages of the proposed model? | [
"he proposed model outperforms all the baselines, being the svi version the one that performs best., the svi version converges much faster to higher values of the log marginal likelihood when compared to the batch version, which reflects the efficiency of the svi algorithm."
] | [
[
"For all the experiments the hyper-parameters INLINEFORM0 , INLINEFORM1 and INLINEFORM2 were set using a simple grid search in the collection INLINEFORM3 . The same approach was used to optimize the hyper-parameters of the all the baselines. For the svi algorithm, different mini-batch sizes and forgetting rat... |
c87b2dd5c439d5e68841a705dd81323ec0d64c97 | what are the state of the art approaches? | [
"Bosch 2006 (mv), LDA + LogReg (mv), LDA + Raykar, LDA + Rodrigues, Blei 2003 (mv), sLDA (mv)"
] | [
[
"With the purpose of comparing the proposed model with a popular state-of-the-art approach for image classification, for the LabelMe dataset, the following baseline was introduced:",
"Bosch 2006 (mv): This baseline is similar to one in BIBREF33 . The authors propose the use of pLSA to extract the latent t... |
f7789313a804e41fcbca906a4e5cf69039eeef9f | what datasets were used? | [
"Reuters-21578 BIBREF30, LabelMe BIBREF31, 20-Newsgroups benchmark corpus BIBREF29 ",
" 20-Newsgroups benchmark corpus , Reuters-21578, LabelMe"
] | [
[
"In order to validate the proposed classification model in real crowdsourcing settings, Amazon Mechanical Turk (AMT) was used to obtain labels from multiple annotators for two popular datasets: Reuters-21578 BIBREF30 and LabelMe BIBREF31 .",
"In order to first validate the proposed model for classificatio... |
2376c170c343e2305dac08ba5f5bda47c370357f | How was the dataset collected? | [
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. , Goal Generation: a multi-domain goal ... | [
[
"Database Construction: we crawled travel information in Beijing from the Web, including Hotel, Attraction, and Restaurant domains (hereafter we name the three domains as HAR domains). Then, we used the metro information of entities in HAR domains to build the metro database. For the taxi domain, there is no ... |
0137ecebd84a03b224eb5ca51d189283abb5f6d9 | What are the benchmark models? | [
"BERTNLU from ConvLab-2, a rule-based model (RuleDST) , TRADE (Transferable Dialogue State Generator) , a vanilla policy trained in a supervised fashion from ConvLab-2 (SL policy)"
] | [
[
"Model: We adapted BERTNLU from ConvLab-2. BERT BIBREF22 has shown strong performance in many NLP tasks. We use Chinese pre-trained BERT BIBREF23 for initialization and then fine-tune the parameters on CrossWOZ. We obtain word embeddings and the sentence representation (embedding of [CLS]) from BERT. Since th... |
5f6fbd57cce47f20a0fda27d954543c00c4344c2 | How was the corpus annotated? | [
"The workers were also asked to annotate both user states and system states, we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories"
] | [
[
"Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user stat... |
d6e2b276390bdc957dfa7e878de80cee1f41fbca | What models other than standalone BERT is new model compared to? | [
"Only Bert base and Bert large are compared to proposed approach."
] | [
[
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly ... |
32537fdf0d4f76f641086944b413b2f756097e5e | How much is representation improved for rare/medium frequency words compared to standalone BERT and previous work? | [
"improving the score for WNLaMPro-medium by 50% compared to BERT$_\\text{base}$ and 31% compared to Attentive Mimicking"
] | [
[
"Results on WNLaMPro rare and medium are shown in Table TABREF34, where the mean reciprocal rank (MRR) is reported for BERT, Attentive Mimicking and Bertram. As can be seen, supplementing BERT with any of the proposed relearning methods results in noticeable improvements for the rare subset, with add clearly ... |
ef081d78be17ef2af792e7e919d15a235b8d7275 | What are three downstream task datasets? | [
"MNLI BIBREF21, AG's News BIBREF22, DBPedia BIBREF23",
"MNLI, AG's News, DBPedia"
] | [
[
"To measure the effect of adding Bertram to BERT on downstream tasks, we apply the procedure described in Section SECREF4 to a commonly used textual entailment dataset as well as two text classification datasets: MNLI BIBREF21, AG's News BIBREF22 and DBPedia BIBREF23. For all three datasets, we use BERT$_\\te... |
537b2d7799124d633892a1ef1a485b3b071b303d | What is dataset for word probing task? | [
"WNLaMPro dataset"
] | [
[
"We evalute Bertram on the WNLaMPro dataset of BIBREF0. This dataset consists of cloze-style phrases like"
]
] |
9aca4b89e18ce659c905eccc78eda76af9f0072a | How fast is the model compared to baselines? | [
"Unanswerable"
] | [
[]
] |
b0376a7f67f1568a7926eff8ff557a93f434a253 | How big is the performance difference between this method and the baseline? | [
"Comparing with the highest performing baseline: 1.3 points on ACE2004 dataset, 0.6 points on CWEB dataset, and 0.86 points in the average of all scores."
] | [
[]
] |
dad8cc543a87534751f9f9e308787e1af06f0627 | What datasets used for evaluation? | [
"AIDA-B, ACE2004, MSNBC, AQUAINT, WNED-CWEB, WNED-WIKI",
"AIDA-CoNLL, ACE2004, MSNBC, AQUAINT, WNED-CWEB, WNED-WIKI, OURSELF-WIKI"
] | [
[
"We conduct experiments on several different types of public datasets including news and encyclopedia corpus. The training set is AIDA-Train and Wikipedia datasets, where AIDA-Train contains 18448 mentions and Wikipedia contains 25995 mentions. In order to compare with the previous methods, we evaluate our mo... |
0481a8edf795768d062c156875d20b8fb656432c | what are the mentioned cues? | [
"output of global LSTM network at time $V_{m_i}^t$5 , which encodes the mention context and target entity information from $V_{m_i}^t$6 to $V_{m_i}^t$7"
] | [
[
"Where $\\oplus $ indicates vector concatenation. The $V_{m_i}^t$ and $V_{e_i}^t$ respectively denote the vector of $m_i$ and $e_i$ at time $t$ . For each mention, there are multiple candidate entities correspond to it. With the purpose of comparing the semantic relevance between the mention and each candidat... |
b6a4ab009e6f213f011320155a7ce96e713c11cf | How did the author's work rank among other submissions on the challenge? | [
"Unanswerable"
] | [
[]
] |
cfffc94518d64cb3c8789395707e4336676e0345 | What approaches without reinforcement learning have been tried? | [
"classification, regression, neural methods",
" Support Vector Regression (SVR) and Support Vector Classification (SVC), deep learning regression models of BIBREF2 to convert them to classification models"
] | [
[
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we u... |
f60629c01f99de3f68365833ee115b95a3388699 | What classification approaches were experimented for this task? | [
"NNC SU4 F1, NNC top 5, Support Vector Classification (SVC)"
] | [
[
"We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we u... |
a7cb4f8e29fd2f3d1787df64cd981a6318b65896 | Did classification models perform better than previous regression one? | [
"Yes"
] | [
[
"We compare classification and regression approaches and show that classification produces better results than regression but the quality of the results depends on the approach followed to annotate the data labels."
]
] |
642c4704a71fd01b922a0ef003f234dcc7b223cd | What are the main sources of recall errors in the mapping? | [
"irremediable annotation discrepancies, differences in choice of attributes to annotate, The resources themselves would need updating to encode the relevant morphosyntactic information. Some languages had a very low number of overlapping forms, and no tag matches or near-matches between them, the two annotations en... | [
[
"We present the intrinsic task's recall scores in tab:recall. Bear in mind that due to annotation errors in the original corpora (like the vas example from sec:resources), the optimal score is not always $100\\%$ . Some shortcomings of recall come from irremediable annotation discrepancies. Largely, we are ha... |
e477e494fe15a978ff9c0a5f1c88712cdaec0c5c | Do they look for inconsistencies between different languages' annotations in UniMorph? | [
"Yes"
] | [
[
"Because our conversion rules are interpretable, we identify shortcomings in both resources, using each as validation for the other. We were able to find specific instances of incorrectly applied UniMorph annotation, as well as specific instances of cross-lingual inconsistency in both resources. These finding... |
04495845251b387335bf2e77e2c423130f43c7d9 | Do they look for inconsistencies between different UD treebanks? | [
"Yes"
] | [
[
"The contributions of this work are:"
]
] |
564dcaf8d0bcc274ab64c784e4c0f50d7a2c17ee | Which languages do they validate on? | [
"Ar, Bg, Ca, Cs, Da, De, En, Es, Eu, Fa, Fi, Fr, Ga, He, Hi, Hu, It, La, Lt, Lv, Nb, Nl, Nn, PL, Pt, Ro, Ru, Sl, Sv, Tr, Uk, Ur",
"We apply this conversion to the 31 languages, Arabic, Hindi, Lithuanian, Persian, and Russian. , Dutch, Spanish"
] | [
[],
[
"A dataset-by-dataset problem demands a dataset-by-dataset solution; our task is not to translate a schema, but to translate a resource. Starting from the idealized schema, we create a rule-based tool for converting UD-schema annotations to UniMorph annotations, incorporating language-specific post-edit... |
f3d0e6452b8d24b7f9db1fd898d1fbe6cd23f166 | Does the paper evaluate any adjustment to improve the predicion accuracy of face and audio features? | [
"No"
] | [
[]
] |
9b1d789398f1f1a603e4741a5eee63ccaf0d4a4f | How is face and audio data analysis evaluated? | [
"confusion matrices, $\\text{F}_1$ score"
] | [
[
"Table TABREF16 shows the confusion matrices for facial and audio emotion recognition on our complete AMMER data set and Table TABREF17 shows the results per class for each method, including facial and audio data and micro and macro averages. The classification from facial expressions yields a macro-averaged ... |
00bcdffff7e055f99aaf1b05cf41c98e2748e948 | What is the baseline method for the task? | [
"For the emotion recognition from text they use described neural network as baseline.\nFor audio and face there is no baseline."
] | [
[
"For the emotion recognition from text, we manually transcribe all utterances of our AMMER study. To exploit existing and available data sets which are larger than the AMMER data set, we develop a transfer learning approach. We use a neural network with an embedding layer (frozen weights, pre-trained on Commo... |
f92ee3c5fce819db540bded3cfcc191e21799cb1 | What are the emotion detection tools used for audio and face input? | [
"We apply an off-the-shelf tool for emotion recognition (the manufacturer cannot be disclosed due to licensing restrictions)",
"cannot be disclosed due to licensing restrictions"
] | [
[
"We preprocess the visual data by extracting the sequence of images for each interaction from the point where the agent's or the co-driver's question was completely uttered until the driver's response stops. The average length is 16.3 seconds, with the minimum at 2.2s and the maximum at 54.7s. We apply an off... |
4547818a3bbb727c4bb4a76554b5a5a7b5c5fedb | what amounts of size were used on german-english? | [
"Training data with 159000, 80000, 40000, 20000, 10000 and 5000 sentences, and 7584 sentences for development",
"ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words)"
] | [
[
"We use the TED data from the IWSLT 2014 German INLINEFORM0 English shared translation task BIBREF38 . We use the same data cleanup and train/dev split as BIBREF39 , resulting in 159000 parallel sentences of training data, and 7584 for development.",
"To simulate different amounts of training resources, w... |
07d7652ad4a0ec92e6b44847a17c378b0d9f57f5 | what were their experimental results in the low-resource dataset? | [
"10.37 BLEU"
] | [
[
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as comp... |
9f3444c9fb2e144465d63abf58520cddd4165a01 | what are the methods they compare with in the korean-english dataset? | [
"gu-EtAl:2018:EMNLP1"
] | [
[
"Table TABREF21 shows results for Korean INLINEFORM0 English, using the same configurations (1, 2 and 8) as for German–English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as comp... |
2348d68e065443f701d8052018c18daa4ecc120e | what pitfalls are mentioned in the paper? | [
"highly data-inefficient, underperform phrase-based statistical machine translation"
] | [
[
"While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field BIBREF0 , BIBREF1 , BIBREF2 , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PB... |
5679fabeadf680e35a4f7b092d39e8638dca6b4d | Does the paper report the results of previous models applied to the same tasks? | [
"Yes",
"No"
] | [
[
"Technical and theoretical questions related to the proposed method and infrastructure for the exploration and facilitation of debates will be discussed in three sections. The first section concerns notions of how to define what constitutes a belief or opinion and how these can be mined from texts. To this en... |
a939a53cabb4893b2fd82996f3dbe8688fdb7bbb | How is the quality of the discussion evaluated? | [
"Unanswerable"
] | [
[]
] |
8b99767620fd4efe51428b68841cc3ec06699280 | What is the technique used for text analysis and mining? | [
"Unanswerable"
] | [
[]
] |
312417675b3dc431eb7e7b16a917b7fed98d4376 | What are the causal mapping methods employed? | [
"Axelrod's causal mapping method"
] | [
[
"Axelrod's causal mapping method comprises a set of conventions to graphically represent networks of causes and effects (the nodes in a network) as well as the qualitative aspects of this relation (the network’s directed edges, notably assertions of whether the causal linkage is positive or negative). These c... |
792d7b579cbf7bfad8fe125b0d66c2059a174cf9 | What is the previous work's model? | [
"Ternary Trans-CNN"
] | [
[
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dense fully connected layer of size 64 and 3. The first dense F... |
44a2a8e187f8adbd7d63a51cd2f9d2d324d0c98d | What dataset is used? | [
"HEOT , A labelled dataset for a corresponding english tweets",
"HEOT"
] | [
[
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by David... |
5908d7fb6c48f975c5dfc5b19bb0765581df2b25 | How big is the dataset? | [
"3189 rows of text messages",
"Resulting dataset was 7934 messages for train and 700 messages for test."
] | [
[
"Dataset: Based on some earlier work, only available labelled dataset had 3189 rows of text messages of average length of 116 words and with a range of 1, 1295. Prior work addresses this concern by using Transfer Learning on an architecture learnt on about 14,500 messages with an accuracy of 83.90. We address... |
cca3301f20db16f82b5d65a102436bebc88a2026 | How is the dataset collected? | [
"A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al, HEOT obtained from one of the past studies done by Mathur et al"
] | [
[
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by David... |
cfd67b9eeb10e5ad028097d192475d21d0b6845b | Was each text augmentation technique experimented individually? | [
"No"
] | [
[]
] |
e1c681280b5667671c7f78b1579d0069cba72b0e | What models do previous work use? | [
"Ternary Trans-CNN , Hybrid multi-channel CNN and LSTM"
] | [
[
"Related Work ::: Transfer learning based approaches",
"Mathur et al. in their paper for detecting offensive tweets proposed a Ternary Trans-CNN model where they train a model architecture comprising of 3 layers of Convolution 1D having filter sizes of 15, 12 and 10 and kernel size of 3 followed by 2 dens... |
58d50567df71fa6c3792a0964160af390556757d | Does the dataset contain content from various social media platforms? | [
"No"
] | [
[
"Hinglish is a linguistic blend of Hindi (very widely spoken language in India) and English (an associate language of urban areas) and is spoken by upwards of 350 million people in India. While the name is based on the Hindi language, it does not refer exclusively to Hindi, but is used in India, with English ... |
07c79edd4c29635dbc1c2c32b8df68193b7701c6 | What dataset is used? | [
"HEOT , A labelled dataset for a corresponding english tweets "
] | [
[
"We used dataset, HEOT obtained from one of the past studies done by Mathur et al. where they annotated a set of cleaned tweets obtained from twitter for the conversations happening in Indian subcontinent. A labelled dataset for a corresponding english tweets were also obtained from a study conducted by David... |
66125cfdf11d3bf8e59728428e02021177142c3a | How they demonstrate that language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment? | [
"Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a ... | [
[
"Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specif... |
222b2469eede9a0448e0226c6c742e8c91522af3 | Are language-specific and language-neutral components disjunctive? | [
"No"
] | [
[
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space. We do this by estimating the language centroid as the mean of the mBERT representations for a set of se... |
6f8386ad64dce3a20bc75165c5c7591df8f419cf | How they show that mBERT representations can be split into a language-specific component and a language-neutral component? | [
"We thus try to remove the language-specific information from the representations by centering the representations of sentences in each language so that their average lies at the origin of the vector space."
] | [
[
"Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specif... |
81dc39ee6cdacf90d5f0f62134bf390a29146c65 | What challenges this work presents that must be solved to build better language-neutral representations? | [
"contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks"
] | [
[
"Using a set of semantically oriented tasks that require explicit semantic cross-lingual representations, we showed that mBERT contextual embeddings do not represent similar semantic phenomena similarly and therefore they are not directly usable for zero-shot cross-lingual tasks."
]
] |
b1ced2d6dcd1d7549be2594396cbda34da6c3bca | What is the performance of their system? | [
"Unanswerable"
] | [
[]
] |
f3be1a27df2e6ad12eed886a8cd2dfe09b9e2b30 | What evaluation metrics are used? | [
"Unanswerable"
] | [
[]
] |
a45a86b6a02a98d3ab11f1d04acd3446e95f5a16 | What is the source of the dialogues? | [
"Unanswerable"
] | [
[]
] |
1f1a9f2dd8c4c10b671cb8affe56e181948e229e | What pretrained LM is used? | [
"Generative Pre-trained Transformer (GPT)",
"Generative Pre-trained Transformer (GPT)"
] | [
[
"We apply the Generative Pre-trained Transformer (GPT) BIBREF2 as our pre-trained language model. GPT is a multi-layer Transformer decoder with a causal self-attention which is pre-trained, unsupervised, on the BooksCorpus dataset. BooksCorpus dataset contains over 7,000 unique unpublished books from a variet... |