Dataset preview: uda_paper_tab_qa (UDA benchmark, sub-benchmark paper_tab_qa, default split). Each row has four string columns:

input — the question text (21 to 146 characters)
metadata — provenance record with label_key, label_file, q_uid, and benchmark fields (226 characters)
answers — JSON list of gold answers, each with an "answer" string and a "type" (abstractive, extractive, or boolean) (37 to 1.27k characters)
evidence — JSON list of "raw_evidence" and "highlighted_evidence" passages supporting the answers (124 to 12k characters)
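The rows below store the metadata, answers, and evidence columns as JSON-encoded strings. A minimal parsing sketch, using an abbreviated copy of the first row shown below (field names taken from the preview; nothing beyond the visible schema is assumed):

```python
import json

# One row from the preview: `input` is the raw question string, while
# `metadata` and `answers` hold JSON-encoded payloads.
row = {
    "input": "what language pairs are explored?",
    "metadata": '{"label_key": "1912.01214", "label_file": "paper_tab_qa", '
                '"benchmark_name": "uda_paper_tab_qa"}',
    "answers": '[{"answer": "De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, '
               'Ar-En, En-Ru", "type": "abstractive"}]',
}

metadata = json.loads(row["metadata"])
answers = json.loads(row["answers"])

# Collect the gold answer strings with their annotation type
# (abstractive / extractive / boolean).
gold = [(a["answer"], a["type"]) for a in answers]

print(metadata["label_key"])  # paper the question was drawn from
print(gold[0][1])             # annotation type of the first gold answer
```

Each row carries two independent annotations in `answers`, paired positionally with two evidence records in `evidence`, so downstream evaluation typically scores a prediction against either gold answer.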
what language pairs are explored?
{"label_key": "1912.01214", "label_file": "paper_tab_qa", "q_uid": "5eda469a8a77f028d0c5f1acd296111085614537", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru", "type": "abstractive"}, {"answer": "French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation", "type": "extractive"}]
[{"raw_evidence": ["For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus.", "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.", "FLOAT SELECTED: Table 1: Data Statistics."], "highlighted_evidence": ["For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. 
", "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. ", "FLOAT SELECTED: Table 1: Data Statistics."]}, {"raw_evidence": ["The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.", "For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. 
Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus."], "highlighted_evidence": ["For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets.", "For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation."]}]
What accuracy does the proposed system achieve?
{"label_key": "1801.05147", "label_file": "paper_tab_qa", "q_uid": "ef4dba073d24042f24886580ae77add5326f2130", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "F1 scores of 85.99 on the DL-PS data, 75.15 on the EC-MT data and 71.53 on the EC-UQ data ", "type": "abstractive"}, {"answer": "F1 of 85.99 on the DL-PS dataset (dialog domain); 75.15 on EC-MT and 71.53 on EC-UQ (e-commerce domain)", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."]}]
On which benchmarks they achieve the state of the art?
{"label_key": "1704.06194", "label_file": "paper_tab_qa", "q_uid": "9ee07edc371e014df686ced4fb0c3a7b9ce3d5dc", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "SimpleQuestions, WebQSP", "type": "extractive"}, {"answer": "WebQSP, SimpleQuestions", "type": "extractive"}]
[{"raw_evidence": ["Finally, like STAGG, which uses multiple relation detectors (see yih2015semantic for the three models used), we also try to use the top-3 relation detectors from Section \"Relation Detection Results\" . As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP.", "FLOAT SELECTED: Table 3: KBQA results on SimpleQuestions (SQ) and WebQSP (WQ) test sets. The numbers in green color are directly comparable to our results since we start with the same entity linking results."], "highlighted_evidence": ["As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP", "FLOAT SELECTED: Table 3: KBQA results on SimpleQuestions (SQ) and WebQSP (WQ) test sets. The numbers in green color are directly comparable to our results since we start with the same entity linking results."]}, {"raw_evidence": ["Table 2 shows the results on two relation detection tasks. The AMPCNN result is from BIBREF20 , which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from BIBREF4 , where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively)."], "highlighted_evidence": ["The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. 
Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively)."]}]
How do they calculate a static embedding for each word?
{"label_key": "1909.00512", "label_file": "paper_tab_qa", "q_uid": "891c2001d6baaaf0da4e65b647402acac621a7d2", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "They use the first principal component of a word's contextualized representation in a given layer as its static embedding.", "type": "abstractive"}, {"answer": " by taking the first principal component (PC) of its contextualized representations in a given layer", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word’s contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT’s contextualized representations often outperform GloVe and FastText vectors."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word’s contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT’s contextualized representations often outperform GloVe and FastText vectors."]}, {"raw_evidence": ["As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. In Table TABREF34, we plot the performance of these PC static embeddings on several benchmark tasks. These tasks cover semantic similarity, analogy solving, and concept categorization: SimLex999 BIBREF21, MEN BIBREF22, WS353 BIBREF23, RW BIBREF24, SemEval-2012 BIBREF25, Google analogy solving BIBREF0 MSR analogy solving BIBREF26, BLESS BIBREF27 and AP BIBREF28. We leave out layers 3 - 10 in Table TABREF34 because their performance is between those of Layers 2 and 11."], "highlighted_evidence": ["As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. "]}]
What is the performance of BERT on the task?
{"label_key": "2003.03106", "label_file": "paper_tab_qa", "q_uid": "66c96c297c2cffdf5013bab5e95b59101cb38655", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "F1 scores are:\nHUBES-PHI: Detection(0.965), Classification relaxed (0.95), Classification strict (0.937)\nMedoccan: Detection(0.972), Classification (0.967)", "type": "abstractive"}, {"answer": "BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, Table ", "type": "extractive"}]
[{"raw_evidence": ["To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems. Despite the fact that, in general, the systems obtain high values, BERT outperforms them again. BERT's F1-score is 1.9 points higher than the next most competitive result in the comparison. More remarkably, the recall obtained by BERT is about 5 points above.", "FLOAT SELECTED: Table 5: Results of Experiment A: NUBES-PHI", "The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"], "highlighted_evidence": ["To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems.", "FLOAT SELECTED: Table 5: Results of Experiment A: NUBES-PHI", "The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table .", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"]}, {"raw_evidence": ["In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3). 
Finally, we include the results obtained by mao2019hadoken with a CRF output layer on top of BERT embeddings. MEDDOCAN consists of two scenarios:", "The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"], "highlighted_evidence": ["In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3).", "The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"]}]
What state-of-the-art compression techniques were used in the comparison?
{"label_key": "1909.11687", "label_file": "paper_tab_qa", "q_uid": "efe9bad55107a6be7704ed97ecce948a8ca7b1d2", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "baseline without knowledge distillation (termed NoKD), Patient Knowledge Distillation (PKD)", "type": "extractive"}, {"answer": "NoKD, PKD, BERTBASE teacher model", "type": "extractive"}]
[{"raw_evidence": ["For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states."], "highlighted_evidence": ["For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states."]}, {"raw_evidence": ["FLOAT SELECTED: Table 3: Results of the distilled models, the teacher model and baselines on the downstream language understanding task test sets, obtained from the GLUE server, along with the size parameters and compression ratios of the respective models compared to the teacher BERTBASE. MNLI-m and MNLI-mm refer to the genre-matched and genre-mismatched test sets for MNLI.", "For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. 
For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states.", "Table TABREF21 shows results on the downstream language understanding tasks, as well as model sizes, for our approaches, the BERTBASE teacher model, and the PKD and NoKD baselines. We note that models trained with our proposed approaches perform strongly and consistently improve upon the identically parametrized NoKD baselines, indicating that the dual training and shared projection techniques are effective, without incurring significant losses against the BERTBASE teacher model. Comparing with the PKD baseline, our 192-dimensional models, achieving a higher compression rate than either of the PKD models, perform better than the 3-layer PKD baseline and are competitive with the larger 6-layer baseline on task accuracy while being nearly 5 times as small."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Results of the distilled models, the teacher model and baselines on the downstream language understanding task test sets, obtained from the GLUE server, along with the size parameters and compression ratios of the respective models compared to the teacher BERTBASE. MNLI-m and MNLI-mm refer to the genre-matched and genre-mismatched test sets for MNLI.", "For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. 
For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states.", "Table TABREF21 shows results on the downstream language understanding tasks, as well as model sizes, for our approaches, the BERTBASE teacher model, and the PKD and NoKD baselines"]}]
What discourse relations does it work best/worst for?
{"label_key": "1804.05918", "label_file": "paper_tab_qa", "q_uid": "f17ca24b135f9fe6bb25dc5084b13e1637ec7744", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "explicit discourse relations", "type": "extractive"}, {"answer": "Best: Expansion (Exp). Worst: Comparison (Comp).", "type": "abstractive"}]
[{"raw_evidence": ["The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).", "After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.", "As we explained in section 4.2, we ran our models for 10 times to obtain stable average performance. Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. 
Furthermore, the ensemble model achieves the best performance for predicting both implicit and explicit discourse relations simultaneously."], "highlighted_evidence": ["the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).", "After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved.", "Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. "]}, {"raw_evidence": ["FLOAT SELECTED: Table 3: Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1scores for both explicit and implicit discourse relation predictions. We also report class-wise F1 scores.", "The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7 which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. 
Table 1 presents the data distributions we collected from PDTB.", "Multi-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1scores for both explicit and implicit discourse relation predictions. We also report class-wise F1 scores.", "In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp).", "However, the performance on the three small classes (Comp, Cont and Temp) remains low."]}]
Which 7 Indian languages do they experiment with?
{"label_key": "2002.01664", "label_file": "paper_tab_qa", "q_uid": "75df70ce7aa714ec4c6456d0c51f82a16227f2cb", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Hindi, English, Kannada, Telugu, Assamese, Bengali and Malayalam", "type": "abstractive"}, {"answer": "Kannada, Hindi, Telugu, Malayalam, Bengali, English and Assamese (in table, missing in text)", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset"]}, {"raw_evidence": ["In this section, we describe our dataset collection process. We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel where an actor will be reading news for about 5-10 mins. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio is very long to train any deep neural network directly, we segment the audio clips into smaller chunks using Voice activity detector. Since the audio clips will have music embedded during the news, we use Inhouse music detection model to remove the music segments from the dataset to make the dataset clean and our dataset contains 635Hrs of clean audio which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. The amount of audio data for training and testing for each of the language is shown in the table bellow.", "FLOAT SELECTED: Table 1: Dataset"], "highlighted_evidence": ["We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English.", "The amount of audio data for training and testing for each of the language is shown in the table bellow.", "FLOAT SELECTED: Table 1: Dataset"]}]
Do they use graphical models?
{"label_key": "1809.00540", "label_file": "paper_tab_qa", "q_uid": "a99fdd34422f4231442c220c97eafc26c76508dd", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "No", "type": "boolean"}, {"answer": "No", "type": "boolean"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold."]}, {"raw_evidence": [], "highlighted_evidence": []}]
What metric is used for evaluation?
{"label_key": "1809.00540", "label_file": "paper_tab_qa", "q_uid": "d604f5fb114169f75f9a38fab18c1e866c5ac28b", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "F1, precision, recall, accuracy", "type": "abstractive"}, {"answer": "Precision, recall, F1, accuracy", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "FLOAT SELECTED: Table 3: Accuracy of the SVM ranker on the English training set. TOKENS are the word token features, LEMMAS are the lemma features for title and body, ENTS are named entity features and TS are timestamp features. All features are described in detail in §4, and are listed for both the title and the body."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "FLOAT SELECTED: Table 3: Accuracy of the SVM ranker on the English training set. TOKENS are the word token features, LEMMAS are the lemma features for title and body, ENTS are named entity features and TS are timestamp features. All features are described in detail in §4, and are listed for both the title and the body."]}, {"raw_evidence": ["To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in § SECREF19 . We note that adding features increases the accuracy of the SVM ranker, especially the timestamp features. However, the timestamp feature actually interferes with our optimization of INLINEFORM0 to identify when new clusters are needed, although they improve the SVM reranking accuracy. We speculate this is true because high accuracy in the reranking problem does not necessarily help with identifying when new clusters need to be opened.", "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "Table TABREF35 gives the final monolingual results on the three datasets. For English, we see that the significant improvement we get using our algorithm over the algorithm of aggarwal2006framework is due to an increased recall score. We also note that the trained models surpass the baseline for all languages, and that the timestamp feature (denoted by TS), while not required to beat the baseline, has a very relevant contribution in all cases. Although the results for both the baseline and our models seem to differ across languages, one can verify a consistent improvement from the latter to the former, suggesting that the score differences should be mostly tied to the different difficulty found across the datasets for each language. The presented scores show that our learning framework generalizes well to different languages and enables high quality clustering results."], "highlighted_evidence": ["To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in § SECREF19 . ", "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "Table TABREF35 gives the final monolingual results on the three datasets."]}]
Which eight NER tasks did they evaluate on?
{"label_key": "2004.03354", "label_file": "paper_tab_qa", "q_uid": "1d3e914d0890fc09311a70de0b20974bf7f0c9fe", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800", "type": "abstractive"}, {"answer": "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."]}]
Do they test their framework performance on commonly used language pairs, such as English-to-German?
{"label_key": "1611.04798", "label_file": "paper_tab_qa", "q_uid": "897ba53ef44f658c128125edd26abf605060fb13", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Yes", "type": "boolean"}, {"answer": "Yes", "type": "boolean"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Results of the English→German systems in a simulated under-resourced scenario."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Results of the English→German systems in a simulated under-resourced scenario."]}, {"raw_evidence": ["A standard NMT system employs parallel data only. While good parallel corpora are limited in number, getting monolingual data of an arbitrary language is trivial. To make use of German monolingual corpus in an English INLINEFORM0 German NMT system, sennrich2016b built a separate German INLINEFORM1 English NMT using the same parallel corpus, then they used that system to translate the German monolingual corpus back to English, forming a synthesis parallel data. gulcehre2015 trained another RNN-based language model to score the monolingual corpus and integrate it to the NMT system through shallow or deep fusion. Both methods requires to train separate systems with possibly different hyperparameters for each. Conversely, by applying mix-source method to the big monolingual data, we need to train only one network. We mix the TED parallel corpus and the substantial monolingual corpus (EPPS+NC+CommonCrawl) and train a mix-source NMT system from those data."], "highlighted_evidence": [" standard NMT system employs parallel data only. While good parallel corpora are limited in number, getting monolingual data of an arbitrary language is trivial. To make use of German monolingual corpus in an English INLINEFORM0 German NMT system, sennrich2016b built a separate German INLINEFORM1 English NMT using the same parallel corpus, then they used that system to translate the German monolingual corpus back to English, forming a synthesis parallel data. gulcehre2015 trained another RNN-based language model to score the monolingual corpus and integrate it to the NMT system through shallow or deep fusion. Both methods requires to train separate systems with possibly different hyperparameters for each. Conversely, by applying mix-source method to the big monolingual data, we need to train only one network. We mix the TED parallel corpus and the substantial monolingual corpus (EPPS+NC+CommonCrawl) and train a mix-source NMT system from those data."]}]
What languages are evaluated?
{"label_key": "1809.01541", "label_file": "paper_tab_qa", "q_uid": "c32adef59efcb9d1a5b10e1d7c999a825c9e6d9a", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "German, English, Spanish, Finnish, French, Russian, Swedish.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: Official shared task test set results."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Official shared task test set results."]}]
What is MSD prediction?
{"label_key": "1809.01541", "label_file": "paper_tab_qa", "q_uid": "32a3c248b928d4066ce00bbb0053534ee62596e7", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "The task of predicting MSD tags: V, PST, V.PCTP, PASS.", "type": "abstractive"}, {"answer": "morphosyntactic descriptions (MSD)", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS."]}, {"raw_evidence": ["There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available. See Table TABREF1 for an example. Task 2 is additionally split in three settings based on data size: high, medium and low, with high-resource datasets consisting of up to 70K instances per language, and low-resource datasets consisting of only about 1K instances."], "highlighted_evidence": ["There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available."]}]
What other models do they compare to?
{"label_key": "1809.09194", "label_file": "paper_tab_qa", "q_uid": "d3dbb5c22ef204d85707d2d24284cc77fa816b6c", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "SAN Baseline, BNA, DocQA, R.M-Reader, R.M-Reader+Verifier and DocQA+ELMo", "type": "abstractive"}, {"answer": "BNA, DocQA, R.M-Reader, R.M-Reader + Verifier, DocQA + ELMo, R.M-Reader+Verifier+ELMo", "type": "abstractive"}]
[{"raw_evidence": ["Table TABREF21 reports comparison results in literature published . Our model achieves state-of-the-art on development dataset in setting without pre-trained large language model (ELMo). Comparing with the much complicated model R.M.-Reader + Verifier, which includes several components, our model still outperforms by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a great boosting on the performance, e.g., 2.8 points in terms of F1 for DocQA. This encourages us to incorporate ELMo into our model in future.", "The results in terms of EM and F1 is summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline with a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, so it demonstrates the effectiveness of the joint optimization. By incorporating the output information of classifier into Joint SAN, it obtains a slight improvement, e.g., 72.2 vs 72.66 (+0.46) in terms of F1 score. By analyzing the results, we found that in most cases when our model extract an NULL string answer, the classifier also predicts it as an unanswerable question with a high probability.", "FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission."], "highlighted_evidence": ["Table TABREF21 reports comparison results in literature published .", "The results in terms of EM and F1 is summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline with a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, so it demonstrates the effectiveness of the joint optimization.", "FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission.", "Table TABREF21 reports comparison results in literature published . Our model achieves state-of-the-art on development dataset in setting without pre-trained large language model (ELMo). Comparing with the much complicated model R.M.-Reader + Verifier, which includes several components, our model still outperforms by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a great boosting on the performance, e.g., 2.8 points in terms of F1 for DocQA. This encourages us to incorporate ELMo into our model in future."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission.", "Table TABREF21 reports comparison results in literature published ."]}]
How much better than the baseline is LiLi?
{"label_key": "1802.06024", "label_file": "paper_tab_qa", "q_uid": "286078813136943dfafb5155ee15d2429e7601d9", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "In case of Freebase knowledge base, LiLi model had better F1 score than the single model by 0.20 , 0.01, 0.159 for kwn, unk, and all test Rel type. The values for WordNet are 0.25, 0.1, 0.2. \n", "type": "abstractive"}]
[{"raw_evidence": ["Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.", "Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.", "Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.", "F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .", "BG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.", "w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.", "Evaluation-I: Strategy Formulation Ability. Table 5 shows the list of inference strategies formulated by LiLi for various INLINEFORM0 and INLINEFORM1 , which control the strategy formulation of LiLi. When INLINEFORM2 , LiLi cannot interact with user and works like a closed-world method. Thus, INLINEFORM3 drops significantly (0.47). When INLINEFORM4 , i.e. with only one interaction per query, LiLi acquires knowledge well for instances where either of the entities or relation is unknown. However, as one unknown entity may appear in multiple test triples, once the entity becomes known, LiLi doesn’t need to ask for it again and can perform inference on future triples causing significant increase in INLINEFORM5 (0.97). When INLINEFORM6 , LiLi is able to perform inference on all instances and INLINEFORM7 becomes 1. For INLINEFORM8 , LiLi uses INLINEFORM9 only once (as only one MLQ satisfies INLINEFORM10 ) compared to INLINEFORM11 . In summary, LiLi’s RL-model can effectively formulate query-specific inference strategies (based on specified parameter values). Evaluation-II: Predictive Performance. Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. Single model performs better than Sep for unknown relations due to the sharing of knowledge (weights) across tasks. However, for known relations, performance drops because, as a new relation arrives to the system, old weights get corrupted and catastrophic forgetting occurs. For unknown relations, as the relations are evaluated just after training, there is no chance for catastrophic forgetting. The performance improvement ( INLINEFORM15 ) of LiLi over F-th on Freebase signifies that the relation-specific threshold INLINEFORM16 works better than fixed threshold 0.5 because, if all prediction values for test instances lie above (or below) 0.5, F-th predicts all instances as +ve (-ve) which degrades its performance. Due to the utilization of contextual similarity (highly correlated with class labels) of entity-pairs, LiLi’s guessing mechanism works better ( INLINEFORM17 ) than blind guessing (BG). The past task selection mechanism of LiLi also improves its performance over w/o PTS, as it acquires more clues during testing for poorly performed tasks (evaluated on validation set). For Freebase, due to a large number of past tasks [9 (25% of 38)], the performance difference is more significant ( INLINEFORM18 ). For WordNet, the number is relatively small [3 (25% of 14)] and hence, the difference is not significant.", "FLOAT SELECTED: Table 6: Comparison of predictive performance of various versions of LiLi [kwn = known, unk = unknown, all = overall]."], "highlighted_evidence": ["Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.\n\nSingle: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.\n\nSep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.\n\nF-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .\n\nBG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.\n\nw/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.", "Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. ", "FLOAT SELECTED: Table 6: Comparison of predictive performance of various versions of LiLi [kwn = known, unk = unknown, all = overall]."]}]
How many labels do the datasets have?
{"label_key": "1809.00530", "label_file": "paper_tab_qa", "q_uid": "6aa2a1e2e3666f2b2a1f282d4cbdd1ca325eb9de", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "719313", "type": "abstractive"}, {"answer": "Book, Electronics, Beauty and Music each have 6000, IMDB 84919, Yelp 231163, Cell Phone 194792 and Baby 160792 labeled data.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Summary of datasets."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Summary of datasets."]}, {"raw_evidence": ["Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 . We keep all reviews in the original datasets and consider a transductive setting where all target examples are used for both training (without label information) and evaluation. We perform sampling to balance the classes of labeled source data in each minibatch INLINEFORM3 during training.", "Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .", "FLOAT SELECTED: Table 1: Summary of datasets."], "highlighted_evidence": ["Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 .", "Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .", "FLOAT SELECTED: Table 1: Summary of datasets."]}]
What are the source and target domains?
{"label_key": "1809.00530", "label_file": "paper_tab_qa", "q_uid": "9176d2ba1c638cdec334971c4c7f1bb959495a8e", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Book, electronics, beauty, music, IMDB, Yelp, cell phone, baby, DVDs, kitchen", "type": "abstractive"}, {"answer": "we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain, Book (BK), Electronics (E), Beauty (BT), and Music (M)", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Summary of datasets.", "Most previous works BIBREF0 , BIBREF1 , BIBREF6 , BIBREF7 , BIBREF29 carried out experiments on the Amazon benchmark released by Blitzer et al. ( BIBREF0 ). The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). Following their experimental settings, we consider the binary classification task to predict whether a review is positive or negative on the target domain. Each domain consists of 1000 positive and 1000 negative reviews respectively. We also allow 4000 unlabeled reviews to be used for both the source and the target domains, of which the positive and negative reviews are balanced as well, following the settings in previous works. We construct 12 cross-domain sentiment classification tasks and split the labeled data in each domain into a training set of 1600 reviews and a test set of 400 reviews. The classifier is trained on the training set of the source domain and is evaluated on the test set of the target domain. The comparison results are shown in Table TABREF37 ."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Summary of datasets.", "The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). "]}, {"raw_evidence": ["Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .", "In all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain. Since we cannot control the label distribution of unlabeled data during training, we consider two different settings:"], "highlighted_evidence": ["It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .\n\nIn all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain."]}]
Which datasets are used?
{"label_key": "1912.08960", "label_file": "paper_tab_qa", "q_uid": "b1bc9ae9d40e7065343c12f860a461c7c730a612", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Existential (OneShape, MultiShapes), Spacial (TwoShapes, Multishapes), Quantification (Count, Ratio) datasets are generated from ShapeWorldICE", "type": "abstractive"}, {"answer": "ShapeWorldICE datasets: OneShape, MultiShapes, TwoShapes, MultiShapes, Count, and Ratio", "type": "abstractive"}]
[{"raw_evidence": ["We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."], "highlighted_evidence": ["We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."]}, {"raw_evidence": ["Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE. We empirically demonstrate that the existing metrics BLEU and SPICE do not capture true caption-image agreement in all scenarios, while the GTD framework allows a fine-grained investigation of how well existing models cope with varied visual situations and linguistic constructions.", "We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."], "highlighted_evidence": ["Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE.", "We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."]}]
What are previous state of the art results?
{"label_key": "2002.11910", "label_file": "paper_tab_qa", "q_uid": "9da1e124d28b488b0d94998d32aa2fa8a5ebec51", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Overall F1 score:\n- He and Sun (2017) 58.23\n- Peng and Dredze (2017) 58.99\n- Xu et al. (2018) 59.11", "type": "abstractive"}, {"answer": "For Named entity the maximum precision was 66.67%, and the average 62.58%, same values for Recall was 55.97% and 50.33%, and for F1 57.14% and 55.64%. Where for Nominal Mention had maximum recall of 74.48% and average of 73.67%, Recall had values of 54.55% and 53.7%, and F1 had values of 62.97% and 62.12%. Finally the Overall F1 score had maximum value of 59.11% and average of 58.77%", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."]}, {"raw_evidence": ["Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. This best model performance is achieved with a dropout rate of 0.1, and a learning rate of 0.05. Our results are compared with state-of-the-art models BIBREF15, BIBREF19, BIBREF20 on the same Sina Weibo training and test datasets. Our model shows an absolute improvement of 2% for the overall F1 score.", "FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."], "highlighted_evidence": ["Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. ", "FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."]}]
What is the model performance on target language reading comprehension?
{"label_key": "1909.09587", "label_file": "paper_tab_qa", "q_uid": "37be0d479480211291e068d0d3823ad0c13321d3", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Table TABREF6, Table TABREF8", "type": "extractive"}, {"answer": "when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, F1 score is only 44.1 for the model training on Zh-En", "type": "extractive"}]
[{"raw_evidence": ["Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "FLOAT SELECTED: Table 1: EM/F1 scores over Chinese testing set.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."], "highlighted_evidence": ["Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. ", "Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "FLOAT SELECTED: Table 1: EM/F1 scores over Chinese testing set.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."]}, {"raw_evidence": ["Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En. This shows that translation degrades the quality of data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improve the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX v.s. Zh-XX), with only one exception (En-Fr v.s. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of dataset are much more important than whether the training and testing are in the same language or not.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."], "highlighted_evidence": ["Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean.", "For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."]}]
What source-target language pairs were used in this work?
{"label_key": "1909.09587", "label_file": "paper_tab_qa", "q_uid": "a3d9b101765048f4b61cbd3eaa2439582ebb5c77", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "En-Fr, En-Zh, En-Jp, En-Kr, Zh-En, Zh-Fr, Zh-Jp, Zh-Kr to English, Chinese or Korean", "type": "abstractive"}, {"answer": "English , Chinese", "type": "extractive"}, {"answer": "English, Chinese, Korean, we translated the English and Chinese datasets into more languages, with Google Translate", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."]}, {"raw_evidence": ["In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En. This shows that translation degrades the quality of data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improve the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX v.s. Zh-XX), with only one exception (En-Fr v.s. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of dataset are much more important than whether the training and testing are in the same language or not.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."], "highlighted_evidence": ["In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. ", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."]}, {"raw_evidence": ["We have training and testing sets in three different languages: English, Chinese and Korean. The English dataset is SQuAD BIBREF2. The Chinese dataset is DRCD BIBREF14, a Chinese RC dataset with 30,000+ examples in the training set and 10,000+ examples in the development set. The Korean dataset is KorQuAD BIBREF15, a Korean RC dataset with 60,000+ examples in the training set and 10,000+ examples in the development set, created in exactly the same procedure as SQuAD. We always use the development sets of SQuAD, DRCD and KorQuAD for testing since the testing sets of the corpora have not been released yet.", "Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate. An obvious issue with this method is that some examples might no longer have a recoverable span. To solve the problem, we use fuzzy matching to find the most possible answer, which calculates minimal edit distance between translated answer and all possible spans. If the minimal edit distance is larger than min(10, lengths of translated answer - 1), we drop the examples during training, and treat them as noise when testing. In this way, we can recover more than 95% of examples. The following generated datasets are recovered with same setting."], "highlighted_evidence": ["We have training and testing sets in three different languages: English, Chinese and Korean.", "Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate."]}]
Which baselines did they compare against?
{"label_key": "1809.02286", "label_file": "paper_tab_qa", "q_uid": "0ad4359e3e7e5e5f261c2668fe84c12bc762b3b8", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Various tree structured neural networks including variants of Tree-LSTM, Tree-based CNN, RNTN, and non-tree models including variants of LSTMs, CNNs, residual, and self-attention based networks", "type": "abstractive"}, {"answer": "Sentence classification baselines: RNTN (Socher et al. 2013), AdaMC-RNTN (Dong et al. 2014), TE-RNTN (Qian et al. 2015), TBCNN (Mou et al. 2015), Tree-LSTM (Tai, Socher, and Manning 2015), AdaHT-LSTM-CM (Liu, Qiu, and Huang 2017), DC-TreeLSTM (Liu, Qiu, and Huang 2017), TE-LSTM (Huang, Qian, and Zhu 2017), BiConTree (Teng and Zhang 2017), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), TreeNet (Cheng et al. 2018), CNN (Kim 2014), AdaSent (Zhao, Lu, and Poupart 2015), LSTM-CNN (Zhou et al. 2016), byte-mLSTM (Radford, Jozefowicz, and Sutskever 2017), BCN + Char + CoVe (McCann et al. 2017), BCN + Char + ELMo (Peters et al. 2018). \nStanford Natural Language Inference baselines: Latent Syntax Tree-LSTM (Yogatama et al. 2017), Tree-based CNN (Mou et al. 2016), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), NSE (Munkhdalai and Yu 2017), Reinforced Self- Attention Network (Shen et al. 2018), Residual stacked encoders: (Nie and Bansal 2017), BiLSTM with generalized pooling (Chen, Ling, and Zhu 2018).", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.", "Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.", "Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).)"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.", "Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.", "Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).)"]}]
What baselines did they consider?
{"label_key": "1809.01202", "label_file": "paper_tab_qa", "q_uid": "4cbe5a36b492b99f9f9fea8081fe4ba10a7a0e94", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "state-of-the-art PDTB taggers", "type": "extractive"}, {"answer": "Linear SVM, RBF SVM, and Random Forest", "type": "abstractive"}]
[{"raw_evidence": ["We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message). Then, we compare how models work for each task and disassembled them to inspect how each part of the models can affect their final prediction performances. We conducted McNemar's test to determine whether the performance differences are statistically significant at $p < .05$ ."], "highlighted_evidence": ["We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message)."]}, {"raw_evidence": ["FLOAT SELECTED: Table 5: Causal explanation identification performance. Bold indicates significant imrpovement over next best model (p < .05)"], "highlighted_evidence": ["FLOAT SELECTED: Table 5: Causal explanation identification performance. Bold indicates significant imrpovement over next best model (p < .05)"]}]
By how much more does PARENT correlate with human judgements in comparison to other text generation metrics?
{"label_key": "1906.01081", "label_file": "paper_tab_qa", "q_uid": "ffa7f91d6406da11ddf415ef094aaf28f3c3872d", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Best proposed metric has average correlation with human judgement of 0.913 and 0.846 compared to best compared metrics result of 0.758 and 0.829 on WikiBio and WebNLG challenge.", "type": "abstractive"}, {"answer": "Their average correlation tops the best other model by 0.155 on WikiBio.", "type": "abstractive"}]
[{"raw_evidence": ["We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 .", "FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1.", "FLOAT SELECTED: Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge.", "The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table TABREF48 . Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well."], "highlighted_evidence": ["We report the average correlation across all bootstrap samples for each metric in Table TABREF37 .", "FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1.", "FLOAT SELECTED: Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge.", "We report the correlation of several metrics with these ratings in Table TABREF48 ."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1."]}]
Which stock market sector achieved the best performance?
{"label_key": "1812.10479", "label_file": "paper_tab_qa", "q_uid": "b634ff1607ce5756655e61b9a6f18bc736f84c83", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Energy with accuracy of 0.538", "type": "abstractive"}, {"answer": "Energy", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 8: Sector-level performance comparison."], "highlighted_evidence": ["FLOAT SELECTED: Table 8: Sector-level performance comparison."]}, {"raw_evidence": ["FLOAT SELECTED: Table 7: Our volatility model performance compared with GARCH(1,1). Best performance in bold. Our model has superior performance across the three evaluation metrics and taking into consideration the state-of-the-art volatility proxies, namely Garman-Klass (σ̂PK) and Parkinson (σ̂PK)."], "highlighted_evidence": ["FLOAT SELECTED: Table 7: Our volatility model performance compared with GARCH(1,1). Best performance in bold. Our model has superior performance across the three evaluation metrics and taking into consideration the state-of-the-art volatility proxies, namely Garman-Klass (σ̂PK) and Parkinson (σ̂PK)."]}]
How much does their model outperform existing models?
{"label_key": "1909.08089", "label_file": "paper_tab_qa", "q_uid": "de5b6c25e35b3a6c5e40e350fc5e52c160b33490", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Best proposed model result vs best previous result:\nArxiv dataset: Rouge 1 (43.62 vs 42.81), Rouge L (29.30 vs 31.80), Meteor (21.78 vs 21.35)\nPubmed dataset: Rouge 1 (44.85 vs 44.29), Rouge L (31.48 vs 35.21), Meteor (20.83 vs 20.56)", "type": "abstractive"}, {"answer": "On arXiv dataset, the proposed model outperforms baseline model by (ROUGE-1,2,L) 0.67 0.72 0.77 respectively and by Meteor 0.31.\n", "type": "abstractive"}]
[{"raw_evidence": ["The performance of all models on arXiv and Pubmed is shown in Table TABREF28 and Table TABREF29 , respectively. Follow the work BIBREF18 , we use the approximate randomization as the statistical significance test method BIBREF32 with a Bonferroni correction for multiple comparisons, at the confidence level 0.01 ( INLINEFORM0 ). As we can see in these tables, on both datasets, the neural extractive models outperforms the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L. Presumably, this is due to the neural training process, which relies on a goal standard based on ROUGE-1. Exploring other training schemes and/or a combination of traditional and neural approaches is left as future work. Similarly, the neural extractive models also dominate the neural abstractive models on ROUGE-1,2, but these abstractive models tend to have the highest ROUGE-L scores, possibly because they are trained directly on gold standard abstract summaries.", "FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."], "highlighted_evidence": ["The performance of all models on arXiv and Pubmed is shown in Table TABREF28 and Table TABREF29 , respectively.", "As we can see in these tables, on both datasets, the neural extractive models outperforms the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L.", "FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."]}, {"raw_evidence": ["FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."], "highlighted_evidence": ["FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."]}]
What embedding techniques are explored in the paper?
{"label_key": "1609.00559", "label_file": "paper_tab_qa", "q_uid": "8b3d3953454c88bde88181897a7a2c0c8dd87e23", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Skip–gram, CBOW", "type": "extractive"}, {"answer": "integrated vector-res, vector-faith, Skip–gram, CBOW", "type": "extractive"}]
[{"raw_evidence": ["muneeb2015evalutating trained both the Skip–gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS restricting to single word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip–gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and the MayoSRS datasets; any subset information about the dataset was not explicitly stated therefore we believe a direct comparison may be possible.", "FLOAT SELECTED: Table 4: Comparison with Previous Work"], "highlighted_evidence": ["chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed.", "FLOAT SELECTED: Table 4: Comparison with Previous Work"]}, {"raw_evidence": ["Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. N refers to the number of term pairs in the dataset the authors report they evaluated their method. The table also includes our top scoring results: the integrated vector-res and vector-faith. The results show that integrating semantic similarity measures into second–order co–occurrence vectors obtains a higher or on–par correlation with human judgments as the previous works reported results with the exception of the UMNSRS rel dataset. The results reported by Pakhomov2016corpus and chiu2016how obtain a higher correlation although the results can not be directly compared because both works used different subsets of the term pairs from the UMNSRS dataset.", "muneeb2015evalutating trained both the Skip–gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS restricting to single word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip–gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and the MayoSRS datasets; any subset information about the dataset was not explicitly stated therefore we believe a direct comparison may be possible."], "highlighted_evidence": ["Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. N refers to the number of term pairs in the dataset the authors report they evaluated their method. The table also includes our top scoring results: the integrated vector-res and vector-faith.", "chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia)."]}]
Which other approaches do they compare their model with?
{"label_key": "1904.10503", "label_file": "paper_tab_qa", "q_uid": "5a65ad10ff954d0f27bb3ccd9027e3d8f7f6bb76", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Akbik et al. (2018), Link et al. (2012)", "type": "abstractive"}, {"answer": "They compare to Akbik et al. (2018) and Link et al. (2012).", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 3: Comparison with existing models."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Comparison with existing models."]}, {"raw_evidence": ["In this paper, we present a deep neural network model for the task of fine-grained named entity classification using ELMo embeddings and Wikidata. The proposed model learns representations for entity mentions based on its context and incorporates the rich structure of Wikidata to augment these labels into finer-grained subtypes. We can see comparisons of our model made on Wiki(gold) in Table TABREF20 . We note that the model performs similarly to existing systems without being trained or tuned on that particular dataset. Future work may include refining the clustering method described in Section 2.2 to extend to types other than person, location, organization, and also to include disambiguation of entity types.", "FLOAT SELECTED: Table 3: Comparison with existing models."], "highlighted_evidence": ["We can see comparisons of our model made on Wiki(gold) in Table TABREF20 .", "FLOAT SELECTED: Table 3: Comparison with existing models."]}]
How is non-standard pronunciation identified?
{"label_key": "1912.01772", "label_file": "paper_tab_qa", "q_uid": "f9bf6bef946012dd42835bf0c547c0de9c1d229f", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Original transcription was labeled with additional labels in [] brackets with nonstandard pronunciation.", "type": "abstractive"}]
[{"raw_evidence": ["In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.", "FLOAT SELECTED: Table 2: Example of an utterance along with the different annotations. We additionally highlight the code-switching annotations ([SPA] indicates Spanish words) as well as pre-normalized transcriptions that indicating non-standard pronunciations ([!1pu’] indicates that the previous 1 word was pronounced as ‘pu’’ instead of ‘pues’)."], "highlighted_evidence": ["In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.", "FLOAT SELECTED: Table 2: Example of an utterance along with the different annotations. We additionally highlight the code-switching annotations ([SPA] indicates Spanish words) as well as pre-normalized transcriptions that indicating non-standard pronunciations ([!1pu’] indicates that the previous 1 word was pronounced as ‘pu’’ instead of ‘pues’)."]}]
What kind of celebrities do they obtain tweets from?
{"label_key": "1909.04002", "label_file": "paper_tab_qa", "q_uid": "4d28c99750095763c81bcd5544491a0ba51d9070", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Amitabh Bachchan, Ariana Grande, Barack Obama, Bill Gates, Donald Trump,\nEllen DeGeneres, J K Rowling, Jimmy Fallon, Justin Bieber, Kevin Durant, Kim Kardashian, Lady Gaga, LeBron James,Narendra Modi, Oprah Winfrey", "type": "abstractive"}, {"answer": "Celebrities from varioius domains - Acting, Music, Politics, Business, TV, Author, Sports, Modeling. ", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"]}]
What summarization algorithms did the authors experiment with?
{"label_key": "1712.00991", "label_file": "paper_tab_qa", "q_uid": "443d2448136364235389039cbead07e80922ec5c", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "LSA, TextRank, LexRank and ILP-based summary.", "type": "abstractive"}, {"answer": "LSA, TextRank, LexRank", "type": "abstractive"}]
[{"raw_evidence": ["We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries.", "FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms"], "highlighted_evidence": ["Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries.", "For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package.", "FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms"]}, {"raw_evidence": ["FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms", "We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries."], "highlighted_evidence": ["FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms", "For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. ", "Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. "]}]
What evaluation metrics are looked at for classification tasks?
{"label_key": "1712.00991", "label_file": "paper_tab_qa", "q_uid": "fb3d30d59ed49e87f63d3735b876d45c4c6b8939", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Precision, Recall, F-measure, accuracy", "type": "extractive"}, {"answer": "Precision, Recall and F-measure", "type": "extractive"}]
[{"raw_evidence": ["Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3", "We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation.", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2."], "highlighted_evidence": ["Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . ", "The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. ", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2."]}, {"raw_evidence": ["Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3"], "highlighted_evidence": ["Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 ."]}]
What methods were used for sentence classification?
{"label_key": "1712.00991", "label_file": "paper_tab_qa", "q_uid": "197b276d0610ebfacd57ab46b0b29f3033c96a40", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK and Pattern-based", "type": "abstractive"}, {"answer": "Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK, Pattern-based approach", "type": "abstractive"}]
[{"raw_evidence": ["We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation.", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1."], "highlighted_evidence": ["Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation.", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1."]}, {"raw_evidence": ["FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2.", "We manually tagged the same 2000 sentences in Dataset D1 with attributes, where each sentence may get 0, 1, 2, etc. up to 15 class labels (this is dataset D2). This labelled dataset contained 749, 206, 289, 207, 91, 223, 191, 144, 103, 80, 82, 42, 29, 15, 24 sentences having the class labels listed in Table TABREF20 in the same order. The number of sentences having 0, 1, 2, or more than 2 attributes are: 321, 1070, 470 and 139 respectively. We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.", "We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation."], "highlighted_evidence": ["FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2.", "We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.", "We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. "]}]
What modern MRC gold standards are analyzed?
{"label_key": "2003.04642", "label_file": "paper_tab_qa", "q_uid": "9ecde59ffab3c57ec54591c3c7826a9188b2b270", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations", "type": "extractive"}, {"answer": "MSMARCO, HOTPOTQA, RECORD, MULTIRC, NEWSQA, and DROP.", "type": "abstractive"}]
[{"raw_evidence": ["We select contemporary MRC benchmarks to represent all four commonly used problem definitions BIBREF15. In selecting relevant datasets, we do not consider those that are considered “solved”, i.e. where the state of the art performance surpasses human performance, as is the case with SQuAD BIBREF28, BIBREF7. Concretely, we selected gold standards that fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations, and bucket them according to the answer selection styles as described in Section SECREF4 We randomly draw one from each bucket and add two randomly drawn datasets from the candidate pool. This leaves us with the datasets described in Table TABREF19. For a more detailed description, we refer to Appendix ."], "highlighted_evidence": ["Concretely, we selected gold standards that fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations, and bucket them according to the answer selection styles as described in Section SECREF4"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Summary of selected datasets"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Summary of selected datasets"]}]
What was the score of the proposed model?
{"label_key": "1904.07904", "label_file": "paper_tab_qa", "q_uid": "38f58f13c7f23442d5952c8caf126073a477bac0", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Best results authors obtain is EM 51.10 and F1 63.11", "type": "abstractive"}, {"answer": "EM Score of 51.10", "type": "abstractive"}]
[{"raw_evidence": ["To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 . The baselines are: (a) trained on S-SQuAD, (b) trained on T-SQuAD and then fine-tuned on S-SQuAD, and (c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 . We also compare to the approach proposed by Lan et al. BIBREF16 in the row (d). This approach is originally proposed for spoken language understanding, and we adopt the same approach on the setting here. The approach models domain-specific features from the source and target domains separately by two different embedding encoders with a shared embedding encoder for modeling domain-general features. The domain-general parameters are adversarially trained by domain discriminator."], "highlighted_evidence": ["To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 ."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2. The EM/F1 scores of proposed adversarial domain adaptation approaches over Spoken-SQuAD."], "highlighted_evidence": ["FLOAT SELECTED: Table 2. The EM/F1 scores of proposed adversarial domain adaptation approaches over Spoken-SQuAD."]}]
What hyperparameters are explored?
{"label_key": "2003.11645", "label_file": "paper_tab_qa", "q_uid": "27275fe9f6a9004639f9ac33c3a5767fea388a98", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Dimension size, window size, architecture, algorithm, epochs, hidden dimension size, learning rate, loss function, optimizer algorithm.", "type": "abstractive"}, {"answer": "Hyperparameters explored were: dimension size, window size, architecture, algorithm and epochs.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices", "FLOAT SELECTED: Table 2: Network hyper-parameters"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices", "FLOAT SELECTED: Table 2: Network hyper-parameters"]}, {"raw_evidence": ["To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased.", "FLOAT SELECTED: Table 1: Hyper-parameter choices"], "highlighted_evidence": ["Table TABREF2 describes most hyper-parameters explored for each dataset.", "FLOAT SELECTED: Table 1: Hyper-parameter choices"]}]
Do they test both skipgram and c-bow?
{"label_key": "2003.11645", "label_file": "paper_tab_qa", "q_uid": "c2d1387e08cf25cb6b1f482178cca58030e85b70", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Yes", "type": "boolean"}, {"answer": "Yes", "type": "boolean"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices", "To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices", "Table TABREF2 describes most hyper-parameters explored for each dataset."]}]
what is the state of the art?
{"label_key": "1608.06757", "label_file": "paper_tab_qa", "q_uid": "c2b8ee872b99f698b3d2082d57f9408a91e1b4c1", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Babelfy, DBpedia Spotlight, Entityclassifier.eu, FOX, LingPipe MUC-7, NERD-ML, Stanford NER, TagMe 2", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: Comparison of annotators trained for common English news texts (micro-averaged scores on match per annotation span). The table shows micro-precision, recall and NER-style F1 for CoNLL2003, KORE50, ACE2004 and MSNBC datasets."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Comparison of annotators trained for common English news texts (micro-averaged scores on match per annotation span). The table shows micro-precision, recall and NER-style F1 for CoNLL2003, KORE50, ACE2004 and MSNBC datasets."]}]
Do the authors also analyze transformer-based architectures?
{"label_key": "1806.04330", "label_file": "paper_tab_qa", "q_uid": "8bf7f1f93d0a2816234d36395ab40c481be9a0e0", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "No", "type": "boolean"}, {"answer": "No", "type": "boolean"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Summary of representative neural models for sentence pair modeling. The upper half contains sentence encoding models, and the lower half contains sentence pair interaction models."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Summary of representative neural models for sentence pair modeling. The upper half contains sentence encoding models, and the lower half contains sentence pair interaction models."]}, {"raw_evidence": [], "highlighted_evidence": []}]
what were the baselines?
{"label_key": "1904.03288", "label_file": "paper_tab_qa", "q_uid": "2ddb51b03163d309434ee403fef42d6b9aecc458", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "LF-MMI Attention\nSeq2Seq \nRNN-T \nChar E2E LF-MMI \nPhone E2E LF-MMI \nCTC + Gram-CTC", "type": "abstractive"}]
[{"raw_evidence": ["We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)"], "highlighted_evidence": [" We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)"]}]
what competitive results did they obtain?
{"label_key": "1904.03288", "label_file": "paper_tab_qa", "q_uid": "e587559f5ab6e42f7d981372ee34aebdc92b646e", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "In case of read speech datasets, their best model got the highest nov93 score of 16.1 and the highest nov92 score of 13.3.\nIn case of Conversational Speech, their best model got the highest SWB of 8.3 and the highest CHM of 19.3. ", "type": "abstractive"}, {"answer": "On WSJ datasets author's best approach achieves 9.3 and 6.9 WER compared to best results of 7.5 and 4.1 on nov93 and nov92 subsets.\nOn Hub5'00 datasets author's best approach achieves WER of 7.8 and 16.2 compared to best result of 7.3 and 14.2 on Switchboard (SWB) and Callhome (CHM) subsets.", "type": "abstractive"}]
[{"raw_evidence": ["We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)", "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."], "highlighted_evidence": ["We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)", "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."]}, {"raw_evidence": ["FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)", "We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."], "highlighted_evidence": ["FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)", "We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."]}]
By how much is performance improved with multimodality?
{"label_key": "1909.13714", "label_file": "paper_tab_qa", "q_uid": "f68508adef6f4bcdc0cc0a3ce9afc9a2b6333cc5", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "by 2.3-6.8 points in f1 score for intent recognition and 0.8-3.5 for slot filling", "type": "abstractive"}, {"answer": "F1 score increased from 0.89 to 0.92", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models"]}, {"raw_evidence": ["For incorporating speech embeddings experiments, performance results of NLU models on in-cabin data with various feature concatenations can be found in Table TABREF3, using our previous hierarchical joint model (H-Joint-2). When used in isolation, Word2Vec and Speech2Vec achieves comparable performances, which cannot reach GloVe performance. This was expected as the pre-trained Speech2Vec vectors have lower vocabulary coverage than GloVe. Yet, we observed that concatenating GloVe + Speech2Vec, and further GloVe + Word2Vec + Speech2Vec yields better NLU results: F1-score increased from 0.89 to 0.91 for intent recognition, from 0.96 to 0.97 for slot filling.", "For multimodal (audio & video) features exploration, performance results of the compared models with varying modality/feature concatenations can be found in Table TABREF4. Since these audio/video features are extracted per utterance (on segmented audio & video clips), we experimented with the utterance-level intent recognition task only, using hierarchical joint learning (H-Joint-2). We investigated the audio-visual feature additions on top of text-only and text+speech embedding models. Adding openSMILE/IS10 features from audio, as well as incorporating intermediate CNN/Inception-ResNet-v2 features from video brought slight improvements to our intent models, reaching 0.92 F1-score. These initial results using feature concatenations may need further explorations, especially for certain intent-types such as stop (audio intensity) or relevant slots such as passenger gestures/gaze (from cabin video) and outside objects (from road video)."], "highlighted_evidence": ["Yet, we observed that concatenating GloVe + Speech2Vec, and further GloVe + Word2Vec + Speech2Vec yields better NLU results: F1-score increased from 0.89 to 0.91 for intent recognition, from 0.96 to 0.97 for slot filling.", "We investigated the audio-visual feature additions on top of text-only and text+speech embedding models. Adding openSMILE/IS10 features from audio, as well as incorporating intermediate CNN/Inception-ResNet-v2 features from video brought slight improvements to our intent models, reaching 0.92 F1-score."]}]
How much is performance improved on NLI?
{"label_key": "1909.03405", "label_file": "paper_tab_qa", "q_uid": "bdc91d1283a82226aeeb7a2f79dbbc57d3e84a1a", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": " improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase", "type": "extractive"}, {"answer": "The average score improved by 1.4 points over the previous best result.", "type": "abstractive"}]
[{"raw_evidence": ["Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase. Besides NLI, our model also performs better than BERTBase in the STS task. The STS tasks are semantically similar to the NLI tasks, and hence able to take advantage of PSP as well. Actually, the proposed method has a positive effect whenever the input is a sentence pair. The improvements suggest that the PSP task encourages the model to learn more detailed semantics in the pre-training, which improves the model on the downstream learning tasks. Moreover, our method is surprisingly able to achieve slightly better results in the single-sentence problem. The improvement should be attributed to better semantic representation.", "FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."], "highlighted_evidence": ["Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase.", "FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."]}]
what was the baseline?
{"label_key": "1907.03060", "label_file": "paper_tab_qa", "q_uid": "761de1610e934189850e8fda707dc5239dd58092", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "pivot-based translation relying on a helping language BIBREF10, nduction of phrase tables from monolingual data BIBREF14 , attentional RNN-based model (RNMT) BIBREF2, Transformer model BIBREF18, bi-directional model BIBREF11, multi-to-multi (M2M) model BIBREF8, back-translation BIBREF17", "type": "extractive"}, {"answer": "M2M Transformer", "type": "abstractive"}]
[{"raw_evidence": ["We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .", "As for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .", "After identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 ."], "highlighted_evidence": ["We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .\n\nAs for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .\n\nAfter identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 ."]}, {"raw_evidence": ["In this paper, we challenged the difficult task of Ja INLINEFORM0 Ru news domain translation in an extremely low-resource setting. We empirically confirmed the limited success of well-established solutions when restricted to in-domain data. Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . This paper contains an empirical comparison of several existing approaches and hence we hope that our paper can act as a guideline to researchers attempting to tackle extremely low-resource translation.", "FLOAT SELECTED: Table 13: Summary of our investigation: BLEU scores of the best NMT systems at each step."], "highlighted_evidence": ["Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . ", "FLOAT SELECTED: Table 13: Summary of our investigation: BLEU scores of the best NMT systems at each step."]}]
How larger are the training sets of these versions of ELMo compared to the previous ones?
{"label_key": "1911.10049", "label_file": "paper_tab_qa", "q_uid": "603fee7314fa65261812157ddfc2c544277fcf90", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "By 14 times.", "type": "abstractive"}, {"answer": "up to 1.95 times larger", "type": "abstractive"}]
[{"raw_evidence": ["Recently, ELMoForManyLangs BIBREF6 project released pre-trained ELMo models for a number of different languages BIBREF7. These models, however, were trained on a significantly smaller datasets. They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. The quality of these models is questionable. For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens. The difference of each model on the word analogy task is shown in Figure FIGREF16 in Section SECREF5. As the results of the ELMoForManyLangs embeddings are significantly worse than using the full corpus, we can conclude that these embeddings are not of sufficient quality. For that reason, we computed ELMo embeddings for seven languages on much larger corpora. As this effort requires access to large amount of textual data and considerable computational resources, we made the precomputed models publicly available by depositing them to Clarin repository."], "highlighted_evidence": ["They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. ", "For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens."]}, {"raw_evidence": ["Although ELMo is trained on character level and is able to handle out-of-vocabulary words, a vocabulary file containing most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese.", "FLOAT SELECTED: Table 1: The training corpora used. We report their size (in billions of tokens), and ELMo vocabulary size (in millions of tokens)."], "highlighted_evidence": ["The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words.", "FLOAT SELECTED: Table 1: The training corpora used. We report their size (in billions of tokens), and ELMo vocabulary size (in millions of tokens)."]}]
What is the improvement in performance for Estonian in the NER task?
{"label_key": "1911.10049", "label_file": "paper_tab_qa", "q_uid": "09a1173e971e0fcdbf2fbecb1b077158ab08f497", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "5 percent points.", "type": "abstractive"}, {"answer": "0.05 F1", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."], "highlighted_evidence": ["FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."]}, {"raw_evidence": ["FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."], "highlighted_evidence": ["FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."]}]
what is the state of the art on WSJ?
{"label_key": "1812.06864", "label_file": "paper_tab_qa", "q_uid": "70e9210fe64f8d71334e5107732d764332a81cb1", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "CNN-DNN-BLSTM-HMM", "type": "abstractive"}, {"answer": "HMM-based system", "type": "extractive"}]
[{"raw_evidence": ["Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92. DeepSpeech 2 shows a WER of INLINEFORM1 but uses 150 times more training data for the acoustic model and huge text datasets for LM training. Finally, the state-of-the-art among end-to-end systems trained only on WSJ, and hence the most comparable to our system, uses lattice-free MMI on augmented data (with speed perturbation) and gets INLINEFORM2 WER. Our baseline system, trained on mel-filterbanks, and decoded with a n-gram language model has a INLINEFORM3 WER. Replacing the n-gram LM by a convolutional one reduces the WER to INLINEFORM4 , and puts our model on par with the current best end-to-end system. Replacing the speech features by a learnable frontend finally reduces the WER to INLINEFORM5 and then to INLINEFORM6 when doubling the number of learnable filters, improving over DeepSpeech 2 and matching the performance of the best HMM-DNN system.", "FLOAT SELECTED: Table 1: WER (%) on the open vocabulary task of WSJ."], "highlighted_evidence": ["Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92.", "FLOAT SELECTED: Table 1: WER (%) on the open vocabulary task of WSJ."]}, {"raw_evidence": ["Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92. DeepSpeech 2 shows a WER of INLINEFORM1 but uses 150 times more training data for the acoustic model and huge text datasets for LM training. Finally, the state-of-the-art among end-to-end systems trained only on WSJ, and hence the most comparable to our system, uses lattice-free MMI on augmented data (with speed perturbation) and gets INLINEFORM2 WER. Our baseline system, trained on mel-filterbanks, and decoded with a n-gram language model has a INLINEFORM3 WER. Replacing the n-gram LM by a convolutional one reduces the WER to INLINEFORM4 , and puts our model on par with the current best end-to-end system. Replacing the speech features by a learnable frontend finally reduces the WER to INLINEFORM5 and then to INLINEFORM6 when doubling the number of learnable filters, improving over DeepSpeech 2 and matching the performance of the best HMM-DNN system."], "highlighted_evidence": ["Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92."]}]
what is the size of the augmented dataset?
{"label_key": "1811.12254", "label_file": "paper_tab_qa", "q_uid": "57f23dfc264feb62f45d9a9e24c60bd73d7fe563", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "609", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Speech datasets used. Note that HAPD, HAFP and FP only have samples from healthy subjects. Detailed description in App. 2.", "All datasets shown in Tab. SECREF2 were transcribed manually by trained transcriptionists, employing the same list of annotations and protocols, with the same set of features extracted from the transcripts (see Sec. SECREF3 ). HAPD and HAFP are jointly referred to as HA.", "Binary classification of each speech transcript as AD or HC is performed. We do 5-fold cross-validation, stratified by subject so that each subject's samples do not occur in both training and testing sets in each fold. The minority class is oversampled in the training set using SMOTE BIBREF14 to deal with the class imbalance. We consider a Random Forest (100 trees), Naïve Bayes (with equal priors), SVM (with RBF kernel), and a 2-layer neural network (10 units, Adam optimizer, 500 epochs) BIBREF15 . Additionally, we augment the DB data with healthy samples from FP with varied ages.", "We augment DB with healthy samples from FP with varying ages (Tab. SECREF11 ), considering 50 samples for each 15 year duration starting from age 30. Adding the same number of samples from bins of age greater than 60 leads to greater increase in performance. This could be because the average age of participants in the datasets (DB, HA etc.) we use are greater than 60. Note that despite such a trend, addition of healthy data produces fair classifiers with respect to samples with age INLINEFORM0 60 and those with age INLINEFORM1 60 (balanced F1 scores of 75.6% and 76.1% respectively; further details in App. SECREF43 .)", "FLOAT SELECTED: Table 3: Augmenting DB with healthy data of varied ages. Scores averaged across 4 classifiers."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Speech datasets used. Note that HAPD, HAFP and FP only have samples from healthy subjects. Detailed description in App. 2.", "\nAll datasets shown in Tab. SECREF2 were transcribed manually by trained transcriptionists, employing the same list of annotations and protocols, with the same set of features extracted from the transcripts (see Sec. SECREF3 ). HAPD and HAFP are jointly referred to as HA.", "Additionally, we augment the DB data with healthy samples from FP with varied ages.", "We augment DB with healthy samples from FP with varying ages (Tab. SECREF11 ), considering 50 samples for each 15 year duration starting from age 30. ", "FLOAT SELECTED: Table 3: Augmenting DB with healthy data of varied ages. Scores averaged across 4 classifiers."]}]
How many sentences does the dataset contain?
{"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "d51dc36fbf6518226b8e45d4c817e07e8f642003", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "3606", "type": "abstractive"}, {"answer": "6946", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics"]}, {"raw_evidence": ["In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset."], "highlighted_evidence": ["In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset."]}]
What is the baseline?
{"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "cb77d6a74065cb05318faf57e7ceca05e126a80d", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "CNN modelBIBREF0, Stanford CRF modelBIBREF21", "type": "extractive"}, {"answer": "Bam et al. SVM, Ma and Hovy w/glove, Lample et al. w/fastText, Lample et al. w/word2vec", "type": "abstractive"}]
[{"raw_evidence": ["Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone."], "highlighted_evidence": ["First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21."]}, {"raw_evidence": ["FLOAT SELECTED: Table 6: Comparison with previous models based on Test F1 score"], "highlighted_evidence": ["FLOAT SELECTED: Table 6: Comparison with previous models based on Test F1 score"]}]
What is the size of the dataset?
{"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "a1b3e2107302c5a993baafbe177684ae88d6f505", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Dataset contains 3606 total sentences and 79087 total entities.", "type": "abstractive"}, {"answer": "ILPRL contains 548 sentences, OurNepali contains 3606 sentences", "type": "abstractive"}]
[{"raw_evidence": ["After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics"], "highlighted_evidence": ["The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "Dataset Statistics ::: OurNepali dataset", "Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25.", "Dataset Statistics ::: ILPRL dataset", "After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "Dataset Statistics ::: OurNepali dataset\nSince, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset.", "Dataset Statistics ::: ILPRL dataset\nAfter much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. ", " The statistics of both the dataset is presented in table TABREF23."]}]
How many different types of entities exist in the dataset?
{"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "1462eb312944926469e7cee067dfc7f1267a2a8c", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "OurNepali contains 3 different types of entities, ILPRL contains 4 different types of entities", "type": "abstractive"}, {"answer": "three", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments. The dataset is divided into three parts with 64%, 16% and 20% of the total dataset into training set, development set and test set respectively."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments."]}, {"raw_evidence": ["Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25."], "highlighted_evidence": ["This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG)."]}]
How big is the new Nepali NER dataset?
{"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "f59f1f5b528a2eec5cfb1e49c87699e0c536cc45", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "3606 sentences", "type": "abstractive"}, {"answer": "Dataset contains 3606 total sentences and 79087 total entities.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "The statistics of both the dataset is presented in table TABREF23.\n\n"]}, {"raw_evidence": ["After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics"], "highlighted_evidence": ["The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics"]}]
What is the performance improvement of the grapheme-level representation model over the character-level model?
{"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "9bd080bb2a089410fd7ace82e91711136116af6c", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "On OurNepali test dataset Grapheme-level representation model achieves average 0.16% improvement, on ILPRL test dataset it achieves maximum 1.62% improvement", "type": "abstractive"}, {"answer": "BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 5: Comparison of different variation of our models"], "highlighted_evidence": ["FLOAT SELECTED: Table 5: Comparison of different variation of our models"]}, {"raw_evidence": ["We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help Nepali language but also other languages falling under the umbrellas of Devanagari languages. Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively."], "highlighted_evidence": ["We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration."]}]
What is the performance of classifiers?
{"label_key": "2002.02070", "label_file": "paper_tab_qa", "q_uid": "d53299fac8c94bd0179968eb868506124af407d1", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Table TABREF10, The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set, While these classifiers did not perform particularly well, they provide a good starting point for future work on this subject", "type": "extractive"}, {"answer": "Using F1 Micro measure, the KNN classifier perform 0.6762, the RF 0.6687, SVM 0.6712 and MLP 0.6778.", "type": "abstractive"}]
[{"raw_evidence": ["In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set.", "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."], "highlighted_evidence": ["In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set.", "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."]}]
What classifiers have been trained?
{"label_key": "2002.02070", "label_file": "paper_tab_qa", "q_uid": "29f2954098f055fb19d9502572f085862d75bf61", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "KNN\nRF\nSVM\nMLP", "type": "abstractive"}, {"answer": " K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), Multi-layer Perceptron (MLP)", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers.", "In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers.", "In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers."]}, {"raw_evidence": ["We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13."], "highlighted_evidence": [" The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13."]}]
What other sentence embeddings methods are evaluated?
{"label_key": "1908.10084", "label_file": "paper_tab_qa", "q_uid": "e2db361ae9ad9dbaa9a85736c5593eb3a471983d", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "GloVe, BERT, Universal Sentence Encoder, TF-IDF, InferSent", "type": "abstractive"}, {"answer": "Avg. GloVe embeddings, Avg. fast-text embeddings, Avg. BERT embeddings, BERT CLS-vector, InferSent - GloVe and Universal Sentence Encoder.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Spearman rank correlation ρ between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks. Performance is reported by convention as ρ × 100. STS12-STS16: SemEval 2012-2016, STSb: STSbenchmark, SICK-R: SICK relatedness dataset.", "FLOAT SELECTED: Table 3: Average Pearson correlation r and average Spearman’s rank correlation ρ on the Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Misra et al. proposes 10-fold cross-validation. We additionally evaluate in a cross-topic scenario: Methods are trained on two topics, and are evaluated on the third topic."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Spearman rank correlation ρ between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks. Performance is reported by convention as ρ × 100. STS12-STS16: SemEval 2012-2016, STSb: STSbenchmark, SICK-R: SICK relatedness dataset.", "FLOAT SELECTED: Table 3: Average Pearson correlation r and average Spearman’s rank correlation ρ on the Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Misra et al. proposes 10-fold cross-validation. We additionally evaluate in a cross-topic scenario: Methods are trained on two topics, and are evaluated on the third topic."]}, {"raw_evidence": ["We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:", "The results can be found in Table TABREF15. SBERT is able to achieve the best performance in 5 out of 7 tasks. The average performance increases by about 2 percentage points compared to InferSent as well as the Universal Sentence Encoder. Even though transfer learning is not the purpose of SBERT, it outperforms other state-of-the-art sentence embeddings methods on this task.", "FLOAT SELECTED: Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. 
SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation."], "highlighted_evidence": ["We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:", "The results can be found in Table TABREF15.", "FLOAT SELECTED: Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation."]}]
which non-english language had the best performance?
{"label_key": "1806.04511", "label_file": "paper_tab_qa", "q_uid": "e79a5b6b6680bd2f63e9f4adbaae1d7795d81e38", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Russian", "type": "extractive"}, {"answer": "Russsian", "type": "abstractive"}]
[{"raw_evidence": ["Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages. Building separate models for each language requires both labeled and unlabeled data. Even though having lots of labeled data in every language is the perfect case, it is unrealistic. Therefore, eliminating the resource requirement in this resource-constrained task is crucial. The fact that machine translation can be used in reusing models from different languages is promising for reducing the data requirements."], "highlighted_evidence": ["Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages."]}, {"raw_evidence": ["FLOAT SELECTED: Table 3: Accuracy results (%) for RNN-based approach compared with majority baseline and lexicon-based baseline."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Accuracy results (%) for RNN-based approach compared with majority baseline and lexicon-based baseline."]}]
How big is the dataset used in this work?
{"label_key": "1910.06592", "label_file": "paper_tab_qa", "q_uid": "3e1829e96c968cbd8ad8e9ce850e3a92a76b26e4", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Total dataset size: 171 account (522967 tweets)", "type": "abstractive"}, {"answer": "212 accounts", "type": "abstractive"}]
[{"raw_evidence": ["Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset.", "FLOAT SELECTED: Table 1: Statistics on the data with respect to each account type: propaganda (P), clickbait (C), hoax (H), and real news (R)."], "highlighted_evidence": ["Table TABREF13 presents statistics on our dataset.", "FLOAT SELECTED: Table 1: Statistics on the data with respect to each account type: propaganda (P), clickbait (C), hoax (H), and real news (R)."]}, {"raw_evidence": ["Data. We build a dataset of Twitter accounts based on two lists annotated in previous works. For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1. This list was created based on public resources where suspicious Twitter accounts were annotated with the main fake news types (clickbait, propaganda, satire, and hoax). We discard the satire labeled accounts since their intention is not to mislead or deceive. On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties. 
We discard some accounts that publish news in languages other than English (e.g., Russian or Arabic). Moreover, to ensure the quality of the data, we remove the duplicate, media-based, and link-only tweets. For each account, we collect the maximum amount of tweets allowed by Twitter API. Table TABREF13 presents statistics on our dataset."], "highlighted_evidence": [" For the non-factual accounts, we rely on a list of 180 Twitter accounts from BIBREF1.", "On the other hand, for the factual accounts, we use a list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy by independent third parties."]}]
What is the size of the new dataset?
{"label_key": "1902.09666", "label_file": "paper_tab_qa", "q_uid": "74fb77a624ea9f1821f58935a52cca3086bb0981", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "14,100 tweets", "type": "abstractive"}, {"answer": "Dataset contains total of 14100 annotations.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 3: Distribution of label combinations in OLID."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Distribution of label combinations in OLID."]}, {"raw_evidence": ["FLOAT SELECTED: Table 3: Distribution of label combinations in OLID.", "The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. 
In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 ."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Distribution of label combinations in OLID.", "The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 ."]}]
How long is the dataset for each step of hierarchy?
{"label_key": "1902.09666", "label_file": "paper_tab_qa", "q_uid": "1b72aa2ec3ce02131e60626639f0cf2056ec23ca", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Level A: 14100 Tweets\nLevel B: 4640 Tweets\nLevel C: 4089 Tweets", "type": "abstractive"}]
[{"raw_evidence": ["The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and far-right (@BreitBartNews) news accounts because there tends to be political offense in the comments. One of the best offensive keywords was tweets that were flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive so we tried different strategies to keep a reasonable number of tweets in the offensive class amounting to around 30% of the dataset including excluding some keywords that were not high in offensive content such as `they are` and `to:NewYorker`. Although `he is' is lower in offensive content we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords which tend to be higher in offensive content, and sampled our dataset such that 50% of the the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control', and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fliess' INLINEFORM0 on the trial set for the five annotators on 21 of the tweets. 
INLINEFORM1 is .83 for Layer A (OFF vs NOT) indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted to placeholders. We follow prior work in related areas (burnap2015cyber,davidson2017automated) and annotate our data using crowdsourcing using the platform Figure Eight. We ensure data quality by: 1) we only received annotations from individuals who were experienced in the platform; and 2) we used test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 .", "FLOAT SELECTED: Table 3: Distribution of label combinations in OLID."], "highlighted_evidence": [" The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 .", "FLOAT SELECTED: Table 3: Distribution of label combinations in OLID."]}]
What different correlations result when using different variants of ROUGE scores?
{"label_key": "1604.00400", "label_file": "paper_tab_qa", "q_uid": "bf52c01bf82612d0c7bbf2e6a5bb2570c322936f", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "we observe that many variants of Rouge scores do not have high correlations with human pyramid scores", "type": "extractive"}, {"answer": "Using Pearson corelation measure, for example, ROUGE-1-P is 0.257 and ROUGE-3-F 0.878.", "type": "abstractive"}]
[{"raw_evidence": ["Table TABREF23 shows the Pearson, Spearman and Kendall correlation of Rouge and Sera, with pyramid scores. Both Rouge and Sera are calculated with stopwords removed and with stemming. Our experiments with inclusion of stopwords and without stemming showed similar results and thus, we do not include those to avoid redundancy.", "Another important observation is regarding the effectiveness of Rouge scores (top part of Table TABREF23 ). Interestingly, we observe that many variants of Rouge scores do not have high correlations with human pyramid scores. The lowest F-score correlations are for Rouge-1 and Rouge-L (with INLINEFORM0 =0.454). Weak correlation of Rouge-1 shows that matching unigrams between the candidate summary and gold summaries is not accurate in quantifying the quality of the summary. On higher order n-grams, however, we can see that Rouge correlates better with pyramid. In fact, the highest overall INLINEFORM1 is obtained by Rouge-3. Rouge-L and its weighted version Rouge-W, both have weak correlations with pyramid. Skip-bigrams (Rouge-S) and its combination with unigrams (Rouge-SU) also show sub-optimal correlations. Note that INLINEFORM2 and INLINEFORM3 correlations are more reliable in our setup due to the small sample size."], "highlighted_evidence": ["Table TABREF23 shows the Pearson, Spearman and Kendall correlation of Rouge and Sera, with pyramid scores.", "Interestingly, we observe that many variants of Rouge scores do not have high correlations with human pyramid scores. The lowest F-score correlations are for Rouge-1 and Rouge-L (with INLINEFORM0 =0.454). Weak correlation of Rouge-1 shows that matching unigrams between the candidate summary and gold summaries is not accurate in quantifying the quality of the summary."]}, {"raw_evidence": ["We provided an analysis of existing evaluation metrics for scientific summarization with evaluation of all variants of Rouge. 
We showed that Rouge may not be the best metric for summarization evaluation; especially in summaries with high terminology variations and paraphrasing (e.g. scientific summaries). Furthermore, we showed that different variants of Rouge result in different correlation values with human judgments, indicating that not all Rouge scores are equally effective. Among all variants of Rouge, Rouge-2 and Rouge-3 are better correlated with manual judgments in the context of scientific summarization. We furthermore proposed an alternative and more effective approach for scientific summarization evaluation (Summarization Evaluation by Relevance Analysis - Sera). Results revealed that in general, the proposed evaluation metric achieves higher correlations with semi-manual pyramid evaluation scores in comparison with Rouge.", "FLOAT SELECTED: Table 2: Correlation between variants of ROUGE and SERA, with human pyramid scores. All variants of ROUGE are displayed. F : F-Score; R: Recall; P : Precision; DIS: Discounted variant of SERA; KW: using Keyword query reformulation; NP: Using noun phrases for query reformulation. The numbers in front of the SERA metrics indicate the rank cut-off point."], "highlighted_evidence": ["Furthermore, we showed that different variants of Rouge result in different correlation values with human judgments, indicating that not all Rouge scores are equally effective. Among all variants of Rouge, Rouge-2 and Rouge-3 are better correlated with manual judgments in the context of scientific summarization. ", "FLOAT SELECTED: Table 2: Correlation between variants of ROUGE and SERA, with human pyramid scores. All variants of ROUGE are displayed. F : F-Score; R: Recall; P : Precision; DIS: Discounted variant of SERA; KW: using Keyword query reformulation; NP: Using noun phrases for query reformulation. The numbers in front of the SERA metrics indicate the rank cut-off point."]}]
What tasks were evaluated?
{"label_key": "1810.12196", "label_file": "paper_tab_qa", "q_uid": "52f8a3e3cd5d42126b5307adc740b71510a6bdf5", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "ReviewQA's test set", "type": "extractive"}, {"answer": "Detection of an aspect in a review, Prediction of the customer general satisfaction, Prediction of the global trend of an aspect in a given review, Prediction of whether the rating of a given aspect is above or under a given value, Prediction of the exact rating of an aspect in a review, Prediction of the list of all the positive/negative aspects mentioned in the review, Comparison between aspects, Prediction of the strengths and weaknesses in a review", "type": "abstractive"}]
[{"raw_evidence": ["Table TABREF19 displays the performance of the 4 baselines on the ReviewQA's test set. These results are the performance achieved by our own implementation of these 4 models. According to our results, the simple LSTM network and the MemN2N perform very poorly on this dataset. Especially on the most advanced reasoning tasks. Indeed, the task 5 which corresponds to the prediction of the exact rating of an aspect seems to be very challenging for these model. Maybe the tokenization by sentence to create the memory blocks of the MemN2N, which is appropriated in the case of the bAbI tasks, is not a good representation of the documents when it has to handle human generated comments. However, the logistic regression achieves reasonable performance on these tasks, and do not suffer from catastrophic performance on any tasks. Its worst result comes on task 6 and one of the reason is probably that this architecture is not designed to predict a list of answers. On the contrary, the deep projective reader achieves encouraging on this dataset. It outperforms all the other baselines, with very good scores on the first fourth tasks. The question/document and document/document attention layers proposed in BIBREF12 seem once again to produce rich encodings of the inputs which are relevant for our projection layer."], "highlighted_evidence": ["Table TABREF19 displays the performance of the 4 baselines on the ReviewQA's test set. These results are the performance achieved by our own implementation of these 4 models."]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Descriptions and examples of the 8 tasks evaluated in ReviewQA.", "We introduce a list of 8 different competencies that a reading system should master in order to process reviews and text documents in general. These 8 tasks require different competencies and a different level of understanding of the document to be well answered. 
For instance, detecting if an aspect is mentioned in a review will require less understanding of the review than predicting explicitly the rating of this aspect. Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task. We also provide the expected type of the answer (Yes/No question, rating question...). It can be an additional tool to analyze the errors of the readers."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Descriptions and examples of the 8 tasks evaluated in ReviewQA.", "Table TABREF10 presents the 8 tasks we have introduced in this dataset with an example of a question that corresponds to each task."]}]
What are their results on both datasets?
{"label_key": "1707.05236", "label_file": "paper_tab_qa", "q_uid": "ab9b0bde6113ffef8eb1c39919d21e5913a05081", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Combining pattern based and Machine translation approaches gave the best overall F0.5 scores. It was 49.11 for FCE dataset , 21.87 for the first annotation of CoNLL-14, and 30.13 for the second annotation of CoNLL-14. ", "type": "abstractive"}]
[{"raw_evidence": ["The error detection results can be seen in Table TABREF4 . We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 . INLINEFORM1 calculates a weighted harmonic mean of precision and recall, which assigns twice as much importance to precision – this is motivated by practical applications, where accurate predictions from an error detection system are more important compared to coverage. For comparison, we also report the performance of the error detection system by Rei2016, trained using the same FCE dataset.", "FLOAT SELECTED: Table 2: Error detection performance when combining manually annotated and artificial training data.", "The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods covering all error types consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection. Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection."], "highlighted_evidence": ["The error detection results can be seen in Table TABREF4 . 
We use INLINEFORM0 as the main evaluation measure, which was established as the preferred measure for error correction and detection by the CoNLL-14 shared task BIBREF3 .", "FLOAT SELECTED: Table 2: Error detection performance when combining manually annotated and artificial training data.", "The results show that error detection performance is substantially improved by making use of artificially generated data, created by any of the described methods. When comparing the error generation system by Felice2014a (FY14) with our pattern-based (PAT) and machine translation (MT) approaches, we see that the latter methods covering all error types consistently improve performance. While the added error types tend to be less frequent and more complicated to capture, the added coverage is indeed beneficial for error detection. Combining the pattern-based approach with the machine translation system (Ann+PAT+MT) gave the best overall performance on all datasets. The two frameworks learn to generate different types of errors, and taking advantage of both leads to substantial improvements in error detection."]}]
Does this method help in sentiment classification task improvement?
{"label_key": "1908.11047", "label_file": "paper_tab_qa", "q_uid": "f2155dc4aeab86bf31a838c8ff388c85440fce6e", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Yes", "type": "boolean"}, {"answer": "No", "type": "boolean"}]
[{"raw_evidence": ["Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs.", "FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks."], "highlighted_evidence": ["Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks.", "FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks."]}, {"raw_evidence": ["Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. 
mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs."], "highlighted_evidence": ["Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs."]}]
For how many probe tasks the shallow-syntax-aware contextual embedding perform better than ELMo’s embedding?
{"label_key": "1908.11047", "label_file": "paper_tab_qa", "q_uid": "ed6a15f0f7fa4594e51d5bde21cc0c6c1bedbfdc", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks", "type": "extractive"}, {"answer": "3", "type": "abstractive"}]
[{"raw_evidence": ["Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer."], "highlighted_evidence": ["Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks."]}, {"raw_evidence": ["FLOAT SELECTED: Table 3: Test performance of ELMo-transformer (Peters et al., 2018b) vs. mSynC on several linguistic probes from Liu et al. (2019). In each case, performance of the best layer from the architecture is reported. Details on the probes can be found in §4.2.1."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Test performance of ELMo-transformer (Peters et al., 2018b) vs. mSynC on several linguistic probes from Liu et al. (2019). In each case, performance of the best layer from the architecture is reported. Details on the probes can be found in §4.2.1."]}]
What are the black-box probes used?
{"label_key": "1908.11047", "label_file": "paper_tab_qa", "q_uid": "4d706ce5bde82caf40241f5b78338ea5ee5eb01e", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "CCG Supertagging CCGBank , PTB part-of-speech tagging, EWT part-of-speech tagging,\nChunking, Named Entity Recognition, Semantic Tagging, Grammar Error Detection, Preposition Supersense Role, Preposition Supersense Function, Event Factuality Detection", "type": "abstractive"}, {"answer": "Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases.", "type": "extractive"}]
[{"raw_evidence": ["Recent work has probed the knowledge encoded in cwrs and found they capture a surprisingly large amount of syntax BIBREF10, BIBREF1, BIBREF11. We further examine the contextual embeddings obtained from the enhanced architecture and a shallow syntactic context, using black-box probes from BIBREF1. Our analysis indicates that our shallow-syntax-aware contextual embeddings do not transfer to linguistic tasks any more easily than ELMo embeddings (§SECREF18).", "FLOAT SELECTED: Table 6: Dataset and metrics for each probing task from Liu et al. (2019), corresponding to Table 3."], "highlighted_evidence": [" We further examine the contextual embeddings obtained from the enhanced architecture and a shallow syntactic context, using black-box probes from BIBREF1", "FLOAT SELECTED: Table 6: Dataset and metrics for each probing task from Liu et al. (2019), corresponding to Table 3."]}, {"raw_evidence": ["We further analyze whether awareness of shallow syntax carries over to other linguistic tasks, via probes from BIBREF1. Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases. Unlike §SECREF11, there is minimal downstream task architecture, bringing into focus the transferability of cwrs, as opposed to task-specific adaptation."], "highlighted_evidence": ["Probes are linear models trained on frozen cwrs to make predictions about linguistic (syntactic and semantic) properties of words and phrases."]}]
What are improvements for these two approaches relative to ELMo-only baselines?
{"label_key": "1908.11047", "label_file": "paper_tab_qa", "q_uid": "86bf75245358f17e35fc133e46a92439ac86d472", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "only modest gains on three of the four downstream tasks", "type": "extractive"}, {"answer": " the performance differences across all tasks are small enough ", "type": "extractive"}]
[{"raw_evidence": ["Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer."], "highlighted_evidence": ["Results in Table TABREF13 show ten probes. Again, we see the performance of baseline ELMo-transformer and mSynC are similar, with mSynC doing slightly worse on 7 out of 9 tasks. As we would expect, on the probe for predicting chunk tags, mSynC achieves 96.9 $F_1$ vs. 92.2 $F_1$ for ELMo-transformer, indicating that mSynC is indeed encoding shallow syntax. Overall, the results further confirm that explicit shallow syntax does not offer any benefits over ELMo-transformer."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks.", "Results are shown in Table TABREF12. Consistent with previous findings, cwrs offer large improvements across all tasks. Though helpful to span-level task models without cwrs, shallow syntactic features offer little to no benefit to ELMo models. mSynC's performance is similar. This holds even for phrase-structure parsing, where (gold) chunks align with syntactic phrases, indicating that task-relevant signal learned from exposure to shallow syntax is already learned by ELMo. 
On sentiment classification, chunk features are slightly harmful on average (but variance is high); mSynC again performs similarly to ELMo-transformer. Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Test-set performance of ELMo-transformer (Peters et al., 2018b), our reimplementation, and mSynC, compared to baselines without CWR. Evaluation metric is F1 for all tasks except sentiment, which reports accuracy. Reported results show the mean and standard deviation across 5 runs for coarse-grained NER and sentiment classification and 3 runs for other tasks.", "Overall, the performance differences across all tasks are small enough to infer that shallow syntax is not particularly helpful when using cwrs"]}]
What are the industry classes defined in this paper?
{"label_key": "1612.08205", "label_file": "paper_tab_qa", "q_uid": "cd2878c5a52542ddf080b20bec005d9a74f2d916", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "technology, religion, fashion, publishing, sports or recreation, real estate, agriculture/environment, law, security/military, tourism, construction, museums or libraries, banking/investment banking, automotive", "type": "abstractive"}, {"answer": "Technology, Religion, Fashion, Publishing, Sports coach, Real Estate, Law, Environment, Tourism, Construction, Museums, Banking, Security, Automotive.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Industry categories and number of users per category."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Industry categories and number of users per category."]}, {"raw_evidence": ["FLOAT SELECTED: Table 7: Three top-ranked words for each industry."], "highlighted_evidence": ["FLOAT SELECTED: Table 7: Three top-ranked words for each industry."]}]
Do they report results only on English data?
{"label_key": "1907.09369", "label_file": "paper_tab_qa", "q_uid": "fd2c6c26fd0ab3c10aae4f2550c5391576a77491", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Yes", "type": "boolean"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: Results of final classification in Wang et al."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Results of final classification in Wang et al."]}]
Does the paper report the performance of a baseline model on South African languages LID?
{"label_key": "1911.07555", "label_file": "paper_tab_qa", "q_uid": "307e8ab37b67202fe22aedd9a98d9d06aaa169c5", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Yes", "type": "boolean"}, {"answer": "Yes", "type": "boolean"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."]}, {"raw_evidence": ["The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8.", "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."], "highlighted_evidence": ["The average classification accuracy results are summarised in Table TABREF9.", "FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."]}]
Does the algorithm improve on the state-of-the-art methods?
{"label_key": "1911.07555", "label_file": "paper_tab_qa", "q_uid": "e5c8e9e54e77960c8c26e8e238168a603fcdfcc6", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Yes", "type": "boolean"}, {"answer": "From all reported results proposed method (NB+Lex) shows best accuracy on all 3 datasets - some models are not evaluated and not available in literature.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’.", "The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label. Classifying text only by language group or family is a much easier task as reported in BIBREF8."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: LID Accuracy Results. The models we executed ourselves are marked with *. The results that are not available from our own tests or the literature are indicated with ’—’.", "The average classification accuracy results are summarised in Table TABREF9. The accuracies reported are for classifying a piece of text by its specific language label."]}]
Is the dataset balanced between speakers of different L1s?
{"label_key": "1804.11346", "label_file": "paper_tab_qa", "q_uid": "2ceced87af4c8fdebf2dc959aa700a5c95bd518f", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "No", "type": "boolean"}, {"answer": "No", "type": "boolean"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: Distribution by L1s and source corpora."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Distribution by L1s and source corpora."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Distribution by L1s and source corpora."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Distribution by L1s and source corpora."]}]
What state-of-the-art results are achieved?
{"label_key": "1909.00175", "label_file": "paper_tab_qa", "q_uid": "badc9db40adbbf2ea7bac29f2e4e3b6b9175b1f9", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "F1 score of 92.19 on homographic pun detection, 80.19 on homographic pun location, 89.76 on heterographic pun detection.", "type": "abstractive"}, {"answer": "for the homographic dataset F1 score of 92.19 and 80.19 on detection and location and for the heterographic dataset F1 score of 89.76 on detection", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"]}]
What baselines do they compare with?
{"label_key": "1909.00175", "label_file": "paper_tab_qa", "q_uid": "67b66fe67a3cb2ce043070513664203e564bdcbd", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "They compare with the following models: by Pedersen (2017), by Pramanick and Das (2017), by Mikhalkova and Karyakin (2017), by Vadehra (2017), Indurthi and Oota (2017), by Vechtomova (2017), by (Cai et al., 2018), and CRF.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Comparison results on two benchmark datasets. (P.: Precision, R.: Recall, F1: F1 score.)"]}]
How big are significant improvements?
{"label_key": "1910.06036", "label_file": "paper_tab_qa", "q_uid": "92294820ac0d9421f086139e816354970f066d8a", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Metrics show better results on all metrics compared to baseline except Bleu1 on Zhou split (worse by 0.11 compared to baseline). Bleu1 score on DuSplit is 45.66 compared to best baseline 43.47, other metrics on average by 1", "type": "abstractive"}]
[{"raw_evidence": ["Table TABREF30 shows automatic evaluation results for our model and baselines (copied from their papers). Our proposed model which combines structured answer-relevant relations and unstructured sentences achieves significant improvements over proximity-based answer-aware models BIBREF9, BIBREF15 on both dataset splits. Presumably, our structured answer-relevant relation is a generalization of the context explored by the proximity-based methods because they can only capture short dependencies around answer fragments while our extractions can capture both short and long dependencies given the answer fragments. Moreover, our proposed framework is a general one to jointly leverage structured relations and unstructured sentences. All compared baseline models which only consider unstructured sentences can be further enhanced under our framework.", "FLOAT SELECTED: Table 4: The main experimental results for our model and several baselines. ‘-’ means no results reported in their papers. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L)"], "highlighted_evidence": ["Table TABREF30 shows automatic evaluation results for our model and baselines (copied from their papers).", "FLOAT SELECTED: Table 4: The main experimental results for our model and several baselines. ‘-’ means no results reported in their papers. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L)"]}]
What was their highest MRR score?
{"label_key": "2002.01984", "label_file": "paper_tab_qa", "q_uid": "9ec1f88ceec84a10dc070ba70e90a792fba8ce71", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "0.5115", "type": "abstractive"}, {"answer": "0.6103", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Factoid Questions. In Batch 3 we obtained the highest score. Also the relative distance between our best system and the top performing system shrunk between Batch 4 and 5."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Factoid Questions. In Batch 3 we obtained the highest score. Also the relative distance between our best system and the top performing system shrunk between Batch 4 and 5."]}, {"raw_evidence": ["Sharma et al. BIBREF3 describe a system with two stage process for factoid and list type question answering. Their system extracts relevant entities and then runs supervised classifier to rank the entities. Wiese et al. BIBREF4 propose neural network based model for Factoid and List-type question answering task. The model is based on Fast QA and predicts the answer span in the passage for a given question. The model is trained on SQuAD data set and fine tuned on the BioASQ data. Dimitriadis et al. BIBREF5 proposed two stage process for Factoid question answering task. Their system uses general purpose tools such as Metamap, BeCas to identify candidate sentences. These candidate sentences are represented in the form of features, and are then ranked by the binary classifier. Classifier is trained on candidate sentences extracted from relevant questions, snippets and correct answers from BioASQ challenge. For factoid question answering task highest ‘MRR’ achieved in the 6th edition of BioASQ competition is ‘0.4325’. Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved a ‘MRR’ score ‘0.6103’ in one of the test batches for Factoid Question Answering task."], "highlighted_evidence": ["Our system is a neural network model based on contextual word embeddings BIBREF1 and achieved a ‘MRR’ score ‘0.6103’ in one of the test batches for Factoid Question Answering task."]}]
Do the authors hypothesize that humans' robustness to noise is due to their general knowledge?
{"label_key": "1809.03449", "label_file": "paper_tab_qa", "q_uid": "52f9cd05d8312ae3c7a43689804bac63f7cac34b", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Yes", "type": "boolean"}, {"answer": "Yes", "type": "boolean"}]
[{"raw_evidence": ["To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down. Then we conduct an ablation study by replacing the knowledge aided attention mechanisms with the mutual attention proposed by BIBREF3 and the self attention proposed by BIBREF4 separately, and find that the F1 score of KAR drops by INLINEFORM2 on the development set, INLINEFORM3 on AddSent, and INLINEFORM4 on AddOneSent. Finally we find that after only one epoch of training, KAR already achieves an EM of INLINEFORM5 and an F1 score of INLINEFORM6 on the development set, which is even better than the final performance of several strong baselines, such as DCN (EM / F1: INLINEFORM7 / INLINEFORM8 ) BIBREF36 and BiDAF (EM / F1: INLINEFORM9 / INLINEFORM10 ) BIBREF3 . The above empirical findings imply that general knowledge indeed plays an effective role in KAR.", "FLOAT SELECTED: Table 2: The amount of the extraction results and the performance of KAR under each setting for χ."], "highlighted_evidence": ["To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down.", "FLOAT SELECTED: Table 2: The amount of the extraction results and the performance of KAR under each setting for χ."]}, {"raw_evidence": ["OF COURSE NOT. 
There is a huge gap between MRC models and human beings, which is mainly reflected in the hunger for data and the robustness to noise. On the one hand, developing MRC models requires a large amount of training examples (i.e. the passage-question pairs labeled with answer spans), while human beings can achieve good performance on evaluation examples (i.e. the passage-question pairs to address) without training examples. On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge. A typical category of general knowledge is inter-word semantic connections. As shown in Table TABREF1 , such general knowledge is essential to the reading comprehension ability of human beings."], "highlighted_evidence": ["On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge."]}]
What is the previous state-of-the-art in summarization?
{"label_key": "1903.09722", "label_file": "paper_tab_qa", "q_uid": "ab0fd94dfc291cf3e54e9b7a7f78b852ddc1a797", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "BIBREF26 ", "type": "extractive"}, {"answer": "BIBREF26", "type": "extractive"}]
[{"raw_evidence": ["Following BIBREF11 , we experiment on the non-anonymized version of . When generating summaries, we follow standard practice of tuning the maximum output length and disallow repeating the same trigram BIBREF27 , BIBREF14 . For this task we train language model representations on the combination of newscrawl and the training data. Table TABREF16 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer. We also compare to BIBREF26 who use a task-specific architecture compared to our generic sequence to sequence baseline. Pre-trained representations are complementary to their method.", "FLOAT SELECTED: Table 3: Abstractive summarization results on CNNDailyMail. ELMo inputs achieve a new state of the art."], "highlighted_evidence": ["Following BIBREF11 , we experiment on the non-anonymized version of . When generating summaries, we follow standard practice of tuning the maximum output length and disallow repeating the same trigram BIBREF27 , BIBREF14 . For this task we train language model representations on the combination of newscrawl and the training data. Table TABREF16 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer. We also compare to BIBREF26 who use a task-specific architecture compared to our generic sequence to sequence baseline. Pre-trained representations are complementary to their method.", "FLOAT SELECTED: Table 3: Abstractive summarization results on CNNDailyMail. ELMo inputs achieve a new state of the art."]}, {"raw_evidence": ["Following BIBREF11 , we experiment on the non-anonymized version of . When generating summaries, we follow standard practice of tuning the maximum output length and disallow repeating the same trigram BIBREF27 , BIBREF14 . For this task we train language model representations on the combination of newscrawl and the training data. 
Table TABREF16 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer. We also compare to BIBREF26 who use a task-specific architecture compared to our generic sequence to sequence baseline. Pre-trained representations are complementary to their method.", "FLOAT SELECTED: Table 3: Abstractive summarization results on CNNDailyMail. ELMo inputs achieve a new state of the art."], "highlighted_evidence": ["Table TABREF16 shows that pre-trained embeddings can significantly improve on top of a strong baseline transformer. We also compare to BIBREF26 who use a task-specific architecture compared to our generic sequence to sequence baseline.", "FLOAT SELECTED: Table 3: Abstractive summarization results on CNNDailyMail. ELMo inputs achieve a new state of the art."]}]
Does the method achieve sota performance on this dataset?
{"label_key": "1806.11432", "label_file": "paper_tab_qa", "q_uid": "701571680724c05ca70c11bc267fb1160ea1460a", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "No", "type": "boolean"}]
[{"raw_evidence": ["That said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps listing description is not too predictive of occupancy rate given our parameterizations. While the listing description is surely an influential metric in determining the quality of a listing, other factors such as location, amenities, and home type might play a larger role in the consumer's decision. We were hopeful that these factors would be represented in the price per bedroom of the listing – our control variable – but the relationship may not have been strong enough.", "However, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong. We assumed that listings with similar occupancy rates would have similar listing descriptions regardless of price, which is not necessarily a strong assumption. This is coupled with an unexpected sparseness of clean data. With over 40,000 listings, we did not expect to see such poor attention to orthography in what are essentially public advertisements of the properties. In this way, our decision to use a window size of 5, a minimum occurrence count of 2, and a dimensionality of 50 when training our GloVe vectors was ad hoc.", "FLOAT SELECTED: Table 2: GAN Model, Keywords = [parking], Varying Gamma Parameter"], "highlighted_evidence": ["That said, these results, though they do show a marginal increase in dev accuracy and a decrease in CE loss, suggest that perhaps listing description is not too predictive of occupancy rate given our parameterizations. ", "However, should a strong relationship actually exist and there be instead a problem with our method, there are a few possibilities of what went wrong.", "FLOAT SELECTED: Table 2: GAN Model, Keywords = [parking], Varying Gamma Parameter"]}]
What are the baselines used in the paper?
{"label_key": "1806.11432", "label_file": "paper_tab_qa", "q_uid": "600b097475b30480407ce1de81c28c54a0b3b2f8", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "GloVe vectors trained on Wikipedia Corpus with ensembling, and GloVe vectors trained on Airbnb Data without ensembling", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Results of RNN/LSTM"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Results of RNN/LSTM"]}]
How better is performance compared to previous state-of-the-art models?
{"label_key": "1910.14537", "label_file": "paper_tab_qa", "q_uid": "5fda8539a97828e188ba26aad5cda1b9dd642bc8", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "F1 score of 97.5 on MSR and 95.7 on AS", "type": "abstractive"}, {"answer": "MSR: 97.7 compared to 97.5 of baseline\nAS: 95.7 compared to 95.6 of baseline", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).", "FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."], "highlighted_evidence": ["FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).", "FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."]}, {"raw_evidence": ["With unsupervised segmentation features introduced by BIBREF20, our model gets a higher result. Specially, the results in MSR and AS achieve new state-of-the-art and approaching previous state-of-the-art in CITYU and PKU. The unsupervised segmentation features are derived from the given training dataset, thus using them does not violate the rule of closed test of SIGHAN Bakeoff.", "FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).", "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."], "highlighted_evidence": [" Specially, the results in MSR and AS achieve new state-of-the-art and approaching previous state-of-the-art in CITYU and PKU. ", "FLOAT SELECTED: Table 6: Results on AS and CITYU compared with previous models in closed test. 
The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019).", "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."]}]
What are strong baselines model is compared to?
{"label_key": "1910.14537", "label_file": "paper_tab_qa", "q_uid": "fabcd71644bb63559d34b38d78f6ef87c256d475", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Baseline models are:\n- Chen et al., 2015a\n- Chen et al., 2015b\n- Liu et al., 2016\n- Cai and Zhao, 2016\n- Cai et al., 2017\n- Zhou et al., 2017\n- Ma et al., 2018\n- Wang et al., 2019", "type": "abstractive"}]
[{"raw_evidence": ["Tables TABREF25 and TABREF26 reports the performance of recent models and ours in terms of closed test setting. Without the assistance of unsupervised segmentation features userd in BIBREF20, our model outperforms all the other models in MSR and AS except BIBREF18 and get comparable performance in PKU and CITYU. Note that all the other models for this comparison adopt various $n$-gram features while only our model takes unigram ones.", "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."], "highlighted_evidence": ["Tables TABREF25 and TABREF26 reports the performance of recent models and ours in terms of closed test setting.", "FLOAT SELECTED: Table 5: Results on PKU and MSR compared with previous models in closed test. The asterisks indicate the result of model with unsupervised label from (Wang et al., 2019)."]}]
which neural embedding model works better?
{"label_key": "1702.03342", "label_file": "paper_tab_qa", "q_uid": "2a6003a74d051d0ebbe62e8883533a5f5e55078b", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "the CRX model", "type": "abstractive"}, {"answer": "3C model", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 5 Accuracy of concept categorization"], "highlighted_evidence": ["FLOAT SELECTED: Table 5 Accuracy of concept categorization"]}, {"raw_evidence": ["Table 3 presents the results of fine-grained dataless classification measured in micro-averaged F1. As we can notice, ESA achieves its peak performance with a few hundred dimensions of the sparse BOC vector. Using our densification mechanism, both the CRC & 3C models achieve equal performance to ESA at much less dimensions. Densification using the CRC model embeddings gives the best F1 scores on the three tasks. Interestingly, the CRC model improves the F1 score by INLINEFORM0 7% using only 14 concepts on Autos vs. Motorcycles, and by INLINEFORM1 3% using 70 concepts on Guns vs. Mideast vs. Misc. The 3C model, still performs better than ESA on 2 out of the 3 tasks. Both WE INLINEFORM2 and WE INLINEFORM3 improve the performance over ESA but not as our CRC model."], "highlighted_evidence": ["The 3C model, still performs better than ESA on 2 out of the 3 tasks."]}]
What is the degree of dimension reduction of the efficient aggregation method?
{"label_key": "1702.03342", "label_file": "paper_tab_qa", "q_uid": "1b1b0c71f1a4b37c6562d444f75c92eb2c727d9b", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "The number of dimensions can be reduced by up to 212 times.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 8 Evaluation results of dataless document classification of coarse-grained classes measured in micro-averaged F1 along with # of dimensions (concepts) at which corresponding performance is achieved"], "highlighted_evidence": ["FLOAT SELECTED: Table 8 Evaluation results of dataless document classification of coarse-grained classes measured in micro-averaged F1 along with # of dimensions (concepts) at which corresponding performance is achieved"]}]
For which languages do they build word embeddings for?
{"label_key": "1805.03710", "label_file": "paper_tab_qa", "q_uid": "9c44df7503720709eac933a15569e5761b378046", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "English", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 2: We generate vectors for OOV using subword information and search for the nearest (cosine distance) words in the embedding space. The LV-M segmentation for each word is: {〈hell, o, o, o〉}, {〈marvel, i, cious〉}, {〈louis, ana〉}, {〈re, re, read〉}, {〈 tu, z, read〉}. We omit the LV-N and FT n-grams as they are trivial and too numerous to list."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: We generate vectors for OOV using subword information and search for the nearest (cosine distance) words in the embedding space. The LV-M segmentation for each word is: {〈hell, o, o, o〉}, {〈marvel, i, cious〉}, {〈louis, ana〉}, {〈re, re, read〉}, {〈 tu, z, read〉}. We omit the LV-N and FT n-grams as they are trivial and too numerous to list."]}]
How big was the corpora they trained ELMo on?
{"label_key": "1909.03135", "label_file": "paper_tab_qa", "q_uid": "d509081673f5667060400eb325a8050fa5db7cc8", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "2174000000, 989000000", "type": "abstractive"}, {"answer": "2174 million tokens for English and 989 million tokens for Russian", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Training corpora", "For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Training corpora", "For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). "]}, {"raw_evidence": ["For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). The RNC texts were added to the Russian Wikipedia dump so as to make the Russian training corpus more comparable in size to the English one (Wikipedia texts would comprise only half of the size). As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.", "FLOAT SELECTED: Table 1: Training corpora"], "highlighted_evidence": ["For the experiments described below, we trained our own ELMo models from scratch. For English, the training corpus consisted of the English Wikipedia dump from February 2017. 
For Russian, it was a concatenation of the Russian Wikipedia dump from December 2018 and the full Russian National Corpus (RNC). ", "As Table TABREF3 shows, the English Wikipedia is still two times larger, but at least the order is the same.", "FLOAT SELECTED: Table 1: Training corpora"]}]
What dataset is used?
{"label_key": "1804.07789", "label_file": "paper_tab_qa", "q_uid": "6cd25c637c6b772ce29e8ee81571e8694549c5ab", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "English WIKIBIO, French WIKIBIO , German WIKIBIO ", "type": "abstractive"}, {"answer": "WikiBio dataset, introduce two new biography datasets, one in French and one in German", "type": "extractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Comparison of different models on the English WIKIBIO dataset", "FLOAT SELECTED: Table 4: Comparison of different models on the French WIKIBIO dataset", "FLOAT SELECTED: Table 5: Comparison of different models on the German WIKIBIO dataset"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Comparison of different models on the English WIKIBIO dataset", "FLOAT SELECTED: Table 4: Comparison of different models on the French WIKIBIO dataset", "FLOAT SELECTED: Table 5: Comparison of different models on the German WIKIBIO dataset"]}, {"raw_evidence": ["We use the WikiBio dataset introduced by lebret2016neural. It consists of INLINEFORM0 biography articles from English Wikipedia. A biography article corresponds to a person (sportsman, politician, historical figure, actor, etc.). Each Wikipedia article has an accompanying infobox which serves as the structured input and the task is to generate the first sentence of the article (which typically is a one-line description of the person). We used the same train, valid and test sets which were made publicly available by lebret2016neural.", "We also introduce two new biography datasets, one in French and one in German. These datasets were created and pre-processed using the same procedure as outlined in lebret2016neural. Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article. As with the English dataset, we split the French and German datasets randomly into train (80%), test (10%) and valid (10%). The French and German datasets extracted by us has been made publicly available. The number of examples was 170K and 50K and the vocabulary size was 297K and 143K for French and German respectively. 
Although in this work we focus only on generating descriptions in one language, we hope that this dataset will also be useful for developing models which jointly learn to generate descriptions from structured data in multiple languages."], "highlighted_evidence": ["We use the WikiBio dataset introduced by lebret2016neural. It consists of INLINEFORM0 biography articles from English Wikipedia.", "We also introduce two new biography datasets, one in French and one in German. These datasets were created and pre-processed using the same procedure as outlined in lebret2016neural. Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article."]}]
what topics did they label?
{"label_key": "1810.12085", "label_file": "paper_tab_qa", "q_uid": "ceb767e33fde4b927e730f893db5ece947ffb0d8", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Demographics Age, DiagnosisHistory, MedicationHistory, ProcedureHistory, Symptoms/Signs, Vitals/Labs, Procedures/Results, Meds/Treatments, Movement, Other.", "type": "abstractive"}, {"answer": "Demographics, Diagnosis History, Medication History, Procedure History, Symptoms, Labs, Procedures, Treatments, Hospital movements, and others", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1. HPI Categories and Annotation Instructions"], "highlighted_evidence": ["FLOAT SELECTED: Table 1. HPI Categories and Annotation Instructions"]}, {"raw_evidence": ["We developed a classifier to label topics in the history of present illness (HPI) notes, including demographics, diagnosis history, and symptoms/signs, among others. A random sample of 515 history of present illness notes was taken, and each of the notes was manually annotated by one of eight annotators using the software Multi-document Annotation Environment (MAE) BIBREF20 . MAE provides an interactive GUI for annotators and exports the results of each annotation as an XML file with text spans and their associated labels for additional processing. 40% of the HPI notes were labeled by clinicians and 60% by non-clinicians. Table TABREF5 shows the instructions given to the annotators for each of the 10 labels. The entire HPI note was labeled with one of the labels, and instructions were given to label each clause in a sentence with the same label when possible."], "highlighted_evidence": ["We developed a classifier to label topics in the history of present illness (HPI) notes, including demographics, diagnosis history, and symptoms/signs, among others", "Table TABREF5 shows the instructions given to the annotators for each of the 10 labels."]}]
did they compare with other extractive summarization methods?
{"label_key": "1810.12085", "label_file": "paper_tab_qa", "q_uid": "c2cb6c4500d9e02fc9a1bdffd22c3df69655189f", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "No", "type": "boolean"}]
[{"raw_evidence": ["We evaluated our model on the 515 annotated history of present illness notes, which were split in a 70% train set, 15% development set, and a 15% test set. The model is trained using the Adam algorithm for gradient-based optimization BIBREF25 with an initial learning rate = 0.001 and decay = 0.9. A dropout rate of 0.5 was applied for regularization, and each batch size = 20. The model ran for 20 epochs and was halted early if there was no improvement after 3 epochs.", "We evaluated the impact of character embeddings, the choice of pretrained w2v embeddings, and the addition of learned word embeddings on model performance on the dev set. We report performance of the best performing model on the test set.", "Table TABREF16 compares dev set performance of the model using various pretrained word embeddings, with and without character embeddings, and with pretrained versus learned word embeddings. The first row in each section is the performance of the model architecture described in the methods section for comparison. Models using word embeddings trained on the discharge summaries performed better than word embeddings trained on all MIMIC notes, likely because the discharge summary word embeddings better captured word use in discharge summaries alone. Interestingly, the continuous bag of words embeddings outperformed skip gram embeddings, which is surprising because the skip gram architecture typically works better for infrequent words BIBREF26 . As expected, inclusion of character embeddings increases performance by approximately 3%. The model with word embeddings learned in the model achieves the highest performance on the dev set (0.886), which may be because the pretrained worm embeddings were trained on a previous version of MIMIC. As a result, some words in the discharge summaries, such as mi-spelled words or rarer diseases and medications, did not have associated word embeddings. 
Performing a simple spell correction on out of vocab words may improve performance with pretrained word embeddings.", "FLOAT SELECTED: Table 3. Average Recall for five sections of the discharge summary. Recall for each patient’s sex was calculated by examining the structured data for the patient’s current admission, and recall for the remaining sections was calculated by comparing CUI overlap between the section and the remaining notes for the current admission."], "highlighted_evidence": ["We evaluated our model on the 515 annotated history of present illness notes, which were split in a 70% train set, 15% development set, and a 15% test set.", "We evaluated the impact of character embeddings, the choice of pretrained w2v embeddings, and the addition of learned word embeddings on model performance on the dev set. We report performance of the best performing model on the test set.", "Table TABREF16 compares dev set performance of the model using various pretrained word embeddings, with and without character embeddings, and with pretrained versus learned word embeddings.", "FLOAT SELECTED: Table 3. Average Recall for five sections of the discharge summary. Recall for each patient’s sex was calculated by examining the structured data for the patient’s current admission, and recall for the remaining sections was calculated by comparing CUI overlap between the section and the remaining notes for the current admission."]}]
what levels of document preprocessing are looked at?
{"label_key": "1610.07809", "label_file": "paper_tab_qa", "q_uid": "06eb9f2320451df83e27362c22eb02f4a426a018", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "raw text, text cleaning through document logical structure detection, removal of keyphrase sparse sections of the document", "type": "extractive"}, {"answer": "Level 1, Level 2 and Level 3.", "type": "abstractive"}]
[{"raw_evidence": ["While previous work clearly states that efficient document preprocessing is a prerequisite for the extraction of high quality keyphrases, there is, to our best knowledge, no empirical evidence of how preprocessing affects keyphrase extraction performance. In this paper, we re-assess the performance of several state-of-the-art keyphrase extraction models at increasingly sophisticated levels of preprocessing. Three incremental levels of document preprocessing are experimented with: raw text, text cleaning through document logical structure detection, and removal of keyphrase sparse sections of the document. In doing so, we present the first consistent comparison of different keyphrase extraction models and study their robustness over noisy text. More precisely, our contributions are:"], "highlighted_evidence": ["Three incremental levels of document preprocessing are experimented with: raw text, text cleaning through document logical structure detection, and removal of keyphrase sparse sections of the document. In doing so, we present the first consistent comparison of different keyphrase extraction models and study their robustness over noisy text."]}, {"raw_evidence": ["In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.", "Table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). 
Level 3 preprocessing drastically reduce the number of words to less than a quarter of the original amount while interestingly still preserving high recall.", "FLOAT SELECTED: Table 1: Statistics computed at the different levels of document preprocessing on the training set."], "highlighted_evidence": ["In this study, we concentrate our effort on re-assessing keyphrase extraction performance on three increasingly sophisticated levels of document preprocessing described below.", "Table shows the average number of sentences and words along with the maximum possible recall for each level of preprocessing. The maximum recall is obtained by computing the fraction of the reference keyphrases that occur in the documents. We observe that the level 2 preprocessing succeeds in eliminating irrelevant text by significantly reducing the number of words (-19%) while maintaining a high maximum recall (-2%). Level 3 preprocessing drastically reduce the number of words to less than a quarter of the original amount while interestingly still preserving high recall.", "FLOAT SELECTED: Table 1: Statistics computed at the different levels of document preprocessing on the training set."]}]
How many different phenotypes are present in the dataset?
{"label_key": "2003.03044", "label_file": "paper_tab_qa", "q_uid": "46c9e5f335b2927db995a55a18b7c7621fd3d051", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "15 clinical patient phenotypes", "type": "extractive"}, {"answer": "Thirteen different phenotypes are present in the dataset.", "type": "abstractive"}]
[{"raw_evidence": ["We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. BIBREF10 BIBREF11 BIBREF12"], "highlighted_evidence": ["We have created a dataset of discharge summaries and nursing notes, all in the English language, with a focus on frequently readmitted patients, labeled with 15 clinical patient phenotypes believed to be associated with risk of recurrent Intensive Care Unit (ICU) readmission per our domain experts (co-authors LAC, PAT, DAG) as well as the literature. BIBREF10 BIBREF11 BIBREF12"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype."]}]
What are 10 other phenotypes that are annotated?
{"label_key": "2003.03044", "label_file": "paper_tab_qa", "q_uid": "ce0e2a8675055a5468c4c54dbb099cfd743df8a7", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Adv. Heart Disease, Adv. Lung Disease, Alcohol Abuse, Chronic Neurologic Dystrophies, Dementia, Depression, Developmental Delay, Obesity, Psychiatric disorders and Substance Abuse", "type": "abstractive"}]
[{"raw_evidence": ["Table defines each of the considered clinical patient phenotypes. Table counts the occurrences of these phenotypes across patient notes and Figure contains the corresponding correlation matrix. Lastly, Table presents an overview of some descriptive statistics on the patient notes' lengths.", "FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype."], "highlighted_evidence": ["Table defines each of the considered clinical patient phenotypes.", "FLOAT SELECTED: Table 1: The thirteen different phenotypes used for our dataset, as well the definition for each phenotype that was used to identify and annotate the phenotype."]}]
How does the method perform compared with baselines?
{"label_key": "1909.00015", "label_file": "paper_tab_qa", "q_uid": "f8c1b17d265a61502347c9a937269b38fc3fcab1", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "On the datasets DE-EN, JA-EN, RO-EN, and EN-DE, the baseline achieves 29.79, 21.57, 32.70, and 26.02 BLEU score, respectively. The 1.5-entmax achieves 29.83, 22.13, 33.10, and 25.89 BLEU score, which is a difference of +0.04, +0.56, +0.40, and -0.13 BLEU score versus the baseline. The α-entmax achieves 29.90, 21.74, 32.89, and 26.93 BLEU score, which is a difference of +0.11, +0.17, +0.19, +0.91 BLEU score versus the baseline.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 1: Machine translation tokenized BLEU test results on IWSLT 2017 DE EN, KFTT JA EN, WMT 2016 RO EN and WMT 2014 EN DE, respectively.", "We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis. In the next section, we make use of this potential by exploring the learned internal mechanics of the self-attention heads."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Machine translation tokenized BLEU test results on IWSLT 2017 DE EN, KFTT JA EN, WMT 2016 RO EN and WMT 2014 EN DE, respectively.", "We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis."]}]
What evaluation metrics did they look at?
{"label_key": "1705.01214", "label_file": "paper_tab_qa", "q_uid": "cc608df2884e1e82679f663ed9d9d67a4b6c03f3", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "precision, recall, F1 and accuracy", "type": "abstractive"}, {"answer": "Response time, resource consumption (memory, CPU, network bandwidth), precision, recall, F1, accuracy.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set"], "highlighted_evidence": ["FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set"]}, {"raw_evidence": ["In this section, we describe the validation framework that we created for integration tests. For this, we developed it as a new component of SABIA's system architecture and it provides a high level language which is able to specify interaction scenarios that simulate users interacting with the deployed chatbots. The system testers provide a set of utterances and their corresponding expected responses, and the framework automatically simulates users interacting with the bots and collect metrics, such as time taken to answer an utterance and other resource consumption metrics (e.g., memory, CPU, network bandwidth). Our goal was to: (i) provide a tool for integration tests, (ii) to validate CognIA's implementation, and (iii) to support the system developers in understanding the behavior of the system and which aspects can be improved. Thus, whenever developers modify the system's source code, the modifications must first pass the automatic test before actual deployment.", "FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set"], "highlighted_evidence": ["The system testers provide a set of utterances and their corresponding expected responses, and the framework automatically simulates users interacting with the bots and collect metrics, such as time taken to answer an utterance and other resource consumption metrics (e.g., memory, CPU, network bandwidth). ", "FLOAT SELECTED: Table 15: Evaluation of different classifiers in the first version of the training set"]}]
How much improvement is gained from Adversarial Reward Augmented Maximum Likelihood (ARAML)?
{"label_key": "1908.07195", "label_file": "paper_tab_qa", "q_uid": "79f9468e011670993fd162543d1a4b3dd811ac5d", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "ARAM has achieved improvement over all baseline methods using reverese perplexity and slef-BLEU metric. The maximum reverse perplexity improvement 936,16 is gained for EMNLP2017 WMT dataset and 48,44 for COCO dataset.", "type": "abstractive"}, {"answer": "Compared to the baselines, ARAML does not do better in terms of perplexity on COCO and EMNLP 2017 WMT datasets, but it does by up to 0.27 Self-BLEU points on COCO and 0.35 Self-BLEU on EMNLP 2017 WMT. In terms of Grammaticality and Relevance, it scores better than the baselines on up to 75.5% and 73% of the cases respectively.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation."], "highlighted_evidence": ["FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation."]}, {"raw_evidence": ["FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation.", "FLOAT SELECTED: Table 5: Human evaluation on WeiboDial. The scores represent the percentages of Win, Lose or Tie when our model is compared with a baseline. κ denotes Fleiss’ kappa (all are moderate agreement). The scores marked with * mean p-value< 0.05 and ** indicates p-value< 0.01 in sign test."], "highlighted_evidence": ["FLOAT SELECTED: Table 4: Automatic evaluation on COCO and EMNLP2017 WMT. Each metric is presented with mean and standard deviation.", "FLOAT SELECTED: Table 5: Human evaluation on WeiboDial. The scores represent the percentages of Win, Lose or Tie when our model is compared with a baseline. κ denotes Fleiss’ kappa (all are moderate agreement). The scores marked with * mean p-value< 0.05 and ** indicates p-value< 0.01 in sign test."]}]
what was their character error rate?
{"label_key": "1703.07090", "label_file": "paper_tab_qa", "q_uid": "1bb7eb5c3d029d95d1abf9f2892c1ec7b6eef306", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "2.49% for layer-wise training, 2.63% for distillation, 6.26% for transfer learning.", "type": "abstractive"}, {"answer": "Their best model achieved a 2.49% Character Error Rate.", "type": "abstractive"}]
[{"raw_evidence": ["FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model.", "FLOAT SELECTED: Table 4. The CER of different 2-layers models, which are Shenma distilled model, Amap model further trained with Amap dataset, and Shenma model trained with sMBR on Amap dataset."], "highlighted_evidence": ["FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model.", "FLOAT SELECTED: Table 4. The CER of different 2-layers models, which are Shenma distilled model, Amap model further trained with Amap dataset, and Shenma model trained with sMBR on Amap dataset."]}, {"raw_evidence": ["FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model."], "highlighted_evidence": ["FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model."]}]
which lstm models did they compare with?
{"label_key": "1703.07090", "label_file": "paper_tab_qa", "q_uid": "c0af8b7bf52dc15e0b33704822c4a34077e09cd1", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "Unidirectional LSTM networks with 2, 6, 7, 8, and 9 layers.", "type": "abstractive"}]
[{"raw_evidence": ["There is a high real time requirement in real world application, especially in online voice search system. Shenma voice search is one of the most popular mobile search engines in China, and it is a streaming service that intermediate recognition results displayed while users are still speaking. Unidirectional LSTM network is applied, rather than bidirectional one, because it is well suited to real-time streaming speech recognition.", "FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model."], "highlighted_evidence": ["Unidirectional LSTM network is applied, rather than bidirectional one, because it is well suited to real-time streaming speech recognition.", "FLOAT SELECTED: Table 3. The CER and RTF of 9-layers, 2-layers regular-trained and 2-laryers distilled LSTM.", "FLOAT SELECTED: Table 2. The CER of 6 to 9-layers models trained by regular Xavier Initialization, layer-wise training with CE criterion and CE + sMBR criteria. The teacher of 9-layer model is 8-layers sMBR model, while the others’ teacher is CE model."]}]
What was the baseline?
{"label_key": "1707.03569", "label_file": "paper_tab_qa", "q_uid": "37edc25e39515ffc2d92115d2fcd9e6ceb18898b", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"}
[{"answer": "SVMs, LR, BIBREF2", "type": "extractive"}, {"answer": "SVM INLINEFORM0, SVM INLINEFORM1, LR INLINEFORM2, MaxEnt", "type": "extractive"}]
[{"raw_evidence": ["Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved.", "The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion.", "For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. The activation function of the hidden layers is the hyperbolic tangent. The weights of the layers were initialized from a uniform distribution, scaled as described in BIBREF21 . We used the Root Mean Square Propagation optimization method. We used dropout for regularizing the network. 
We trained the network using batches of 128 examples as follows: before selecting the batch, we perform a Bernoulli trial with probability INLINEFORM0 to select the task to train for. With probability INLINEFORM1 we pick a batch for the fine-grained sentiment classification problem, while with probability INLINEFORM2 we pick a batch for the ternary problem. As shown in Figure FIGREF2 , the error is backpropagated until the embeddings, that we fine-tune during the learning process. Notice also that the weights of the network until the layer INLINEFORM3 are shared and therefore affected by both tasks.", "FLOAT SELECTED: Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1."], "highlighted_evidence": ["Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art.", "o evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. 
Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion.", "For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. ", "FLOAT SELECTED: Table 3 The scores on MAEM for the systems. The best (lowest) score is shown in bold and is achieved in the multitask setting with the biLSTM architecture of Figure 1."]}, {"raw_evidence": ["The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion."], "highlighted_evidence": [" To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. 
Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion."]}]
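The training schedule quoted in the evidence above — a Bernoulli trial before each batch to decide whether to train on the fine-grained or the ternary task, with probabilities the dump renders as INLINEFORM0–INLINEFORM2 — can be sketched in plain Python. Here `p_fine`, the step functions, and the batch count are illustrative stand-ins, not values from the paper:

```python
import random

def sample_task(p_fine, rng):
    """Bernoulli trial: with probability p_fine pick a batch for the
    fine-grained sentiment task, otherwise for the ternary task."""
    return "fine" if rng.random() < p_fine else "ternary"

def train_epoch(n_batches, p_fine, step_fns, rng):
    """Run n_batches update steps, drawing the task before each batch.
    Both step functions are assumed to update the shared layers (the
    embeddings and the biLSTM up to layer h), as in the quoted model."""
    counts = {"fine": 0, "ternary": 0}
    for _ in range(n_batches):
        task = sample_task(p_fine, rng)
        step_fns[task]()  # one gradient step on one batch (128 examples) for that task
        counts[task] += 1
    return counts
```

With a seeded RNG the schedule is reproducible: `train_epoch(1000, 0.7, {"fine": lambda: None, "ternary": lambda: None}, random.Random(0))` draws the fine-grained task roughly 70% of the time, so both output layers are trained while the shared biLSTM sees gradients from both tasks.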