| input | metadata | answers | evidence |
|---|---|---|---|
what language pairs are explored? | {"label_key": "1912.01214", "label_file": "paper_tab_qa", "q_uid": "5eda469a8a77f028d0c5f1acd296111085614537", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru", "type": "abstractive"}, {"answer": "French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation", "type": "extractive"}] | [{"raw_evidence": ["For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus.", "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.", "FLOAT SELECTED: Table 1: Data Statistics."], "highlighted_evidence": ["For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. ", "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. ", "FLOAT SELECTED: Table 1: Data Statistics."]}, {"raw_evidence": ["The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18. For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. 
We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets. For vocabulary, we use 60K sub-word tokens based on Byte Pair Encoding (BPE) BIBREF33.", "For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation. We use 80K BPE splits as the vocabulary. Note that all sentences are tokenized by the tokenize.perl script, and we lowercase all data to avoid a large vocabulary for the MultiUN corpus."], "highlighted_evidence": ["For Europarl corpus, we evaluate on French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), where English acts as the pivot language, its left side is the source language, and its right side is the target language. We remove the multi-parallel sentences between different training corpora to ensure zero-shot settings. We use the devtest2006 as the validation set and the test2006 as the test set for Fr$\\rightarrow $Es and De$\\rightarrow $Fr. For distant language pair Ro$\\rightarrow $De, we extract 1,000 overlapping sentences from newstest2016 as the test set and the 2,000 overlapping sentences split from the training set as the validation set since there is no official validation and test sets.", "For MultiUN corpus, we use four languages: English (En) is set as the pivot language, which has parallel data with other three languages which do not have parallel data between each other. The three languages are Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation direction for evaluation."]}] |
What accuracy does the proposed system achieve? | {"label_key": "1801.05147", "label_file": "paper_tab_qa", "q_uid": "ef4dba073d24042f24886580ae77add5326f2130", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "F1 scores of 85.99 on the DL-PS data, 75.15 on the EC-MT data and 71.53 on the EC-UQ data ", "type": "abstractive"}, {"answer": "F1 of 85.99 on the DL-PS dataset (dialog domain); 75.15 on EC-MT and 71.53 on EC-UQ (e-commerce domain)", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Main results on the DL-PS data.", "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."]}] |
On which benchmarks they achieve the state of the art? | {"label_key": "1704.06194", "label_file": "paper_tab_qa", "q_uid": "9ee07edc371e014df686ced4fb0c3a7b9ce3d5dc", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "SimpleQuestions, WebQSP", "type": "extractive"}, {"answer": "WebQSP, SimpleQuestions", "type": "extractive"}] | [{"raw_evidence": ["Finally, like STAGG, which uses multiple relation detectors (see yih2015semantic for the three models used), we also try to use the top-3 relation detectors from Section \"Relation Detection Results\" . As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP.", "FLOAT SELECTED: Table 3: KBQA results on SimpleQuestions (SQ) and WebQSP (WQ) test sets. The numbers in green color are directly comparable to our results since we start with the same entity linking results."], "highlighted_evidence": ["As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP", "FLOAT SELECTED: Table 3: KBQA results on SimpleQuestions (SQ) and WebQSP (WQ) test sets. The numbers in green color are directly comparable to our results since we start with the same entity linking results."]}, {"raw_evidence": ["Table 2 shows the results on two relation detection tasks. The AMPCNN result is from BIBREF20 , which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from BIBREF4 , where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively)."], "highlighted_evidence": ["The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively)."]}] |
How do they calculate a static embedding for each word? | {"label_key": "1909.00512", "label_file": "paper_tab_qa", "q_uid": "891c2001d6baaaf0da4e65b647402acac621a7d2", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "They use the first principal component of a word's contextualized representation in a given layer as its static embedding.", "type": "abstractive"}, {"answer": " by taking the first principal component (PC) of its contextualized representations in a given layer", "type": "extractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word’s contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT’s contextualized representations often outperform GloVe and FastText vectors."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: The performance of various static embeddings on word embedding benchmark tasks. The best result for each task is in bold. For the contextualizing models (ELMo, BERT, GPT-2), we use the first principal component of a word’s contextualized representations in a given layer as its static embedding. The static embeddings created using ELMo and BERT’s contextualized representations often outperform GloVe and FastText vectors."]}, {"raw_evidence": ["As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. In Table TABREF34, we plot the performance of these PC static embeddings on several benchmark tasks. These tasks cover semantic similarity, analogy solving, and concept categorization: SimLex999 BIBREF21, MEN BIBREF22, WS353 BIBREF23, RW BIBREF24, SemEval-2012 BIBREF25, Google analogy solving BIBREF0 MSR analogy solving BIBREF26, BLESS BIBREF27 and AP BIBREF28. We leave out layers 3 - 10 in Table TABREF34 because their performance is between those of Layers 2 and 11."], "highlighted_evidence": ["As noted earlier, we can create static embeddings for each word by taking the first principal component (PC) of its contextualized representations in a given layer. "]}] |
What is the performance of BERT on the task? | {"label_key": "2003.03106", "label_file": "paper_tab_qa", "q_uid": "66c96c297c2cffdf5013bab5e95b59101cb38655", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "F1 scores are:\nHUBES-PHI: Detection(0.965), Classification relaxed (0.95), Classification strict (0.937)\nMedoccan: Detection(0.972), Classification (0.967)", "type": "abstractive"}, {"answer": "BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, Table ", "type": "extractive"}] | [{"raw_evidence": ["To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems. Despite the fact that, in general, the systems obtain high values, BERT outperforms them again. BERT's F1-score is 1.9 points higher than the next most competitive result in the comparison. More remarkably, the recall obtained by BERT is about 5 points above.", "FLOAT SELECTED: Table 5: Results of Experiment A: NUBES-PHI", "The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"], "highlighted_evidence": ["To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems.", "FLOAT SELECTED: Table 5: Results of Experiment A: NUBES-PHI", "The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table .", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"]}, {"raw_evidence": ["In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3). Finally, we include the results obtained by mao2019hadoken with a CRF output layer on top of BERT embeddings. MEDDOCAN consists of two scenarios:", "The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. 
The reason why it should be so remain unclear.", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"], "highlighted_evidence": ["In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3).", "The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear.", "FLOAT SELECTED: Table 8: Results of Experiment B: MEDDOCAN"]}] |
What state-of-the-art compression techniques were used in the comparison? | {"label_key": "1909.11687", "label_file": "paper_tab_qa", "q_uid": "efe9bad55107a6be7704ed97ecce948a8ca7b1d2", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "baseline without knowledge distillation (termed NoKD), Patient Knowledge Distillation (PKD)", "type": "extractive"}, {"answer": "NoKD, PKD, BERTBASE teacher model", "type": "extractive"}] | [{"raw_evidence": ["For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states."], "highlighted_evidence": ["For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states."]}, {"raw_evidence": ["FLOAT SELECTED: Table 3: Results of the distilled models, the teacher model and baselines on the downstream language understanding task test sets, obtained from the GLUE server, along with the size parameters and compression ratios of the respective models compared to the teacher BERTBASE. MNLI-m and MNLI-mm refer to the genre-matched and genre-mismatched test sets for MNLI.", "For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states.", "Table TABREF21 shows results on the downstream language understanding tasks, as well as model sizes, for our approaches, the BERTBASE teacher model, and the PKD and NoKD baselines. We note that models trained with our proposed approaches perform strongly and consistently improve upon the identically parametrized NoKD baselines, indicating that the dual training and shared projection techniques are effective, without incurring significant losses against the BERTBASE teacher model. Comparing with the PKD baseline, our 192-dimensional models, achieving a higher compression rate than either of the PKD models, perform better than the 3-layer PKD baseline and are competitive with the larger 6-layer baseline on task accuracy while being nearly 5 times as small."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Results of the distilled models, the teacher model and baselines on the downstream language understanding task test sets, obtained from the GLUE server, along with the size parameters and compression ratios of the respective models compared to the teacher BERTBASE. 
MNLI-m and MNLI-mm refer to the genre-matched and genre-mismatched test sets for MNLI.", "For the language modeling evaluation, we also evaluate a baseline without knowledge distillation (termed NoKD), with a model parameterized identically to the distilled student models but trained directly on the teacher model objective from scratch. For downstream tasks, we compare with NoKD as well as Patient Knowledge Distillation (PKD) from BIBREF34, who distill the 12-layer BERTBASE model into 3 and 6-layer BERT models by using the teacher model's hidden states.", "Table TABREF21 shows results on the downstream language understanding tasks, as well as model sizes, for our approaches, the BERTBASE teacher model, and the PKD and NoKD baselines"]}] |
What discourse relations does it work best/worst for? | {"label_key": "1804.05918", "label_file": "paper_tab_qa", "q_uid": "f17ca24b135f9fe6bb25dc5084b13e1637ec7744", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "explicit discourse relations", "type": "extractive"}, {"answer": "Best: Expansion (Exp). Worst: Comparison (Comp).", "type": "abstractive"}] | [{"raw_evidence": ["The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).", "After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., BIBREF18 ) by more than 2 percents and outperforms the best previous system BIBREF19 by 1 percent.", "As we explained in section 4.2, we ran our models for 10 times to obtain stable average performance. Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. Furthermore, the ensemble model achieves the best performance for predicting both implicit and explicit discourse relations simultaneously."], "highlighted_evidence": ["the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in BIBREF11 ).", "After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved.", "Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5 , each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 - 48.82 = 3.02) and (94.17 - 93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. "]}, {"raw_evidence": ["FLOAT SELECTED: Table 3: Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1scores for both explicit and implicit discourse relation predictions. 
We also report class-wise F1 scores.", "The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 BIBREF7 which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition BIBREF12 as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. Table 1 presents the data distributions we collected from PDTB.", "Multi-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1scores for both explicit and implicit discourse relation predictions. We also report class-wise F1 scores.", "In this work, we focus on the top-level discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp).", "However, the performance on the three small classes (Comp, Cont and Temp) remains low."]}] |
Which 7 Indian languages do they experiment with? | {"label_key": "2002.01664", "label_file": "paper_tab_qa", "q_uid": "75df70ce7aa714ec4c6456d0c51f82a16227f2cb", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Hindi, English, Kannada, Telugu, Assamese, Bengali and Malayalam", "type": "abstractive"}, {"answer": "Kannada, Hindi, Telugu, Malayalam, Bengali, English and Assamese (in table, missing in text)", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset"]}, {"raw_evidence": ["In this section, we describe our dataset collection process. We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English. We collected the data from the All India Radio news channel where an actor will be reading news for about 5-10 mins. To cover many speakers for the dataset, we crawled data from 2010 to 2019. Since the audio is very long to train any deep neural network directly, we segment the audio clips into smaller chunks using Voice activity detector. Since the audio clips will have music embedded during the news, we use Inhouse music detection model to remove the music segments from the dataset to make the dataset clean and our dataset contains 635Hrs of clean audio which is divided into 520Hrs of training data containing 165K utterances and 115Hrs of testing data containing 35K utterances. The amount of audio data for training and testing for each of the language is shown in the table bellow.", "FLOAT SELECTED: Table 1: Dataset"], "highlighted_evidence": ["We collected and curated around 635Hrs of audio data for 7 Indian languages, namely Kannada, Hindi, Telugu, Malayalam, Bengali, and English.", "The amount of audio data for training and testing for each of the language is shown in the table bellow.", "FLOAT SELECTED: Table 1: Dataset"]}] |
Do they use graphical models? | {"label_key": "1809.00540", "label_file": "paper_tab_qa", "q_uid": "a99fdd34422f4231442c220c97eafc26c76508dd", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "No", "type": "boolean"}, {"answer": "No", "type": "boolean"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold."]}, {"raw_evidence": [], "highlighted_evidence": []}] |
What metric is used for evaluation? | {"label_key": "1809.00540", "label_file": "paper_tab_qa", "q_uid": "d604f5fb114169f75f9a38fab18c1e866c5ac28b", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "F1, precision, recall, accuracy", "type": "abstractive"}, {"answer": "Precision, recall, F1, accuracy", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "FLOAT SELECTED: Table 3: Accuracy of the SVM ranker on the English training set. TOKENS are the word token features, LEMMAS are the lemma features for title and body, ENTS are named entity features and TS are timestamp features. All features are described in detail in §4, and are listed for both the title and the body."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "FLOAT SELECTED: Table 3: Accuracy of the SVM ranker on the English training set. TOKENS are the word token features, LEMMAS are the lemma features for title and body, ENTS are named entity features and TS are timestamp features. All features are described in detail in §4, and are listed for both the title and the body."]}, {"raw_evidence": ["To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in § SECREF19 . We note that adding features increases the accuracy of the SVM ranker, especially the timestamp features. However, the timestamp feature actually interferes with our optimization of INLINEFORM0 to identify when new clusters are needed, although they improve the SVM reranking accuracy. We speculate this is true because high accuracy in the reranking problem does not necessarily help with identifying when new clusters need to be opened.", "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "Table TABREF35 gives the final monolingual results on the three datasets. For English, we see that the significant improvement we get using our algorithm over the algorithm of aggarwal2006framework is due to an increased recall score. We also note that the trained models surpass the baseline for all languages, and that the timestamp feature (denoted by TS), while not required to beat the baseline, has a very relevant contribution in all cases. 
Although the results for both the baseline and our models seem to differ across languages, one can verify a consistent improvement from the latter to the former, suggesting that the score differences should be mostly tied to the different difficulty found across the datasets for each language. The presented scores show that our learning framework generalizes well to different languages and enables high quality clustering results."], "highlighted_evidence": ["To investigate the importance of each feature, we now consider in Table TABREF37 the accuracy of the SVM ranker for English as described in § SECREF19 . ", "FLOAT SELECTED: Table 2: Clustering results on the labeled dataset. We compare our algorithm (with and without timestamps) with the online micro-clustering routine of Aggarwal and Yu (2006) (denoted by CluStream). The F1 values are for the precision (P) and recall (R) in the following columns. See Table 3 for a legend of the different models. Best result for each language is in bold.", "Table TABREF35 gives the final monolingual results on the three datasets."]}] |
Which eight NER tasks did they evaluate on? | {"label_key": "2004.03354", "label_file": "paper_tab_qa", "q_uid": "1d3e914d0890fc09311a70de0b20974bf7f0c9fe", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800", "type": "abstractive"}, {"answer": "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Top: Examples of within-space and cross-space nearest neighbors (NNs) by cosine similarity in GreenBioBERT’s wordpiece embedding layer. Blue: Original wordpiece space. Green: Aligned Word2Vec space. Bottom: Biomedical NER test set precision / recall / F1 (%) measured with the CoNLL NER scorer. Boldface: Best model in row. Underlined: Best inexpensive model (without target-domain pretraining) in row."]}] |
Do they test their framework performance on commonly used language pairs, such as English-to-German? | {"label_key": "1611.04798", "label_file": "paper_tab_qa", "q_uid": "897ba53ef44f658c128125edd26abf605060fb13", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Yes", "type": "boolean"}, {"answer": "Yes", "type": "boolean"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Results of the English→German systems in a simulated under-resourced scenario."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Results of the English→German systems in a simulated under-resourced scenario."]}, {"raw_evidence": ["A standard NMT system employs parallel data only. While good parallel corpora are limited in number, getting monolingual data of an arbitrary language is trivial. To make use of German monolingual corpus in an English INLINEFORM0 German NMT system, sennrich2016b built a separate German INLINEFORM1 English NMT using the same parallel corpus, then they used that system to translate the German monolingual corpus back to English, forming a synthesis parallel data. gulcehre2015 trained another RNN-based language model to score the monolingual corpus and integrate it to the NMT system through shallow or deep fusion. Both methods requires to train separate systems with possibly different hyperparameters for each. Conversely, by applying mix-source method to the big monolingual data, we need to train only one network. We mix the TED parallel corpus and the substantial monolingual corpus (EPPS+NC+CommonCrawl) and train a mix-source NMT system from those data."], "highlighted_evidence": [" standard NMT system employs parallel data only. While good parallel corpora are limited in number, getting monolingual data of an arbitrary language is trivial. To make use of German monolingual corpus in an English INLINEFORM0 German NMT system, sennrich2016b built a separate German INLINEFORM1 English NMT using the same parallel corpus, then they used that system to translate the German monolingual corpus back to English, forming a synthesis parallel data. gulcehre2015 trained another RNN-based language model to score the monolingual corpus and integrate it to the NMT system through shallow or deep fusion. Both methods requires to train separate systems with possibly different hyperparameters for each. Conversely, by applying mix-source method to the big monolingual data, we need to train only one network. We mix the TED parallel corpus and the substantial monolingual corpus (EPPS+NC+CommonCrawl) and train a mix-source NMT system from those data."]}] |
What languages are evaluated? | {"label_key": "1809.01541", "label_file": "paper_tab_qa", "q_uid": "c32adef59efcb9d1a5b10e1d7c999a825c9e6d9a", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "German, English, Spanish, Finnish, French, Russian, Swedish.", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 2: Official shared task test set results."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Official shared task test set results."]}] |
What is MSD prediction? | {"label_key": "1809.01541", "label_file": "paper_tab_qa", "q_uid": "32a3c248b928d4066ce00bbb0053534ee62596e7", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "The task of predicting MSD tags: V, PST, V.PCTP, PASS.", "type": "abstractive"}, {"answer": "morphosyntactic descriptions (MSD)", "type": "extractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS."]}, {"raw_evidence": ["There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available. See Table TABREF1 for an example. Task 2 is additionally split in three settings based on data size: high, medium and low, with high-resource datasets consisting of up to 70K instances per language, and low-resource datasets consisting of only about 1K instances."], "highlighted_evidence": ["There are two tracks of Task 2 of CoNLL–SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available."]}] |
What other models do they compare to? | {"label_key": "1809.09194", "label_file": "paper_tab_qa", "q_uid": "d3dbb5c22ef204d85707d2d24284cc77fa816b6c", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "SAN Baseline, BNA, DocQA, R.M-Reader, R.M-Reader+Verifier and DocQA+ELMo", "type": "abstractive"}, {"answer": "BNA, DocQA, R.M-Reader, R.M-Reader + Verifier, DocQA + ELMo, R.M-Reader+Verifier+ELMo", "type": "abstractive"}] | [{"raw_evidence": ["Table TABREF21 reports comparison results in literature published . Our model achieves state-of-the-art on development dataset in setting without pre-trained large language model (ELMo). Comparing with the much complicated model R.M.-Reader + Verifier, which includes several components, our model still outperforms by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a great boosting on the performance, e.g., 2.8 points in terms of F1 for DocQA. This encourages us to incorporate ELMo into our model in future.", "The results in terms of EM and F1 is summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline with a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, so it demonstrates the effectiveness of the joint optimization. By incorporating the output information of classifier into Joint SAN, it obtains a slight improvement, e.g., 72.2 vs 72.66 (+0.46) in terms of F1 score. By analyzing the results, we found that in most cases when our model extract an NULL string answer, the classifier also predicts it as an unanswerable question with a high probability.", "FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission."], "highlighted_evidence": ["Table TABREF21 reports comparison results in literature published .", "The results in terms of EM and F1 is summarized in Table TABREF20 . We observe that Joint SAN outperforms the SAN baseline with a large margin, e.g., 67.89 vs 69.27 (+1.38) and 70.68 vs 72.20 (+1.52) in terms of EM and F1 scores respectively, so it demonstrates the effectiveness of the joint optimization.", "FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission.", "Table TABREF21 reports comparison results in literature published . Our model achieves state-of-the-art on development dataset in setting without pre-trained large language model (ELMo). Comparing with the much complicated model R.M.-Reader + Verifier, which includes several components, our model still outperforms by 0.7 in terms of F1 score. Furthermore, we observe that ELMo gives a great boosting on the performance, e.g., 2.8 points in terms of F1 for DocQA. 
This encourages us to incorporate ELMo into our model in future."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Comparison with published results in literature. 1: results are extracted from (Rajpurkar et al., 2018); 2: results are extracted from (Hu et al., 2018). ∗: it is unclear which model is used. #: we only evaluate the Joint SAN in the submission.", "Table TABREF21 reports comparison results in literature published ."]}] |
How much better than the baseline is LiLi? | {"label_key": "1802.06024", "label_file": "paper_tab_qa", "q_uid": "286078813136943dfafb5155ee15d2429e7601d9", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "In case of Freebase knowledge base, LiLi model had better F1 score than the single model by 0.20 , 0.01, 0.159 for kwn, unk, and all test Rel type. The values for WordNet are 0.25, 0.1, 0.2. \n", "type": "abstractive"}] | [{"raw_evidence": ["Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.", "Single: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.", "Sep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.", "F-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .", "BG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.", "w/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.", "Evaluation-I: Strategy Formulation Ability. Table 5 shows the list of inference strategies formulated by LiLi for various INLINEFORM0 and INLINEFORM1 , which control the strategy formulation of LiLi. When INLINEFORM2 , LiLi cannot interact with user and works like a closed-world method. Thus, INLINEFORM3 drops significantly (0.47). When INLINEFORM4 , i.e. with only one interaction per query, LiLi acquires knowledge well for instances where either of the entities or relation is unknown. However, as one unknown entity may appear in multiple test triples, once the entity becomes known, LiLi doesn’t need to ask for it again and can perform inference on future triples causing significant increase in INLINEFORM5 (0.97). When INLINEFORM6 , LiLi is able to perform inference on all instances and INLINEFORM7 becomes 1. For INLINEFORM8 , LiLi uses INLINEFORM9 only once (as only one MLQ satisfies INLINEFORM10 ) compared to INLINEFORM11 . In summary, LiLi’s RL-model can effectively formulate query-specific inference strategies (based on specified parameter values). Evaluation-II: Predictive Performance. Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. Single model performs better than Sep for unknown relations due to the sharing of knowledge (weights) across tasks. However, for known relations, performance drops because, as a new relation arrives to the system, old weights get corrupted and catastrophic forgetting occurs. For unknown relations, as the relations are evaluated just after training, there is no chance for catastrophic forgetting. The performance improvement ( INLINEFORM15 ) of LiLi over F-th on Freebase signifies that the relation-specific threshold INLINEFORM16 works better than fixed threshold 0.5 because, if all prediction values for test instances lie above (or below) 0.5, F-th predicts all instances as +ve (-ve) which degrades its performance. 
Due to the utilization of contextual similarity (highly correlated with class labels) of entity-pairs, LiLi’s guessing mechanism works better ( INLINEFORM17 ) than blind guessing (BG). The past task selection mechanism of LiLi also improves its performance over w/o PTS, as it acquires more clues during testing for poorly performed tasks (evaluated on validation set). For Freebase, due to a large number of past tasks [9 (25% of 38)], the performance difference is more significant ( INLINEFORM18 ). For WordNet, the number is relatively small [3 (25% of 14)] and hence, the difference is not significant.", "FLOAT SELECTED: Table 6: Comparison of predictive performance of various versions of LiLi [kwn = known, unk = unknown, all = overall]."], "highlighted_evidence": ["Baselines. As none of the existing KBC methods can solve the OKBC problem, we choose various versions of LiLi as baselines.\n\nSingle: Version of LiLi where we train a single prediction model INLINEFORM0 for all test relations.\n\nSep: We do not transfer (past learned) weights for initializing INLINEFORM0 , i.e., we disable LL.\n\nF-th): Here, we use a fixed prediction threshold 0.5 instead of relation-specific threshold INLINEFORM0 .\n\nBG: The missing or connecting links (when the user does not respond) are filled with “@-RelatedTo-@\" blindly, no guessing mechanism.\n\nw/o PTS: LiLi does not ask for additional clues via past task selection for skillset improvement.", "Table 6 shows the comparative performance of LiLi with baselines. To judge the overall improvements, we performed paired t-test considering +ve F1 scores on each relation as paired data. Considering both KBs and all relation types, LiLi outperforms Sep with INLINEFORM12 . If we set INLINEFORM13 (training with very few clues), LiLi outperforms Sep with INLINEFORM14 on Freebase considering MCC. Thus, the lifelong learning mechanism is effective in transferring helpful knowledge. ", "FLOAT SELECTED: Table 6: Comparison of predictive performance of various versions of LiLi [kwn = known, unk = unknown, all = overall]."]}] |
How many labels do the datasets have? | {"label_key": "1809.00530", "label_file": "paper_tab_qa", "q_uid": "6aa2a1e2e3666f2b2a1f282d4cbdd1ca325eb9de", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "719313", "type": "abstractive"}, {"answer": "Book, Electronics, Beauty and Music each have 6000, IMDB 84919, Yelp 231163, Cell Phone 194792 and Baby 160792 labeled data.", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Summary of datasets."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Summary of datasets."]}, {"raw_evidence": ["Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 . We keep all reviews in the original datasets and consider a transductive setting where all target examples are used for both training (without label information) and evaluation. We perform sampling to balance the classes of labeled source data in each minibatch INLINEFORM3 during training.", "Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .", "FLOAT SELECTED: Table 1: Summary of datasets."], "highlighted_evidence": ["Large-scale datasets: We further conduct experiments on four much larger datasets: IMDB (I), Yelp2014 (Y), Cell Phone (C), and Baby (B). IMDB and Yelp2014 were previously used in BIBREF25 , BIBREF26 . Cell phone and Baby are from the large-scale Amazon dataset BIBREF24 , BIBREF27 . Detailed statistics are summarized in Table TABREF9 .", "Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .", "FLOAT SELECTED: Table 1: Summary of datasets."]}] |
What are the source and target domains? | {"label_key": "1809.00530", "label_file": "paper_tab_qa", "q_uid": "9176d2ba1c638cdec334971c4c7f1bb959495a8e", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Book, electronics, beauty, music, IMDB, Yelp, cell phone, baby, DVDs, kitchen", "type": "abstractive"}, {"answer": "we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain, Book (BK), Electronics (E), Beauty (BT), and Music (M)", "type": "extractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Summary of datasets.", "Most previous works BIBREF0 , BIBREF1 , BIBREF6 , BIBREF7 , BIBREF29 carried out experiments on the Amazon benchmark released by Blitzer et al. ( BIBREF0 ). The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). Following their experimental settings, we consider the binary classification task to predict whether a review is positive or negative on the target domain. Each domain consists of 1000 positive and 1000 negative reviews respectively. We also allow 4000 unlabeled reviews to be used for both the source and the target domains, of which the positive and negative reviews are balanced as well, following the settings in previous works. We construct 12 cross-domain sentiment classification tasks and split the labeled data in each domain into a training set of 1600 reviews and a test set of 400 reviews. The classifier is trained on the training set of the source domain and is evaluated on the test set of the target domain. The comparison results are shown in Table TABREF37 ."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Summary of datasets.", "The dataset contains 4 different domains: Book (B), DVDs (D), Electronics (E), and Kitchen (K). "]}, {"raw_evidence": ["Small-scale datasets: Our new dataset was derived from the large-scale Amazon datasets released by McAuley et al. ( BIBREF24 ). It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. Detailed statistics of the generated datasets are given in Table TABREF9 .", "In all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain. Since we cannot control the label distribution of unlabeled data during training, we consider two different settings:"], "highlighted_evidence": ["It contains four domains: Book (BK), Electronics (E), Beauty (BT), and Music (M). Each domain contains two datasets. Set 1 contains 6000 instances with exactly balanced class labels, and set 2 contains 6000 instances that are randomly sampled from the large dataset, preserving the original label distribution, which we believe better reflects the label distribution in real life. The examples in these two sets do not overlap. 
Detailed statistics of the generated datasets are given in Table TABREF9 .\n\nIn all our experiments on the small-scale datasets, we use set 1 of the source domain as the only source with sentiment label information during training, and we evaluate the trained model on set 1 of the target domain."]}] |
Which datasets are used? | {"label_key": "1912.08960", "label_file": "paper_tab_qa", "q_uid": "b1bc9ae9d40e7065343c12f860a461c7c730a612", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Existential (OneShape, MultiShapes), Spacial (TwoShapes, Multishapes), Quantification (Count, Ratio) datasets are generated from ShapeWorldICE", "type": "abstractive"}, {"answer": "ShapeWorldICE datasets: OneShape, MultiShapes, TwoShapes, MultiShapes, Count, and Ratio", "type": "abstractive"}] | [{"raw_evidence": ["We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."], "highlighted_evidence": ["We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."]}, {"raw_evidence": ["Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE. We empirically demonstrate that the existing metrics BLEU and SPICE do not capture true caption-image agreement in all scenarios, while the GTD framework allows a fine-grained investigation of how well existing models cope with varied visual situations and linguistic constructions.", "We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper. We consider three different types of captioning tasks, each of which focuses on a distinct aspect of reasoning abilities. Existential descriptions examine whether a certain object is present in an image. Spatial descriptions identify spatial relationships among visual objects. Quantification descriptions involve count-based and ratio-based statements, with an explicit focus on inspecting models for their counting ability. We develop two variants for each type of dataset to enable different levels of visual complexity or specific aspects of the same reasoning type. All the training and test captions sampled in this work are in English.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."], "highlighted_evidence": ["Practical evaluation of GTD is currently only possible on synthetic data. We construct a range of datasets designed for image captioning evaluation. We call this diagnostic evaluation benchmark ShapeWorldICE (ShapeWorld for Image Captioning Evaluation). We illustrate the evaluation of specific image captioning models on ShapeWorldICE.", "We develop a variety of ShapeWorldICE datasets, with a similar idea to the “skill tasks” in the bAbI framework BIBREF22. Table TABREF4 gives an overview for different ShapeWorldICE datasets we use in this paper.", "FLOAT SELECTED: Table 1: Sample captions and images from ShapeWorldICE datasets (truthful captions in blue, false in red). Images from Existential-OneShape only contain one object, while images from Spatial-TwoShapes contain two objects. Images from the other four datasets follow the same distribution with multiple abstract objects present in a visual scene."]}]
What are previous state of the art results? | {"label_key": "2002.11910", "label_file": "paper_tab_qa", "q_uid": "9da1e124d28b488b0d94998d32aa2fa8a5ebec51", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Overall F1 score:\n- He and Sun (2017) 58.23\n- Peng and Dredze (2017) 58.99\n- Xu et al. (2018) 59.11", "type": "abstractive"}, {"answer": "For Named entity the maximum precision was 66.67%, and the average 62.58%, same values for Recall was 55.97% and 50.33%, and for F1 57.14% and 55.64%. Where for Nominal Mention had maximum recall of 74.48% and average of 73.67%, Recall had values of 54.55% and 53.7%, and F1 had values of 62.97% and 62.12%. Finally the Overall F1 score had maximum value of 59.11% and average of 58.77%", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."]}, {"raw_evidence": ["Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. This best model performance is achieved with a dropout rate of 0.1, and a learning rate of 0.05. Our results are compared with state-of-the-art models BIBREF15, BIBREF19, BIBREF20 on the same Sina Weibo training and test datasets. Our model shows an absolute improvement of 2% for the overall F1 score.", "FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."], "highlighted_evidence": ["Our best model performance with its Precision, Recall, and F1 scores on named entity and nominal mention are shown in Table TABREF5. ", "FLOAT SELECTED: Table 1: The results of two previous models, and results of this study, in which we apply a boundary assembling method. Precision, recall, and F1 scores are shown for both named entity and nominal mention. For both tasks and their overall performance, we outperform the other two models."]}] |
What is the model performance on target language reading comprehension? | {"label_key": "1909.09587", "label_file": "paper_tab_qa", "q_uid": "37be0d479480211291e068d0d3823ad0c13321d3", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Table TABREF6, Table TABREF8", "type": "extractive"}, {"answer": "when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, F1 score is only 44.1 for the model training on Zh-En", "type": "extractive"}] | [{"raw_evidence": ["Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "FLOAT SELECTED: Table 1: EM/F1 scores over Chinese testing set.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."], "highlighted_evidence": ["Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. ", "Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "FLOAT SELECTED: Table 1: EM/F1 scores over Chinese testing set.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."]}, {"raw_evidence": ["Table TABREF6 shows the result of different models trained on either Chinese or English and tested on Chinese. In row (f), multi-BERT is fine-tuned on English but tested on Chinese, which achieves competitive performance compared with QANet trained on Chinese. We also find that multi-BERT trained on English has relatively lower EM compared with the model with comparable F1 scores. This shows that the model learned with zero-shot can roughly identify the answer spans in context but less accurate. In row (c), we fine-tuned a BERT model pre-trained on English monolingual corpus (English BERT) on Chinese RC training data directly by appending fastText-initialized Chinese word embeddings to the original word embeddings of English-BERT. Its F1 score is even lower than that of zero-shot transferring multi-BERT (rows (c) v.s. (e)). The result implies multi-BERT does acquire better cross-lingual capability through pre-training on multilingual corpus. Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean. The top half of the table shows the results of training data without translation. It is not surprising that when the training and testing sets are in the same language, the best results are achieved, and multi-BERT shows transfer capability when training and testing sets are in different languages, especially between Chinese and Korean.", "In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En. This shows that translation degrades the quality of data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improve the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX v.s. Zh-XX), with only one exception (En-Fr v.s. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of dataset are much more important than whether the training and testing are in the same language or not.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."], "highlighted_evidence": ["Table TABREF8 shows the results of multi-BERT fine-tuned on different languages and then tested on English , Chinese and Korean.", "For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."]}]
What source-target language pairs were used in this work? | {"label_key": "1909.09587", "label_file": "paper_tab_qa", "q_uid": "a3d9b101765048f4b61cbd3eaa2439582ebb5c77", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "En-Fr, En-Zh, En-Jp, En-Kr, Zh-En, Zh-Fr, Zh-Jp, Zh-Kr to English, Chinese or Korean", "type": "abstractive"}, {"answer": "English , Chinese", "type": "extractive"}, {"answer": "English, Chinese, Korean, we translated the English and Chinese datasets into more languages, with Google Translate", "type": "extractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."]}, {"raw_evidence": ["In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. For example, when testing on English, the F1 score of the model training on Chinese (Zh) is 53.8, while the F1 score is only 44.1 for the model training on Zh-En. This shows that translation degrades the quality of data. There are some exceptions when testing on Korean. Translating the English training data into Chinese, Japanese and Korean still improve the performance on Korean. We also found that when translated into the same language, the English training data is always better than the Chinese data (En-XX v.s. Zh-XX), with only one exception (En-Fr v.s. Zh-Fr when testing on KorQuAD). This may be because we have less Chinese training data than English. These results show that the quality and the size of dataset are much more important than whether the training and testing are in the same language or not.", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."], "highlighted_evidence": ["In the lower half of Table TABREF8, the results are obtained by the translated training data. First, we found that when testing on English and Chinese, translation always degrades the performance (En v.s. En-XX, Zh v.s. Zh-XX). Even though we translate the training data into the same language as testing data, using the untranslated data still yield better results. ", "FLOAT SELECTED: Table 2: EM/F1 score of multi-BERTs fine-tuned on different training sets and tested on different languages (En: English, Fr: French, Zh: Chinese, Jp: Japanese, Kr: Korean, xx-yy: translated from xx to yy). The text in bold means training data language is the same as testing data language."]}, {"raw_evidence": ["We have training and testing sets in three different languages: English, Chinese and Korean. The English dataset is SQuAD BIBREF2. The Chinese dataset is DRCD BIBREF14, a Chinese RC dataset with 30,000+ examples in the training set and 10,000+ examples in the development set. The Korean dataset is KorQuAD BIBREF15, a Korean RC dataset with 60,000+ examples in the training set and 10,000+ examples in the development set, created in exactly the same procedure as SQuAD. We always use the development sets of SQuAD, DRCD and KorQuAD for testing since the testing sets of the corpora have not been released yet.", "Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate. An obvious issue with this method is that some examples might no longer have a recoverable span. To solve the problem, we use fuzzy matching to find the most possible answer, which calculates minimal edit distance between translated answer and all possible spans. If the minimal edit distance is larger than min(10, lengths of translated answer - 1), we drop the examples during training, and treat them as noise when testing. In this way, we can recover more than 95% of examples. The following generated datasets are recovered with same setting."], "highlighted_evidence": ["We have training and testing sets in three different languages: English, Chinese and Korean.", "Next, to construct a diverse cross-lingual RC dataset with compromised quality, we translated the English and Chinese datasets into more languages, with Google Translate."]}]
Which baselines did they compare against? | {"label_key": "1809.02286", "label_file": "paper_tab_qa", "q_uid": "0ad4359e3e7e5e5f261c2668fe84c12bc762b3b8", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Various tree structured neural networks including variants of Tree-LSTM, Tree-based CNN, RNTN, and non-tree models including variants of LSTMs, CNNs, residual, and self-attention based networks", "type": "abstractive"}, {"answer": "Sentence classification baselines: RNTN (Socher et al. 2013), AdaMC-RNTN (Dong et al. 2014), TE-RNTN (Qian et al. 2015), TBCNN (Mou et al. 2015), Tree-LSTM (Tai, Socher, and Manning 2015), AdaHT-LSTM-CM (Liu, Qiu, and Huang 2017), DC-TreeLSTM (Liu, Qiu, and Huang 2017), TE-LSTM (Huang, Qian, and Zhu 2017), BiConTree (Teng and Zhang 2017), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), TreeNet (Cheng et al. 2018), CNN (Kim 2014), AdaSent (Zhao, Lu, and Poupart 2015), LSTM-CNN (Zhou et al. 2016), byte-mLSTM (Radford, Jozefowicz, and Sutskever 2017), BCN + Char + CoVe (McCann et al. 2017), BCN + Char + ELMo (Peters et al. 2018). \nStanford Natural Language Inference baselines: Latent Syntax Tree-LSTM (Yogatama et al. 2017), Tree-based CNN (Mou et al. 2016), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), NSE (Munkhdalai and Yu 2017), Reinforced Self- Attention Network (Shen et al. 2018), Residual stacked encoders: (Nie and Bansal 2017), BiLSTM with generalized pooling (Chen, Ling, and Zhu 2018).", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.", "Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.", "Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).)"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.", "Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: The comparison of various models on different sentence classification tasks. We report the test accuracy of each model in percentage. Our SATA Tree-LSTM shows superior or competitive performance on all tasks, compared to previous treestructured models as well as other sophisticated models. ?: Latent tree-structured models. †: Models which are pre-trained with large external corpora.", "Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).)"]}]
What baselines did they consider? | {"label_key": "1809.01202", "label_file": "paper_tab_qa", "q_uid": "4cbe5a36b492b99f9f9fea8081fe4ba10a7a0e94", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "state-of-the-art PDTB taggers", "type": "extractive"}, {"answer": "Linear SVM, RBF SVM, and Random Forest", "type": "abstractive"}] | [{"raw_evidence": ["We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message). Then, we compare how models work for each task and disassembled them to inspect how each part of the models can affect their final prediction performances. We conducted McNemar's test to determine whether the performance differences are statistically significant at $p < .05$ ."], "highlighted_evidence": ["We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message)."]}, {"raw_evidence": ["FLOAT SELECTED: Table 5: Causal explanation identification performance. Bold indicates significant imrpovement over next best model (p < .05)"], "highlighted_evidence": ["FLOAT SELECTED: Table 5: Causal explanation identification performance. Bold indicates significant imrpovement over next best model (p < .05)"]}] |
By how much more does PARENT correlate with human judgements in comparison to other text generation metrics? | {"label_key": "1906.01081", "label_file": "paper_tab_qa", "q_uid": "ffa7f91d6406da11ddf415ef094aaf28f3c3872d", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Best proposed metric has average correlation with human judgement of 0.913 and 0.846 compared to best compared metrics result of 0.758 and 0.829 on WikiBio and WebNLG challenge.", "type": "abstractive"}, {"answer": "Their average correlation tops the best other model by 0.155 on WikiBio.", "type": "abstractive"}] | [{"raw_evidence": ["We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 .", "FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1.", "FLOAT SELECTED: Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge.", "The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table TABREF48 . Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well."], "highlighted_evidence": ["We report the average correlation across all bootstrap samples for each metric in Table TABREF37 .", "FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1.", "FLOAT SELECTED: Table 4: Average pearson correlation across 500 bootstrap samples of each metric to human ratings for each aspect of the generations from the WebNLG challenge.", "We report the correlation of several metrics with these ratings in Table TABREF48 ."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Correlation of metrics with human judgments on WikiBio. A superscript of C/W indicates that the correlation is significantly lower than that of PARENTC/W using a bootstrap confidence test for α = 0.1."]}] |
Which stock market sector achieved the best performance? | {"label_key": "1812.10479", "label_file": "paper_tab_qa", "q_uid": "b634ff1607ce5756655e61b9a6f18bc736f84c83", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Energy with accuracy of 0.538", "type": "abstractive"}, {"answer": "Energy", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 8: Sector-level performance comparison."], "highlighted_evidence": ["FLOAT SELECTED: Table 8: Sector-level performance comparison."]}, {"raw_evidence": ["FLOAT SELECTED: Table 7: Our volatility model performance compared with GARCH(1,1). Best performance in bold. Our model has superior performance across the three evaluation metrics and taking into consideration the state-of-the-art volatility proxies, namely Garman-Klass (σ̂PK) and Parkinson (σ̂PK)."], "highlighted_evidence": ["FLOAT SELECTED: Table 7: Our volatility model performance compared with GARCH(1,1). Best performance in bold. Our model has superior performance across the three evaluation metrics and taking into consideration the state-of-the-art volatility proxies, namely Garman-Klass (σ̂PK) and Parkinson (σ̂PK)."]}] |
How much does their model outperform existing models? | {"label_key": "1909.08089", "label_file": "paper_tab_qa", "q_uid": "de5b6c25e35b3a6c5e40e350fc5e52c160b33490", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Best proposed model result vs best previous result:\nArxiv dataset: Rouge 1 (43.62 vs 42.81), Rouge L (29.30 vs 31.80), Meteor (21.78 vs 21.35)\nPubmed dataset: Rouge 1 (44.85 vs 44.29), Rouge L (31.48 vs 35.21), Meteor (20.83 vs 20.56)", "type": "abstractive"}, {"answer": "On arXiv dataset, the proposed model outperforms baselie model by (ROUGE-1,2,L) 0.67 0.72 0.77 respectively and by Meteor 0.31.\n", "type": "abstractive"}] | [{"raw_evidence": ["The performance of all models on arXiv and Pubmed is shown in Table TABREF28 and Table TABREF29 , respectively. Follow the work BIBREF18 , we use the approximate randomization as the statistical significance test method BIBREF32 with a Bonferroni correction for multiple comparisons, at the confidence level 0.01 ( INLINEFORM0 ). As we can see in these tables, on both datasets, the neural extractive models outperforms the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L. Presumably, this is due to the neural training process, which relies on a goal standard based on ROUGE-1. Exploring other training schemes and/or a combination of traditional and neural approaches is left as future work. Similarly, the neural extractive models also dominate the neural abstractive models on ROUGE-1,2, but these abstractive models tend to have the highest ROUGE-L scores, possibly because they are trained directly on gold standard abstract summaries.", "FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."], "highlighted_evidence": ["The performance of all models on arXiv and Pubmed is shown in Table TABREF28 and Table TABREF29 , respectively.", "As we can see in these tables, on both datasets, the neural extractive models outperforms the traditional extractive models on informativeness (ROUGE-1,2) by a wide margin, but results are mixed on ROUGE-L.", "FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."]}, {"raw_evidence": ["FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."], "highlighted_evidence": ["FLOAT SELECTED: Table 4.1: Results on the arXiv dataset. For models with an ∗, we report results from [8]. Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for training) by the greedy algorithm, see Section 4.1.2. Results that are not significantly distinguished from the best systems are bold.", "FLOAT SELECTED: Table 4.2: Results on the Pubmed dataset. For models with an ∗, we report results from [8]. See caption of Table 4.1 above for details on compared models. Results that are not significantly distinguished from the best systems are bold."]}]
What embedding techniques are explored in the paper? | {"label_key": "1609.00559", "label_file": "paper_tab_qa", "q_uid": "8b3d3953454c88bde88181897a7a2c0c8dd87e23", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Skip–gram, CBOW", "type": "extractive"}, {"answer": "integrated vector-res, vector-faith, Skip–gram, CBOW", "type": "extractive"}] | [{"raw_evidence": ["muneeb2015evalutating trained both the Skip–gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS restricting to single word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip–gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and the MayoSRS datasets; any subset information about the dataset was not explicitly stated therefore we believe a direct comparison may be possible.", "FLOAT SELECTED: Table 4: Comparison with Previous Work"], "highlighted_evidence": ["chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed.", "FLOAT SELECTED: Table 4: Comparison with Previous Work"]}, {"raw_evidence": ["Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. N refers to the number of term pairs in the dataset the authors report they evaluated their method. The table also includes our top scoring results: the integrated vector-res and vector-faith. The results show that integrating semantic similarity measures into second–order co–occurrence vectors obtains a higher or on–par correlation with human judgments as the previous works reported results with the exception of the UMNSRS rel dataset. The results reported by Pakhomov2016corpus and chiu2016how obtain a higher correlation although the results can not be directly compared because both works used different subsets of the term pairs from the UMNSRS dataset.", "muneeb2015evalutating trained both the Skip–gram and CBOW models over the PubMed Central Open Access (PMC) corpus of approximately 1.25 million articles. They evaluated the models on a subset of the UMNSRS data, removing word pairs that did not occur in their training corpus more than ten times. chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia). They evaluated their method using a subset of the UMNSRS restricting to single word term pairs and removing those not found within their training corpus. sajad2015domain trained the Skip–gram model over CUIs identified by MetaMap on the OHSUMED corpus, a collection of 348,566 biomedical research articles. They evaluated the method on the complete UMNSRS, MiniMayoSRS and the MayoSRS datasets; any subset information about the dataset was not explicitly stated therefore we believe a direct comparison may be possible."], "highlighted_evidence": ["Table TABREF31 shows a comparison to the top correlation scores reported by each of these works on the respective datasets (or subsets) they evaluated their methods on. N refers to the number of term pairs in the dataset the authors report they evaluated their method. The table also includes our top scoring results: the integrated vector-res and vector-faith.", "chiu2016how evaluated both the the Skip–gram and CBOW models over the PMC corpus and PubMed. They also evaluated the models on a subset of the UMNSRS ignoring those words that did not appear in their training corpus. Pakhomov2016corpus trained CBOW model over three different types of corpora: clinical (clinical notes from the Fairview Health System), biomedical (PMC corpus), and general English (Wikipedia)."]}]
Which other approaches do they compare their model with? | {"label_key": "1904.10503", "label_file": "paper_tab_qa", "q_uid": "5a65ad10ff954d0f27bb3ccd9027e3d8f7f6bb76", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Akbik et al. (2018), Link et al. (2012)", "type": "abstractive"}, {"answer": "They compare to Akbik et al. (2018) and Link et al. (2012).", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 3: Comparison with existing models."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Comparison with existing models."]}, {"raw_evidence": ["In this paper, we present a deep neural network model for the task of fine-grained named entity classification using ELMo embeddings and Wikidata. The proposed model learns representations for entity mentions based on its context and incorporates the rich structure of Wikidata to augment these labels into finer-grained subtypes. We can see comparisons of our model made on Wiki(gold) in Table TABREF20 . We note that the model performs similarly to existing systems without being trained or tuned on that particular dataset. Future work may include refining the clustering method described in Section 2.2 to extend to types other than person, location, organization, and also to include disambiguation of entity types.", "FLOAT SELECTED: Table 3: Comparison with existing models."], "highlighted_evidence": ["We can see comparisons of our model made on Wiki(gold) in Table TABREF20 .", "FLOAT SELECTED: Table 3: Comparison with existing models."]}] |
How is non-standard pronunciation identified? | {"label_key": "1912.01772", "label_file": "paper_tab_qa", "q_uid": "f9bf6bef946012dd42835bf0c547c0de9c1d229f", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Original transcription was labeled with additional labels in [] brackets with nonstandard pronunciation.", "type": "abstractive"}] | [{"raw_evidence": ["In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.", "FLOAT SELECTED: Table 2: Example of an utterance along with the different annotations. We additionally highlight the code-switching annotations ([SPA] indicates Spanish words) as well as pre-normalized transcriptions that indicating non-standard pronunciations ([!1pu’] indicates that the previous 1 word was pronounced as ‘pu’’ instead of ‘pues’)."], "highlighted_evidence": ["In addition, the transcription includes annotations for noises and disfluencies including aborted words, mispronunciations, poor intelligibility, repeated and corrected words, false starts, hesitations, undefined sound or pronunciations, non-verbal articulations, and pauses. Foreign words, in this case Spanish words, are also labelled as such.", "FLOAT SELECTED: Table 2: Example of an utterance along with the different annotations. We additionally highlight the code-switching annotations ([SPA] indicates Spanish words) as well as pre-normalized transcriptions that indicating non-standard pronunciations ([!1pu’] indicates that the previous 1 word was pronounced as ‘pu’’ instead of ‘pues’)."]}] |
What kind of celebrities do they obtain tweets from? | {"label_key": "1909.04002", "label_file": "paper_tab_qa", "q_uid": "4d28c99750095763c81bcd5544491a0ba51d9070", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Amitabh Bachchan, Ariana Grande, Barack Obama, Bill Gates, Donald Trump,\nEllen DeGeneres, J K Rowling, Jimmy Fallon, Justin Bieber, Kevin Durant, Kim Kardashian, Lady Gaga, LeBron James,Narendra Modi, Oprah Winfrey", "type": "abstractive"}, {"answer": "Celebrities from varioius domains - Acting, Music, Politics, Business, TV, Author, Sports, Modeling. ", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Twitter celebrities in our dataset, with tweet counts before and after filtering (Foll. denotes followers in millions)"]}] |
What summarization algorithms did the authors experiment with? | {"label_key": "1712.00991", "label_file": "paper_tab_qa", "q_uid": "443d2448136364235389039cbead07e80922ec5c", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "LSA, TextRank, LexRank and ILP-based summary.", "type": "abstractive"}, {"answer": "LSA, TextRank, LexRank", "type": "abstractive"}] | [{"raw_evidence": ["We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries.", "FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms"], "highlighted_evidence": ["Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries.", "For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package.", "FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms"]}, {"raw_evidence": ["FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms", "We considered a dataset of 100 employees, where for each employee multiple peer comments were recorded. Also, for each employee, a manual summary was generated by an HR personnel. The summaries generated by our ILP-based approach were compared with the corresponding manual summaries using the ROUGE BIBREF22 unigram score. For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. A common parameter which is required by all these algorithms is number of sentences keep in the final summary. ILP-based summarization requires a similar parameter K, which is automatically decided based on number of total candidate phrases. Assuming a sentence is equivalent to roughly 3 phrases, for Sumy algorithms, we set number of sentences parameter to the ceiling of K/3. Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. The performance of ILP-based summarization is comparable with the other algorithms, as the two sample t-test does not show statistically significant difference. Also, human evaluators preferred phrase-based summary generated by our approach to the other sentence-based summaries."], "highlighted_evidence": ["FLOAT SELECTED: Table 9. Comparative performance of various summarization algorithms", "For comparing performance of our ILP-based summarization algorithm, we explored a few summarization algorithms provided by the Sumy package. ", "Table TABREF51 shows average and standard deviation of ROUGE unigram f1 scores for each algorithm, over the 100 summaries. "]}]
What evaluation metrics are looked at for classification tasks? | {"label_key": "1712.00991", "label_file": "paper_tab_qa", "q_uid": "fb3d30d59ed49e87f63d3735b876d45c4c6b8939", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Precision, Recall, F-measure, accuracy", "type": "extractive"}, {"answer": "Precision, Recall and F-measure", "type": "extractive"}] | [{"raw_evidence": ["Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3", "We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation.", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2."], "highlighted_evidence": ["Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . ", "The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. ", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2."]}, {"raw_evidence": ["Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 . Let INLINEFORM0 be the set of predicted labels and INLINEFORM1 be the set of actual labels for the INLINEFORM2 instance. Precision and recall for this instance are computed as follows: INLINEFORM3"], "highlighted_evidence": ["Precision, Recall and F-measure for this multi-label classification are computed using a strategy similar to the one described in BIBREF21 ."]}] |
What methods were used for sentence classification? | {"label_key": "1712.00991", "label_file": "paper_tab_qa", "q_uid": "197b276d0610ebfacd57ab46b0b29f3033c96a40", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK and Pattern-based", "type": "abstractive"}, {"answer": "Logistic Regression, Multinomial Naive Bayes, Random Forest, AdaBoost, Linear SVM, SVM with ADWSK, Pattern-based approach", "type": "abstractive"}] | [{"raw_evidence": ["We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation.", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1."], "highlighted_evidence": ["Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation.", "FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1."]}, {"raw_evidence": ["FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2.", "We manually tagged the same 2000 sentences in Dataset D1 with attributes, where each sentence may get 0, 1, 2, etc. up to 15 class labels (this is dataset D2). This labelled dataset contained 749, 206, 289, 207, 91, 223, 191, 144, 103, 80, 82, 42, 29, 15, 24 sentences having the class labels listed in Table TABREF20 in the same order. The number of sentences having 0, 1, 2, or more than 2 attributes are: 321, 1070, 470 and 139 respectively. We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.", "We randomly selected 2000 sentences from the supervisor assessment corpus and manually tagged them (dataset D1). This labelled dataset contained 705, 103, 822 and 370 sentences having the class labels STRENGTH, WEAKNESS, SUGGESTION or OTHER respectively. We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. For the first 5 classifiers, we used their implementation from the SciKit Learn library in Python (scikit-learn.org). The features used for these classifiers were simply the sentence words along with their frequencies. For the last 2 classifiers (in Table TABREF10 ), we used our own implementation. The overall accuracy for a classifier is defined as INLINEFORM0 , where the denominator is 2000 for dataset D1. Note that the pattern-based approach is unsupervised i.e., it did not use any training data. Hence, the results shown for it are for the entire dataset and not based on cross-validation."], "highlighted_evidence": ["FLOAT SELECTED: Table 1. Results of 5-fold cross validation for sentence classification on dataset D1.", "FLOAT SELECTED: Table 7. Results of 5-fold cross validation for multi-class multi-label classification on dataset D2.", "We trained several multi-class multi-label classifiers on this dataset. Table TABREF21 shows the results of 5-fold cross-validation experiments on dataset D2.", "We trained several multi-class classifiers on this dataset. Table TABREF10 shows the results of 5-fold cross-validation experiments on dataset D1. "]}]
What modern MRC gold standards are analyzed? | {"label_key": "2003.04642", "label_file": "paper_tab_qa", "q_uid": "9ecde59ffab3c57ec54591c3c7826a9188b2b270", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations", "type": "extractive"}, {"answer": "MSMARCO, HOTPOTQA, RECORD, MULTIRC, NEWSQA, and DROP.", "type": "abstractive"}] | [{"raw_evidence": ["We select contemporary MRC benchmarks to represent all four commonly used problem definitions BIBREF15. In selecting relevant datasets, we do not consider those that are considered “solved”, i.e. where the state of the art performance surpasses human performance, as is the case with SQuAD BIBREF28, BIBREF7. Concretely, we selected gold standards that fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations, and bucket them according to the answer selection styles as described in Section SECREF4 We randomly draw one from each bucket and add two randomly drawn datasets from the candidate pool. This leaves us with the datasets described in Table TABREF19. For a more detailed description, we refer to Appendix ."], "highlighted_evidence": ["Concretely, we selected gold standards that fit our problem definition and were published in the years 2016 to 2019, have at least $(2019 - publication\\ year) \\times 20$ citations, and bucket them according to the answer selection styles as described in Section SECREF4"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Summary of selected datasets"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Summary of selected datasets"]}] |
What was the score of the proposed model? | {"label_key": "1904.07904", "label_file": "paper_tab_qa", "q_uid": "38f58f13c7f23442d5952c8caf126073a477bac0", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Best results authors obtain is EM 51.10 and F1 63.11", "type": "abstractive"}, {"answer": "EM Score of 51.10", "type": "abstractive"}] | [{"raw_evidence": ["To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 . The baselines are: (a) trained on S-SQuAD, (b) trained on T-SQuAD and then fine-tuned on S-SQuAD, and (c) previous best model trained on S-SQuAD BIBREF5 by using Dr.QA BIBREF20 . We also compare to the approach proposed by Lan et al. BIBREF16 in the row (d). This approach is originally proposed for spoken language understanding, and we adopt the same approach on the setting here. The approach models domain-specific features from the source and target domains separately by two different embedding encoders with a shared embedding encoder for modeling domain-general features. The domain-general parameters are adversarially trained by domain discriminator."], "highlighted_evidence": ["To better demonstrate the effectiveness of the proposed model, we compare with baselines and show the results in Table TABREF12 ."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2. The EM/F1 scores of proposed adversarial domain adaptation approaches over Spoken-SQuAD."], "highlighted_evidence": ["FLOAT SELECTED: Table 2. The EM/F1 scores of proposed adversarial domain adaptation approaches over Spoken-SQuAD."]}] |
What hyperparameters are explored? | {"label_key": "2003.11645", "label_file": "paper_tab_qa", "q_uid": "27275fe9f6a9004639f9ac33c3a5767fea388a98", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Dimension size, window size, architecture, algorithm, epochs, hidden dimension size, learning rate, loss function, optimizer algorithm.", "type": "abstractive"}, {"answer": "Hyperparameters explored were: dimension size, window size, architecture, algorithm and epochs.", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices", "FLOAT SELECTED: Table 2: Network hyper-parameters"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices", "FLOAT SELECTED: Table 2: Network hyper-parameters"]}, {"raw_evidence": ["To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased.", "FLOAT SELECTED: Table 1: Hyper-parameter choices"], "highlighted_evidence": ["Table TABREF2 describes most hyper-parameters explored for each dataset.", "FLOAT SELECTED: Table 1: Hyper-parameter choices"]}] |
Do they test both skipgram and c-bow? | {"label_key": "2003.11645", "label_file": "paper_tab_qa", "q_uid": "c2d1387e08cf25cb6b1f482178cca58030e85b70", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Yes", "type": "boolean"}, {"answer": "Yes", "type": "boolean"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices", "To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Hyper-parameter choices", "Table TABREF2 describes most hyper-parameters explored for each dataset."]}] |
what is the state of the art? | {"label_key": "1608.06757", "label_file": "paper_tab_qa", "q_uid": "c2b8ee872b99f698b3d2082d57f9408a91e1b4c1", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Babelfy, DBpedia Spotlight, Entityclassifier.eu, FOX, LingPipe MUC-7, NERD-ML, Stanford NER, TagMe 2", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 2: Comparison of annotators trained for common English news texts (micro-averaged scores on match per annotation span). The table shows micro-precision, recall and NER-style F1 for CoNLL2003, KORE50, ACE2004 and MSNBC datasets."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Comparison of annotators trained for common English news texts (micro-averaged scores on match per annotation span). The table shows micro-precision, recall and NER-style F1 for CoNLL2003, KORE50, ACE2004 and MSNBC datasets."]}] |
Do the authors also analyze transformer-based architectures? | {"label_key": "1806.04330", "label_file": "paper_tab_qa", "q_uid": "8bf7f1f93d0a2816234d36395ab40c481be9a0e0", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "No", "type": "boolean"}, {"answer": "No", "type": "boolean"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Summary of representative neural models for sentence pair modeling. The upper half contains sentence encoding models, and the lower half contains sentence pair interaction models."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Summary of representative neural models for sentence pair modeling. The upper half contains sentence encoding models, and the lower half contains sentence pair interaction models."]}, {"raw_evidence": [], "highlighted_evidence": []}] |
what were the baselines? | {"label_key": "1904.03288", "label_file": "paper_tab_qa", "q_uid": "2ddb51b03163d309434ee403fef42d6b9aecc458", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "LF-MMI Attention\nSeq2Seq \nRNN-T \nChar E2E LF-MMI \nPhone E2E LF-MMI \nCTC + Gram-CTC", "type": "abstractive"}] | [{"raw_evidence": ["We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)"], "highlighted_evidence": [" We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 .", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)"]}] |
what competitive results did they obtain? | {"label_key": "1904.03288", "label_file": "paper_tab_qa", "q_uid": "e587559f5ab6e42f7d981372ee34aebdc92b646e", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "In case of read speech datasets, their best model got the highest nov93 score of 16.1 and the highest nov92 score of 13.3.\nIn case of Conversational Speech, their best model got the highest SWB of 8.3 and the highest CHM of 19.3. ", "type": "abstractive"}, {"answer": "On WSJ datasets author's best approach achieves 9.3 and 6.9 WER compared to best results of 7.5 and 4.1 on nov93 and nov92 subsets.\nOn Hub5'00 datasets author's best approach achieves WER of 7.8 and 16.2 compared to best result of 7.3 and 14.2 on Switchboard (SWB) and Callhome (CHM) subsets.", "type": "abstractive"}] | [{"raw_evidence": ["We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)", "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."], "highlighted_evidence": ["We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)", "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."]}, {"raw_evidence": ["FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)", "We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. 
We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."], "highlighted_evidence": ["FLOAT SELECTED: Table 6: WSJ End-to-End Models, WER (%)", "FLOAT SELECTED: Table 7: Hub5’00, WER (%)", "We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 .", "We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 ."]}] |
By how much is performance improved with multimodality? | {"label_key": "1909.13714", "label_file": "paper_tab_qa", "q_uid": "f68508adef6f4bcdc0cc0a3ce9afc9a2b6333cc5", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "by 2.3-6.8 points in f1 score for intent recognition and 0.8-3.5 for slot filling", "type": "abstractive"}, {"answer": "F1 score increased from 0.89 to 0.92", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models"]}, {"raw_evidence": ["For incorporating speech embeddings experiments, performance results of NLU models on in-cabin data with various feature concatenations can be found in Table TABREF3, using our previous hierarchical joint model (H-Joint-2). When used in isolation, Word2Vec and Speech2Vec achieves comparable performances, which cannot reach GloVe performance. This was expected as the pre-trained Speech2Vec vectors have lower vocabulary coverage than GloVe. Yet, we observed that concatenating GloVe + Speech2Vec, and further GloVe + Word2Vec + Speech2Vec yields better NLU results: F1-score increased from 0.89 to 0.91 for intent recognition, from 0.96 to 0.97 for slot filling.", "For multimodal (audio & video) features exploration, performance results of the compared models with varying modality/feature concatenations can be found in Table TABREF4. Since these audio/video features are extracted per utterance (on segmented audio & video clips), we experimented with the utterance-level intent recognition task only, using hierarchical joint learning (H-Joint-2). We investigated the audio-visual feature additions on top of text-only and text+speech embedding models. Adding openSMILE/IS10 features from audio, as well as incorporating intermediate CNN/Inception-ResNet-v2 features from video brought slight improvements to our intent models, reaching 0.92 F1-score. These initial results using feature concatenations may need further explorations, especially for certain intent-types such as stop (audio intensity) or relevant slots such as passenger gestures/gaze (from cabin video) and outside objects (from road video)."], "highlighted_evidence": ["Yet, we observed that concatenating GloVe + Speech2Vec, and further GloVe + Word2Vec + Speech2Vec yields better NLU results: F1-score increased from 0.89 to 0.91 for intent recognition, from 0.96 to 0.97 for slot filling.", "We investigated the audio-visual feature additions on top of text-only and text+speech embedding models. Adding openSMILE/IS10 features from audio, as well as incorporating intermediate CNN/Inception-ResNet-v2 features from video brought slight improvements to our intent models, reaching 0.92 F1-score."]}] |
How much is performance improved on NLI? | {"label_key": "1909.03405", "label_file": "paper_tab_qa", "q_uid": "bdc91d1283a82226aeeb7a2f79dbbc57d3e84a1a", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": " improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase", "type": "extractive"}, {"answer": "The average score improved by 1.4 points over the previous best result.", "type": "abstractive"}] | [{"raw_evidence": ["Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase. Besides NLI, our model also performs better than BERTBase in the STS task. The STS tasks are semantically similar to the NLI tasks, and hence able to take advantage of PSP as well. Actually, the proposed method has a positive effect whenever the input is a sentence pair. The improvements suggest that the PSP task encourages the model to learn more detailed semantics in the pre-training, which improves the model on the downstream learning tasks. Moreover, our method is surprisingly able to achieve slightly better results in the single-sentence problem. The improvement should be attributed to better semantic representation.", "FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."], "highlighted_evidence": ["Table TABREF21 illustrates the experimental results, showing that our method is beneficial for all of NLI tasks. The improvement on the RTE dataset is significant, i.e., 4% absolute gain over the BERTBase.", "FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Results on the test set of GLUE benchmark. The performance was obtained by the official evaluation server. The number below each task is the number of training examples. 
The ”Average” column follows the setting in the BERT paper, which excludes the problematic WNLI task. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. All the listed models are trained on the Wikipedia and the Book Corpus datasets. The results are the average of 5 runs."]}] |
what was the baseline? | {"label_key": "1907.03060", "label_file": "paper_tab_qa", "q_uid": "761de1610e934189850e8fda707dc5239dd58092", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "pivot-based translation relying on a helping language BIBREF10, nduction of phrase tables from monolingual data BIBREF14 , attentional RNN-based model (RNMT) BIBREF2, Transformer model BIBREF18, bi-directional model BIBREF11, multi-to-multi (M2M) model BIBREF8, back-translation BIBREF17", "type": "extractive"}, {"answer": "M2M Transformer", "type": "abstractive"}] | [{"raw_evidence": ["We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .", "As for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .", "After identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 ."], "highlighted_evidence": ["We began with evaluating standard MT paradigms, i.e., PBSMT BIBREF3 and NMT BIBREF1 . As for PBSMT, we also examined two advanced methods: pivot-based translation relying on a helping language BIBREF10 and induction of phrase tables from monolingual data BIBREF14 .\n\nAs for NMT, we compared two types of encoder-decoder architectures: attentional RNN-based model (RNMT) BIBREF2 and the Transformer model BIBREF18 . In addition to standard uni-directional modeling, to cope with the low-resource problem, we examined two multi-directional models: bi-directional model BIBREF11 and multi-to-multi (M2M) model BIBREF8 .\n\nAfter identifying the best model, we also examined the usefulness of a data augmentation method based on back-translation BIBREF17 ."]}, {"raw_evidence": ["In this paper, we challenged the difficult task of Ja INLINEFORM0 Ru news domain translation in an extremely low-resource setting. We empirically confirmed the limited success of well-established solutions when restricted to in-domain data. Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . This paper contains an empirical comparison of several existing approaches and hence we hope that our paper can act as a guideline to researchers attempting to tackle extremely low-resource translation.", "FLOAT SELECTED: Table 13: Summary of our investigation: BLEU scores of the best NMT systems at each step."], "highlighted_evidence": ["Then, to incorporate out-of-domain data, we proposed a multilingual multistage fine-tuning approach and observed that it substantially improves Ja INLINEFORM1 Ru translation by over 3.7 BLEU points compared to a strong baseline, as summarized in Table TABREF53 . ", "FLOAT SELECTED: Table 13: Summary of our investigation: BLEU scores of the best NMT systems at each step."]}] |
How larger are the training sets of these versions of ELMo compared to the previous ones? | {"label_key": "1911.10049", "label_file": "paper_tab_qa", "q_uid": "603fee7314fa65261812157ddfc2c544277fcf90", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "By 14 times.", "type": "abstractive"}, {"answer": "up to 1.95 times larger", "type": "abstractive"}] | [{"raw_evidence": ["Recently, ELMoForManyLangs BIBREF6 project released pre-trained ELMo models for a number of different languages BIBREF7. These models, however, were trained on a significantly smaller datasets. They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. The quality of these models is questionable. For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens. The difference of each model on the word analogy task is shown in Figure FIGREF16 in Section SECREF5. As the results of the ELMoForManyLangs embeddings are significantly worse than using the full corpus, we can conclude that these embeddings are not of sufficient quality. For that reason, we computed ELMo embeddings for seven languages on much larger corpora. As this effort requires access to large amount of textual data and considerable computational resources, we made the precomputed models publicly available by depositing them to Clarin repository."], "highlighted_evidence": ["They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. ", "For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens."]}, {"raw_evidence": ["Although ELMo is trained on character level and is able to handle out-of-vocabulary words, a vocabulary file containing most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese.", "FLOAT SELECTED: Table 1: The training corpora used. We report their size (in billions of tokens), and ELMo vocabulary size (in millions of tokens)."], "highlighted_evidence": ["The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words.", "FLOAT SELECTED: Table 1: The training corpora used. We report their size (in billions of tokens), and ELMo vocabulary size (in millions of tokens)."]}] |
What is the improvement in performance for Estonian in the NER task? | {"label_key": "1911.10049", "label_file": "paper_tab_qa", "q_uid": "09a1173e971e0fcdbf2fbecb1b077158ab08f497", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "5 percent points.", "type": "abstractive"}, {"answer": "0.05 F1", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."], "highlighted_evidence": ["FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."]}, {"raw_evidence": ["FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."], "highlighted_evidence": ["FLOAT SELECTED: Table 4: The results of NER evaluation task, averaged over 5 training and evaluation runs. The scores are average F1 score of the three named entity classes. The columns show FastText, ELMo, and the difference between them (∆(E − FT ))."]}] |
what is the state of the art on WSJ? | {"label_key": "1812.06864", "label_file": "paper_tab_qa", "q_uid": "70e9210fe64f8d71334e5107732d764332a81cb1", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "CNN-DNN-BLSTM-HMM", "type": "abstractive"}, {"answer": "HMM-based system", "type": "extractive"}] | [{"raw_evidence": ["Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92. DeepSpeech 2 shows a WER of INLINEFORM1 but uses 150 times more training data for the acoustic model and huge text datasets for LM training. Finally, the state-of-the-art among end-to-end systems trained only on WSJ, and hence the most comparable to our system, uses lattice-free MMI on augmented data (with speed perturbation) and gets INLINEFORM2 WER. Our baseline system, trained on mel-filterbanks, and decoded with a n-gram language model has a INLINEFORM3 WER. Replacing the n-gram LM by a convolutional one reduces the WER to INLINEFORM4 , and puts our model on par with the current best end-to-end system. Replacing the speech features by a learnable frontend finally reduces the WER to INLINEFORM5 and then to INLINEFORM6 when doubling the number of learnable filters, improving over DeepSpeech 2 and matching the performance of the best HMM-DNN system.", "FLOAT SELECTED: Table 1: WER (%) on the open vocabulary task of WSJ."], "highlighted_evidence": ["Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92.", "FLOAT SELECTED: Table 1: WER (%) on the open vocabulary task of WSJ."]}, {"raw_evidence": ["Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92. DeepSpeech 2 shows a WER of INLINEFORM1 but uses 150 times more training data for the acoustic model and huge text datasets for LM training. Finally, the state-of-the-art among end-to-end systems trained only on WSJ, and hence the most comparable to our system, uses lattice-free MMI on augmented data (with speed perturbation) and gets INLINEFORM2 WER. Our baseline system, trained on mel-filterbanks, and decoded with a n-gram language model has a INLINEFORM3 WER. Replacing the n-gram LM by a convolutional one reduces the WER to INLINEFORM4 , and puts our model on par with the current best end-to-end system. Replacing the speech features by a learnable frontend finally reduces the WER to INLINEFORM5 and then to INLINEFORM6 when doubling the number of learnable filters, improving over DeepSpeech 2 and matching the performance of the best HMM-DNN system."], "highlighted_evidence": ["Table TABREF11 shows Word Error Rates (WER) on WSJ for the current state-of-the-art and our models. 
The current best model trained on this dataset is an HMM-based system which uses a combination of convolutional, recurrent and fully connected layers, as well as speaker adaptation, and reaches INLINEFORM0 WER on nov92."]}] |
what is the size of the augmented dataset? | {"label_key": "1811.12254", "label_file": "paper_tab_qa", "q_uid": "57f23dfc264feb62f45d9a9e24c60bd73d7fe563", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "609", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Speech datasets used. Note that HAPD, HAFP and FP only have samples from healthy subjects. Detailed description in App. 2.", "All datasets shown in Tab. SECREF2 were transcribed manually by trained transcriptionists, employing the same list of annotations and protocols, with the same set of features extracted from the transcripts (see Sec. SECREF3 ). HAPD and HAFP are jointly referred to as HA.", "Binary classification of each speech transcript as AD or HC is performed. We do 5-fold cross-validation, stratified by subject so that each subject's samples do not occur in both training and testing sets in each fold. The minority class is oversampled in the training set using SMOTE BIBREF14 to deal with the class imbalance. We consider a Random Forest (100 trees), Naïve Bayes (with equal priors), SVM (with RBF kernel), and a 2-layer neural network (10 units, Adam optimizer, 500 epochs) BIBREF15 . Additionally, we augment the DB data with healthy samples from FP with varied ages.", "We augment DB with healthy samples from FP with varying ages (Tab. SECREF11 ), considering 50 samples for each 15 year duration starting from age 30. Adding the same number of samples from bins of age greater than 60 leads to greater increase in performance. This could be because the average age of participants in the datasets (DB, HA etc.) we use are greater than 60. Note that despite such a trend, addition of healthy data produces fair classifiers with respect to samples with age INLINEFORM0 60 and those with age INLINEFORM1 60 (balanced F1 scores of 75.6% and 76.1% respectively; further details in App. SECREF43 .)", "FLOAT SELECTED: Table 3: Augmenting DB with healthy data of varied ages. Scores averaged across 4 classifiers."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Speech datasets used. Note that HAPD, HAFP and FP only have samples from healthy subjects. Detailed description in App. 2.", "\nAll datasets shown in Tab. SECREF2 were transcribed manually by trained transcriptionists, employing the same list of annotations and protocols, with the same set of features extracted from the transcripts (see Sec. SECREF3 ). HAPD and HAFP are jointly referred to as HA.", "Additionally, we augment the DB data with healthy samples from FP with varied ages.", "We augment DB with healthy samples from FP with varying ages (Tab. SECREF11 ), considering 50 samples for each 15 year duration starting from age 30. ", "FLOAT SELECTED: Table 3: Augmenting DB with healthy data of varied ages. Scores averaged across 4 classifiers."]}] |
How many sentences does the dataset contain? | {"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "d51dc36fbf6518226b8e45d4c817e07e8f642003", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "3606", "type": "abstractive"}, {"answer": "6946", "type": "extractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics"], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics"]}, {"raw_evidence": ["In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset."], "highlighted_evidence": ["In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset."]}] |
What is the baseline? | {"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "cb77d6a74065cb05318faf57e7ceca05e126a80d", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "CNN modelBIBREF0, Stanford CRF modelBIBREF21", "type": "extractive"}, {"answer": "Bam et al. SVM, Ma and Hovy w/glove, Lample et al. w/fastText, Lample et al. w/word2vec", "type": "abstractive"}] | [{"raw_evidence": ["Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone."], "highlighted_evidence": ["First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21."]}, {"raw_evidence": ["FLOAT SELECTED: Table 6: Comparison with previous models based on Test F1 score"], "highlighted_evidence": ["FLOAT SELECTED: Table 6: Comparison with previous models based on Test F1 score"]}] |
What is the size of the dataset? | {"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "a1b3e2107302c5a993baafbe177684ae88d6f505", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Dataset contains 3606 total sentences and 79087 total entities.", "type": "abstractive"}, {"answer": "ILPRL contains 548 sentences, OurNepali contains 3606 sentences", "type": "abstractive"}] | [{"raw_evidence": ["After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics"], "highlighted_evidence": ["The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics"]}, {"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "Dataset Statistics ::: OurNepali dataset", "Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25.", "Dataset Statistics ::: ILPRL dataset", "After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "Dataset Statistics ::: OurNepali dataset\nSince, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset.", "Dataset Statistics ::: ILPRL dataset\nAfter much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. ", " The statistics of both the dataset is presented in table TABREF23."]}] |
How many different types of entities exist in the dataset? | {"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "1462eb312944926469e7cee067dfc7f1267a2a8c", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "OurNepali contains 3 different types of entities, ILPRL contains 4 different types of entities", "type": "abstractive"}, {"answer": "three", "type": "extractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments. The dataset is divided into three parts with 64%, 16% and 20% of the total dataset into training set, development set and test set respectively."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments."]}, {"raw_evidence": ["Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25."], "highlighted_evidence": ["This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG)."]}] |
How big is the new Nepali NER dataset? | {"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "f59f1f5b528a2eec5cfb1e49c87699e0c536cc45", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "3606 sentences", "type": "abstractive"}, {"answer": "Dataset contains 3606 total sentences and 79087 total entities.", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Dataset statistics", "The statistics of both the dataset is presented in table TABREF23.\n\n"]}, {"raw_evidence": ["After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics"], "highlighted_evidence": ["The statistics of both the dataset is presented in table TABREF23.", "FLOAT SELECTED: Table 1: Dataset statistics"]}] |
What is the performance improvement of the grapheme-level representation model over the character-level model? | {"label_key": "1908.05828", "label_file": "paper_tab_qa", "q_uid": "9bd080bb2a089410fd7ace82e91711136116af6c", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "On OurNepali test dataset Grapheme-level representation model achieves average 0.16% improvement, on ILPRL test dataset it achieves maximum 1.62% improvement", "type": "abstractive"}, {"answer": "BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration", "type": "extractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 5: Comparison of different variation of our models"], "highlighted_evidence": ["FLOAT SELECTED: Table 5: Comparison of different variation of our models"]}, {"raw_evidence": ["We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help Nepali language but also other languages falling under the umbrellas of Devanagari languages. Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively."], "highlighted_evidence": ["We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration."]}] |
What is the performance of classifiers? | {"label_key": "2002.02070", "label_file": "paper_tab_qa", "q_uid": "d53299fac8c94bd0179968eb868506124af407d1", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Table TABREF10, The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set, While these classifiers did not perform particularly well, they provide a good starting point for future work on this subject", "type": "extractive"}, {"answer": "Using F1 Micro measure, the KNN classifier perform 0.6762, the RF 0.6687, SVM 0.6712 and MLP 0.6778.", "type": "abstractive"}] | [{"raw_evidence": ["In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set.", "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."], "highlighted_evidence": ["In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set.", "FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."]}, {"raw_evidence": ["FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers."]}] |
What classifiers have been trained? | {"label_key": "2002.02070", "label_file": "paper_tab_qa", "q_uid": "29f2954098f055fb19d9502572f085862d75bf61", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "KNN\nRF\nSVM\nMLP", "type": "abstractive"}, {"answer": " K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), Multi-layer Perceptron (MLP)", "type": "extractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers.", "In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers. The KNN classifier seem to perform the best across all four metrics. This is probably due to the multi-class nature of the data set."], "highlighted_evidence": ["FLOAT SELECTED: Table 2: Evaluation metrics for all classifiers.", "In order to evaluate our classifiers, we perform 4-fold cross validation on a shuffled data set. Table TABREF10 shows the F1 micro and F1 macro scores for all the classifiers."]}, {"raw_evidence": ["We train a series of classifiers in order to classify car-speak. We train three classifiers on the review vectors that we prepared in Section SECREF8. The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13."], "highlighted_evidence": [" The classifiers we use are K Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), and Multi-layer Perceptron (MLP) BIBREF13."]}] |
What other sentence embeddings methods are evaluated? | {"label_key": "1908.10084", "label_file": "paper_tab_qa", "q_uid": "e2db361ae9ad9dbaa9a85736c5593eb3a471983d", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "GloVe, BERT, Universal Sentence Encoder, TF-IDF, InferSent", "type": "abstractive"}, {"answer": "Avg. GloVe embeddings, Avg. fast-text embeddings, Avg. BERT embeddings, BERT CLS-vector, InferSent - GloVe and Universal Sentence Encoder.", "type": "abstractive"}] | [{"raw_evidence": ["FLOAT SELECTED: Table 1: Spearman rank correlation ρ between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks. Performance is reported by convention as ρ × 100. STS12-STS16: SemEval 2012-2016, STSb: STSbenchmark, SICK-R: SICK relatedness dataset.", "FLOAT SELECTED: Table 3: Average Pearson correlation r and average Spearman’s rank correlation ρ on the Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Misra et al. proposes 10-fold cross-validation. We additionally evaluate in a cross-topic scenario: Methods are trained on two topics, and are evaluated on the third topic."], "highlighted_evidence": ["FLOAT SELECTED: Table 1: Spearman rank correlation ρ between the cosine similarity of sentence representations and the gold labels for various Textual Similarity (STS) tasks. Performance is reported by convention as ρ × 100. STS12-STS16: SemEval 2012-2016, STSb: STSbenchmark, SICK-R: SICK relatedness dataset.", "FLOAT SELECTED: Table 3: Average Pearson correlation r and average Spearman’s rank correlation ρ on the Argument Facet Similarity (AFS) corpus (Misra et al., 2016). Misra et al. proposes 10-fold cross-validation. We additionally evaluate in a cross-topic scenario: Methods are trained on two topics, and are evaluated on the third topic."]}, {"raw_evidence": ["We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:", "The results can be found in Table TABREF15. SBERT is able to achieve the best performance in 5 out of 7 tasks. The average performance increases by about 2 percentage points compared to InferSent as well as the Universal Sentence Encoder. Even though transfer learning is not the purpose of SBERT, it outperforms other state-of-the-art sentence embeddings methods on this task.", "FLOAT SELECTED: Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation."], "highlighted_evidence": ["We compare the SBERT sentence embeddings to other sentence embeddings methods on the following seven SentEval transfer tasks:", "The results can be found in Table TABREF15.", "FLOAT SELECTED: Table 5: Evaluation of SBERT sentence embeddings using the SentEval toolkit. SentEval evaluates sentence embeddings on different sentence classification tasks by training a logistic regression classifier using the sentence embeddings as features. Scores are based on a 10-fold cross-validation."]}] |
which non-english language had the best performance? | {"label_key": "1806.04511", "label_file": "paper_tab_qa", "q_uid": "e79a5b6b6680bd2f63e9f4adbaae1d7795d81e38", "benchmark_name": "uda_paper_tab_qa", "benchmark_type": "uda", "sub_benchmark": "paper_tab_qa", "split": "default"} | [{"answer": "Russian", "type": "extractive"}, {"answer": "Russsian", "type": "abstractive"}] | [{"raw_evidence": ["Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages. Building separate models for each language requires both labeled and unlabeled data. Even though having lots of labeled data in every language is the perfect case, it is unrealistic. Therefore, eliminating the resource requirement in this resource-constrained task is crucial. The fact that machine translation can be used in reusing models from different languages is promising for reducing the data requirements."], "highlighted_evidence": ["Considering the improvements over the majority baseline achieved by the RNN model for both non-English (on the average 22.76% relative improvement; 15.82% relative improvement on Spanish, 72.71% vs. 84.21%, 30.53% relative improvement on Turkish, 56.97% vs. 74.36%, 37.13% relative improvement on Dutch, 59.63% vs. 81.77%, and 7.55% relative improvement on Russian, 79.60% vs. 85.62%) and English test sets (27.34% relative improvement), we can draw the conclusion that our model is robust to handle multiple languages."]}, {"raw_evidence": ["FLOAT SELECTED: Table 3: Accuracy results (%) for RNN-based approach compared with majority baseline and lexicon-based baseline."], "highlighted_evidence": ["FLOAT SELECTED: Table 3: Accuracy results (%) for RNN-based approach compared with majority baseline and lexicon-based baseline."]}] |
UDA Paper Tab QA (orgrctera/uda_paper_tab_qa)
Overview
This dataset is the Paper Tab slice of the UDA (Unstructured Document Analysis) benchmark: 393 question–answer instances over academic research papers, packaged for retrieval-oriented evaluation in RAG pipelines. Questions are designed so that answers are grounded in tabular and semi-structured content (figures, result tables, statistics blocks) alongside surrounding paper text—mirroring how analysts read papers in practice.
UDA is a benchmark suite for Retrieval-Augmented Generation (RAG) over messy, real-world documents (notably PDF and HTML), where evidence mixes long-form prose, tables, and layout-specific structure. The full suite comprises 2,965 real-world documents and 29,590 expert-annotated Q&A pairs across finance, academia, and knowledge-base settings. Within the academia track, Paper Tab complements Paper Text by focusing on table-centric QA rather than purely narrative spans.
In the UDA paper’s reported configuration, the Paper Tab subset covers roughly 307 source PDFs with 393 labeled Q&A pairs (multi-page papers, long contexts; the paper reports about 6.1k words and 11 pages per document on average; these statistics refer to the original benchmark release).
This Hub release stores one row per retrieval task instance: systems must retrieve the right regions (tables, captions, related paragraphs) and ground answers in that evidence—consistent with UDA’s emphasis on parsing, chunking, and retrieval as first-class problems, not only generation.
Task
- Task type: Retrieval (within a RAG / document-analysis pipeline) for Paper Tab QA—question answering over scientific papers where tables and structured displays carry key evidence.
- Input: A natural-language question (`input`) about results, comparisons, settings, or quantities presented in tabular or semi-tabular form in the paper.
- Supervision / reference: `expected_output` is a JSON string with answers (possibly multiple, with types such as extractive vs. abstractive consolidations) and evidence objects linking to raw and highlighted supporting strings (including table references where applicable). `metadata` records UDA identifiers (`sub_benchmark`: `paper_tab_qa`).
Evaluation typically combines retrieval quality (whether the correct table rows, captions, or adjacent text are retrieved) with answer correctness (string overlap, numeric match, or protocol-specific scoring), following UDA and the original benchmark tooling.
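To make the scoring side concrete, here is a small sketch of an answer-correctness check. It is not the official UDA or source-benchmark scorer, only a common SQuAD-style token-overlap heuristic, and the helper names (`token_f1`, `best_f1`) are illustrative.

```python
import json
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase, drop punctuation, collapse whitespace (SQuAD-style normalization)."""
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    return re.sub(r"\s+", " ", text).strip()


def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted answer and one reference answer."""
    pred, ref = normalize(prediction).split(), normalize(reference).split()
    if not pred or not ref:
        return float(pred == ref)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


def best_f1(prediction: str, expected_output: str) -> float:
    """Score against every reference answer in expected_output and keep the best match."""
    references = [a["answer"] for a in json.loads(expected_output).get("answers", [])]
    if not references:
        return 0.0
    return max(token_f1(prediction, r) for r in references)
```

Because many questions carry both an extractive and an abstractive reference, taking the maximum over references (rather than a single gold string) is the more forgiving and usual choice.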
Background
Why “Paper Tab” in UDA?
Scientific PDFs bundle narrative argument with dense numeric and structural evidence in tables and figures. Models that only read running text often miss the answer; systems must align questions to the right table, read headers and units, and sometimes integrate caption text and body paragraphs. Paper Tab QA encodes that difficulty inside a broader RAG benchmark: documents stay in realistic, unstructured form, stressing document IR and layout-aware parsing upstream of any LLM.
UDA benchmark (suite containing this slice)
UDA revisits LLM- and RAG-based document analysis across domains using thousands of real-world documents and tens of thousands of expert-annotated Q&A pairs, with sources kept in original formats to stress parsing and retrieval as well as generation. Subsets span finance (e.g. hybrid table–text reports) and academia (paper text vs. paper tables), plus knowledge-base-style tasks in the full suite.
Data fields
| Column | Type | Description |
|---|---|---|
| `input` | string | Question text posed over the paper. |
| `expected_output` | string | JSON with `answers` (list of objects with `answer` and `type`, e.g. extractive / abstractive) and `evidence` (list of objects with `raw_evidence` and `highlighted_evidence` string lists, often pointing at table lines and surrounding text). |
| `metadata` | struct | `benchmark_name` (`uda_paper_tab_qa`), `benchmark_type` (`uda`), `split`, `sub_benchmark` (`paper_tab_qa`), and `value` (a JSON string with identifiers such as `label_key`, often an arXiv-style id, plus `label_file` and `q_uid`). |
Splits: a single `default` split with 393 examples.
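For orientation, here is a minimal loading sketch following the schema above. The repository id and the `default` split name are taken from this card, and the variable names are illustrative; verify both against the Hub viewer before relying on them.

```python
import json
from datasets import load_dataset

# Minimal sketch: load the Paper Tab slice and unpack one row.
# Repo id and split name follow this card; check them on the Hub before use.
ds = load_dataset("orgrctera/uda_paper_tab_qa", split="default")

row = ds[0]
question = row["input"]

# expected_output is stored as a JSON string holding answers + evidence.
expected = json.loads(row["expected_output"])
reference_answers = [a["answer"] for a in expected.get("answers", [])]
evidence_items = expected.get("evidence", [])

# metadata.value is another JSON string with UDA identifiers (label_key, q_uid, ...).
uda_ids = json.loads(row["metadata"]["value"])

print(question)
print(reference_answers)
print(uda_ids.get("label_key"), uda_ids.get("q_uid"))
```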
Examples
The following rows illustrate the schema (`expected_output` is formatted for readability; strings may be truncated).
Example 1 — language pairs (table + multi-paragraph evidence)
- `input`: `what language pairs are explored?`
- `expected_output` (excerpt):

    {
      "answers": [
        {
          "answer": "De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru",
          "type": "abstractive"
        },
        {
          "answer": "French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation",
          "type": "extractive"
        }
      ],
      "evidence": [
        {
          "raw_evidence": [
            "For MultiUN corpus, we use four languages: English (En) is set as the pivot language...",
            "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18...",
            "FLOAT SELECTED: Table 1: Data Statistics."
          ],
          "highlighted_evidence": ["..."]
        }
      ]
    }

- `metadata.value` (parsed):

    {
      "label_key": "1912.01214",
      "label_file": "paper_tab_qa",
      "q_uid": "5eda469a8a77f028d0c5f1acd296111085614537"
    }
Example 2 — quantitative improvement (table-centric)
- `input`: `By how much is performance improved with multimodality?`
- `expected_output` (excerpt):

    {
      "answers": [
        {
          "answer": "by 2.3-6.8 points in f1 score for intent recognition and 0.8-3.5 for slot filling",
          "type": "abstractive"
        },
        {
          "answer": "F1 score increased from 0.89 to 0.92",
          "type": "abstractive"
        }
      ],
      "evidence": [
        {
          "raw_evidence": [
            "FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models"
          ],
          "highlighted_evidence": [
            "FLOAT SELECTED: Table 1: Speech Embeddings Experiments: Precision/Recall/F1-scores (%) of NLU Models"
          ]
        }
      ]
    }
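The evidence lists shown above can also drive a simple retrieval check. The following is a sketch only, not the official UDA retrieval metric: it counts a highlighted evidence string as recovered when its normalized text appears inside some retrieved chunk (`evidence_recall` and `_normalize` are illustrative names).

```python
import json
import re


def _normalize(text: str) -> str:
    """Lowercase and collapse whitespace so substring checks are not layout-sensitive."""
    return re.sub(r"\s+", " ", text.lower()).strip()


def evidence_recall(retrieved_chunks: list[str], expected_output: str) -> float:
    """Fraction of highlighted evidence strings covered by at least one retrieved chunk.

    A rough retrieval-quality proxy, not the benchmark's official scoring: an
    evidence string counts as a hit if its normalized text occurs inside any chunk.
    """
    expected = json.loads(expected_output)
    targets = [
        _normalize(h)
        for item in expected.get("evidence", [])
        for h in item.get("highlighted_evidence", [])
    ]
    if not targets:
        return 1.0
    chunks = [_normalize(c) for c in retrieved_chunks]
    hits = sum(any(t in c for c in chunks) for t in targets)
    return hits / len(targets)
```

Exact substring matching is deliberately strict; chunkers that split mid-sentence will under-count, so fuzzier overlap measures may be preferable in practice.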
References
UDA benchmark (primary reference for Paper Tab QA)
Yulong Hui, Yao Lu, Huanchen Zhang. UDA: A Benchmark Suite for Retrieval Augmented Generation in Real-world Document Analysis. NeurIPS 2024 (Datasets and Benchmarks Track).
- Abstract (short): Introduces UDA with thousands of real-world documents and tens of thousands of expert-annotated Q&A pairs spanning finance, academia, and knowledge bases; evaluates LLM- and RAG-based pipelines and highlights the role of data parsing and retrieval in end-to-end document analysis.
- arXiv: https://arxiv.org/abs/2406.15187
- NeurIPS proceedings: https://proceedings.neurips.cc/paper_files/paper/2024/hash/7c06759d1a8567f087b02e8589454917-Abstract-Datasets_and_Benchmarks_Track.html
- Code & resources: https://github.com/qinchuanhui/UDA-Benchmark
Related Hub resources
- Aggregated UDA QA release (reference): qinchuanhui/UDA-QA
Citation
If you use this dataset, please cite UDA (and this dataset record as appropriate):
    @article{hui2024uda,
      title   = {UDA: A Benchmark Suite for Retrieval Augmented Generation in Real-world Document Analysis},
      author  = {Hui, Yulong and Lu, Yao and Zhang, Huanchen},
      journal = {arXiv preprint arXiv:2406.15187},
      year    = {2024}
    }
License
Use this dataset in compliance with the UDA benchmark and upstream data licenses. Verify conditions for your use case (including redistribution or commercial use) against the official UDA repository and documentation.