# UDA Paper Tab QA (`orgrctera/uda_paper_tab_qa_qa`)

## Overview
This dataset is the Paper Tab QA slice of the UDA (Unstructured Document Analysis) benchmark: 393 question–answer instances over academic research papers (PDF), packaged for question answering evaluation in RAG and document-analysis pipelines.
UDA (Hui et al., NeurIPS 2024 Datasets & Benchmarks) is a suite for Retrieval-Augmented Generation over messy, real-world documents. Sources are kept in original formats (e.g. PDF, HTML) so that parsing, chunking, and retrieval are evaluated alongside generation. The benchmark spans finance, academia, and Wikipedia-style knowledge bases, with 2,965 documents and 29,590 expert-annotated Q&A pairs in total.
Within the academia track, PaperTab targets questions whose answers are grounded in tabular and figure-related content inside scholarly PDFs (as opposed to narrative-only PaperText). The UDA paper reports 307 PaperTab documents and 393 Q&A pairs for this subset; this Hub release exposes 393 rows in the default split, aligned with that configuration.
## Task
- Task type: Question answering (QA) for Paper Tab QA — answer natural-language questions about information that appears in tables, statistics, and related structured fragments within academic papers.
- Input: A question string (`input`) about the paper (e.g. reported metrics, experimental settings, language pairs, or results tied to tabular evidence).
- Supervision / reference: `expected_output` is a JSON string with gold answers (possibly multiple references with `abstractive`, `extractive`, or other answer types) and evidence spans (e.g. `raw_evidence`, `highlighted_evidence`, including lines such as `FLOAT SELECTED: Table …` that point to table regions). `metadata` records UDA identifiers (`sub_benchmark`: `paper_tab_qa`).
Systems are typically evaluated on answer correctness (exact or semantic match to reference answers) and grounding (whether retrieved or generated content is supported by the same tables and passages as the gold evidence), consistent with UDA’s emphasis on end-to-end document analysis.
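To make the multi-reference setup concrete, here is a small scoring sketch in Python: normalized exact match and token-level F1 against every gold answer, keeping the best score per question. This is not UDA's official evaluator (see the benchmark repository for that); the normalization and metric choices are common QA conventions used here purely for illustration.

```python
# Illustrative scoring sketch (not UDA's official scorer): normalized exact
# match and token-level F1, taking the best score over all gold references.
import json
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def token_f1(prediction: str, reference: str) -> float:
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def score(prediction: str, expected_output: str) -> dict:
    """Score one prediction against all gold answers in the JSON string."""
    gold = [a["answer"] for a in json.loads(expected_output)["answers"]]
    return {
        "exact_match": max(float(normalize(prediction) == normalize(g)) for g in gold),
        "f1": max(token_f1(prediction, g) for g in gold),
    }
```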
## Background

### Paper Tab QA in UDA
Academic PDFs mix body text, equations, and tables/figures. Paper Tab QA instances are designed so that answering well requires attending to tabular and semi-structured material—not only adjacent paragraphs. That makes the subset a useful stress test for layout-aware parsing, table retrieval, and multi-evidence reasoning in scientific domains.
### UDA benchmark (suite)
UDA revisits LLM- and RAG-based solutions for document analysis across domains and query types. The authors highlight that data parsing and retrieval design strongly affect answer quality when documents are long and unstructured. The suite and code are public; see references below.
## Data fields
| Column | Type | Description |
|---|---|---|
| `input` | string | Natural-language question about the paper. |
| `expected_output` | string | JSON string with answers (list of `{ "answer", "type" }`) and evidence (lists of supporting strings, often including table captions and `FLOAT SELECTED: …` markers). |
| `metadata` | struct | `benchmark_name` (`uda_paper_tab_qa`), `benchmark_type` (`uda`), `split`, `sub_benchmark` (`paper_tab_qa`), and `value` (JSON string with `label_key`, `label_file`, `q_uid`). |
Splits: a single `default` split with 393 examples.
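A minimal loading sketch follows, assuming the Hub ID from this card's title and the `default` split described above; adjust both if your copy is configured differently.

```python
# Minimal loading sketch; dataset ID and split name follow this card.
import json

from datasets import load_dataset

ds = load_dataset("orgrctera/uda_paper_tab_qa_qa", split="default")
print(len(ds))  # 393 rows, per this card

row = ds[0]
question = row["input"]                    # natural-language question
gold = json.loads(row["expected_output"])  # {"answers": [...], "evidence": [...]}

print(question)
for ans in gold["answers"]:
    print(f'[{ans["type"]}] {ans["answer"]}')

# Evidence entries list the supporting passages / table captions
# (e.g. "FLOAT SELECTED: Table 2: ...") that ground the gold answers.
for ev in gold["evidence"]:
    print(ev.get("highlighted_evidence", []))
```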
## Examples

Illustrative rows from this release (`expected_output` is shown formatted for readability; strings may be truncated in documentation).
### Example 1 — language pairs (abstractive / extractive answers)

- `input`: `what language pairs are explored?`
- `expected_output` (excerpt):
```json
{
  "answers": [
    {
      "answer": "De-En, En-Fr, Fr-En, En-Es, Ro-En, En-De, Ar-En, En-Ru",
      "type": "abstractive"
    },
    {
      "answer": "French-English-Spanish (Fr-En-Es), German-English-French (De-En-Fr) and Romanian-English-German (Ro-En-De), Arabic (Ar), Spanish (Es), and Russian (Ru), and mutual translation between themselves constitutes six zero-shot translation",
      "type": "extractive"
    }
  ],
  "evidence": [
    {
      "raw_evidence": [
        "For MultiUN corpus, we use four languages: English (En) is set as the pivot language...",
        "The statistics of Europarl and MultiUN corpora are summarized in Table TABREF18...",
        "FLOAT SELECTED: Table 1: Data Statistics."
      ],
      "highlighted_evidence": ["..."]
    }
  ]
}
```
### Example 2 — reported accuracy (abstractive answers, table evidence)

- `input`: `What accuracy does the proposed system achieve?`
- `expected_output` (excerpt):
```json
{
  "answers": [
    {
      "answer": "F1 scores of 85.99 on the DL-PS data, 75.15 on the EC-MT data and 71.53 on the EC-UQ data",
      "type": "abstractive"
    },
    {
      "answer": "F1 of 85.99 on the DL-PS dataset (dialog domain); 75.15 on EC-MT and 71.53 on EC-UQ (e-commerce domain)",
      "type": "abstractive"
    }
  ],
  "evidence": [
    {
      "raw_evidence": [
        "FLOAT SELECTED: Table 2: Main results on the DL-PS data.",
        "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."
      ],
      "highlighted_evidence": [
        "FLOAT SELECTED: Table 2: Main results on the DL-PS data.",
        "FLOAT SELECTED: Table 3: Main results on the EC-MT and EC-UQ datasets."
      ]
    }
  ]
}
```
`metadata.value` (example):

```json
{
  "label_key": "1912.01214",
  "label_file": "paper_tab_qa",
  "q_uid": "5eda469a8a77f028d0c5f1acd296111085614537"
}
```
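To trace an instance back to its source paper, the nested `value` field can be decoded as below. This is a small sketch that assumes the `metadata` struct is returned as a Python dict by the `datasets` library; the field names follow the example above.

```python
# Sketch: recover UDA identifiers from a row's nested metadata.
import json


def uda_ids(row: dict) -> dict:
    meta = row["metadata"]             # struct column -> dict
    value = json.loads(meta["value"])  # nested JSON string, see example above
    return {
        "paper_id": value["label_key"],  # e.g. "1912.01214" (arXiv-style id)
        "question_uid": value["q_uid"],
        "sub_benchmark": meta["sub_benchmark"],
    }
```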
## References

### UDA benchmark (primary reference)
Yulong Hui, Yao Lu, Huanchen Zhang. UDA: A Benchmark Suite for Retrieval Augmented Generation in Real-world Document Analysis. NeurIPS 2024 (Datasets and Benchmarks Track).
Abstract: The use of Retrieval-Augmented Generation (RAG) has improved Large Language Models (LLMs) in collaborating with external data, yet significant challenges exist in real-world scenarios. In areas such as academic literature and finance question answering, data are often found in raw text and tables in HTML or PDF formats, which can be lengthy and highly unstructured. In this paper, we introduce a benchmark suite, namely Unstructured Document Analysis (UDA), that involves 2,965 real-world documents and 29,590 expert-annotated Q&A pairs. We revisit popular LLM- and RAG-based solutions for document analysis and evaluate the design choices and answer qualities across multiple document domains and diverse query types. Our evaluation yields interesting findings and highlights the importance of data parsing and retrieval.
- arXiv: https://arxiv.org/abs/2406.15187
- DOI: https://doi.org/10.48550/arXiv.2406.15187
- NeurIPS proceedings: Datasets and Benchmarks Track abstract
- Code & resources: https://github.com/qinchuanhui/UDA-Benchmark
### Related Hub resources
- Aggregated UDA QA (reference): qinchuanhui/UDA-QA
## Citation
If you use this dataset, please cite UDA (and this dataset record as appropriate):
```bibtex
@article{hui2024uda,
  title   = {UDA: A Benchmark Suite for Retrieval Augmented Generation in Real-world Document Analysis},
  author  = {Hui, Yulong and Lu, Yao and Zhang, Huanchen},
  journal = {arXiv preprint arXiv:2406.15187},
  year    = {2024}
}
```
## License
Use this dataset in compliance with the UDA benchmark and underlying paper data licenses and terms. Verify conditions for your use case before redistribution or commercial use.