table_id_paper stringlengths 15 15 | caption stringlengths 14 1.88k | row_header_level int32 1 9 | row_headers large_stringlengths 15 1.75k | column_header_level int32 1 6 | column_headers large_stringlengths 7 1.01k | contents large_stringlengths 18 2.36k | metrics_loc stringclasses 2 | metrics_type large_stringlengths 5 532 | target_entity large_stringlengths 2 330 | table_html_clean large_stringlengths 274 7.88k | table_name stringclasses 9 | table_id stringclasses 9 | paper_id stringlengths 8 8 | page_no int32 1 13 | dir stringclasses 8 | description large_stringlengths 103 3.8k | class_sentence stringlengths 3 120 | sentences large_stringlengths 110 3.92k | header_mention stringlengths 12 1.8k | valid int32 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
D19-1259table_4 | Human performance (single-annotator). | 2 | [['Setting', 'Reasoning-Free'], ['Setting', 'Reasoning-Required']] | 1 | [['Accuracy (%)'], ['Macro-F1 (%)']] | [['90.4', '84.18'], ['78', '72.19']] | column | ['accuracy (%)', 'Macro-F1 (%)'] | ['Reasoning-Free'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> <th>Macro-F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Setting || Reasoning-Free</td> <td>90.4</td> <td>84.18</td> </tr> <tr> <td>Setting || Reasoning-Required... | Table 4 | table_4 | D19-1259 | 7 | emnlp2019 | 5 Experiments . 5.1 Human Performance . Human performance is measured during the annotation: As shown in Algorithm 1, annotations of annotator 1 and annotator 2 are used to calculate reasoning-free and reasoning-required human performance, respectively, against the discussed ground truth labels. Human performance on th... | [2, 2, 2, 1, 2, 2, 1] | ['5 Experiments .', '5.1 Human Performance .', 'Human performance is measured during the annotation: As shown in Algorithm 1, annotations of annotator 1 and annotator 2 are used to calculate reasoning-free and reasoning-required human performance, respectively, against the discussed ground truth labels.', 'Human perfor... | [None, None, None, None, None, None, ['Reasoning-Free', 'Accuracy (%)', 'Macro-F1 (%)']] | 1 |
D19-1268table_2 | Evaluation results on link prediction | 2 | [['Model', 'SE'], ['Model', 'SME'], ['Model', 'TransE'], ['Model', 'TransH'], ['Model', 'TransR'], ['Model', 'TranSparse'], ['Model', 'STransE'], ['Model', 'ITransF'], ['Model', 'HolE'], ['Model', 'ComplEx'], ['Model', 'ANALOGY'], ['Model', 'ProjE'], ['Model', 'RTransE'], ['Model', 'PTransE (ADD, 2-step)'], ['Model', '... | 3 | [['WN18', 'Mean Rank', 'Raw'], ['WN18', 'Mean Rank', 'Filtered'], ['WN18', 'Hits@10(%)', 'Raw'], ['WN18', 'Hits@10(%)', 'Filtered'], ['FB15K', 'Mean Rank', 'Raw'], ['FB15K', 'Mean Rank', 'Filtered'], ['FB15K', 'Hits@10(%)', 'Raw'], ['FB15K', 'Hits@10(%)', 'Filtered']] | [['1011', '985', '68.5', '80.5', '273', '162', '28.8', '39.8'], ['545', '533', '65.1', '74.1', '274', '154', '30.7', '40.8'], ['263', '251', '75.4', '89.2', '243', '125', '34.9', '47.1'], ['318', '303', '75.4', '86.7', '212', '87', '45.7', '64.4'], ['238', '225', '79.8', '92', '198', '77', '48.2', '68.7'], ['223', '211... | column | ['Mean Rank', 'Mean Rank', 'Hits@10(%)', 'Hits@10(%)', 'Mean Rank', 'Mean Rank', 'Hits@10(%)', 'Hits@10(%)'] | ['OPTransE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WN18 || Mean Rank || Raw</th> <th>WN18 || Mean Rank || Filtered</th> <th>WN18 || Hits@10(%) || Raw</th> <th>WN18 || Hits@10(%) || Filtered</th> <th>FB15K || Mean Rank || Raw</th> <th>FB15K |... | Table 2 | table_2 | D19-1268 | 7 | emnlp2019 | 4.4 Results Table 2 shows the performances of different methods on the link prediction task according to various metrics. Numbers in bold mean the best results among all methods and the underlined ones mean the second best. The evaluation results of baselines are from their original work, and ”-” in the table means the... | [1, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2] | ['4.4 Results Table 2 shows the performances of different methods on the link prediction task according to various metrics.', 'Numbers in bold mean the best results among all methods and the underlined ones mean the second best.', 'The evaluation results of baselines are from their original work, and ”-” in the table m... | [None, None, None, None, ['PTransE (ADD, 2-step)', 'PTransE (MUL, 2-step)', 'PTransE (ADD, 3-step)', 'OPTransE', 'TransE', 'RPE (ACOM)', 'RPE (MCOM)', 'TransR'], None, ['OPTransE'], ['OPTransE'], ['OPTransE', 'RTransE', 'PTransE (ADD, 2-step)', 'PTransE (MUL, 2-step)', 'PTransE (ADD, 3-step)', 'PaSKoGE', 'RPE (ACOM)', ... | 1 |
D19-1272table_2 | Overall average results by model (with % changes from the input) | 2 | [['Model', 'Input'], ['Model', 'SMERTI-Transformer'], ['Model', 'SMERTI-RNN'], ['Model', 'W2V-STEM'], ['Model', 'GWN-STEM'], ['Model', 'NWN-STEM']] | 1 | [['SPA'], ['SLOR'], ['CSS'], ['STES']] | [['-', '0.5962', '0.1166', '-'], ['0.6606', '0.5255 (-11.86%)', '0.2857 (+145.03%)', '0.4337'], ['0.6574', '0.5122 (-14.09%)', '0.2927 (+151.03%)', '0.4354'], ['0.6667', '0.4672 (-21.64%)', '0.2851 (+144.51%)', '0.4197'], ['0.8903', '0.4864 (-18.42%)', '0.1419 (+21.70%)', '0.2934'], ['0.9116', '0.4832 (-18.95%)', '0.13... | column | ['SPA', 'SLOR', 'CSS', 'STES'] | ['SMERTI-Transformer', 'SMERTI-RNN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SPA</th> <th>SLOR</th> <th>CSS</th> <th>STES</th> </tr> </thead> <tbody> <tr> <td>Model || Input</td> <td>-</td> <td>0.5962</td> <td>0.1166</td> <td>-</td> </tr> ... | Table 2 | table_2 | D19-1272 | 10 | emnlp2019 | Table 2 shows overall average results by model. As seen in Table 2, both SMERTI variations achieve higher STES and outperform the other models overall, with the WordNet models performing the worst. SMERTI excels especially on fluency and content similarity. The transformer variation achieves slightly higher SLOR, while... | [1, 1, 2, 1, 0, 0, 0] | ['Table 2 shows overall average results by model.', 'As seen in Table 2, both SMERTI variations achieve higher STES and outperform the other models overall, with the WordNet models performing the worst.', 'SMERTI excels especially on fluency and content similarity.', 'The transformer variation achieves slightly higher ... | [None, ['SMERTI-Transformer', 'SMERTI-RNN', 'STES'], ['SMERTI-Transformer', 'SMERTI-RNN'], ['SMERTI-Transformer', 'SLOR', 'SMERTI-RNN', 'CSS'], None, None, None] | 1 |
D19-1278table_2 | Model performance (Precision, Recall, F1) on PMB data (v.2.1.0, test set); models were trained on gold standard data. | 1 | [['van Noord et al. (2018)'], ['seq2seq+copy'], ['seq2graph']] | 1 | [['P'], ['R'], ['F1'], ['illformed']] | [['-', '-', '72.8', '20%'], ['75.57', '67.27', '71.18', '4.12%'], ['75.51', '71.69', '73.55', '0.40%']] | column | ['P', 'R', 'F1', 'illformed'] | ['seq2graph'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>illformed</th> </tr> </thead> <tbody> <tr> <td>van Noord et al. (2018)</td> <td>-</td> <td>-</td> <td>72.8</td> <td>20%</td> </tr>... | Table 2 | table_2 | D19-1278 | 8 | emnlp2019 | 5 Results System comparison Table 2 summarizes our results on the PMB gold data (v.2.1.0, test set). We compare our graph decoder against the system of van Noord et al. (2018) and our implementation of a seq2seq model, enhanced with a copy mechanism. Overall, we see that our graph decoder outperforms both models. Moreo... | [1, 1, 1, 1] | ['5 Results System comparison Table 2 summarizes our results on the PMB gold data (v.2.1.0, test set).', 'We compare our graph decoder against the system of van Noord et al. (2018) and our implementation of a seq2seq model, enhanced with a copy mechanism.', 'Overall, we see that our graph decoder outperforms both model... | [None, ['van Noord et al. (2018)', 'seq2seq+copy'], ['van Noord et al. (2018)', 'seq2seq+copy', 'seq2graph'], ['seq2graph']] | 1 |
D19-1282table_1 | Comparisons with large pre-trained language model fine-tuning with different amount of training data. | 2 | [['Model', 'Random guess'], ['Model', 'GPT-FINETUNING'], ['Model', 'GPT-KAGNET'], ['Model', 'BERT-BASE-FINETUNING'], ['Model', 'BERT-BASE-KAGNET'], ['Model', 'BERT-LARGE-FINETUNING'], ['Model', 'BERT-LARGE-KAGNET'], ['Model', 'Human Performance']] | 2 | [['10(%) of Ihtrain', 'IHdev-Acc. (%)'], ['10(%) of Ihtrain', 'IHtest-Acc. (%)'], ['50(%) of Ihtrain', 'IHdev-Acc. (%)'], ['50(%) of Ihtrain', 'IHtest-Acc. (%)'], ['100(%) of Ihtrain', 'IHdev-Acc. (%)'], ['100(%) of Ihtrain', 'IHtest-Acc. (%)']] | [['20', '20', '20', '20', '20', '20'], ['27.55', '26.51', '32.46', '31.28', '47.35', '45.58'], ['28.13', '26.98', '33.72', '32.33', '48.95', '46.79'], ['30.11', '29.78', '38.66', '36.83', '53.48', '53.26'], ['31.05', '30.94', '40.32', '39.01', '55.57', '56.19'], ['35.71', '32.88', '55.45', '49.88', '60.61', '55.84'], [... | column | ['IHdev-Acc. (%)', 'IHtest-Acc. (%)', 'IHdev-Acc. (%)', 'IHtest-Acc. (%)', 'IHdev-Acc. (%)', 'IHtest-Acc. (%)'] | ['GPT-KAGNET', 'BERT-BASE-KAGNET', 'BERT-LARGE-KAGNET'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>10(%) of Ihtrain || IHdev-Acc. (%)</th> <th>10(%) of Ihtrain || IHtest-Acc. (%)</th> <th>50(%) of Ihtrain || IHdev-Acc. (%)</th> <th>50(%) of Ihtrain || IHtest-Acc. (%)</th> <th>100(%) of Ihtrain... | Table 1 | table_1 | D19-1282 | 6 | emnlp2019 | We conduct the experiments with our in-house splits to investigate whether our KA GNE T can also work well on other universal language encoders (GPT and BERT-BASE), particularly with different fractions of the dataset (say 10%, 50%, 100% of the training data). Table 1 shows that our KA GNE T-based methods using fixed p... | [2, 1, 1] | ['We conduct the experiments with our in-house splits to investigate whether our KA GNE T can also work well on other universal language encoders (GPT and BERT-BASE), particularly with different fractions of the dataset (say 10%, 50%, 100% of the training data).', 'Table 1 shows that our KA GNE T-based methods using fi... | [None, ['GPT-KAGNET', 'BERT-BASE-KAGNET', 'BERT-LARGE-KAGNET', 'GPT-FINETUNING', 'BERT-BASE-FINETUNING', 'BERT-LARGE-FINETUNING'], ['10(%) of Ihtrain']] | 1 |
D19-1284table_4 | Results on WIKISQL. We compare accuracy with weakly-supervised or fully-supervised settings. Our method outperforms previous weakly-supervised methods and most of published fully-supervised methods. | 3 | [['Model', 'Weakly-supervised setting', 'REINFORCE (Williams, 1992)'], ['Model', 'Weakly-supervised setting', 'Iterative ML (Liang et al., 2017)'], ['Model', 'Weakly-supervised setting', 'Hard EM (Liang et al., 2018)'], ['Model', 'Weakly-supervised setting', 'Beam-based MML (Liang et al., 2018)'], ['Model', 'Weakly-sup... | 2 | [['Accuracy', 'dev'], ['Accuracy', 'test']] | [['< 10', '-'], ['70.1', '-'], ['70.2', '-'], ['70.7', '-'], ['71.8', '72.4'], ['74.5', '74.2'], ['74.9', '74.8'], ['70.6', '70.5'], ['84.4', '83.9'], ['69.8', '68'], ['74.5', '73.5'], ['79', '78.5'], ['87.2', '86.2'], ['89.5', '88.7']] | column | ['Accuracy', 'Accuracy'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || dev</th> <th>Accuracy || test</th> </tr> </thead> <tbody> <tr> <td>Model || Weakly-supervised setting || REINFORCE (Williams, 1992)</td> <td>< 10</td> <td>-</td> </tr> ... | Table 4 | table_4 | D19-1284 | 7 | emnlp2019 | Table 4 shows training method significantly outperforms all the weaklysupervised learning algorithms, including 10% gain over the previous state of the art. These results indicate that precomputing a solution set and training a model through hard updates play a significant role to the performance. Given that our method... | [1, 2, 2, 1] | ['Table 4 shows training method significantly outperforms all the weaklysupervised learning algorithms, including 10% gain over the previous state of the art.', 'These results indicate that precomputing a solution set and training a model through hard updates play a significant role to the performance.', 'Given that ou... | [['Fully-supervised setting', 'Weakly-supervised setting'], None, None, ['Ours', 'Fully-supervised setting']] | 1 |
D19-1291table_3 | Results for Intra-turn Relation Prediction with Gold and Predicted Premises | 2 | [['Method', 'All relations'], ['Method', 'Menini et al. (2018)'], ['Method', 'Menini et al. (2018) + RST Features'], ['Method', 'RST Features'], ['Method', 'Morio and Fujita (2018)'], ['Method', 'BERT Devlin et al. (2019)'], ['Method', 'IMHO Context Fine-Tuned BERT'], ['Method', ' + RST Ensemble']] | 2 | [['Precision', 'Gold'], ['Precision', 'Pred'], ['Recall', 'Gold'], ['Recall', 'Pred'], ['F-Score', 'Gold'], ['F-Score', 'Pred']] | [['5', '-', '100', '-', '9', '-'], ['7', '5.9', '82', '80', '13', '11'], ['7.4', '6.1', '83', '81', '13.7', '11.4'], ['6.3', '5.7', '79.5', '77', '11.8', '10.6'], ['10', '-', '48.8', '-', '16.6', '-'], ['12', '11', '67', '60', '20.3', '18.5'], ['14.3', '13.2', '69', '65', '23.7', '21.8'], ['16.7', '15.5', '73', '70.2',... | column | ['Precision', 'Precision', 'Recall', 'Recall', 'F-Score', 'F-Score'] | ['IMHO Context Fine-Tuned BERT', ' + RST Ensemble'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision || Gold</th> <th>Precision || Pred</th> <th>Recall || Gold</th> <th>Recall || Pred</th> <th>F-Score || Gold</th> <th>F-Score || Pred</th> </tr> </thead> <tbody> <tr> <... | Table 3 | table_3 | D19-1291 | 7 | emnlp2019 | Intra-turn Relations . We report the results of our binary classification task in Table 3 in terms of Precision, Recall and F-score for the “true” class, i.e., when a relation is present. We report results given both gold premises and predicted premises (using our best model from 5.1). Our best results are obtained fro... | [2, 1, 1, 1, 1] | ['Intra-turn Relations .', 'We report the results of our binary classification task in Table 3 in terms of Precision, Recall and F-score for the “true” class, i.e., when a relation is present.', 'We report results given both gold premises and predicted premises (using our best model from 5.1).', 'Our best results are o... | [None, ['Precision', 'Recall', 'F-Score'], ['Gold', 'Pred'], [' + RST Ensemble'], ['IMHO Context Fine-Tuned BERT', 'Morio and Fujita (2018)']] | 1 |
D19-1294table_2 | Performance for classifying review segments as good or bad for recommendation justification. | 2 | [['Method', 'BOW-Xgboost'], ['Method', 'CNN'], ['Method', 'LSTM-MaxPool'], ['Method', 'BERT'], ['Method', 'BERT-SA (one epoch)'], ['Method', 'BERT-SA (three epoch)']] | 1 | [['F1'], ['Recall'], ['Precision']] | [['0.559', '0.679', '0.475'], ['0.644', '0.596', '0.7'], ['0.675', '0.703', '0.65'], ['0.747', '0.7', '0.8'], ['0.481', '0.975', '0.32'], ['0.491', '1', '0.325']] | column | ['F1', 'Recall', 'Precision'] | ['BERT-SA (three epoch)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>Recall</th> <th>Precision</th> </tr> </thead> <tbody> <tr> <td>Method || BOW-Xgboost</td> <td>0.559</td> <td>0.679</td> <td>0.475</td> </tr> <tr> <td>Meth... | Table 2 | table_2 | D19-1294 | 3 | emnlp2019 | Table 2 presents results for our binary classification task. The BERT classifier has higher F1-score and precision than other classifiers. The BERTSA model after three epochs only achieves an F1 score of 0.491, which confirms the difference between sentiment analysis and our good/bad task, i.e., even if the segment has... | [1, 1, 1] | ['Table 2 presents results for our binary classification task.', 'The BERT classifier has higher F1-score and precision than other classifiers.', 'The BERTSA model after three epochs only achieves an F1 score of 0.491, which confirms the difference between sentiment analysis and our good/bad task, i.e., even if the seg... | [None, ['BERT', 'F1', 'Precision'], ['BERT-SA (three epoch)', 'F1']] | 1 |
D19-1296table_1 | Performance results of the compared methods. | 1 | [['SECTION'], ['INFOBOX'], ['RELATED'], ['O RACLE RE'], ['PROP']] | 1 | [['Acc'], ['Prec'], ['Rec'], ['F1']] | [['50.56', '100', '1.12', '2.21'], ['53.71', '100', '7.41', '13.81'], ['68.86', '66.23', '76.97', '71.2'], ['75.89', '100', '51.77', '68.22'], ['81.45', '98.28', '64.02', '77.53']] | column | ['Acc', 'Prec', 'Rec', 'F1'] | ['PROP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc</th> <th>Prec</th> <th>Rec</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>SECTION</td> <td>50.56</td> <td>100</td> <td>1.12</td> <td>2.21</td> </tr> <tr> ... | Table 1 | table_1 | D19-1296 | 6 | emnlp2019 | Results . Table 1 lists the accuracy (Acc), precision (Prec), recall (Rec), and F1 of the compared methods. The SECTION method achieved 100% precision, which indicates that our technique of exploiting causality-describing sections in Wikipedia could accurately extract causalities. As the method’s recall indicates, howe... | [2, 1, 1, 1, 1, 1, 1, 1, 1, 1] | ['Results .', 'Table 1 lists the accuracy (Acc), precision (Prec), recall (Rec), and F1 of the compared methods.', 'The SECTION method achieved 100% precision, which indicates that our technique of exploiting causality-describing sections in Wikipedia could accurately extract causalities.', 'As the method’s recall indi... | [None, ['Acc', 'Prec', 'Rec', 'F1'], ['SECTION', 'Prec'], None, ['INFOBOX', 'Prec'], ['RELATED'], ['O RACLE RE', 'Prec'], ['O RACLE RE'], ['PROP', 'F1'], ['PROP', 'O RACLE RE']] | 1 |
D19-1298table_2 | Results on the arXiv dataset. For models with an ∗, we report results from (Cohan et al., 2018). Models are traditional extractive in the first block, neural abstractive in the second block, while neural extractive in the third block. The Oracle (last row) corresponds to using the ground truth labels, obtained (for tra... | 2 | [['Model', 'SumBasic*'], ['Model', 'LSA*'], ['Model', 'LexRank*'], ['Model', 'Attn-Seq2Seq*'], ['Model', 'Pntr-Gen-Seq2Seq*'], ['Model', 'Discourse-aware*'], ['Model', 'Baseline'], ['Model', 'Cheng & Lapata'], ['Model', 'SummaRuNNer'], ['Model', 'Ours-attentive context'], ['Model', 'Ours-concat'], ['Model', 'Lead'], ['... | 1 | [['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L'], ['METEOR']] | [['29.47', '6.95', '26.3', '-'], ['29.91', '7.42', '25.67', '-'], ['33.85', '10.73', '28.99', '-'], ['29.3', '6', '25.56', '-'], ['32.06', '9.04', '25.16', '-'], ['35.8', '11.05', '31.8', '-'], ['42.91', '16.65', '28.53', '21.35'], ['42.24', '15.97', '27.88', '20.97'], ['42.81', '16.52', '28.23', '21.35'], ['43.58', '1... | column | ['ROUGE-1', 'ROUGE-2', 'ROUGE-L', 'METEOR'] | ['Ours-attentive context', 'Ours-concat'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> <th>METEOR</th> </tr> </thead> <tbody> <tr> <td>Model || SumBasic*</td> <td>29.47</td> <td>6.95</td> <td>26.3</td> <td>-... | Table 2 | table_2 | D19-1298 | 7 | emnlp2019 | The performance of all models on arXiv and Pubmed is shown in Table 2 and Table 3, respectively. Follow the work (Kedzie et al., 2018), we use the approximate randomization as the statistical significance test method (Riezler and Maxwell, 2005) with a Bonferroni correction for multiple comparisons, at the confidence le... | [2, 2, 1, 2, 2, 1, 1, 2, 1] | ['The performance of all models on arXiv and Pubmed is shown in Table 2 and Table 3, respectively.', 'Follow the work (Kedzie et al., 2018), we use the approximate randomization as the statistical significance test method (Riezler and Maxwell, 2005) with a Bonferroni correction for multiple comparisons, at the confiden... | [None, None, ['Baseline', 'Cheng & Lapata', 'SummaRuNNer', 'Ours-attentive context', 'Ours-concat', 'SumBasic*', 'LSA*', 'LexRank*', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L'], ['ROUGE-1'], None, ['Baseline', 'Cheng & Lapata', 'SummaRuNNer', 'Ours-attentive context', 'Ours-concat', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L', 'Attn-Seq2Seq*... | 1 |
D19-1300table_1 | Results on the combined CNN/DailyMail test set. We report F1 scores of ROUGE-1 (R1), ROUGE2 (R2), and ROUGE-L (RL). The result of Lead-3 is taken from Dong et al. (2018). | 2 | [['Model', 'Lead-3'], ['Model', 'SummaRuNNer'], ['Model', 'DQN'], ['Model', 'Refresh'], ['Model', 'RNES'], ['Model', 'BANDITSUM'], ['Model', 'HER']] | 2 | [[' ROUGE', 'R1'], ['ROUGE', ' R2'], ['ROUGE', ' RL']] | [['40', '17.5', '36.2'], ['39.6', '16.2', '35.3'], ['39.4', '16.1', '35.6'], ['40', '18.2', '36.6'], ['41.3', '18.9', '37.6'], ['41.5', '18.7', '37.6'], ['42.3', '18.9', '37.9']] | column | ['R1', 'R2', 'RL'] | ['HER'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE || R1</th> <th>ROUGE || R2</th> <th>ROUGE || RL</th> </tr> </thead> <tbody> <tr> <td>Model || Lead-3</td> <td>40</td> <td>17.5</td> <td>36.2</td> </tr> <tr> <... | Table 1 | table_1 | D19-1300 | 7 | emnlp2019 | 4 Experimental Results . 4.1 Quantitative Analysis . We first report the ROUGE metrics on the combined CNN/DailyMail test sets in Table 1 and the separate results in Table 2. We can get several observations from these two tables. Firstly, our model generally performs the best and even surpasses 42 on ROUGE-1 score on t... | [2, 2, 1, 1, 1, 2, 2, 2, 2, 1, 1] | ['4 Experimental Results .', '4.1 Quantitative Analysis .', 'We first report the ROUGE metrics on the combined CNN/DailyMail test sets in Table 1 and the separate results in Table 2.', 'We can get several observations from these two tables.', 'Firstly, our model generally performs the best and even surpasses 42 on ROUG... | [None, None, [' ROUGE'], None, ['HER', 'R1'], None, None, None, None, ['Refresh', 'RNES'], ['BANDITSUM']] | 1 |
D19-1300table_3 | The results of ablation test on the test split of the combined CNN/DailyMail dataset. L and F are short for local net and rough reading. | 2 | [['Model', 'HER'], ['Model', 'HER-3'], ['Model', 'HER-3 w/o policy'], ['Model', 'HER-3 w/o policy&L'], ['Model', 'HER-3 w/o policy&F']] | 2 | [['ROUGE', 'R1'], ['ROUGE', ' R2'], ['ROUGE', ' RL']] | [['42.3', '18.9', '37.9'], ['42', '18.5', '37.6'], ['41.7', '18.3', '37.1'], ['41.2', '18.4', '37'], ['40.6', '18.2', '36.9']] | column | ['R1', 'R2', 'RL'] | ['HER'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE || R1</th> <th>ROUGE || R2</th> <th>ROUGE || RL</th> </tr> </thead> <tbody> <tr> <td>Model || HER</td> <td>42.3</td> <td>18.9</td> <td>37.9</td> </tr> <tr> <t... | Table 3 | table_3 | D19-1300 | 7 | emnlp2019 | 4.2 Ablation Test. Next, we conduct ablation test by removing the modules of the proposed HER step by step. Firstly, we replace the automatic termination mechanism with a fixed extracting strategy that always selects three sentences for every document and we present the model as HER-3. Based on HER-3, we also remove ba... | [2, 1, 2, 1, 1, 1, 2, 2] | ['4.2 Ablation Test.', 'Next, we conduct ablation test by removing the modules of the proposed HER step by step.', 'Firstly, we replace the automatic termination mechanism with a fixed extracting strategy that always selects three sentences for every document and we present the model as HER-3.', 'Based on HER-3, we als... | [None, ['HER'], ['HER-3'], ['HER-3', 'HER-3 w/o policy', 'HER-3 w/o policy&L', 'HER-3 w/o policy&F'], None, ['HER'], None, None] | 1 |
D19-1301table_1 | ROUGE scores on the English evaluation sets of both Gigaword and DUC2004. On Gigaword, the fulllength F-1 based ROUGE scores are reported. On DUC2004, the recall based ROUGE scores are reported. “-” denotes no score is available in that work. | 2 | [['System', 'ABS (Rush et al. 2015)'], ['System', 'ABS+ (Rush et al. 2015)'], ['System', 'RAS-Elman (Chopra et al. 2016)'], ['System', 'words-lvt5k-1sent (Nallapati et al. 2016)'], ['System', 'SEASSbeam (Zhou et al. 2017)'], ['System', 'RNNMRT (Ayana et al. 2016)'], ['System', 'Actor-Critic (Li et al. 2018)'], ['System... | 2 | [['Gigaword', 'R-1'], ['Gigaword', 'R-2'], ['Gigaword', 'R-L'], ['DUC2004', 'R-1'], ['DUC2005', 'R-2'], ['DUC2006', 'R-L']] | [['29.55', '11.32', '26.42', '26.55', '7.06', '22.05'], ['29.76', '11.88', '26.96', '28.18', '8.49', '23.81'], ['33.78', '15.97', '31.15', '28.97', '8.26', '24.06'], ['35.3', '16.64', '32.62', '28.61', '9.42', '25.24'], ['36.15', '17.54', '33.63', '29.21', '9.56', '25.51'], ['36.54', '16.59', '33.44', '30.41', '10.87',... | column | ['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L'] | ['Transformer+ContrastiveAttention', 'Transformer'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Gigaword || R-1</th> <th>Gigaword || R-2</th> <th>Gigaword || R-L</th> <th>DUC2004 || R-1</th> <th>DUC2005 || R-2</th> <th>DUC2006 || R-L</th> </tr> </thead> <tbody> <tr> <td>Sy... | Table 1 | table_1 | D19-1301 | 6 | emnlp2019 | 4.3 Results. 4.3.1 English Results. The experimental results on the English evaluation sets are listed in Table 1. We report the full-length F-1 scores of ROUGE-1 (R-1), ROUGE2 (R-2), and ROUGE-L (R-L) on the evaluation set of the annotated Gigaword, while report the recall-based scores of the R-1, R-2, and R-L on the ... | [2, 2, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1] | ['4.3 Results.', '4.3.1 English Results.', 'The experimental results on the English evaluation sets are listed in Table 1.', 'We report the full-length F-1 scores of ROUGE-1 (R-1), ROUGE2 (R-2), and ROUGE-L (R-L) on the evaluation set of the annotated Gigaword, while report the recall-based scores of the R-1, R-2, and ... | [None, None, None, ['R-1', 'R-2', 'R-L'], ['Transformer+ContrastiveAttention'], None, ['ABS (Rush et al. 2015)', 'ABS+ (Rush et al. 2015)'], ['RAS-Elman (Chopra et al. 2016)'], ['RNNMRT (Ayana et al. 2016)', 'Actor-Critic (Li et al. 2018)', 'StructuredLoss (Edunov et al. 2018)'], ['DRGD (Li et al. 2017)'], ['FactAware ... | 1 |
D19-1301table_2 | The full-length F-1 based ROUGE scores on the Chinese evaluation set of LCSTS. | 2 | [['System', 'RNN context (Hu et al., 2015)'], ['System', 'CopyNet (Gu et al., 2016)'], ['System', 'RNNMRT (Ayana et al., 2016)'], ['System', 'RNNdistraction (Chen et al., 2016)'], ['System', 'DRGD (Li et al., 2017)'], ['System', 'Actor-Critic (Li et al., 2018)'], ['System', 'Global (Lin et al., 2018)'], ['System', 'Tra... | 1 | [[' R-1'], [' R-2'], [' R-L']] | [['29.9', '17.4', '27.2'], ['34.4', '21.6', '31.3'], ['38.2', '25.2', '35.4'], ['35.2', '22.6', '32.5'], ['36.99', '24.15', '34.21'], ['37.51', '24.68', '35.02'], ['39.4', '26.9', '36.5'], ['41.93', '28.28', '38.32'], ['44.35', '30.65', '40.58']] | column | ['R-1', 'R-2', 'R-L'] | ['Transformer+ContrastiveAttention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>System || RNN context (Hu et al., 2015)</td> <td>29.9</td> <td>17.4</td> <td>27.2</td> </tr> <tr> <... | Table 2 | table_2 | D19-1301 | 7 | emnlp2019 | 4.3.2 Chinese Results. Table 2 presents the evaluation results on LCSTS. The upper rows list the performances of the related works, the bottom rows list the performances of our Transformer baseline and the integration of the contrastive attention mechanism into Transformer. We only take character sequences as source-su... | [2, 1, 2, 2, 1, 1] | ['4.3.2 Chinese Results.', 'Table 2 presents the evaluation results on LCSTS.', 'The upper rows list the performances of the related works, the bottom rows list the performances of our Transformer baseline and the integration of the contrastive attention mechanism into Transformer.', 'We only take character sequences a... | [None, None, ['Transformer'], None, ['Transformer'], None] | 1 |
D19-1307table_4 | Human evaluation on extractive summaries. Our system receives significantly higher human ratings on average. “Best%”: in how many percentage of documents a system receives the highest human rating. | 1 | [['Avg. Human Rating'], ['Best%']] | 1 | [['Ours'], ['Refresh'], ['ExtAbsRL']] | [['2.52', '2.27', '1.66'], ['70', '33.3', '6.7']] | row | ['Avg. Human Rating', 'best%'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ours</th> <th>Refresh</th> <th>ExtAbsRL</th> </tr> </thead> <tbody> <tr> <td>Avg. Human Rating</td> <td>2.52</td> <td>2.27</td> <td>1.66</td> </tr> <tr> <td>Best%</td... | Table 4 | table_4 | D19-1307 | 9 | emnlp2019 | Table 4 presents the human evaluation results. Summaries generated by NeuralTD receives significantly higher human evaluation scores than those by Refresh and ExtAbsRL. Also, the average human rating for Refresh is significantly higher than ExtAbsRL. | [1, 1, 1] | ['Table 4 presents the human evaluation results.', 'Summaries generated by NeuralTD receives significantly higher human evaluation scores than those by Refresh and ExtAbsRL.', 'Also, the average human rating for Refresh is significantly higher than ExtAbsRL.'] | [None, ['Ours', ' Refresh', ' ExtAbsRL'], ['Avg. Human Rating', ' Refresh', ' ExtAbsRL']] | 1 |
D19-1307table_5 | Performance of ExtAbsRL with different reward functions, measured in terms of ROUGE (center) and human judgements (right). Using our learned reward yields significantly (p = 0.0057) higher average human rating. “Pref%”: in how many percentage of documents a system receives the higher human rating. | 2 | [['Reward', 'R-L (original)'], ['Reward', 'Learned (ours)']] | 1 | [['R-1'], [' R-2'], [' R-L'], [' Human'], [' Pref%']] | [['40.9', '17.8', '38.5', '1.75', '15'], ['39.2', '17.4', '37.5', '2.2', '75']] | column | ['R-1', 'R-2', 'R-L', 'Human', 'Pref%'] | ['Learned (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> <th>Human</th> <th>Pref%</th> </tr> </thead> <tbody> <tr> <td>Reward || R-L (original)</td> <td>40.9</td> <td>17.8</td> <td>38.5</td... | Table 5 | table_5 | D19-1307 | 9 | emnlp2019 | 7.3 Abstractive Summarisation. Table 5 compares the ROUGE scores of using different rewards to train the extractor in ExtAbsRL (the abstractor is pre-trained, and is applied to rephrase the extracted sentences). Again, when ROUGE is used as rewards, the generated summaries have higher ROUGE scores. We randomly sampled ... | [2, 1, 1, 2, 2, 1, 2, 1, 2] | ['7.3 Abstractive Summarisation.', 'Table 5 compares the ROUGE scores of using different rewards to train the extractor in ExtAbsRL (the abstractor is pre-trained, and is applied to rephrase the extracted sentences).', 'Again, when ROUGE is used as rewards, the generated summaries have higher ROUGE scores.', 'We random... | [None, None, ['Learned (ours)', 'R-1', ' R-2', ' R-L'], None, None, [' Human', 'R-L (original)'], [' Human'], None, None] | 1 |
D19-1308table_4 | Comparison of single-expert selector with state-of-the-art abstractive summarization methods on CNN-DM. R stands for ROUGE (Lin, 2004) | 2 | [['Method', 'PG (See et al., 2017)'], ['Method', 'Bottom-Up (Gehrmann et al., 2018)'], ['Method', 'DCA (Celikyilmaz et al., 2018)'], ['Method', 'SELECTOR & 10-Beam PG (Ours)']] | 1 | [[' R-1'], [' R-2'], [' R-L']] | [['39.53', '17.28', '36.38'], ['41.22', '18.68', '38.34'], ['41.69', '19.47', '37.92'], ['41.72', '18.74', '38.79']] | column | ['R-1', 'R-2', 'R-L'] | ['SELECTOR & 10-Beam PG (Ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Method || PG (See et al., 2017)</td> <td>39.53</td> <td>17.28</td> <td>36.38</td> </tr> <tr> <td>Me... | Table 4 | table_4 | D19-1308 | 8 | emnlp2019 | Comparison with State-of-the-art . Table 4 compares the performance of SELECTOR with the state-of-the-art bottom-up content selection of Gehrmann et al. (2018) in abstractive summarization. SELECTOR passes focus embeddings at the decoding step, whereas the bottom-up selection method only uses the masked words for the c... | [2, 1, 2, 1, 1, 0] | ['Comparison with State-of-the-art .', 'Table 4 compares the performance of SELECTOR with the state-of-the-art bottom-up content selection of Gehrmann et al. (2018) in abstractive summarization.', 'SELECTOR passes focus embeddings at the decoding step, whereas the bottom-up selection method only uses the masked words f... | [None, ['SELECTOR & 10-Beam PG (Ours)'], ['SELECTOR & 10-Beam PG (Ours)'], ['SELECTOR & 10-Beam PG (Ours)', 'Bottom-Up (Gehrmann et al., 2018)'], ['SELECTOR & 10-Beam PG (Ours)', 'Bottom-Up (Gehrmann et al., 2018)', ' R-1', ' R-L'], None] | 1 |
D19-1308table_5 | Training time: Comparison of training time on CNN-DM. See 4.6 for implementation details. | 2 | [['Method', 'PG'], ['Method', '3-M. Decoder'], ['Method', '5-M. Decoder'], ['Method', 'SELECTOR (Ours)'], ['Method', '3-M. SELECTOR (Ours)'], ['Method', '5-M. SELECTOR (Ours)']] | 1 | [[' Training time (ms. / step)']] | [['641.2'], [' 1804.1 (× 2.81)'], [' 2367.6 (× 4.37)'], [' 692.1 (× 1.08)'], [' 740.8 (× 1.16)'], [' 747.6 (× 1.17)']] | column | ['Training time (ms. / step)'] | ['SELECTOR (Ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Training time (ms. / step)</th> </tr> </thead> <tbody> <tr> <td>Method || PG</td> <td>641.2</td> </tr> <tr> <td>Method || 3-M. Decoder</td> <td>1804.1 (× 2.81)</td> </tr> <tr>... | Table 5 | table_5 | D19-1308 | 9 | emnlp2019 | Efficient Training . Table 5 shows that SELECTOR trains up to 3.7 times faster than mixture decoder (Shen et al., 2019). Training time of mixture decoder linearly increases with the number of decoders, while parallel focus inference of SELECTOR makes additional training time negligible. | [2, 1, 1] | ['Efficient Training .', 'Table 5 shows that SELECTOR trains up to 3.7 times faster than mixture decoder (Shen et al., 2019).', 'Training time of mixture decoder linearly increases with the number of decoders, while parallel focus inference of SELECTOR makes additional training time negligible.'] | [None, ['SELECTOR (Ours)', '3-M. Decoder', '5-M. Decoder'], ['3-M. Decoder', '5-M. Decoder', 'SELECTOR (Ours)']] | 1 |
D19-1312table_4 | Evaluation for seen and unseen entities. | 3 | [['Seen', 'Type', 'demonstrative'], ['Seen', 'Type', 'description'], ['Seen', 'Type', 'name'], ['Seen', 'Type', 'pronoun'], ['Seen', 'Type', 'total'], ['Unseen', 'Type', 'demonstrative'], ['Unseen', 'Type', 'description'], ['Unseen', 'Type', 'name'], ['Unseen', 'Type', 'pronoun'], ['Unseen', 'Type', 'total']] | 1 | [['Acc.'], [' Support']] | [['0.00%', '22'], ['48.72%', '862'], ['79.11%', '2547'], ['90.00%', '160'], ['71.82%', '3591'], ['0.00%', '3'], ['20.54%', '409'], ['74.74%', '2423'], ['88.33%', '120'], ['67.72%', '2955']] | column | ['Acc.', 'Support'] | ['Seen'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>Support</th> </tr> </thead> <tbody> <tr> <td>Seen || Type || demonstrative</td> <td>0.00%</td> <td>22</td> </tr> <tr> <td>Seen || Type || description</td> <t... | Table 4 | table_4 | D19-1312 | 8 | emnlp2019 | 6.2 Seen Entities vs. Unseen Entities. In the evaluation, we also distinguished the results for seen and unseen entities. We trained the model on a training set contains 64,353 referring expressions and evaluate the model on a test set with 3,591 referring expressions related to seen entities and 2,955 expressions rela... | [2, 1, 1, 1, 1, 1, 2, 2, 2] | ['6.2 Seen Entities vs. Unseen Entities.', 'In the evaluation, we also distinguished the results for seen and unseen entities.', 'We trained the model on a training set contains 64,353 referring expressions and evaluate the model on a test set with 3,591 referring expressions related to seen entities and 2,955 expressi... | [['Seen', 'Unseen'], ['Seen', 'Unseen'], ['Seen', 'Unseen'], None, ['Seen'], ['Seen', 'Unseen', 'description'], ['name', 'pronoun', 'description'], None, None] | 1 |
D19-1313table_3 | Performances on Twitter dataset. | 2 | [['Models', 'Seq2Seq-BS'], ['Models', 'VAE-SVG-eq'], ['Models', 'GAN'], ['Models', 'DP-GAN'], ['Models', 'IRL'], ['Models', 'D-PAGE'], ['Models', 'PG-BS'], ['Models', 'ours']] | 2 | [['Twitter (Quality)', 'BLEU2'], ['Twitter (Quality)', 'BLEU3'], ['Twitter (Quality)', 'BLEU4'], ['Twitter (Quality)', 'METEOR'], ['Twitter (Diversity)', 'self-BLEU2'], ['Twitter (Diversity)', 'self-BLEU3'], ['Twitter (Diversity)', 'self-BLEU4']] | [['32.69', '28.25', '24.99', '22.51', '82.79', '80.29', '77.98'], ['26.43', '23.04', '20.57', '18.34', '91.46', '90.51', '89.67'], ['23.1', '20.45', '18.47', '15.33', '94.35', '93.75', '93.22'], ['33.07', '28.68', '25.49', '22.84', '82.52', '80.05', '77.63'], ['32.96', '28.6', '25.39', '22.72', '83.53', '81.22', '78.95... | column | ['BLEU2', 'BLEU3', 'BLEU4', 'METEOR', 'self-BLEU2', 'self-BLEU3', 'self-BLEU4'] | ['ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter (Quality) || BLEU2</th> <th>Twitter (Quality) || BLEU3</th> <th>Twitter (Quality) || BLEU4</th> <th>Twitter (Quality) || METEOR</th> <th>Twitter (Diversity) || self-BLEU2</th> <th>Tw... | Table 3 | table_3 | D19-1313 | 7 | emnlp2019 | Table 3 shows scores on Twitter dataset. Our model achieves the best BLEU score and outperforms all baselines for METEOR. For diversity, our model again performs considerably better than other models. Since part of sentence pairs in Twitter dataset may be mislabelled by the labeling algorithm, it is harder to generate ... | [1, 1, 1, 2, 1] | ['Table 3 shows scores on Twitter dataset.', 'Our model achieves the best BLEU score and outperforms all baselines for METEOR.', 'For diversity, our model again performs considerably better than other models.', 'Since part of sentence pairs in Twitter dataset may be mislabelled by the labeling algorithm, it is harder t... | [None, ['ours', 'BLEU2', 'BLEU3', 'BLEU4', 'self-BLEU2', 'self-BLEU3', 'self-BLEU4', 'METEOR'], ['ours', 'Twitter (Diversity)'], ['Twitter (Quality)'], ['ours', 'Twitter (Diversity)', 'Twitter (Quality)']] | 1 |
D19-1313table_4 | Human evaluation results. | 2 | [['Models', 'D-PAGE'], ['Models', 'PG-BS'], ['Models', 'DP-GAN'], ['Models', 'ours']] | 2 | [['Quora', 'Fluency'], ['Quora', 'Consistency'], ['Twitter', 'Fluency'], ['Twitter', ' Consistency']] | [['4.21', '3.44', '3.66', '3.08'], ['4.2', '3.34', '3.85', '3.17'], ['4.27', '3.49', '4.09', '3.3'], ['4.57', '3.82', '4.24', '3.59']] | column | ['fluency', 'consistency', 'fluency', 'consistency'] | ['ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Quora || Fluency</th> <th>Quora || Consistency</th> <th>Twitter || Fluency</th> <th>Twitter || Consistency</th> </tr> </thead> <tbody> <tr> <td>Models || D-PAGE</td> <td>4.21</td> ... | Table 4 | table_4 | D19-1313 | 7 | emnlp2019 | For human evaluation, we randomly select 100 input sentences from the test set of each dataset and get the generated results of different models for these inputs. We follow the human evaluation guideline in (Li et al., 2018). The sentence pairs are scored for two aspects of the generated results: fluency (whether the g... | [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1] | ['For human evaluation, we randomly select 100 input sentences from the test set of each dataset and get the generated results of different models for these inputs.', 'We follow the human evaluation guideline in (Li et al., 2018).', 'The sentence pairs are scored for two aspects of the generated results: fluency (wheth... | [None, None, ['Fluency', 'Consistency'], ['Fluency', 'Consistency'], None, None, None, None, None, None, ['ours', 'Fluency', 'Consistency']] | 1 |
D19-1315table_1 | Comparison of human evaluation results with their consistency. HG-KG, HG-CC and KG-CC indicate the consistency scores between the headline generation and key phrase generation, headline generation and category classification and key phrase generation and category classification, respectively. | 1 | [['Baseline'], ['Proposed'], ['Gold']] | 1 | [['HG-KG'], [' HG-CC'], ['KG-CC'], ['3 Outputs']] | [['56.80%', '37.60%', '37.60%', '30.00%'], ['58.80%', '39.60%', '39.20%', '32.40%'], ['65.20%', '44.40%', '48.80%', '35.20%']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Proposed'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HG-KG</th> <th>HG-CC</th> <th>KG-CC</th> <th>3 Outputs</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>56.80%</td> <td>37.60%</td> <td>37.60%</td> <td>30.00%</t... | Table 1 | table_1 | D19-1315 | 6 | emnlp2019 | First, we conduct a human evaluation to measure the consistency among three outputs. Table 1 shows the evaluation results with their consistency. The scores are the percentage of articles evaluated as consistent by a majority of workers. We consider all three outputs as consistent if all pairs of outputs are consistent... | [2, 1, 2, 2, 1] | ['First, we conduct a human evaluation to measure the consistency among three outputs.', 'Table 1 shows the evaluation results with their consistency.', 'The scores are the percentage of articles evaluated as consistent by a majority of workers.', 'We consider all three outputs as consistent if all pairs of outputs are... | [None, None, None, None, ['Proposed', '3 Outputs']] | 1 |
D19-1315table_2 | Comparison of human evaluation results for headlines. Scores are an average of ten crowd-sourcing workers with five scale rating. | 1 | [['Baseline'], ['Proposed'], ['Gold']] | 1 | [['Adequacy'], [' Fluency'], [' Occupation Adequacy']] | [['3.34', '3.69', '3.45'], ['3.76', '3.86', '3.89'], ['4.09', '4.12', '4.13']] | column | ['Adequacy', 'Fluency', 'Occupation Adequacy'] | ['Proposed'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Adequacy</th> <th>Fluency</th> <th>Occupation Adequacy</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>3.34</td> <td>3.69</td> <td>3.45</td> </tr> <tr> <td>Pro... | Table 2 | table_2 | D19-1315 | 6 | emnlp2019 | Second, to evaluate the quality of the generated headline, we implement a human evaluation to measure the adequacy and fluency. Table 2 shows the evaluation result along with the adequacy and fluency. The proposed method improves the adequacy by 0.42pt and the occupation adequacy by 0.44pt. Proposed method can generate m... | [2, 1, 1, 1] | ['Second, to evaluate the quality of the generated headline, we implement a human evaluation to measure the adequacy and fluency.', 'Table 2 shows the evaluation result along with the adequacy and fluency.', 'The proposed method improves the adequacy by 0.42pt and the occupation adequacy by 0.44pt.', 'Proposed method can... | [None, ['Adequacy', ' Fluency'], ['Adequacy', ' Occupation Adequacy'], ['Proposed']] | 1 |
D19-1315table_3 | Automatic evaluation results based on the ROUGE metrics and accuracy (%) of classification of job advertisement dataset. R-1, R-2 and R-L indicate the F1 scores of ROUGE-1, ROUGE-2 and ROUGE-L, respectively. The proposed method (MTL + SD + HCL) achieved the best scores (bold) for all tasks, and the ROUGE scores are sta... | 1 | [['Baseline (Pointer-Generator Network)'], ['Multi-Task Learning (MTL)'], ['MTL + Scheduling (SD)'], ['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))'], ['Lead-1']] | 2 | [['Headline generation', 'R-1'], ['Headline generation', 'R-2'], ['Headline generation', 'R-L'], ['Key phrase generation', 'R-1'], ['Key phrase generation', 'R-2'], ['Key phrase generation', 'R-L'], [' Classification', 'Accuracy']] | [['25.1', '5.3', '21.1', '30.9', '10.6', '28.7', '62.8'], ['26.2', '5.8', '21.6', '32.3', '10.9', '30', '64.1'], ['26.3', '6', '21.8', '32.3', '10.4', '29.9', '63.9'], [' *26.9', ' *6.1', ' *22.4', ' *32.8', ' *11.2', ' *30.5', ' *64.4'], ['19', '4.3', '13.5', ' -', '-', '-', '-']] | column | ['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L', 'Accuracy'] | ['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Headline generation || R-1</th> <th>Headline generation || R-2</th> <th>Headline generation || R-L</th> <th>Key phrase generation || R-1</th> <th>Key phrase generation || R-2</th> <th>Key ph... | Table 3 | table_3 | D19-1315 | 7 | emnlp2019 | Automatic evaluation of job advertisement corpus. We implement an automatic evaluation using the ROUGE metrics (Lin, 2004) and accuracy. We conduct the experiment ten times, and calculate the average score. Table 3 shows the effect of the proposed methods: multi-task learning (MTL), scheduling strategy (SD) and hierarc... | [2, 2, 2, 1, 1, 1, 0] | ['Automatic evaluation of job advertisement corpus.', 'We implement an automatic evaluation using the ROUGE metrics (Lin, 2004) and accuracy.', 'We conduct the experiment ten times, and calculate the average score.', 'Table 3 shows the effect of the proposed methods: multi-task learning (MTL), scheduling strategy (SD) ... | [None, None, None, ['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))'], ['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))'], ['Proposed (MTL + SD + Hierarchical Consistency Loss (HCL))'], None] | 1 |
D19-1315table_4 | Automatic evaluation results based on the ROUGE metrics and accuracy (%) of classification of the CNN and DailyMail datasets. The metrics are the same as in Table 3. The proposed method (MTL + SD + HCL) improved the scores for all tasks. The scores with * indicate that the scores are statistically significant from the ... | 2 | [['CNN', 'Baseline'], ['CNN', 'Proposed (MTL + SD + HCL)'], ['CNN', 'Lead'], ['DailyMail', 'Baseline'], ['DailyMail', 'Proposed (MTL + SD + HCL)'], ['DailyMail', 'Lead']] | 2 | [['Summarization', 'R-1'], ['Summarization', 'R-2'], ['Summarization', 'R-L'], [' Headline Generation', 'R-1'], ['Headline Generation', 'R-2'], ['Headline Generation', 'R-L'], [' Classification', 'Accuracy']] | [['30.7', '10.6', '27.3', '19.5', '5', '17', '43.8'], [' *31.0', ' *10.9', ' *27.8', '19.6', '5', '17.1', '43.9'], ['33.4', '12.2', '26.1', '17.2', '5', '11.1', ' -'], ['38.4', '15.8', '35', '43.1', '25.3', '39.6', '89'], [' *38.9', ' *16.3', ' *35.4', ' *43.7', '25.5', ' *40.1', '89.8'], ['43.8', '19.2', '37.3', '27.7... | column | ['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L', 'Accuracy'] | ['Proposed (MTL + SD + HCL)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Summarization || R-1</th> <th>Summarization || R-2</th> <th>Summarization || R-L</th> <th>Headline Generation || R-1</th> <th>Headline Generation || R-2</th> <th>Headline Generation || R-L</... | Table 4 | table_4 | D19-1315 | 7 | emnlp2019 | Automatic evaluation of the CNN-DM dataset. Table 4 shows the results of the CNN and DailyMail datasets, respectively. For both datasets, the proposed method improves the ROUGE scores of the summarization and headline generation. | [2, 1, 1] | ['Automatic evaluation of the CNN-DM dataset.', 'Table 4 shows the results of the CNN and DailyMail datasets, respectively.', 'For both datasets, the proposed method improves the ROUGE scores of the summarization and headline generation.'] | [None, ['CNN', 'DailyMail'], ['Proposed (MTL + SD + HCL)']] | 1 |
D19-1316table_2 | Generation Performance (numbers in brackets correspond to the relaxed measures). | 2 | [['Method', 'Retrieval-based'], ['Method', 'Seq2seq'], ['Method', 'Case frame-based']] | 2 | [['PART-TIME', 'Recall'], ['PART-TIME', 'Precision'], ['PART-TIME', 'F-measure'], [' JOB SMOKING', 'Recall'], ['JOB SMOKING', 'Precision'], ['JOB SMOKING', 'F-measure']] | [[' 0.23 (0.25)', ' 0.61 (0.67)', ' 0.34 (0.37)', ' 0.28 (0.30)', ' 0.72 (0.78)', ' 0.41 (0.44)'], [' 0.06 (0.07)', ' 0.07 (0.08)', ' 0.07 (0.08)', ' 0.10 (0.13)', ' 0.11 (0.13)', ' 0.10 (0.13)'], [' 0.10 (0.10)', ' 0.62 (0.62)', ' 0.16 (0.16)', ' 0.05 (0.05)', ' 0.75 (0.75)', ' 0.09 (0.09)']] | column | ['Recall', 'Precision', 'F-measure', 'Recall', 'Precision', 'F-measure'] | ['Retrieval-based'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PART-TIME || Recall</th> <th>PART-TIME || Precision</th> <th>PART-TIME || F-measure</th> <th>JOB SMOKING || Recall</th> <th>JOB SMOKING || Precision</th> <th>JOB SMOKING || F-measure</th> ... | Table 2 | table_2 | D19-1316 | 7 | emnlp2019 | 6.2 Results. Table 2 shows the results. It turns out that the simple application of the sequence-to-sequence model does not work well at all on this task; note that it is provided with the information about which sentence should have a feedback comment (i.e., tested on only the sentences having feedback comments). Neve... | [2, 1, 1, 1, 2, 1, 1, 1, 1] | ['6.2 Results.', 'Table 2 shows the results.', 'It turns out that the simple application of the sequence-to-sequence model does not work well at all on this task; note that it is provided with the information about which sentence should have a feedback comment (i.e., tested on only the sentences having feedback comment... | [None, None, ['Seq2seq'], ['Seq2seq'], None, ['Case frame-based'], ['Case frame-based', 'Recall'], ['Retrieval-based', 'Recall', 'Precision'], None] | 1 |
D19-1317table_4 | The main experimental results for our model and several baselines. ‘-’ means no results reported in their papers. (Bn: BLEU-n, MET: METEOR, R-L: ROUGE-L) | 1 | [['s2s (Du et al., 2017)'], ['NQG++ (Zhou et al., 2017)'], ['M2S+cp (Song et al., 2018)'], ['s2s+MP+GSA (Zhao et al., 2018)'], ['Hybrid model (Sun et al., 2018)'], ['ASs2s (Kim et al., 2019)'], ['Our model']] | 2 | [['Du Split (Du et al., 2017)', 'B1'], ['Du Split (Du et al., 2017)', 'B2'], ['Du Split (Du et al., 2017)', 'B3'], ['Du Split (Du et al., 2017)', 'B4'], ['Du Split (Du et al., 2017)', 'MET'], ['Du Split (Du et al., 2017)', 'R-L'], [' Zhou Split (Zhou et al. 2017)', 'B1'], [' Zhou Split (Zhou et al. 2017)', 'B2'], [' Zh... | [['43.09', '25.96', '17.5', '12.28', '16.62', '39.75', '-', ' -', ' -', ' -', ' -', ' -'], ['-', '-', ' -', ' -', ' -', '-', '-', '-', ' -', '13.29', ' -', ' -'], ['-', '-', ' -', '13.98', '18.77', '42.72', '-', '-', ' -', '13.91', ' -', ' -'], ['43.47', '28.23', '20.4', '15.32', '19.29', '43.91', '44.51', '29.07', '21... | column | ['B1', 'B2', 'B3', 'B4', 'MET', 'R-L', 'B1', 'B2', 'B3', 'B4', 'MET', 'R-L'] | ['Our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Du Split (Du et al., 2017) || B1</th> <th>Du Split (Du et al., 2017) || B2</th> <th>Du Split (Du et al., 2017) || B3</th> <th>Du Split (Du et al., 2017) || B4</th> <th>Du Split (Du et al., 2017) ... | Table 4 | table_4 | D19-1317 | 6 | emnlp2019 | 4.1 Main Results . Table 4 shows automatic evaluation results for our model and baselines (copied from their papers). Our proposed model which combines structured answer-relevant relations and unstructured sentences achieves significant improvements over proximity-based answer-aware models (Zhou et al., 2017; Sun et al... | [2, 1, 1, 2, 2, 2] | ['4.1 Main Results .', 'Table 4 shows automatic evaluation results for our model and baselines (copied from their papers).', 'Our proposed model which combines structured answer-relevant relations and unstructured sentences achieves significant improvements over proximity-based answer-aware models (Zhou et al., 2017; S... | [None, ['Our model'], ['Our model', 's2s+MP+GSA (Zhao et al., 2018)', 'Hybrid model (Sun et al., 2018)'], None, ['Our model'], None] | 1 |
D19-1321table_5 | Automatic evaluation for recipe text generation. Checklist was trained with its own source code. We also re-printed results from (Kiddon et al., 2016) (i.e., Checklist §). We applied bootstrap resampling (Koehn, 2004) for significance test. Scores that are significantly worse than the best results (in bold) are marked ... | 2 | [['Models', 'Checklist §'], ['Models', 'Checklist'], ['Models', 'CVAE'], ['Models', 'Pointer-S2S'], ['Models', 'Link-S2S'], ['Models', 'PHVM (ours)']] | 1 | [['BLEU (%)'], ['Coverage (%)'], ['Length'], ['Distinct-4 (%)'], ['Repetition-4 (%)']] | [['3', '67.9', '-', '-', '-'], ['2.6**', '66.9*', '67.59', '30.67**', '39.1**'], ['4.6', '63.0**', '57.49**', '52.53**', '38.7**'], ['4.3', '70.4**', '59.18**', '30.72**', '36.4**'], ['1.9**', '53.8**', '40.34**', '24.93**', '31.6**'], ['4.6', '73.2', '70.92', '67.86', '17.3']] | column | ['BLEU (%)', 'Coverage (%)', 'Length', 'Distinct-4 (%)', 'Repetition-4 (%)'] | ['PHVM (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU (%)</th> <th>Coverage (%)</th> <th>Length</th> <th>Distinct-4 (%)</th> <th>Repetition-4 (%)</th> </tr> </thead> <tbody> <tr> <td>Models || Checklist §</td> <td>3</td> ... | Table 5 | table_5 | D19-1321 | 9 | emnlp2019 | Table 5 shows the experimental results. Our model outperforms baselines in terms of coverage and diversity; it manages to use more given ingredients and generates more diversified cooking steps. We also found that Checklist / Link-S2S produces the general phrase “all ingredients” in 14.9% / 24.5% of the generated recip... | [1, 1, 2] | ['Table 5 shows the experimental results.', 'Our model outperforms baselines in terms of coverage and diversity; it manages to use more given ingredients and generates more diversified cooking steps.', 'We also found that Checklist / Link-S2S produces the general phrase “all ingredients” in 14.9% / 24.5% of the generat... | [None, ['PHVM (ours)', 'Coverage (%)'], ['Checklist', 'Link-S2S', 'CVAE', 'Pointer-S2S', 'PHVM (ours)']] | 1 |
D19-1324table_5 | Experimental results on the NYT50 dataset. ROUGE-1, -2 and -L F1 is reported. JECS substantially outperforms our Lead-based systems and our extractive model. | 2 | [['Model', 'Lead'], ['Model', 'LEADDEDUP'], ['Model', 'LEADCOMP'], ['Model', 'EXTRACTION'], ['Model', 'JECS']] | 1 | [['R-1'], ['R-2'], ['R-L']] | [['41.8', '22.6', '35'], ['42', '22.8', '35'], ['42.4', '22.7', '35.4'], ['44.3', '25.5', '37.1'], ['45.5', '25.3', '38.2']] | column | ['R-1', 'R-2', 'R-L'] | ['JECS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Model || Lead</td> <td>41.8</td> <td>22.6</td> <td>35</td> </tr> <tr> <td>Model || LEADDEDUP</td> ... | Table 5 | table_5 | D19-1324 | 7 | emnlp2019 | We also report the results on the full CNNDM and NYT although they are less compressable. Table 5 shows the experimental results on these datasets. Our models still yield strong performance compared to baselines and past work on the CNNDM dataset. The EXTRACTION model achieves comparable results to past successful extra... | [2, 1, 1, 1, 1, 2, 1] | ['We also report the results on the full CNNDM and NYT although they are less compressable.', 'Table 5 shows the experimental results on these datasets.', 'Our models still yield strong performance compared to baselines and past work on the CNNDM dataset.', 'The EXTRACTION model achieves comparable results to past succe... | [None, None, ['Lead', 'LEADDEDUP', 'LEADCOMP'], ['EXTRACTION', 'JECS'], ['Lead', 'LEADDEDUP', 'LEADCOMP', 'R-2'], None, None] | 1 |
D19-1330table_2 | Translation performance on IWSLT datasets. SyncTrans represents our proposed synchronous translation method. All results of our SyncTrans are significantly better than both Indiv and Multi (p < 0.01). | 2 | [['Method', 'Indiv'], ['Method', 'Indiv + pseudo'], ['Method', 'Multi'], ['Method', 'Multi + pseudo'], ['Method', 'SyncTrans']] | 2 | [['En-Zh/Ja', 'En-Zh'], ['En-Zh/Ja', 'En-Ja'], ['En-De/Fr', 'En-De'], ['En-De/Fr', 'En-Fr']] | [['15.68', '16.56', '27.11', '40.62'], ['16.72', '18.02', '28.47', '40.39'], ['17.06', '18.31', '27.79', '40.97'], ['17.10', '18.40', '28.56', '40.62'], ['17.97', '19.31', '29.16', '41.53']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['SyncTrans'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>En-Zh/Ja || En-Zh</th> <th>En-Zh/Ja || En-Ja</th> <th>En-De/Fr || En-De</th> <th>En-De/Fr || En-Fr</th> </tr> </thead> <tbody> <tr> <td>Method || Indiv</td> <td>15.68</td> <td>1... | Table 2 | table_2 | D19-1330 | 4 | emnlp2019 | 5.1 Results on IWSLT . Table 2 shows the main translation results of En→Zh/Ja and En→De/Fr on IWSLT datasets. We also conduct a typical one-to-many translation adopting Johnson et al. (2017) method on Transformer as our another baseline model, referred to Multi. Compared with Indiv, we can see that Multi achieves bette... | [2, 1, 2, 1, 1] | ['5.1 Results on IWSLT .', 'Table 2 shows the main translation results of En→Zh/Ja and En→De/Fr on IWSLT datasets.', 'We also conduct a typical one-to-many translation adopting Johnson et al. (2017) method on Transformer as our another baseline model, referred to Multi.', 'Compared with Indiv, we can see that Multi ach... | [None, ['En-Zh/Ja'], ['Multi'], ['Indiv', 'Multi'], ['SyncTrans', 'Indiv', 'Multi', 'En-Ja']] | 1 |
D19-1334table_3 | Experimental results for robustness analysis. | 2 | [['Threshold', 'K = 1'], ['Threshold', 'K = 5'], ['Threshold', 'K = 10'], ['Threshold', 'K = max']] | 2 | [['MultiHop', 'MRR'], [' Meta-KGR', 'Hits@1'], ['MultiHop', 'MRR'], [' Meta-KGR', 'Hits@1']] | [['20.8', '16.9', '22.3', '19.3'], ['25.7', '20.8', '29.6', '26.6'], ['29.1', '25.0', '31.3', '27.2'], ['42.7', '36.7', '46.9', '41.2']] | column | ['MRR', 'Hits@1', 'MRR', 'Hits@1'] | [' Meta-KGR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MultiHop || MRR</th> <th>Meta-KGR || Hits@1</th> <th>MultiHop || MRR</th> <th>Meta-KGR || Hits@1</th> </tr> </thead> <tbody> <tr> <td>Threshold || K = 1</td> <td>20.8</td> <td>1... | Table 3 | table_3 | D19-1334 | 5 | emnlp2019 | 5.4 Robustness Analysis . We can use different frequency thresholds K to select few-shot relations. In this section, we will study the impact of K on the performance of our model. In our experiments, some triples will be removed until every few-shot relation has only K triples. We do link prediction experiments on FB15... | [2, 2, 2, 2, 2, 1, 2, 1] | ['5.4 Robustness Analysis .', 'We can use different frequency thresholds K to select few-shot relations.', 'In this section, we will study the impact of K on the performance of our model.', 'In our experiments, some triples will be removed until every few-shot relation has only K triples.', 'We do link prediction exper... | [None, None, None, None, None, None, ['K = max'], [' Meta-KGR', 'K = 1', 'K = 5', 'K = 10', 'K = max', 'MultiHop']] | 1 |
D19-1336table_2 | Human evaluation results. | 2 | [['Model', 'LM (Mikolov et al., 2010)'], ['Model', 'CLM (Mou et al., 2015)'], ['Model', 'CLM+JD (Yu et al., 2018)'], ['Model', 'Pun-GAN'], ['Model', 'Human']] | 1 | [['Ambiguity'], ['Fluency'], ['Overall']] | [['1.6', '3.1', '2.5'], ['2.0', '2.1', '2.0'], ['3.4', '3.6', '3.5'], ['3.9', '3.7', '3.8'], ['4.3', '4.6', '4.5']] | column | ['Ambiguity', 'Fluency', 'Overall'] | ['Pun-GAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ambiguity</th> <th>Fluency</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>Model || LM (Mikolov et al., 2010)</td> <td>1.6</td> <td>3.1</td> <td>2.5</td> </tr> <tr> ... | Table 2 | table_2 | D19-1336 | 5 | emnlp2019 | 3.5 Results . Table 2 show the results of human evaluation. We find that: 1) Pun-GAN achieves the best ambiguity score. 2) Compared with CLM+JD which is actually the same as our pre-trained generator, Pun-GAN has a large improvement in unusualness. 3) Pun-GAN can generate more diverse sentence with different tokens and... | [0, 1, 1, 1, 2, 2] | ['3.5 Results .', 'Table 2 show the results of human evaluation.', 'We find that: 1) Pun-GAN achieves the best ambiguity score.', '2) Compared with CLM+JD which is actually the same as our pre-trained generator, Pun-GAN has a large improvement in unusualness.', '3) Pun-GAN can generate more diverse sentence with differ... | [None, None, ['Pun-GAN', 'Ambiguity'], ['Pun-GAN', 'CLM+JD (Yu et al., 2018)', 'Fluency'], ['Pun-GAN'], None] | 1 |
D19-1342table_2 | Experiment results on SemEval 2014 dataset. | 2 | [['Model', 'AT-LSTM'], ['Model', 'ATAE-LSTM'], ['Model', 'GCAE'], ['Model', 'AT-GRU'], ['Model', 'AT-GRU 2-label'], ['Model', 'D-AT-GRU w/o orthogonal'], ['Model', 'D-AT-GRU']] | 1 | [['Overall'], ['Conflict']] | [['77.13%', '11.54%'], ['78.00%', '23.08%'], ['78.30%', '25.00%'], ['77.22%', '19.23%'], ['77.02%', '26.92%'], ['77.22%', '26.92%'], ['78.50%', '40.38%']] | column | ['accuracy', 'accuracy'] | ['D-AT-GRU'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Overall</th> <th>Conflict</th> </tr> </thead> <tbody> <tr> <td>Model || AT-LSTM</td> <td>77.13%</td> <td>11.54%</td> </tr> <tr> <td>Model || ATAE-LSTM</td> <td>78.00%</td>... | Table 2 | table_2 | D19-1342 | 4 | emnlp2019 | Table 2 shows that D-AT-GRU model outperforms all baseline methods. Given that AT-LSTM (Wang et al., 2016) has strong correlation to our base model (AT-GRU), their work can be categorized as a baseline to our model. In contrast, the results prove that the additional components are helpful to recognize conflict opinions.... | [1, 1, 1, 1, 1] | ['Table 2 shows that D-AT-GRU model outperforms all baseline methods.', 'Given that AT-LSTM (Wang et al., 2016) has strong correlation to our base model (AT-GRU), their work can be categorized as a baseline to our model.', 'In contrast, the results prove that the additional components are helpful to recognize conflict o... | [['D-AT-GRU'], ['AT-LSTM', 'AT-GRU'], ['D-AT-GRU', 'Conflict'], ['GCAE'], ['D-AT-GRU', 'GCAE']] | 1 |
D19-1345table_2 | text classification datasets. Model with ”*” means that all word vectors are initialized by Glove word embeddings. We run all models 10 times and report mean results. | 2 | [['Model', 'CNN'], ['Model', 'LSTM'], ['Model', 'Graph-CNN'], ['Model', 'Text-GCN'], ['Model', 'CNN*'], ['Model', 'LSTM*'], ['Model', 'Bi-LSTM*'], ['Model', 'fastText*'], ['Model', 'Text-GCN*'], ['Model', 'Our Model*']] | 1 | [['R8'], ['R52'], ['Ohsumed']] | [['94.0±0.5', '85.3±0.5', '43.9±1.0'], ['93.7±0.8', '85.6±1.0', '41.1±1.0'], ['97.0±0.2', '92.8±0.2', '63.9±0.5'], ['97.1±0.1', '93.6±0.2', '68.4±0.6'], ['95.7±0.5', '87.6±0.5', '58.4±1.0'], ['96.1±0.2', '90.5±0.8', '51.1±1.5'], ['96.3±0.3', '90.5±0.9', '49.3±1.0'], ['96.1±0.2', '92.8±0.1', '57.7±0.5'], ['97.0±0.1', '9... | column | ['accuracy', 'accuracy', 'accuracy'] | ['Our Model*'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R8</th> <th>R52</th> <th>Ohsumed</th> </tr> </thead> <tbody> <tr> <td>Model || CNN</td> <td>94.0±0.5</td> <td>85.3±0.5</td> <td>43.9±1.0</td> </tr> <tr> <td>Model || ... | Table 2 | table_2 | D19-1345 | 4 | emnlp2019 | 3.3 Experimental. Results Table 2 reports the results of our models against other baseline methods. We can see that our model can achieve the state-of-the-art result. We note that the results of graph-based models are better than traditional models like CNN, LSTM, and fastText. That is likely due to the c... | [2, 1, 1, 1, 2, 2, 2] | ['3.3 Experimental.', 'Results Table 2 reports the results of our models against other baseline methods.', 'We can see that our model can achieve the state-of-the-art result.', 'We note that the results of graph-based models are better than traditional models like CNN, LSTM, and fastText.', 'That is likel... | [None, ['Our Model*'], ['Our Model*'], ['Graph-CNN', 'CNN*', 'LSTM*', 'fastText*'], None, None, None] | 1 |
D19-1350table_1 | Perplexity and topic coherence results of difference models. ‘frequency-based vocab.’ denotes that the vocabulary is constructed by filtering out rare words while ‘RL-based vocab.’ denotes that the vocabulary is dynamically generated by our model using RL. | 3 | [['#Topics = 30, frequency-based vocab.', 'Methods', 'LDA'], ['#Topics = 30, frequency-based vocab.', 'Methods', 'NVDM'], ['#Topics = 30, frequency-based vocab.', 'Methods', 'NGTM'], ['#Topics = 30, frequency-based vocab.', 'Methods', 'Scholar'], ['#Topics = 30, RL-based vocab.', 'Methods', 'LDA'], ['#Topics = 30, RL-b... | 2 | [['20News', 'PPL'], [' 20News', ' Cv'], ['NIPS', 'PPL'], [' NIPS', 'Cv']] | [['1,213.1', '0.503', '1,042.7', '0.507'], ['980.8', '0.497', '931.6', '0.492'], ['929.3', '0.479', '938.9', '0.503'], ['1,345.9', '0.537', '1,350.9', '0.512'], ['1,451.7', '0.522', '1,093.1', '0.534'], ['845.8', '0.510', '768.7', '0.509'], ['791.5', '0.517', '757.2', '0.527'], ['1,158.4', '0.560', '1,273.6', '0.548'],... | column | ['PPl', 'Cv', 'PPl', 'Cv'] | ['VTMRL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>20News || PPL</th> <th>20News || Cv</th> <th>NIPS || PPL</th> <th>NIPS || Cv</th> </tr> </thead> <tbody> <tr> <td>#Topics = 30, frequency-based vocab. || Methods || LDA</td> <td>1,2... | Table 1 | table_1 | D19-1350 | 4 | emnlp2019 | In our experiments, the models are evaluated based on the perplexity (PPL, lower is better) and topic coherence measure (Cv) based on external corpus (R¨oder et al., 2015) (higher is better). The results with 30 and 50 topics are shown in Table 1. LDA is a conventional topic model, while all the other models are neural... 
| [2, 1, 2, 1, 1, 1, 1, 1, 2, 2, 1, 1] | ['In our experiments, the models are evaluated based on the perplexity (PPL, lower is better) and topic coherence measure (Cv) based on external corpus (Röder et al., 2015) (higher is better).', 'The results with 30 and 50 topics are shown in Table 1.', 'LDA is a conventional topic model, while all the other models ar... | [['PPL', ' Cv'], ['#Topics = 30, frequency-based vocab.', '#Topics = 30, RL-based vocab.', '#Topics = 50, frequency-based vocab.', '#Topics = 50, RL-based vocab.'], ['LDA'], ['NVDM', 'NGTM', 'PPL', 'LDA'], [' Cv', 'NVDM', 'NGTM', 'LDA'], None, ['Scholar', 'NVDM', 'NGTM'], ['VTMRL'], ['VTMRL'], None, ['PPL', '#Topics = ... | 1 |
D19-1359table_3 | F1 scores (%) of UKB+SyntagNet against the best supervised systems for English all-words WSD. Reported systems: • Yuan et al. (2016), ∞ Melacci et al. (2018), (cid:52) Uslu et al. (2018). Statistically significant differences against our results are underlined according to a χ2 test, p < 0.01. | 2 | [['System', 'LSTMLP'], ['System', 'IMSC2V+PR'], ['System', 'fastSense'], ['System', 'UKB+SyntagNet']] | 1 | [['Sens2'], ['Sens3'], ['Sem07'], ['Sem13'], ['Sem15'], ['All']] | [['73.8', '71.8', '63.5', '69.5', '72.6', '71.5'], ['73.8', '71.9', '63.3', '68.2', '72.8', '71.2'], ['73.5', '73.5', '62.4', '66.2', '73.2', '71.1'], ['71.2', '71.6', '59.6', '72.4', '75.6', '71.5']] | column | ['F1', 'F1', 'F1', 'F1', 'F1', 'F1'] | ['UKB+SyntagNet'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sens2</th> <th>Sens3</th> <th>Sem07</th> <th>Sem13</th> <th>Sem15</th> <th>All</th> </tr> </thead> <tbody> <tr> <td>System || LSTMLP</td> <td>73.8</td> <td>71.8</td> ... | Table 3 | table_3 | D19-1359 | 5 | emnlp2019 | Table 3 compares UKB + SyntagNet against the best supervised English WSD systems (Yuan et al., 2016; Melacci et al., 2018; Uslu et al., 2018). None of the differences across datasets between the best performing supervised system and SyntagNet is statistically significant according to chi-square test (p < 0.01), meaning... | [1, 1] | ['Table 3 compares UKB + SyntagNet against the best supervised English WSD systems (Yuan et al., 2016; Melacci et al., 2018; Uslu et al., 2018).', 'None of the differences across datasets between the best performing supervised system and SyntagNet is statistically significant according to chi-square test (p < 0.01), me... | [['UKB+SyntagNet'], ['UKB+SyntagNet']] | 1 |
D19-1368table_2 | Performance on different datasets against baselines, where h@k denotes hits at k. Results are reported on test sets with the best parameters found in grid search for each model. | 2 | [['FB15K-237', 'SimplE'], ['FB15K-237', 'DistMult'], ['FB15K-237', 'ComplEx'], ['FB15K-237', 'JoBi SimplE'], ['FB15K-237', 'JoBi DistMult'], ['FB15K-237', 'JoBi ComplEx'], ['FB15K', 'DistMult'], ['FB15K', 'ComplEx'], ['FB15K', 'JoBi ComplEx'], ['YAGO3-10', 'DistMult'], ['YAGO3-10', 'ComplEx'], ['YAGO3-10', 'JoBi ComplE... | 1 | [['h@1'], ['h@3'], ['h@10'], ['MRR']] | [['0.160', '0.268', '0.43', '0.248'], ['0.158', '0.271', '0.432', '0.247'], ['0.159', '0.275', '0.441', '0.25'], ['0.188', '0.301', '0.461', '0.277'], ['0.205', '0.316', '0.466', '0.29'], ['0.199', '0.319', '0.479', '0.29'], ['0.587', '0.785', '0.867', '0.697'], ['0.617', '0.803', '0.874', '0.72'], ['0.681', '0.824', '... | column | ['h@1', 'h@3', 'h@10', 'MRR'] | ['JoBi ComplEx'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>h@1</th> <th>h@3</th> <th>h@10</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>FB15K-237 || SimplE</td> <td>0.160</td> <td>0.268</td> <td>0.43</td> <td>0.248</td> ... | Table 2 | table_2 | D19-1368 | 4 | emnlp2019 | Discussion. It could be seen in Table 2 that JoBi ComplEx outperforms both ComplEx and DistMult on all three standard datasets, on all the metrics we consider. For Hits@1, JoBi ComplEx outperforms baseline ComplEx by 4% on FB15K-237, 6.4% on FB15K and 5.6% on YAGO3-10. Moreover, results in Table 2 demonstrate that Jo... | [2, 1, 1, 1, 1] | ['Discussion.', 'It could be seen in Table 2 that JoBi ComplEx outperforms both ComplEx and DistMult on all three standard datasets, on all the metrics we consider.', 'For Hits@1, JoBi ComplEx outperforms baseline ComplEx by 4% on FB15K-237, 6.4% on FB15K and 5.6% on YAGO3-10.', 'Moreover, results in Table 2 demonstrat...
| [None, ['JoBi ComplEx', 'ComplEx', 'DistMult'], ['h@1', 'JoBi ComplEx', 'ComplEx', 'FB15K-237', 'FB15K', 'YAGO3-10'], ['JoBi SimplE', 'JoBi DistMult', 'JoBi ComplEx', 'DistMult', 'SimplE'], ['FB15K-237']] | 1 |
D19-1368table_6 | Results of ablation study on ComplEx model. | 1 | [['Baseline'], ['BiasedNeg'], ['Joint'], ['JoBi']] | 1 | [['h@1'], ['h@3'], ['h@10'], ['MRR']] | [['0.277', '0.44', '0.589', '0.383'], ['0.276', '0.427', '0.568', '0.375'], ['0.287', '0.447', '0.601', '0.392'], ['0.333', '0.477', '0.617', '0.428']] | column | ['h@1', 'h@3', 'h@10', 'MRR'] | ['BiasedNeg', 'Joint', 'JoBi'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>h@1</th> <th>h@3</th> <th>h@10</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>0.277</td> <td>0.44</td> <td>0.589</td> <td>0.383</td> </tr> <... | Table 6 | table_6 | D19-1368 | 5 | emnlp2019 | In Table 6 it can be seen that Joint on its own gives a slight performance boost over the baseline, and BiasedNeg performs slightly under the baseline on all measures. However, combining our two techniques in JoBi gives 5.6% points improvement on hits@1. This suggests that biased negative sampling increases the efficac... | [1, 1, 2] | ['In Table 6 it can be seen that Joint on its own gives a slight performance boost over the baseline, and BiasedNeg performs slightly under the baseline on all measures.', 'However, combining our two techniques in JoBi gives 5.6% points improvement on hits@1.', 'This suggests that biased negative sampling increases the... | [['Joint', 'Baseline', 'BiasedNeg'], ['JoBi', 'h@1'], None] | 1 |
D19-1370table_2 | Results on PTB test with encoder pretraining. | 2 | [['Method', 'AE'], ['Method', 'VAE'], ['Method', '+ pretrain'], ['Method', '+ pretrain + anneal']] | 1 | [['PPL#'], ['Recon#'], ['AU'], ['KL'], ['-ELBO']] | [['-', '70.36', '32', '-', '-'], ['101.39', '101.27', '0', '0.00', '101.27'], ['102.26', '101.46', '0', '0.00', '101.46'], ['97.74', '99.67', '2', '1.01', '100.68']] | column | ['PPL#', 'Recon#', 'AU', 'KL', '-ELBO'] | ['+ pretrain + anneal'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL#</th> <th>Recon#</th> <th>AU</th> <th>KL</th> <th>-ELBO</th> </tr> </thead> <tbody> <tr> <td>Method || AE</td> <td>-</td> <td>70.36</td> <td>32</td> <td>-</td... | Table 2 | table_2 | D19-1370 | 3 | emnlp2019 | 2.3 Autoencoder-based Initialization. Based on the observations above we hypothesize that VAEs might benefit from initialization with a non-collapsed encoder, trained via an AE objective. Intuitively, if the encoder is providing useful information from the beginning of training, the decoder is more likely to make use... | [2, 2, 2, 1, 1, 2, 1, 1, 2, 2] | ['2.3 Autoencoder-based Initialization.', 'Based on the observations above we hypothesize that VAEs might benefit from initialization with a non-collapsed encoder, trained via an AE objective.', 'Intuitively, if the encoder is providing useful information from the beginning of ... | [None, None, None, None, ['VAE', '+ pretrain', '-ELBO'], ['-ELBO'], ['+ pretrain + anneal', 'PPL#'], ['+ pretrain + anneal', 'AU', 'KL'], ['+ pretrain + anneal', '-ELBO'], ['KL']] | 1 |
D19-1372table_3 | Comparison of Methods on Pun of the Day Dataset. HCF represents Human Centric Features, F for increasing the number of filters, and HN for the use of highway layers in the model. See (Chen and Soo, 2018; Yang et al., 2015) for more details regarding these acronyms. | 2 | [['Methods', 'Word2Vec+HCF'], ['Methods', 'CNN'], ['Methods', 'CNN+F'], ['Methods', 'CNN+HN'], ['Methods', 'CNN+F+HN'], ['Methods', 'Transformer']] | 1 | [['Accuracy'], ['Precision'], ['Recall'], ['F1']] | [['0.797', '0.776', '0.836', '0.705'], ['0.867', '0.88', '0.859', '0.869'], ['0.892', '0.886', '0.907', '0.896'], ['0.892', '0.889', '0.903', '0.896'], ['0.894', '0.866', '0.94', '0.901'], ['0.93', '0.93', '0.931', '0.931']] | column | ['Accuracy', 'Precision', 'Recall', 'F1'] | ['Transformer'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Methods || Word2Vec+HCF</td> <td>0.797</td> <td>0.776</td> <td>0.836</td> ... | Table 3 | table_3 | D19-1372 | 4 | emnlp2019 | The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found e... | [1, 1, 1] | ['The results on the Pun of the Day dataset are shown in Table 3 above.', 'It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed.', 'Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers... | [None, ['CNN+F+HN', 'Transformer'], ['CNN']] | 1 |
D19-1374table_1 | Macro-averaged F1 comparison of per-language models and multilingual models over 48 languages. For non-multilingual models, F1 is the average over each per-language model trained. | 4 | [['Model', 'Meta-LSTM', 'Multilingual?', 'No'], ['Model', 'BERT', 'Multilingual?', 'No'], ['Model', 'Meta-LSTM', 'Multilingual?', 'Yes'], ['Model', 'BERT', 'Multilingual?', 'Yes']] | 1 | [['Part-of-Speech F1'], [' Morphology F1']] | [['94.5', '92.5'], ['95.1', '93'], ['91.1', '82.9'], ['94.5', '91']] | column | ['Part-of-Speech F1', 'Morphology F1'] | ['BERT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Part-of-Speech F1</th> <th>Morphology F1</th> </tr> </thead> <tbody> <tr> <td>Model || Meta-LSTM || Multilingual? || No</td> <td>94.5</td> <td>92.5</td> </tr> <tr> <td>Model ||... | Table 1 | table_1 | D19-1374 | 4 | emnlp2019 | The results in Table 1 make it clear that the BERT-based model for each task is a solid win over a Meta-LSTM model in both the per-language and multilingual settings. However, the number of parameters of the BERT model is very large (179M parameters), making deploying memory intensive and inference slow: 230m... | [1, 1, 2] | ['The results in Table 1 make it clear that the BERT-based model for each task is a solid win over a Meta-LSTM model in both the per-language and multilingual settings.', 'However, the number of parameters of the BERT model is very large (179M parameters), making deploying memory intensive and inference slow:... | [['BERT', 'Meta-LSTM'], ['BERT'], None] | 1 |
D19-1376table_3 | Unlabeled unsupervised parsing F1 on WSJ40. ‡ trains on the training split of WSJ, while † trains on AllNLI (Htut et al., 2018). The PRPN result is taken from Drozdov et al. (2019). | 2 | [['Model', 'Right Branching'], ['Model', '†DIORA'], ['Model', '‡PRPN'], ['Model', '‡PaLM-U']] | 1 | [['Unlabeled F1']] | [['40.7'], ['60.6'], ['52.4'], ['42.0']] | column | ['Unlabeled F1'] | ['‡PaLM-U'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Unlabeled F1</th> </tr> </thead> <tbody> <tr> <td>Model || Right Branching</td> <td>40.7</td> </tr> <tr> <td>Model || †DIORA</td> <td>60.6</td> </tr> <tr> <td>Model || ‡P... | Table 3 | table_3 | D19-1376 | 5 | emnlp2019 | In addition to PRPN, we compare to DIORA (Drozdov et al., 2019), which uses an inside-outside dynamic program in an autoencoder. Table 3 shows the F1 results. PaLM outperforms the right branching baseline, but is not as accurate as the other models. This indicates that the type of syntactic trees learne... | [0, 1, 1, 2] | ['In addition to PRPN, we compare to DIORA (Drozdov et al., 2019), which uses an inside-outside dynamic program in an autoencoder.', 'Table 3 shows the F1 results.', 'PaLM outperforms the right branching baseline, but is not as accurate as the other models.', 'This indicates that the type of syntactic t... | [None, ['Unlabeled F1'], ['‡PaLM-U', 'Right Branching'], None] | 1 |
D19-1379table_3 | Performance comparison between LM finetuning on target domain unlabeled data of the same size as each test set, “Controlled Unlabeled data (CU),” and transductive LM fine-tuning on each test set (T). Cells show the F1 scores averaged across the target domains. | 1 | [['BC'], ['BN'], ['MZ'], ['NW'], ['PT'], ['TC'], ['WB']] | 2 | [['Syntactic chunking', 'CU'], ['Syntactic chunking', 'T'], ['Semantic role labeling', 'CU'], ['Semantic role labeling', 'T']] | [['90.4', '90.8', '78.6', '79.3'], ['91.1', '91.6', '79.8', '80.4'], ['90', '90.4', '77.9', '78.5'], ['92.1', '92.3', '81.1', '81.7'], ['87.1', '87.3', '73.5', '74'], ['87.1', '87.6', '71.3', '71.6'], ['91.8', '92', '76.6', '77.1']] | column | ['F1', 'F1', 'F1', 'F1'] | ['T'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Syntactic chunking || CU</th> <th>Syntactic chunking || T</th> <th>Semantic role labeling || CU</th> <th>Semantic role labeling || T</th> </tr> </thead> <tbody> <tr> <td>BC</td> <td>... | Table 3 | table_3 | D19-1379 | 4 | emnlp2019 | Comparison between unsupervised domain adaptation and transduction. In unsupervised domain adaptation, target domain unlabeled data (the texts whose domain is the same as that of a test set) is used for adaptation. Although the domain is identical between target domain data and a test set, their word distributions ... | [2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1] | ['Comparison between unsupervised domain adaptation and transduction.', 'In unsupervised domain adaptation, target domain unlabeled data (the texts whose domain is the same as that of a test set) is used for adaptation.', 'Although the domain is identical between target domain data and a test set, their word distri... | [None, None, None, None, None, None, None, None, None, ['T', 'CU'], ['T', 'CU']] | 1 |
D19-1379table_4 | Performance comparison between LM finetuning on target domain unlabeled data (U) and on the combination of the unlabeled data and test sets (U + T). Cells show the F1 scores averaged across the target domains. | 1 | [['BC'], ['BN'], ['MZ'], ['NW'], ['PT'], ['TC'], ['WB']] | 2 | [['Syntactic chunking', 'U'], ['Syntactic chunking', 'U + T'], ['Semantic role labeling', 'U'], ['Semantic role labeling', 'U + T']] | [['90.5', '91.0', '79.0', '79.4'], ['91.3', '91.6', '80.1', '80.6'], ['90.2', '90.6', '78.3', '78.7'], ['92.1', '92.5', '81.5', '81.9'], ['87.3', '87.7', '73.6', '74.3'], ['87.2', '87.6', '71.4', '72.0'], ['91.8', '92.2', '76.8', '77.2']] | column | ['F1', 'F1', 'F1', 'F1'] | ['U', 'U + T'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Syntactic chunking || U</th> <th>Syntactic chunking || U + T</th> <th>Semantic role labeling || U</th> <th>Semantic role labeling || U + T</th> </tr> </thead> <tbody> <tr> <td>BC</td> ... | Table 4 | table_4 | D19-1379 | 5 | emnlp2019 | Combination of unsupervised domain adaptation and transduction. In real-world situations, large-scale unlabeled data of target domains is sometimes available. In such cases, LMs can be trained on both the target domain unlabeled data and the test sets. Here, we investigate the effectiveness of using both datasets. Tabl... | [2, 2, 2, 2, 1, 1, 2] | ['Combination of unsupervised domain adaptation and transduction.', 'In real-world situations, large-scale unlabeled data of target domains is sometimes available.', 'In such cases, LMs can be trained on both the target domain unlabeled data and the test sets.', 'Here, we investigate the effectiveness of using both dat... | [None, None, None, None, None, ['U + T', 'U'], None] | 1 |
D19-1379table_5 | Standard benchmark results. Cells show the F1 scores on each test set. The CoNLL-2000 and CoNLL-2005/2012 datasets are used for syntactic chunking and SRL, respectively. Results of the transductive models (TRANS) marked with * are statistically significant compared to the baselines (BASE) using the permutation test (p ... | 2 | [['CoNLL', 'BASE'], ['CoNLL', 'TRANS'], ['CoNLL', 'Clark et al. (2018)'], ['CoNLL', 'Peters et al. (2017)'], ['CoNLL', 'Hashimoto et al. (2017)'], ['CoNLL', 'Wang et al. (2019)'], ['CoNLL', 'Li et al. (2019)'], ['CoNLL', 'Ouchi et al. (2018)'], ['CoNLL', 'He et al. (2018)']] | 1 | [['2000'], ['2005 WSJ'], ['2005 Brown'], ['2012']] | [[' 96.6', ' 87.7', ' 78.3', ' 86.2'], [' 96.7', ' 87.9*', ' 79.5*', ' 86.6*'], [' 97.0', '-', '-', '-'], [' 96.4', '-', '-', '-'], [' 95.8', '-', '-', '-'], ['-', ' 88.2', ' 79.3', ' 86.4'], ['-', ' 87.7', ' 80.5', ' 86.0'], ['-', ' 87.6', ' 78.7', ' 86.2'], ['-', ' 87.4', ' 80.4', ' 85.5']] | column | ['F1', 'F1', 'F1', 'F1'] | ['TRANS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>2000</th> <th>2005 WSJ</th> <th>2005 Brown</th> <th>2012</th> </tr> </thead> <tbody> <tr> <td>CoNLL || BASE</td> <td>96.6</td> <td>87.7</td> <td>78.3</td> <td>86.2</td... | Table 5 | table_5 | D19-1379 | 5 | emnlp2019 | Effects in standard benchmarks. Some studies indicated that when promising new techniques are only evaluated on very basic models, determining how much (if any) improvement will carry over to stronger models can be difficult (Denkowski and Neubig, 2017; Suzuki et al., 2018). Motivated by such studies, we provide th... 
| [0, 0, 0, 2, 2, 1, 1, 2, 1] | ['Effects in standard benchmarks.', 'Some studies indicated that when promising new techniques are only evaluated on very basic models, determining how much (if any) improvement will carry over to stronger models can be difficult (Denkowski and Neubig, 2017; Suzuki et al., 2018).', 'Motivated by such studies, we pro... | [None, None, None, None, None, None, ['BASE', 'TRANS'], None, ['TRANS', 'BASE']] | 1 |
D19-1380table_4 | Performance in text classification (20-NG, R-8) and sentiment (SST-5) tasks of various models as reported in (Kayal and Tsatsaronis, 2019), where DCT* refers to the implementation in (Kayal and Tsatsaronis, 2019). Our DCT embeddings are denoted as ck in the bottom row. Bold indicates the best result, and italic indicat... | 2 | [['Model', 'PCA'], ['Model', 'DCT*'], ['Model', 'Avg. vec.'], ['Model', 'p-means'], ['Model', 'ELMo'], ['Model', 'BERT'], ['Model', 'EigenSent'], ['Model', 'EigenSent⊕Avg'], ['Model', 'ck']] | 2 | [['20-NG', 'P'], ['20-NG', 'R'], ['20-NG', 'F1'], ['R-8', 'P'], ['R-8', 'R'], ['R-8', 'F1'], ['SST-5', 'P'], ['SST-5', 'R'], ['SST-5', 'F1']] | [['55.43', '54.67', '54.77', '83.83', '83.42', '83.41', '26.47', '25.08', '25.23'], ['61.07', '59.16', '59.78', '90.41', '90.78', '90.38', '30.11', '30.09', '29.53'], ['68.72', '68.19', '68.25', '96.34', '96.3', '96.27', '27.88', '26.44', '24.81'], ['72.2', '71.65', '71.79', '96.69', '96.67', '96.65', '33.77', '33.41',... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['ck'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>20-NG || P</th> <th>20-NG || R</th> <th>20-NG || F1</th> <th>R-8 || P</th> <th>R-8 || R</th> <th>R-8 || F1</th> <th>SST-5 || P</th> <th>SST-5 || R</th> <th>SST-5 || F1</th> ... | Table 4 | table_4 | D19-1380 | 5 | emnlp2019 | For fair comparison, we use the same sentiment and text classification datasets, the SST-5, 20 newsgroups (20-NG) and Reuters-8 (R-8), as those used in Kayal and Tsatsaronis (2019). We also evaluate using the same pre-trained word embedding, framework and approaches as described in their work. Table 4 shows the best re...
| [2, 2, 1, 1, 1, 1] | ['For fair comparison, we use the same sentiment and text classification datasets, the SST-5, 20 newsgroups (20-NG) and Reuters-8 (R-8), as those used in Kayal and Tsatsaronis (2019).', 'We also evaluate using the same pre-trained word embedding, framework and approaches as described in their work.', 'Table 4 shows the... | [None, None, ['ck'], ['DCT*', 'ck', '20-NG', 'R-8'], ['ck', 'p-means', 'ELMo', 'BERT', '20-NG', 'R-8'], ['ELMo', 'SST-5']] | 1 |
D19-1381table_1 | Event detection performance on the CG task 2013 test dataset. | 2 | [['Model', 'TEES'], ['Model', 'SBNN']] | 1 | [['P'], ['R'], ['F (%)']] | [['61.42', '52.93', '56.86'], ['63.67', '51.43', '56.9']] | column | ['P', 'R', 'F (%)'] | ['TEES'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F (%)</th> </tr> </thead> <tbody> <tr> <td>Model || TEES</td> <td>61.42</td> <td>52.93</td> <td>56.86</td> </tr> <tr> <td>Model || SBNN</td> ... | Table 1 | table_1 | D19-1381 | 5 | emnlp2019 | Table 1 shows the event detection performance of the models on the test set. Our model achieves performance comparable to the state-of-the-art TEES event detection module without the use of any syntactic and hand-engineered features, suggesting it can be applied to other domains with no need for feature engineering. We... | [1, 1, 1] | ['Table 1 shows the event detection performance of the models on the test set.', 'Our model achieves performance comparable to the state-of-the-art TEES event detection module without the use of any syntactic and hand-engineered features, suggesting it can be applied to other domains with no need for feature engineerin... | [None, ['TEES'], ['TEES']] | 1 |
D19-1381table_3 | Comparison on computational efficiency on the CG task 2013 development dataset. | 2 | [['Model', 'TEES'], ['Model', 'SBNN k = 8']] | 1 | [['Number of Classification'], ['Running Time (s)']] | [['6141', '155'], ['4093', '131']] | column | ['Number of Classification', 'Running Time (s)'] | ['SBNN k = 8', 'TEES'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Number of Classification</th> <th>Running Time (s)</th> </tr> </thead> <tbody> <tr> <td>Model || TEES</td> <td>6141</td> <td>155</td> </tr> <tr> <td>Model || SBNN k = 8</td> ... | Table 3 | table_3 | D19-1381 | 5 | emnlp2019 | Table 3 shows the number of classifications (or action scoring function calls in our model) performed by each model with the corresponding actual running time. SBNN performs fewer classifications and in less time than TEES, implying it is more computationally efficient. | [1, 1] | ['Table 3 shows the number of classifications (or action scoring function calls in our model) performed by each model with the corresponding actual running time.', 'SBNN performs fewer classifications and in less time than TEES, implying it is more computationally efficient.'] | [None, ['SBNN k = 8', 'TEES']] | 1 |
D19-1383table_4 | Results on CSPUBSUM | 2 | [['Model', 'SAF + F Ens (Collins et al., 2017)'], ['Model', 'BERT +Transformer'], ['Model', 'Our model'], ['Model', 'Our model + ABSTRACTROUGE']] | 1 | [['ROUGE-L']] | [['0.313'], ['0.287'], ['0.306'], ['0.314']] | column | ['ROUGE-L'] | ['Our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Model || SAF + F Ens (Collins et al., 2017)</td> <td>0.313</td> </tr> <tr> <td>Model || BERT +Transformer</td> <td>0.287</td> </tr> ... | Table 4 | table_4 | D19-1383 | 4 | emnlp2019 | Table 4 summarizes results on CSPUBSUM. Following Collins et al. (2017) we take the top 10 predicted sentences as the summary and use ROUGE-L scores for evaluation. It is clear that our approach outperforms BERT+TRANSFORMER. The BERT+TRANSFORMER+CRF baseline is not included here beca... | [1, 1, 1, 2, 1, 1, 2] | ['Table 4 summarizes results on CSPUBSUM.', 'Following Collins et al. (2017) we take the top 10 predicted sentences as the summary and use ROUGE-L scores for evaluation.', 'It is clear that our approach outperforms BERT+TRANSFORMER.', 'The BERT+TRANSFORMER+CRF baseline is not include... | [None, ['ROUGE-L'], ['Our model', 'BERT +Transformer'], None, ['Our model + ABSTRACTROUGE'], ['Our model + ABSTRACTROUGE', 'SAF + F Ens (Collins et al., 2017)'], None] | 1 |
D19-1387table_3 | ROUGE Recall results on NYT test set. Results for comparison systems are taken from the authors’ respective papers or obtained on our data by running publicly released software. Table cells are filled with — whenever results are not available. | 3 | [['Model', ' -', 'ORACLE'], ['Model', ' -', 'LEAD-3'], ['Model', 'Extractive', 'COMPRESS (Durrett et al. 2016)'], ['Model', 'Extractive', 'SUMO (Liu et al. 2019)'], ['Model', 'Extractive', 'TransformerEXT'], ['Model', 'Abstractive', 'PTGEN (See et al. 2017)'], ['Model', 'Abstractive', 'PTGEN + COV (See et al. 2017)'], ... | 1 | [['R1'], ['R2'], ['RL']] | [['49.18', '33.24', '46.02'], ['39.58', '20.11', '35.78'], ['42.2', '24.9', ' -'], ['42.3', '22.7', '38.6'], ['41.95', '22.68', '38.51'], ['42.47', '25.61', ' -'], ['43.71', '26.4', ' -'], ['42.94', '26.02', ' -'], ['35.75', '17.23', '31.41'], ['46.66', '26.35', '42.62'], ['48.92', '30.84', '45.41'], ['49.02', '31.02',... | column | ['R1', 'R2', 'RL'] | ['BERT-based'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>RL</th> </tr> </thead> <tbody> <tr> <td>Model || - || ORACLE</td> <td>49.18</td> <td>33.24</td> <td>46.02</td> </tr> <tr> <td>Model || - || ... | Table 3 | table_3 | D19-1387 | 7 | emnlp2019 | Table 3 presents results on the NYT dataset. Following the evaluation protocol in Durrett et al. (2016), we use limited-length ROUGE Recall, where predicted summaries are truncated to the length of the gold summaries. Again, we report the performance of the ORACLE upper bound and LEAD-3 baseline. The second block in th... | [1, 1, 1, 1, 1, 1, 1, 1, 1] | ['Table 3 presents results on the NYT dataset.', 'Following the evaluation protocol in Durrett et al. (2016), we use limited-length ROUGE Recall, where predicted summaries are truncated to the length of the gold summaries.', 'Again, we report the performance of the ORACLE upper bound and LEAD-3 baseline.', 'The second ... 
| [None, None, ['ORACLE', 'LEAD-3'], ['Extractive', 'TransformerEXT'], ['COMPRESS (Durrett et al. 2016)'], ['Abstractive', 'TransformerABS'], ['BERT-based'], ['BERT-based'], ['BERTSUMEXTABS', 'BERTSUMEXT', 'ORACLE']] | 1 |
D19-1388table_4 | Fluency and consistency comparison by human evaluation. | 1 | [['Uni-model'], ['Re3Sum'], ['PESG']] | 1 | [['Fluency'], ['Consistency']] | [['1.61', '1.53'], ['1.53', '1.14'], ['1.86*', '1.73*']] | column | ['Fluency', 'Consistency'] | ['PESG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Consistency</th> </tr> </thead> <tbody> <tr> <td>Uni-model</td> <td>1.61</td> <td>1.53</td> </tr> <tr> <td>Re3Sum</td> <td>1.53</td> <td>1.14</td> ... | Table 4 | table_4 | D19-1388 | 7 | emnlp2019 | For the human evaluation, we asked annotators to rate each summary according to its consistency and fluency. The rating score ranges from 1 to 3, with 3 being the best. Table 4 lists the average scores of each model, showing that PESG outperforms the other baseline models in both fluency and consistency. The kappa stat... | [2, 2, 1, 2, 2, 2] | ['For the human evaluation, we asked annotators to rate each summary according to its consistency and fluency.', 'The rating score ranges from 1 to 3, with 3 being the best.', 'Table 4 lists the average scores of each model, showing that PESG outperforms the other baseline models in both fluency and consistency.', 'The... | [None, None, ['PESG', 'Fluency', 'Consistency'], ['Fluency', 'Consistency'], None, None, ['Fluency', 'Consistency']] | 1 |
D19-1399table_6 | Performance on the CoNLL-2003 English dataset. | 2 | [['Model', 'Peters et al. (2018a) ELMo'], ['Model', 'BiLSTM-CRF + ELMo (L = 2)'], ['Model', 'DGLSTM-CRF + ELMo (L = 2)']] | 1 | [['Prec.'], ['Rec.'], ['F1']] | [['-', '-', '92.2'], ['92.1', '92.3', '92.2'], ['92.2', '92.5', '92.4']] | column | ['Prec.', 'Rec.', 'F1'] | ['DGLSTM-CRF + ELMo (L = 2)', 'BiLSTM-CRF + ELMo (L = 2)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Rec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Peters et al. (2018a) ELMo</td> <td>-</td> <td>-</td> <td>92.2</td> </tr> <tr> <td>Model... | Table 6 | table_6 | D19-1399 | 8 | emnlp2019 | 4.2 Additional Experiments CoNLL-2003 English Table 6 shows the performance on the CoNLL-2003 English dataset. The dependencies are predicted from Spacy (Honnibal and Montani, 2017). With the contextualized word representations, DGLSTM-CRF outperforms BiLSTM-CRF with 0.2 points in F1 (p < 0.09). The improvement is not ... | [1, 2, 1, 2, 2, 1] | ['4.2 Additional Experiments CoNLL-2003 English Table 6 shows the performance on the CoNLL-2003 English dataset.', 'The dependencies are predicted from Spacy (Honnibal and Montani, 2017).', 'With the contextualized word representations, DGLSTM-CRF outperforms BiLSTM-CRF with 0.2 points in F1 (p < 0.09).', 'The improvem... | [None, None, ['DGLSTM-CRF + ELMo (L = 2)', 'BiLSTM-CRF + ELMo (L = 2)'], None, None, ['DGLSTM-CRF + ELMo (L = 2)']] | 1 |
D19-1400table_4 | AUC performance for various representation methods. AVG refers to a simple unweighted average of word vectors, IDF refers to a document-frequency-based weighting according to equation 1. LANG refers to a weighting scheme that takes the language of origin into consideration, based on Equation 4. The best result per data... | 3 | [['Dataset', 'formality classification', 'Amazon Motors MT'], ['Dataset', 'formality classification', 'Amazon Motors NN'], ['Dataset', 'formality classification', 'Amazon Fashion MT'], ['Dataset', 'formality classification', 'Amazon Fashion NN'], ['Dataset', 'formality classification', 'New York Times MT'], ['Dataset',... | 1 | [['AVG'], ['IDF'], ['LANG']] | [['95.26', '95.47', '95.61'], ['76.93', '86.46', '91.76*'], ['90.31', '91.19', '91.67'], ['61.3', '75.83', '84.59*'], ['82.45', '82.27', '84.3'], ['70.64', '75.14', '81.02*'], ['87.88', '87.82', '91.16*'], ['80.1', '80.6', '88.60*'], ['78.63', '77.89', '79.29'], ['65.56', '68.8', '78.01*'], ['87.22', '88.21', '88.91'],... | column | ['AUC', 'AUC', 'AUC'] | ['LANG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AVG</th> <th>IDF</th> <th>LANG</th> </tr> </thead> <tbody> <tr> <td>Dataset || formality classification || Amazon Motors MT</td> <td>95.26</td> <td>95.47</td> <td>95.61</td> ... | Table 4 | table_4 | D19-1400 | 8 | emnlp2019 | Effect of Document Representation. Table 4 considers the effect of the document representation component of the transformation discussed in Section 3.2.3 across the tasks, datasets, and translation methods. The first result column shows the performance of a simple unweighted average of the word vectors. The second resu... 
| [2, 1, 2, 2, 2, 1, 2, 2, 2] | ['Effect of Document Representation.', 'Table 4 considers the effect of the document representation component of the transformation discussed in Section 3.2.3 across the tasks, datasets, and translation methods.', 'The first result column shows the performance of a simple unweighted average of the word vectors.', 'The ... | [None, None, ['AVG'], ['IDF'], ['LANG'], ['LANG', 'IDF', 'AVG'], None, None, None] | 1 |
D19-1402table_2 | Short Text Classification On-device Results & Comparisons to Prior Work | 2 | [['Model', 'ProSeqo (our on-device model)'], ['Model', 'SGNN(Ravi and Kozareva, 2018)(on-device)'], ['Model', 'RNN(Khanpour et al., 2016)'], ['Model', 'RNN+Attention(Ortega and Vu, 2017)'], ['Model', 'CNN(Lee and Dernoncourt, 2016)'], ['Model', 'GatedIntentAtten.(Goo et al., 2018)'], ['Model', 'GatedFullAtten.(Goo et a... | 1 | [['SWDA'], ['MRDA'], ['ATIS'], ['SNIPS']] | [['88.3', '90.1', '97.8', '97.9'], ['83.1', '86.7', '88.9', '93.4'], ['80.1', '86.8', '-', '-'], ['73.8', '84.3', '-', '-'], ['73.1', '84.6', '-', '-'], ['-', '-', '94.1', '96.8'], ['-', '-', '93.6', '97'], ['-', '-', '92.6', '96.9'], ['-', '-', '91.1', '96.7']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['ProSeqo (our on-device model)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SWDA</th> <th>MRDA</th> <th>ATIS</th> <th>SNIPS</th> </tr> </thead> <tbody> <tr> <td>Model || ProSeqo (our on-device model)</td> <td>88.3</td> <td>90.1</td> <td>97.8</td> ... | Table 2 | table_2 | D19-1402 | 6 | emnlp2019 | 4.1 STC: Comparison with On-Device Work. We compare our on-device model against state-of-the-art on-device short text classification approach SGNN (Ravi and Kozareva, 2018). SGNN was evaluated only on the SWDA and MRDA dialog act tasks (Ravi and Kozareva, 2018) and reached state-of-the-art performance against... | [2, 1, 2, 1, 1, 2, 1, 1, 1, 1] | ['4.1 STC: Comparison with On-Device Work.', 'We compare our on-device model against state-of-the-art on-device short text classification approach SGNN (Ravi and Kozareva, 2018).', 'SGNN was evaluated only on the SWDA and MRDA dialog act tasks (Ravi and Kozareva, 2018) and reached state-of-the-art performance...
| [None, ['ProSeqo (our on-device model)', 'SGNN(Ravi and Kozareva, 2018)(on-device)'], ['SGNN(Ravi and Kozareva, 2018)(on-device)', 'SWDA', 'MRDA'], ['SWDA', 'MRDA'], ['ProSeqo (our on-device model)', 'SWDA', 'MRDA', 'SGNN(Ravi and Kozareva, 2018)(on-device)'], ['SGNN(Ravi and Kozareva, 2018)(on-device)'], ['ATIS', 'SNI... | 1 |
D19-1402table_3 | Long Text Classification On-device Results & Comparisons to Prior Work | 2 | [['Model', 'ProSeqo (our on-device model)'], ['Model', 'SGNN (Ravi and Kozareva 2018)(on-device)'], ['Model', 'FastText-full (Joulin et al. 2016)'], ['Model', 'CharCNNLargeWithThesau. (Zhang et al. 2015)'], ['Model', 'CNN+NGM (Bui et al. 2018)'], ['Model', 'LSTM-full (Zhang et al. 2015)'], ['Model', 'Hier.-Attention (Y... | 1 | [['AG'], ['Y!A'], ['AMZN']] | [['91.5', '72.4', '62.3'], ['57.6', '36.5', '39.3'], ['92.5', '72.3', '60.2'], ['90.6', '71.2', '59.6'], ['86.9', '-', '-'], ['86.1', '70.8', '59.4'], ['-', '-', '63.6'], ['-', '-', '62.9']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['ProSeqo (our on-device model)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AG</th> <th>Y!A</th> <th>AMZN</th> </tr> </thead> <tbody> <tr> <td>Model || ProSeqo (our on-device model)</td> <td>91.5</td> <td>72.4</td> <td>62.3</td> </tr> <tr> <t... | Table 3 | table_3 | D19-1402 | 7 | emnlp2019 | 5 LDC: Long Document Classification Results This section focuses on long document classification results. Table 3 shows the results on three tasks and datasets (AG, Y!A, AMZN). Overall, ProSeqo significantly improved upon the ondevice neural model SGNN (Ravi and Kozareva, 2018) with +23% to +35.9% accuracy. ProSeqo als... | [2, 1, 1, 1] | ['5 LDC: Long Document Classification Results This section focuses on long document classification results.', 'Table 3 shows the results on three tasks and datasets (AG, Y!A, AMZN).', 'Overall, ProSeqo significantly improved upon the ondevice neural model SGNN (Ravi and Kozareva, 2018) with +23% to +35.9% accuracy.', '... | [None, ['AG', 'Y!A', 'AMZN'], ['ProSeqo (our on-device model)', 'SGNN (Ravi and Kozareva 2018)(on-device)'], ['ProSeqo (our on-device model)', 'LSTM-full (Zhang et al. 2015)', 'CharCNNLargeWithThesau. (Zhang et al. 2015)']] | 1 |
D19-1418table_3 | Results on the VQA-CP v2.0 test set. | 2 | [['Debiasing Method', 'None'], ['Debiasing Method', 'Reweight'], ['Debiasing Method', 'Bias Product'], ['Debiasing Method', 'Learned-Mixin'], ['Debiasing Method', 'Learned-Mixin +H'], ['Debiasing Method', 'Ramakrishnan et al. (2018)'], ['Debiasing Method', 'Grand and Belinkov (2019)']] | 1 | [['Acc.']] | [['39.18'], ['40.06'], ['39.93'], ['48.69'], ['52.05'], ['41.17'], ['42.33']] | column | ['Acc.'] | ['Learned-Mixin', 'Learned-Mixin +H'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> </tr> </thead> <tbody> <tr> <td>Debiasing Method || None</td> <td>39.18</td> </tr> <tr> <td>Debiasing Method || Reweight</td> <td>40.06</td> </tr> <tr> <td>Debi... | Table 3 | table_3 | D19-1418 | 7 | emnlp2019 | Table 3 shows the results. The learned-mixin method was highly effective, boosting performance on VQA-CP by about 9 points, and the entropy regularizer can increase this by another 3 points, significantly surpassing prior work. For the learned-mixin ensemble, we find g(x_i) is strongly correlated with the bias’s expect... | [1, 1, 2, 2] | ['Table 3 shows the results.', 'The learned-mixin method was highly effective, boosting performance on VQA-CP by about 9 points, and the entropy regularizer can increase this by another 3 points, significantly surpassing prior work.', 'For the learned-mixin ensemble, we find g(x_i) is strongly correlated with the bias’... | [None, ['Learned-Mixin', 'None'], ['Learned-Mixin'], None] | 1 |
D19-1420table_5 | Subjective evaluations on the task of controlling the unselected rationale words. Acc denotes the accuracy in guessing sentiment labels. Accw/o UNK denotes the sentiment accuracy for these samples that are not selected as “UNK” for the secondary task. † denotes p-value < 0.005 in t-test. A desired rationalization metho... | 2 | [['Model', 'Lei2016'], ['Model', 'Intros+minimax']] | 1 | [['%UNK'], ['Acc'], ['Acc w/o UNK']] | [['43.5', '63.5', '69'], ['54.0*', '58', '66.3']] | column | ['%UNK', 'Acc', 'Acc w/o UNK'] | ['Intros+minimax'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>%UNK</th> <th>Acc</th> <th>Acc w/o UNK</th> </tr> </thead> <tbody> <tr> <td>Model || Lei2016</td> <td>43.5</td> <td>63.5</td> <td>69</td> </tr> <tr> <td>Model || Intr... | Table 5 | table_5 | D19-1420 | 8 | emnlp2019 | Table 5 shows the performance of subjective evaluations. Looking at the first column of the table, our model is better in confusing human, which gives a higher rate in selecting "UNK". It confirms that the three-player introspective model selects more comprehensive rationales and leave less informative texts unattended... | [1, 1, 2, 1] | ['Table 5 shows the performance of subjective evaluations.', 'Looking at the first column of the table, our model is better in confusing human, which gives a higher rate in selecting "UNK".', 'It confirms that the three-player introspective model selects more comprehensive rationales and leave less informative texts un... | [None, ['Intros+minimax', '%UNK'], ['Intros+minimax'], ['Intros+minimax', 'Acc']] | 1 |
D19-1422table_6 | Main results on WSJ. | 2 | [['Model', 'Plank et al. (2016)'], ['Model', 'Huang et al. (2015)'], ['Model', 'Ma and Hovy (2016)'], ['Model', 'Liu et al. (2017)'], ['Model', 'Yang et al. (2018)'], ['Model', 'Zhang et al. (2018c)'], ['Model', 'Yasunaga et al. (2018)'], ['Model', 'Xin et al. (2018)'], ['Model', 'Transformer-softmax (Guo et al., 2019)... | 1 | [['Accuracy']] | [['97.22'], ['97.55'], ['97.55'], ['97.53'], ['97.51'], ['97.55'], ['97.58'], ['97.58'], ['97.04'], ['97.51'], ['97.51'], ['97.65']] | column | ['Accuracy'] | ['BiLSTM-LAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || Plank et al. (2016)</td> <td>97.22</td> </tr> <tr> <td>Model || Huang et al. (2015)</td> <td>97.55</td> </tr> <tr> ... | Table 6 | table_6 | D19-1422 | 6 | emnlp2019 | Table 6 compares our model with topperforming methods reported in the literature. In particular, Huang et al. (2015) use BiLSTM-CRF. Ma and Hovy (2016), Liu et al. (2017) and Yang et al. (2018) explore character level representations on BiLSTM-CRF. Zhang et al. (2018c) use S-LSTM-CRF, a graph recurrent network encoder.... | [1, 2, 2, 2, 1, 2, 1] | ['Table 6 compares our model with topperforming methods reported in the literature.', 'In particular, Huang et al. (2015) use BiLSTM-CRF.', 'Ma and Hovy (2016), Liu et al. (2017) and Yang et al. (2018) explore character level representations on BiLSTM-CRF.', 'Zhang et al. (2018c) use S-LSTM-CRF, a graph recurrent netwo... | [None, ['Huang et al. (2015)'], ['Ma and Hovy (2016)', 'Liu et al. (2017)', 'Yang et al. (2018)'], ['Zhang et al. (2018c)'], ['Yasunaga et al. (2018)'], ['Xin et al. (2018)'], ['BiLSTM-LAN']] | 1 |
D19-1426table_1 | Human teacher evaluation for learned and random question asking policy. | 1 | [['LiD'], ['Random']] | 1 | [['Avg. Reward'], ['Natural'], ['Avg. Rew (simulated)']] | [['0.524', '3.2', '0.607'], ['0.493', '2.9', '0.551']] | column | ['Avg. Reward', 'Natural', 'Avg. Rew (simulated)'] | ['LiD'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Avg. Reward</th> <th>Natural</th> <th>Avg. Rew (simulated)</th> </tr> </thead> <tbody> <tr> <td>LiD</td> <td>0.524</td> <td>3.2</td> <td>0.607</td> </tr> <tr> <td>Ran... | Table 1 | table_1 | D19-1426 | 9 | emnlp2019 | 20 users were asked to interact with the learned LiD policy to teach a chosen email-classification task. For each task, the system asked a sequence of 10 questions, and the human teacher's responses were incorporated into the system to update the classification model. The users were also asked to teach another task wit... | [2, 2, 2, 1, 1, 1, 2, 1] | ['20 users were asked to interact with the learned LiD policy to teach a chosen email-classification task.', "For each task, the system asked a sequence of 10 questions, and the human teacher's responses were incorporated into the system to update the classification model.", 'The users were also asked to teach another ... | [None, None, None, ['LiD', 'Random', 'Avg. Reward'], ['LiD', 'Avg. Reward'], ['Avg. Reward'], None, ['LiD', 'Natural', 'Random']] | 1 |
D19-1429table_4 | Results of baselines and fine-grained knowledge fusion methods on CWS. | 2 | [['Methods', 'Source only'], ['Methods', 'Target only'], ['Methods', 'BasicKD'], ['Methods', 'sampDomain-q a samp'], ['Methods', 'elemDomain-q a elem'], ['Methods', 'multiDomain-q a multi'], ['Methods', 'Sample-q a samp'], ['Methods', 'elemSample-q a elem'], ['Methods', 'multiSample-q a multi'], ['Methods', 'FGKF']] | 2 | [['Zhuxian', 'F'], ['Zhuxian', 'ROOV'], ['5% Weibo', 'F'], ['5% Weibo', 'ROOV']] | [['83.86', '62.4', '83.75', '70.74'], ['92.8', '65.81', '84.01', '64.12'], ['94.23', '74.08', '89.21', '76.26'], ['94.55', '74.02', '89.63', '75.93'], ['94.81', '74.75', '89.99', '77.59'], ['94.75', '74.96', '90.06', '77.25'], ['94.57', '74.47', '89.77', '76.81'], ['94.78', '74.52', '90.07', ''], ['94.91', '75.56', '90... | column | ['F', 'ROOV', 'F', 'ROOV'] | ['FGKF'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Zhuxian || F</th> <th>Zhuxian || ROOV</th> <th>5% Weibo || F</th> <th>5% Weibo || ROOV</th> </tr> </thead> <tbody> <tr> <td>Methods || Source only</td> <td>83.86</td> <td>62.4</... | Table 4 | table_4 | D19-1429 | 6 | emnlp2019 | The results in Table 4 show that both the basicKD method and fine-grained methods achieve performance improvements through domain adaptation. Compared with the basicKD method, FGKF behaves better (+1.1% F and +2.8% Roov v.s. takes multilevel relevance discrepancies into account. The sample-q method performs better than... | [1, 1, 1] | ['The results in Table 4 show that both the basicKD method and fine-grained methods achieve performance improvements through domain adaptation.', 'Compared with the basicKD method, FGKF behaves better (+1.1% F and +2.8% Roov v.s. takes multilevel relevance discrepancies into account.', 'The sample-q method performs bet... 
| [['BasicKD', 'FGKF'], ['BasicKD', 'FGKF'], ['sampDomain-q a samp', 'elemDomain-q a elem', 'multiDomain-q a multi', 'Sample-q a samp', 'elemSample-q a elem', 'multiSample-q a multi']] | 1 |
D19-1431table_5 | Results of ablation study on Hits@10 of 1-shot link prediction in NELL-One. | 2 | [['Ablation Conf.', 'standard'], ['Ablation Conf.', ' -g'], ['Ablation Conf.', ' -g -r']] | 1 | [['BG:Pre-Train'], ['BG:In-Train']] | [['0.331', '0.401'], ['0.234', '0.341'], ['0.052', '0.052']] | column | ['Hits@10', 'Hits@10'] | [' -g -r'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BG:Pre-Train</th> <th>BG:In-Train</th> </tr> </thead> <tbody> <tr> <td>Ablation Conf. || standard</td> <td>0.331</td> <td>0.401</td> </tr> <tr> <td>Ablation Conf. || -g</td> ... | Table 5 | table_5 | D19-1431 | 7 | emnlp2019 | Table 5 shows that removing gradient meta decreases 29.3% and 15% on two dataset settings, and further removing relation meta continuous decreases the performance with 55% and 72% compared to the standard results. Thus both relation meta and gradient meta contribute significantly and relation meta contributes more than... | [1, 1, 1, 1] | ['Table 5 shows that removing gradient meta decreases 29.3% and 15% on two dataset settings, and further removing relation meta continuous decreases the performance with 55% and 72% compared to the standard results.', 'Thus both relation meta and gradient meta contribute significantly and relation meta contributes more... | [[' -g', ' -g -r'], [' -g -r'], [' -g -r'], None] | 1 |
D19-1437table_1 | BLEU scores on three MT benchmark datasets for FlowSeq with argmax decoding and baselines with purely non-autoregressive decoding method. The first and second block are results of models trained w/w.o. knowledge distillation, respectively. | 3 | [['Models', 'Raw Data', 'CMLM-base'], ['Models', 'Raw Data', 'LV NAR'], ['Models', 'Raw Data', 'FlowSeq-base'], ['Models', 'Raw Data', 'FlowSeq-large'], ['Models', 'Knowledge Distillation', 'NAT-IR'], ['Models', 'Knowledge Distillation', 'CTC Loss'], ['Models', 'Knowledge Distillation', 'NAT w/ FT'], ['Models', 'Knowle... | 2 | [['WMT2014', 'EN-DE'], ['WMT2015', 'DE-EN'], ['WMT2016', 'EN-RO'], ['WMT2017', 'RO-EN'], ['IWSLT2014', 'DE-EN']] | [['10.88', '-', '20.24', '-', '-'], ['11.8', '-', '-', '-', '-'], ['18.55', '23.36', '29.26', '30.16', '24.75'], ['20.85', '25.4', '29.86', '30.69', '-'], ['13.91', '16.77', '24.45', '25.73', '21.86'], ['17.68', '19.8', '19.93', '24.71', '-'], ['17.69', '21.47', '27.29', '29.06', '20.32'], ['20.65', '24.77', '-', '-', ... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['FlowSeq-base', 'FlowSeq-large'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WMT2014 || EN-DE</th> <th>WMT2015 || DE-EN</th> <th>WMT2016 || EN-RO</th> <th>WMT2017 || RO-EN</th> <th>IWSLT2014 || DE-EN</th> </tr> </thead> <tbody> <tr> <td>Models || Raw Data || ... | Table 1 | table_1 | D19-1437 | 7 | emnlp2019 | Table 1 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines with purely non-autoregressive decoding methods that generate output sequence in one parallel pass. The first block lists results of models trained on raw data, while the second block are results using knowledge distillation. With... 
| [1, 2, 1, 1] | ['Table 1 provides the BLEU scores of FlowSeq with argmax decoding, together with baselines with purely non-autoregressive decoding methods that generate output sequence in one parallel pass.', 'The first block lists results of models trained on raw data, while the second block are results using knowledge distillation.... | [['FlowSeq-base', 'FlowSeq-large'], None, ['FlowSeq-base', 'CMLM-base', 'LV NAR'], ['FlowSeq-base', 'FlowSeq-large']] | 1 |
D19-1457table_1 | Results on the dependency task (test set). | 2 | [['Model', 'ProLocal'], ['Model', 'QRN'], ['Model', 'EntNet'], ['Model', 'ProStruct'], ['Model', 'ProGlobal'], ['Model', 'XPAD']] | 1 | [['P'], ['R'], ['F1']] | [['24.7', '18', '20.8'], ['32.6', '30.3', '31.4'], ['32.8', '38.6', '35.5'], ['76.3', '21.3', '33.4'], ['43.4', '37', '39.9'], ['62', '32.9', '43']] | column | ['P', 'R', 'F1'] | ['XPAD'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || ProLocal</td> <td>24.7</td> <td>18</td> <td>20.8</td> </tr> <tr> <td>Model || QRN</td> <td... | Table 1 | table_1 | D19-1457 | 7 | emnlp2019 | Table 1 reports results of all models on the new dependency task. XPAD significantly outperforms the strongest baselines, ProGlobal and ProStruct, by more than 3 points F1. XPAD has much higher precision than ProGlobal with similar recall, suggesting that XPAD dependency-aware decoder helps it select more accurate depe... | [1, 1, 1, 1, 2] | ['Table 1 reports results of all models on the new dependency task.', 'XPAD significantly outperforms the strongest baselines, ProGlobal and ProStruct, by more than 3 points F1.', 'XPAD has much higher precision than ProGlobal with similar recall, suggesting that XPAD dependency-aware decoder helps it select more accur... | [None, ['XPAD', 'ProGlobal', 'ProStruct'], None, ['XPAD', 'ProStruct'], ['XPAD']] | 1 |
D19-1461table_2 | Comparison between our models based on fastText and BERT with the BiLSTM used by (Khatri et al., 2018) on Wikipedia Toxic Comments. | 1 | [['fastText'], ['BERT-based'], ['(Khatri et al. 2018)']] | 1 | [['OFFENSIVE F1'], ['Weighted F1']] | [['71.40%', '94.80%'], ['83.40%', '96.70%'], ['-', '95.40%']] | column | ['OFFENSIVE F1', 'Weighted F1'] | ['fastText', 'BERT-based'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OFFENSIVE F1</th> <th>Weighted F1</th> </tr> </thead> <tbody> <tr> <td>fastText</td> <td>71.40%</td> <td>94.80%</td> </tr> <tr> <td>BERT-based</td> <td>83.40%</td> <t... | Table 2 | table_2 | D19-1461 | 4 | emnlp2019 | Experiments . We compare the two afore mentioned models with (Khatri et al., 2018) who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors (Pennington et al., 2014). Results are listed in Table 2 and we compare them using the weighted-F1, i.e. the sum of F1 score of each class weighted by thei... | [2, 1, 1, 1, 2, 2, 1, 2] | ['Experiments .', 'We compare the two afore mentioned models with (Khatri et al., 2018) who conducted their experiments with a BiLSTM with GloVe pre-trained word vectors (Pennington et al., 2014).', 'Results are listed in Table 2 and we compare them using the weighted-F1, i.e. the sum of F1 score of each class weighted... | [None, ['fastText', 'BERT-based', '(Khatri et al. 2018)'], ['Weighted F1'], ['OFFENSIVE F1'], ['OFFENSIVE F1'], ['Weighted F1', 'OFFENSIVE F1'], ['BERT-based', '(Khatri et al. 2018)'], None] | 1 |
D19-1463table_3 | Results of Per-response accuracy and Per-dialog accuracy (in brackets) on bAbI dialogues. Per-dialog accuracy presents the accuracy of complete dialogues. | 2 | [['Task', 'T3'], ['Task', 'T4'], ['Task', 'T5'], ['Task', 'T3-OOV'], ['Task', 'T4-OOV'], ['Task', 'T5-OOV']] | 2 | [['SEQ2SEQ', 'Per-response accuracy'], ['SEQ2SEQ', 'Per-dialog accuracy'], ['SEQ2SEQ+Attn.', 'Per-response accuracy'], ['SEQ2SEQ+Attn.', 'Per-dialog accuracy'], ['Mem2Seq', 'Per-response accuracy'], ['Mem2Seq', 'Per-dialog accuracy'], ['HMNs-CFO', 'Per-response accuracy'], ['HMNs-CFO', 'Per-dialog accuracy'], ['HMN', '... | [['74.8', '0', '74.8', '0', '83.9', '15.6', '93.7', '55.9', '93.6', '56.1'], ['56.5', '0', '56.5', '0', '97', '90.5', '96.8', '89.3', '100', '100'], ['98.9', '82.9', '98.6', '83', '96.2', '46.4', '97.1', '58.2', '98', '69'], ['74.9', '0', '74', '0', '83.6', '18.1', '92.3', '45.2', '92.5', '48.2'], ['56.5', '0', '57', '... | column | ['Per-response accuracy', 'Per-dialog accuracy', 'Per-response accuracy', 'Per-dialog accuracy', 'Per-response accuracy', 'Per-dialog accuracy', 'Per-response accuracy', 'Per-dialog accuracy', 'Per-response accuracy', 'Per-dialog accuracy'] | ['HMNs-CFO'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SEQ2SEQ || Per-response accuracy</th> <th>SEQ2SEQ || Per-dialog accuracy</th> <th>SEQ2SEQ+Attn. || Per-response accuracy</th> <th>SEQ2SEQ+Attn. || Per-dialog accuracy</th> <th>Mem2Seq || Per-resp... | Table 3 | table_3 | D19-1463 | 6 | emnlp2019 | Table 3 shows results of models on bAbI tasks. HMNs and Mem2Seq adopt one hop attention only and note that all results are the best performance of each model in 100 epochs. HMNs achieved the best results on most tasks except T5. HMNs-CFO also outperforms the other models. This demonstrates that both training multiple d... 
| [1, 1, 1, 1, 2, 1, 2, 2] | ['Table 3 shows results of models on bAbI tasks.', 'HMNs and Mem2Seq adopt one hop attention only and note that all results are the best performance of each model in 100 epochs.', 'HMNs achieved the best results on most tasks except T5.', 'HMNs-CFO also outperforms the other models.', 'This demonstrates that both train... | [None, ['HMNs-CFO', 'Mem2Seq'], ['HMNs-CFO', 'Task', 'T5'], ['HMNs-CFO'], None, ['Per-dialog accuracy', 'T3-OOV', 'T4-OOV', 'T5-OOV'], ['HMNs-CFO'], ['HMNs-CFO']] | 1 |
D19-1463table_4 | The results on the DSTC 2 | 2 | [['Model name', 'SEQ2SEQ'], ['Model name', 'SEQ2SEQ+Attn.'], ['Model name', 'SEQ2SEQ+Copy'], ['Model name', 'Mem2Seq'], ['Model name', 'Our model']] | 1 | [['F1'], ['BLEU']] | [['69.7', '55'], ['67.1', '56.6'], ['71.6', '55.4'], ['75.3', '55.3'], ['77.7', '56.4']] | column | ['F1', 'BLEU'] | ['Our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model name || SEQ2SEQ</td> <td>69.7</td> <td>55</td> </tr> <tr> <td>Model name || SEQ2SEQ+Attn.</td> <td>67.1</td> ... | Table 4 | table_4 | D19-1463 | 6 | emnlp2019 | Table 4 shows our model gets the best F1 score on dataset DSTC 2, while SEQ2SEQ with attention gets the best BLEU result. | [1] | ['Table 4 shows our model gets the best F1 score on dataset DSTC 2, while SEQ2SEQ with attention gets the best BLEU result.'] | [['Our model', 'F1', 'BLEU']] | 1 |
D19-1467table_2 | Results of the ALSC task in single-task settings in terms of accuracy (%) and Macro-F1 (%). | 2 | [['Model', 'LSTM'], ['Model', 'AT-LSTM'], ['Model', 'ATAE-LSTM'], ['Model', 'GCAE'], ['Model', 'AT-CAN-Rs'], ['Model', 'AT-CAN-Ro'], ['Model', 'ATAE-CAN-Rs'], ['Model', 'ATAE-CAN-Ro']] | 3 | [['Rest14', '3-way', 'Acc'], ['Rest15', '3-way', 'F1'], ['Rest16', 'Binary', 'Acc'], ['Rest17', 'Binary', 'F1'], ['Rest15', '3-way', 'Acc'], ['Rest16', '3-way', 'F1'], ['Rest17', 'Binary', 'Acc'], ['Rest18', 'Binary', 'F1']] | [['80.92', '68.3', '85.83', '80.88', '71.24', '49.4', '71.97', '69.97'], ['81.24', '69.19', '87.25', '82.2', '73.37', '51.74', '76.79', '74.61'], ['82.18', '69.18', '88.08', '83.03', '74.56', '51.4', '79.79', '78.69'], ['82.08', '70.2', '87.72', '83.84', '76.69', '53', '79.66', '77.96'], ['82.28', '70.94', '88.43', '84... | column | ['Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1'] | ['AT-CAN-Rs', 'AT-CAN-Ro', 'ATAE-CAN-Rs', 'ATAE-CAN-Ro'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rest14 || 3-way || Acc</th> <th>Rest15 || 3-way || F1</th> <th>Rest16 || Binary || Acc</th> <th>Rest17 || Binary || F1</th> <th>Rest15 || 3-way || Acc</th> <th>Rest16 || 3-way || F1</th> ... | Table 2 | table_2 | D19-1467 | 6 | emnlp2019 | Single-task Settings . Table 2 shows our experimental results of ALSC in single-task settings. Firstly, we observe that by introducing attention regularizations (either Rs or Ro), most of our proposed methods outperform their counterparts. Particularly, AT-CAN-Rs and AT-CAN-Ro outperform AT-LSTM in all results; ATAE-CA... | [2, 1, 1, 1, 1, 1, 2, 1, 1] | ['Single-task Settings .', 'Table 2 shows our experimental results of ALSC in single-task settings.', 'Firstly, we observe that by introducing attention regularizations (either Rs or Ro), most of our proposed methods outperform their counterparts.', 'Particularly, AT-CAN-Rs and AT-CAN-Ro outperform AT-LSTM in all resul... 
| [None, None, ['AT-CAN-Rs', 'AT-CAN-Ro', 'ATAE-CAN-Rs', 'ATAE-CAN-Ro'], ['AT-CAN-Rs', 'AT-CAN-Ro', 'AT-LSTM', 'ATAE-CAN-Rs', 'ATAE-CAN-Ro', 'ATAE-LSTM'], ['Rest15', 'ATAE-CAN-Ro', 'ATAE-LSTM', 'Acc', 'F1'], ['ATAE-CAN-Rs', 'ATAE-CAN-Ro'], ['ATAE-CAN-Ro'], ['ATAE-CAN-Ro', 'GCAE'], ['LSTM']] | 1 |
D19-1467table_3 | Results of the ALSC task in multi-task settings in terms of accuracy (%) and Macro-F1 (%). | 2 | [['Model', 'M-AT-LSTM'], ['Model', 'M-CAN-Rs'], ['Model', 'M-CAN-Ro'], ['Model', 'M-CAN-2Rs'], ['Model', 'M-CAN-2Ro']] | 3 | [['Rest14', '3-way', 'Acc'], ['Rest15', '3-way', 'F1'], ['Rest16', 'Binary', 'Acc'], ['Rest17', 'Binary', 'F1'], ['Rest15', '3-way', 'Acc'], ['Rest16', '3-way', 'F1'], ['Rest17', 'Binary', 'Acc'], ['Rest18', 'Binary', 'F1']] | [['82.6', '71.44', '88.55', '83.76', '76.33', '51.64', '79.53', '78.31'], ['83.65', '73.97', '89.26', '85.43', '75.74', '52.43', '79.66', '78.46'], ['83.12', '72.29', '89.61', '85.18', '77.04', '52.69', '79.4', '77.88'], ['83.23', '72.81', '89.37', '85.42', '78.22', '55.8', '80.44', '80.01'], ['84.28', '74.45', '89.96'... | column | ['Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1'] | ['M-AT-LSTM', 'M-CAN-Rs', 'M-CAN-Ro', 'M-CAN-2Rs', 'M-CAN-2Ro'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rest14 || 3-way || Acc</th> <th>Rest15 || 3-way || F1</th> <th>Rest16 || Binary || Acc</th> <th>Rest17 || Binary || F1</th> <th>Rest15 || 3-way || Acc</th> <th>Rest16 || 3-way || F1</th> ... | Table 3 | table_3 | D19-1467 | 7 | emnlp2019 | Multi-task Settings Table 3 shows experimental results of ALSC in multi-task settings. We first observe that the overall results in multi-task settings outperform the ones in single-task settings, which demonstrates the effectiveness of multi-task learning by introducing the auxiliary ACD task to help the ALSC task. Se... | [1, 2, 1, 1] | ['Multi-task Settings Table 3 shows experimental results of ALSC in multi-task settings.', 'We first observe that the overall results in multi-task settings outperform the ones in single-task settings, which demonstrates the effectiveness of multi-task learning by introducing the auxiliary ACD task to help the ALSC tas... 
| [None, None, None, ['Binary', 'Rest15', 'M-AT-LSTM', 'Acc', 'F1', 'M-CAN-2Ro']] | 1 |
D19-1467table_4 | Results of the ACD task. Rest14 has 5 aspect categories while Rest15 has 13 ones. | 2 | [['Model', 'M-AT-LSTM'], ['Model', 'M-CAN-2Rs'], ['Model', 'M-CAN-2Ro']] | 2 | [['Rest14', 'Precision'], ['Rest14', 'Recall'], ['Rest14', 'F1'], ['Rest15', 'Precision'], ['Rest15', 'Recall'], ['Rest15', 'F1']] | [['0.8626', '0.8553', '0.8589', '0.6691', '0.4748', '0.5555'], ['0.8698', '0.8595', '0.8645', '0.6244', '0.5019', '0.5565'], ['0.8907', '0.8627', '0.8765', '0.7127', '0.4865', '0.5782']] | column | ['Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1'] | ['M-CAN-2Rs', 'M-CAN-2Ro'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rest14 || Precision</th> <th>Rest14 || Recall</th> <th>Rest14 || F1</th> <th>Rest15 || Precision</th> <th>Rest15 || Recall</th> <th>Rest15 || F1</th> </tr> </thead> <tbody> <tr> ... | Table 4 | table_4 | D19-1467 | 7 | emnlp2019 | Table 4 shows the results of the ACD task in multi-task settings. Our proposed regularization terms can also improve the performance of ACD. Regularization Ro achieves the best performance in almost all metrics. | [1, 1, 1] | ['Table 4 shows the results of the ACD task in multi-task settings.', 'Our proposed regularization terms can also improve the performance of ACD.', 'Regularization Ro achieves the best performance in almost all metrics.'] | [None, ['M-CAN-2Rs', 'M-CAN-2Ro'], ['M-CAN-2Ro']] | 1 |
D19-1470table_3 | Results of our model variants on development set. The best MAP results are in bold. “Train Time”: training time per epoch divided by that of model GCN (With BiLSTM). | 2 | [['Models', 'BiLSTM'], ['Models', 'GLSTM'], ['Models', 'GCN (W/O BiLSTM)'], ['Models', 'GCN (With BiLSTM)']] | 2 | [['Train Time', '-'], ['MAP', 'Twitter'], ['MAP', 'Reddit']] | [['0.94', '0.617', '0.498'], ['1.25', '0.617', '0.528'], ['1.03', '0.619', '0.53'], ['1', '0.62', '0.533']] | column | ['Train Time', 'MAP', 'MAP'] | ['GCN (With BiLSTM)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train Time || -</th> <th>MAP || Twitter</th> <th>MAP || Reddit</th> </tr> </thead> <tbody> <tr> <td>Models || BiLSTM</td> <td>0.94</td> <td>0.617</td> <td>0.498</td> </tr> ... | Table 3 | table_3 | D19-1470 | 7 | emnlp2019 | We first compare the effects of varying interaction modeling methods (see Section 3.3) on conversation recommendation. Table 3 displays their results on development set. In comparison, we consider BiLSTM over turn sequence (only chronological order encoded and henceforth BiLSTM), GLSTM (state number g = 6), GCN (layer ... | [2, 1, 1, 2, 1, 2, 1, 1, 2] | ['We first compare the effects of varying interaction modeling methods (see Section 3.3) on conversation recommendation.', 'Table 3 displays their results on development set.', 'In comparison, we consider BiLSTM over turn sequence (only chronological order encoded and henceforth BiLSTM), GLSTM (state number g = 6), GCN... | [None, None, ['BiLSTM', 'GLSTM', 'GCN (W/O BiLSTM)', 'GCN (With BiLSTM)'], None, ['BiLSTM', 'MAP'], ['Reddit'], ['GCN (With BiLSTM)', 'MAP', 'Train Time'], ['GCN (With BiLSTM)'], ['GCN (With BiLSTM)']] | 1 |
D19-1470table_4 | Main results on conversation recommendation. “nDCG” stands for “nDCG@5”. The best result for each column is in bold. Our model significantly outperforms all the comparisons (p < 0.01, paired ttest). | 3 | [['Models', 'Baselines', 'RANDOM'], ['Models', 'Baselines', 'POPULARITY'], ['Models', 'Comparisons', 'RSVM'], ['Models', 'Comparisons', 'NCF'], ['Models', 'Comparisons', 'CONVMF'], ['Models', 'Comparisons', 'CR_JTD'], ['Models', 'Metrics', 'OURS']] | 2 | [['Twitter', 'MAP'], ['Twitter', 'P@1'], ['Twitter', 'nDCG'], ['Reddit', 'MAP'], ['Reddit', 'P@1'], ['Reddit', 'nDCG']] | [['0.006', '0.001', '0.002', '0.04', '0.01', '0.022'], ['0.023', '0.005', '0.01', '0.082', '0.033', '0.063'], ['0.554', '0.575', '0.559', '0.453', '0.457', '0.466'], ['0.573', '0.593', '0.576', '0.412', '0.544', '0.461'], ['0.579', '0.596', '0.583', '0.485', '0.532', '0.52'], ['0.591', '0.591', '0.6', '0.453', '0.559',... | column | ['MAP', 'P@1', 'nDCG', 'MAP', 'P@1', 'nDCG'] | ['OURS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter || MAP</th> <th>Twitter || P@1</th> <th>Twitter || nDCG</th> <th>Reddit || MAP</th> <th>Reddit || P@1</th> <th>Reddit || nDCG</th> </tr> </thead> <tbody> <tr> <td>Models... | Table 4 | table_4 | D19-1470 | 8 | emnlp2019 | 5.2 Comparisons with Previous Work . Main Results. Table 4 shows the conversation recommendation results with baselines and state of the arts. Our model exhibits the best results on both datasets, significantly outperforming all the comparison models. It indicates the usefulness to encode user interactions for conversa... | [2, 2, 1, 1, 1, 1, 1] | ['5.2 Comparisons with Previous Work .', 'Main Results.', 'Table 4 shows the conversation recommendation results with baselines and state of the arts.', 'Our model exhibits the best results on both datasets, significantly outperforming all the comparison models.', 'It indicates the usefulness to encode user interaction... 
| [None, None, None, ['OURS'], None, ['CONVMF'], ['OURS']] | 1 |
D19-1485table_2 | Results of rumor stance classification. FS, FD, FQ and FC denote the F1 scores of supporting, denying, querying and commenting classes respectively. “–” indicates that the original paper does not report the metric. | 2 | [['Method', 'Affective Feature + SVM (Pamungkas et al., 2018)'], ['Method', 'BranchLSTM (Kochkina et al., 2017)'], ['Method', 'TemporalAttention (Veyseh et al., 2017)'], ['Method', 'Conversational-GCN (Ours, L = 2)']] | 2 | [['Evaluation Metric', 'Macro-F1'], ['Evaluation Metric', 'FS'], ['Evaluation Metric', 'FD'], ['Evaluation Metric', 'FQ'], ['Evaluation Metric', 'FC'], ['Evaluation Metric', 'Acc.']] | [['0.47', '0.41', '0', '0.58', '0.88', '0.795'], ['0.434', '0.403', '0', '0.462', '0.873', '0.784'], ['0.482', '-', '-', '-', '-', '0.82'], ['0.499', '0.311', '0.194', '0.646', '0.847', '0.751']] | column | ['Macro-F1', 'FS', 'FD', 'FQ', 'FC', 'Acc.'] | ['Conversational-GCN (Ours, L = 2)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Evaluation Metric || Macro-F1</th> <th>Evaluation Metric || FS</th> <th>Evaluation Metric || FD</th> <th>Evaluation Metric || FQ</th> <th>Evaluation Metric || FC</th> <th>Evaluation Metric |... | Table 2 | table_2 | D19-1485 | 6 | emnlp2019 | Performance Comparison Table 2 shows the results of different methods for rumor stance classification. Clearly, the macro-averaged F1 of Conversational-GCN is better than all baselines. Especially, our method shows the effectiveness of determining denying stance, while other methods can not give any correct prediction ... | [1, 1, 2, 1, 2, 2, 2, 2, 2] | ['Performance Comparison Table 2 shows the results of different methods for rumor stance classification.', 'Clearly, the macro-averaged F1 of Conversational-GCN is better than all baselines.', 'Especially, our method shows the effectiveness of determining denying stance, while other methods can not give any correct pre... 
| [None, ['Macro-F1', 'Conversational-GCN (Ours, L = 2)'], ['Conversational-GCN (Ours, L = 2)'], ['Conversational-GCN (Ours, L = 2)'], None, None, ['Conversational-GCN (Ours, L = 2)'], None, None] | 1 |
D19-1485table_3 | Results of veracity prediction. Single-task setting means that stance labels cannot be used to train models. | 3 | [['Method Setting', 'Single-task', 'TD-RvNN (Ma et al., 2018b)'], ['Method Setting', 'Single-task', 'Hierarchical GCN-RNN (Ours)'], ['Method Setting', 'Multi-task', 'BranchLSTM+NileTMRG (Kochkina et al., 2018)'], ['Method Setting', 'Multi-task', 'MTL2 (Veracity+Stance) (Kochkina et al., 2018)'], ['Method Setting', 'Mul... | 2 | [['SemEval dataset', 'Macro-F1'], ['SemEval dataset', 'Acc.'], ['PHEME dataset', 'Macro-F1'], ['PHEME dataset', 'Acc.']] | [['0.509', '0.536', '0.264', '0.341'], ['0.54', '0.536', '0.317', '0.356'], ['0.539', '0.57', '0.297', '0.36'], ['0.558', '0.571', '0.318', '0.357'], ['0.588', '0.643', '0.333', '0.361']] | column | ['Macro-F1', 'Acc.', 'Macro-F1', 'Acc.'] | ['Hierarchical GCN-RNN (Ours)', 'TD-RvNN (Ma et al., 2018b)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SemEval dataset || Macro-F1</th> <th>SemEval dataset || Acc.</th> <th>PHEME dataset || Macro-F1</th> <th>PHEME dataset || Acc.</th> </tr> </thead> <tbody> <tr> <td>Method Setting || Singl... | Table 3 | table_3 | D19-1485 | 7 | emnlp2019 | Performance Comparison Table 3 shows the comparisons of different methods. By comparing single-task methods, Hierarchical GCN-RNN performs better than TD-RvNN, which indicates that our hierarchical framework can effectively model conversation structures to learn high-quality tweet representations. The recursive operati... | [1, 1, 1, 1] | ['Performance Comparison Table 3 shows the comparisons of different methods.', 'By comparing single-task methods, Hierarchical GCN-RNN performs better than TD-RvNN, which indicates that our hierarchical framework can effectively model conversation structures to learn high-quality tweet representations.', 'The recursive... 
| [None, ['Hierarchical GCN-RNN (Ours)', 'TD-RvNN (Ma et al., 2018b)'], ['TD-RvNN (Ma et al., 2018b)'], ['Hierarchical GCN-RNN (Ours)', 'TD-RvNN (Ma et al., 2018b)']] | 1 |
D19-1488table_2 | Test accuracy (%) of different models on six standard datasets. The second best results are underlined. The note ∗ means our model significantly outperforms the baselines based on t-test (p < 0.01). | 2 | [['Dataset', 'AGNews'], ['Dataset', 'Snippets'], ['Dataset', 'Ohsumed'], ['Dataset', 'TagMyNews'], ['Dataset', 'MR'], ['Dataset', 'Twitter']] | 1 | [['SVM +TFIDF'], ['SVM +LDACNN'], ['CNN -rand'], ['CNN -pretrain'], ['LSTM -rand'], ['LSTM -pretrain'], ['PTE'], ['TextGCN'], ['HAN'], ['HGAT']] | [['57.73', '65.16', '32.65', '67.24', '31.24', '66.28', '36', '67.61', '62.64', '72.10*'], ['63.85', '63.91', '48.34', '77.09', '26.38', '75.89', '63.1', '77.82', '58.38', '82.36*'], ['41.47', '31.26', '35.25', '32.92', '19.87', '28.7', '36.63', '41.56', '36.97', '42.68*'], ['42.9', '21.88', '28.76', '57.12', '25.52', ... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['HGAT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SVM +TFIDF</th> <th>SVM +LDACNN</th> <th>CNN -rand</th> <th>CNN -pretrain</th> <th>LSTM -rand</th> <th>LSTM -pretrain</th> <th>PTE</th> <th>TextGCN</th> <th>HAN</th> <th>... | Table 2 | table_2 | D19-1488 | 7 | emnlp2019 | Table 2 shows the classification accuracy of different methods on 6 benchmark datasets. We can see that our methods significantly outperform all the baselines by a large margin, which shows the effectiveness of our proposed method on semisupervised short text classification. The traditional method SVMs based on the hum... | [1, 1, 1, 1, 1] | ['Table 2 shows the classification accuracy of different methods on 6 benchmark datasets.', 'We can see that our methods significantly outperform all the baselines by a large margin, which shows the effectiveness of our proposed method on semisupervised short text classification.', 'The traditional method SVMs based on... 
| [['Dataset'], ['HGAT'], ['SVM +TFIDF', 'SVM +LDACNN', 'CNN -rand', 'LSTM -rand'], ['CNN -pretrain', 'LSTM -pretrain', 'SVM +TFIDF', 'SVM +LDACNN'], ['HGAT']] | 1 |
D19-1491table_7 | Ranking results on BENCHLS dataset | 3 | [['BENCHLS', 'full(929)', 'S'], ['BENCHLS', 'full(929)', 'C'], ['BENCHLS', 'full(929)', 'S+C'], ['BENCHLS', 'test(464)', 'S'], ['BENCHLS', 'test(464)', 'C'], ['BENCHLS', 'test(464)', 'S+C'], ['BENCHLS', 'test(464)', 'P&S']] | 1 | [['n=1'], ['n=2'], ['n=3'], ['MRR']] | [['0.4974', '0.7381', '0.8899', '0.6648'], ['0.3509', '0.5885', '0.7877', '0.5998'], ['0.5602', '0.8064', '0.9428', '0.7219'], ['0.5839', '0.7546', '0.9302', '0.7083'], ['0.4086', '0.7142', '0.895', '0.6563'], ['0.6774', '0.7857', '0.9308', '0.8218'], ['0.4841', '0.5596', '0.7004', '0.6615']] | column | ['n=1', 'n=2', 'n=3', 'MRR'] | ['BENCHLS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>n=1</th> <th>n=2</th> <th>n=3</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>BENCHLS || full(929) || S</td> <td>0.4974</td> <td>0.7381</td> <td>0.8899</td> <td>0.66... | Table 7 | table_7 | D19-1491 | 7 | emnlp2019 | We report the results on the full BENCHLS dataset in the upper half of Table 7. In the lower half, we compare our results on the test set of 464 instances to those running the Paetzold and Specia (2016a) system (P&S) on the same test splits. Since the P&S system was trained on half of BENCHLS we cannot run it on the fu... | [1, 2, 2, 1] | ['We report the results on the full BENCHLS dataset in the upper half of Table 7.', 'In the lower half, we compare our results on the test set of 464 instances to those running the Paetzold and Specia (2016a) system (P&S) on the same test splits.', 'Since the P&S system was trained on half of BENCHLS we cannot run it o... | [['BENCHLS'], ['P&S'], ['P&S', 'BENCHLS'], ['S+C']] | 1 |
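The header and content fields in these rows (`row_headers`, `column_headers`, `contents`) appear to be serialized as Python-literal lists of strings. A minimal sketch of decoding one such field, using values copied from the D19-1491table_7 row above; the use of `ast.literal_eval` is an assumption about the serialization, not a documented loader for this dump:

```python
import ast

# `row_headers` field from the D19-1491table_7 row: a Python-literal
# list of header paths, one path per data row of the original table.
row_headers = ("[['BENCHLS', 'full(929)', 'S'], ['BENCHLS', 'full(929)', 'C'], "
               "['BENCHLS', 'full(929)', 'S+C']]")

# literal_eval parses literals only, so untrusted field text cannot run code.
paths = ast.literal_eval(row_headers)

# row_header_level is 3 for this table, so every path has three components.
assert all(len(path) == 3 for path in paths)
print(paths[0])  # ['BENCHLS', 'full(929)', 'S']
```

The same decoding would apply to `column_headers` and `contents`, pairing the i-th content row with the i-th row-header path.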
D19-1496table_3 | Performance of SC and DISP on identifying perturbed tokens. | 4 | [['Dataset', 'SST-2', 'SC', 'Precision'], ['Dataset', 'SST-2', 'SC', 'Recall'], ['Dataset', 'SST-2', 'SC', 'F1'], ['Dataset', 'SST-2', 'DISP', 'Precision'], ['Dataset', 'SST-2', 'DISP', 'Recall'], ['Dataset', 'SST-2', 'DISP', 'F1'], ['Dataset', 'IMDb', 'SC', 'Precision'], ['Dataset', 'IMDb', 'SC', 'Recall'], ['Dataset'... | 2 | [['Character-level Attacks', 'Insertion'], ['Character-level Attacks', 'Deletion'], ['Character-level Attacks', 'Swap'], ['Word-level Attacks', 'Random'], ['Word-level Attacks', 'Embed'], ['Overall Attacks', 'Overall Attacks']] | [['0.5087', '0.4703', '0.5044', '0.1612', '0.1484', '0.3586'], ['0.9369', '0.8085', '0.9151', '0.1732', '0.1617', '0.5991'], ['0.6594', '0.5947', '0.6504', '0.1669', '0.1548', '0.4452'], ['0.9725', '0.9065', '0.9552', '0.8407', '0.4828', '0.8315'], ['0.8865', '0.876', '0.868', '0.6504', '0.5515', '0.7665'], ['0.9275', ... | row | ['Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1'] | ['DISP', 'SC'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Character-level Attacks || Insertion</th> <th>Character-level Attacks || Deletion</th> <th>Character-level Attacks || Swap</th> <th>Word-level Attacks || Random</th> <th>Word-level Attacks || Emb... | Table 3 | table_3 | D19-1496 | 6 | emnlp2019 | 4.2 Experimental Results Performance on identifying perturbed tokens. Table 3 shows the performance of DISP and SC in discriminating perturbations. Compared to SC, DISP has an absolute improvement by 35% and 46% on SST-2 and IMDb in terms of F1 score, respectively. It also proves that the context information is essent...
| [2, 1, 1, 2, 1, 1, 1, 1] | ['4.2 Experimental Results Performance on identifying perturbed tokens.', 'Table 3 shows the performance of DISP and SC in discriminating perturbations.', 'Compared to SC, DISP has an absolute improvement by 35% and 46% on SST-2 and IMDb in terms of F1 score, respectively.', 'It also proves that the context informatio... | [None, ['DISP', 'SC'], ['SC', 'DISP', 'SST-2', 'IMDb', 'F1'], None, ['SC', 'Recall', 'Precision', 'Character-level Attacks'], ['DISP', 'Recall', 'Precision'], ['SC', 'DISP', 'Random', 'Embed'], ['DISP', 'Random']] | 1 |
D19-1498table_1 | Overall, intra- and inter-sentence pairs performance comparison with the state-of-the-art on the CDR test set. The methods below the double line take advantage of additional training data and/or incorporate external tools. | 2 | [['Method', 'Gu et al. (2017)'], ['Method', 'Verga et al. (2018)'], ['Method', 'Nguyen and Verspoor (2018)'], ['Method', 'EoG'], ['Method', 'EoG (Full)'], ['Method', 'EoG (NoInf)'], ['Method', 'EoG (Sent)'], ['Method', 'Zhou et al. (2016)'], ['Method', 'Peng et al. (2016)'], ['Method', 'Li et al. (2016b)'], ['Method', ... | 2 | [['Overall (%)', 'P'], ['Overall (%)', 'R'], ['Overall (%)', 'F1'], ['Intra (%)', 'P'], ['Intra (%)', 'R'], ['Intra (%)', 'F1'], ['Inter (%)', 'P'], ['Inter (%)', 'R'], ['Inter (%)', 'F1']] | [['55.7', '68.1', '61.3', '59.7', '55.0', '57.2', '51.9', '7.0', '11.7'], ['55.6', '70.8', '62.1', '-', '-', '-', '-', '-', '-'], ['57.0', '68.6', '62.3', '-', '-', '-', '-', '-', '-'], ['62.1', '65.2', '63.6', '64.0', '73.0', '68.2', '56.0', '46.7', '50.9'], ['59.1', '56.2', '57.6', '71.2', '62.3', '66.5', '37.1', '42... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['EoG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Overall (%) || P</th> <th>Overall (%) || R</th> <th>Overall (%) || F1</th> <th>Intra (%) || P</th> <th>Intra (%) || R</th> <th>Intra (%) || F1</th> <th>Inter (%) || P</th> <th>Inte... | Table 1 | table_1 | D19-1498 | 6 | emnlp2019 | 4 Results Table 1 depicts the performance of our proposed model on the CDR test set, in comparison with the state-of-the-art. We directly compare our model with models that do not incorporate external knowledge. Verga et al. (2018) and Nguyen and Verspoor (2018) consider a single pair per document, while Gu et al. (201...
| [1, 1, 1, 2, 2, 1] | ['4 Results Table 1 depicts the performance of our proposed model on the CDR test set, in comparison with the state-of-the-art.', 'We directly compare our model with models that do not incorporate external knowledge. Verga et al. (2018) and Nguyen and Verspoor (2018) consider a single pair per document, while Gu et al.... | [None, ['Verga et al. (2018)', 'Nguyen and Verspoor (2018)', 'Gu et al. (2017)', 'EoG'], ['EoG'], None, ['Li et al. (2016b)'], ['EoG']] | 1 |
D19-1499table_3 | Automatic evaluation results on four style transfer tasks. Acc refers to the style accuracy. | 2 | [['Model', 'S2S'], ['Model', 'SLS'], ['Model', 'DAR'], ['Model', 'CPLS']] | 2 | [['to Anc.P', 'Acc'], ['to Anc.P', 'BLEU'], ['to Anc.P', 'GLEU'], ['to M.zh', 'Acc'], ['to M.zh', 'BLEU'], ['to M.zh', 'GLEU'], ['to F.en', 'Acc'], ['to F.en', 'BLEU'], ['to F.en', 'GLEU'], ['to Inf.en', 'Acc'], ['to Inf.en', 'BLEU'], ['to Inf.en', 'GLEU']] | [['87.2%', '4.2', '3.24', '74.8%', '3.66', '3.43', '88.9%', '33.9', '14.06', '71.8%', '18.34', '2.99'], ['82.0%', '5.89', '4.49', '81.9%', '3.05', '1.88', '89.5%', '41.41', '16.77', '63.5%', '19.21', '2.55'], ['82.5%', '6.33', '5.21', '80.4%', '4.72', '4.26', '89.2%', '44.72', '18.52', '63.5%', '23.32', '3.26'], ['85.4... | column | ['Acc', 'BLEU', 'GLEU', 'Acc', 'BLEU', 'GLEU', 'Acc', 'BLEU', 'GLEU', 'Acc', 'BLEU', 'GLEU'] | ['CPLS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>to Anc.P || Acc</th> <th>to Anc.P || BLEU</th> <th>to Anc.P || GLEU</th> <th>to M.zh || Acc</th> <th>to M.zh || BLEU</th> <th>to M.zh || GLEU</th> <th>to F.en || Acc</th> <th>to F.... | Table 3 | table_3 | D19-1499 | 8 | emnlp2019 | 9 Results and Analysis Evaluation Results. Table 3 presents the evaluation results of automatic metrics on the models. It can be seen that the BLEU scores and GLEU scores of the semi-supervised models on almost all the datasets are better than the baseline S2S model. This result indicates that the model benefits from t... | [2, 1, 1, 2, 1, 2, 2, 1, 1, 2, 2, 2, 2, 2] | ['9 Results and Analysis Evaluation Results.', 'Table 3 presents the evaluation results of automatic metrics on the models.', 'It can be seen that the BLEU scores and GLEU scores of the semi-supervised models on almost all the datasets are better than the baseline S2S model.', 'This result indicates that the model bene... 
| [None, None, ['BLEU', 'GLEU', 'S2S'], None, ['BLEU', 'to Anc.P', 'to M.zh'], ['to Anc.P', 'to M.zh'], ['to Anc.P', 'to M.zh'], ['CPLS'], ['CPLS', 'Acc'], None, None, None, ['S2S', 'CPLS'], None] | 1 |
D19-1499table_4 | The human annotation results of the S2S model and CPLS model from three aspects. | 3 | [['Dataset Model', 'S2S', 'to M.zh'], ['Dataset Model', 'S2S', 'to Anc.P'], ['Dataset Model', 'S2S', 'to Inf.en'], ['Dataset Model', 'S2S', 'to F.en'], ['Dataset Model', 'CPLS', 'to M.zh'], ['Dataset Model', 'CPLS', 'to Anc.P'], ['Dataset Model', 'CPLS', 'to Inf.en'], ['Dataset Model', 'CPLS', 'to F.en']] | 1 | [['Content'], ['Style'], ['Fluency']] | [['0.1875', '0.5675', '0.3575'], ['0.2275', '0.54', '0.4425'], ['0.3175', '0.46', '0.58'], ['0.3625', '0.5125', '0.6325'], ['0.31', '0.5825', '0.305'], ['0.4375', '0.6875', '0.5475'], ['0.4675', '0.4625', '0.5725'], ['0.46', '0.5675', '0.62']] | column | ['content', 'style', 'fluency'] | ['CPLS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Content</th> <th>Style</th> <th>Fluency</th> </tr> </thead> <tbody> <tr> <td>Dataset Model || S2S || to M.zh</td> <td>0.1875</td> <td>0.5675</td> <td>0.3575</td> </tr> <tr... | Table 4 | table_4 | D19-1499 | 8 | emnlp2019 | Table 4 compares the human evaluation results of S2S model and CPLS model on all the datasets, which are calculated by the average score of the human annotations. As shown in the Table 4, the CPLS model outperforms the S2S model in the aspects of the content preservation and style strength, and is on par in terms of fl... | [1, 1] | ['Table 4 compares the human evaluation results of S2S model and CPLS model on all the datasets, which are calculated by the average score of the human annotations.', 'As shown in the Table 4, the CPLS model outperforms the S2S model in the aspects of the content preservation and style strength, and is on par in terms ... | [['S2S', 'CPLS'], ['CPLS', 'S2S', 'Content', 'Style', 'Fluency']] | 1 |
D19-1505table_4 | Performance on Edit Anchoring | 1 | [['Passive-Aggr'], ['RandForest'], ['Adaboost'], ['Gated RNN'], ['CmntEdit-MT'], ['CmntEdit-EA']] | 2 | [['Candidates=5', 'Acc'], ['Candidates=5', 'F1'], ['Candidates=10', 'Acc'], ['Candidates=10', 'F1']] | [['0.581', '0.533', '0.716', '0.262'], ['0.639', '0.290', '0.743', '0.112'], ['0.657', '0.398', '0.751', '0.207'], ['0.696', '0.651', '0.665', '0.539'], ['0.635', '0.587', '0.619', '0.468'], ['0.744', '0.687', '0.726', '0.583']] | column | ['Acc', 'F1', 'Acc', 'F1'] | ['CmntEdit-MT', 'CmntEdit-EA'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Candidates=5 || Acc</th> <th>Candidates=5 || F1</th> <th>Candidates=10 || Acc</th> <th>Candidates=10 || F1</th> </tr> </thead> <tbody> <tr> <td>Passive-Aggr</td> <td>0.581</td> ... | Table 4 | table_4 | D19-1505 | 7 | emnlp2019 | 5.2.2 Edit Anchoring. Table 4 shows the results for Edit Anchoring. Our method, CmntEdit-EA, outperforms the best baseline method, Gated-RNN, by 5.5% on F1 and 6.9% on accuracy. The improvements over all the baselines are statistically significant at a p-value of 0.01. The baseline classifiers including PassiveAggressi... | [2, 1, 1, 1, 1, 2, 2, 2, 1] | ['5.2.2 Edit Anchoring.', 'Table 4 shows the results for Edit Anchoring.', 'Our method, CmntEdit-EA, outperforms the best baseline method, Gated-RNN, by 5.5% on F1 and 6.9% on accuracy.', 'The improvements over all the baselines are statistically significant at a p-value of 0.01.', 'The baseline classifiers including P... | [None, None, ['CmntEdit-EA', 'Gated RNN', 'F1', 'Acc'], None, ['Passive-Aggr', 'RandForest', 'Adaboost', 'Acc', 'F1'], None, None, None, ['Adaboost', 'Acc', 'CmntEdit-EA', 'CmntEdit-MT', 'F1']] | 1 |
D19-1506table_3 | Evaluation Results | 1 | [['PRADO'], ['PRADO 8-bit Quantized'], ['SGNN (Ravi and Kozareva, 2018)'], ['HN-ATT* (Yang et al., 2016)'], ['HN-MAX* (Yang et al., 2016)'], ['HN-AVE* (Yang et al., 2016)'], ['LSTM-GRNN (Tang et al., 2015)'], ['Conv-GRNN (Tang et al., 2015)'], ['CNN-char (Zhang et al., 2015)'], ['CNN-word (Tang et al., 2015)'], ['CNN-w... | 2 | [['Dataset', 'Yelp'], ['Dataset', 'Amazon'], ['Dataset', 'Yahoo']] | [['64.7', '61.2', '72.3'], ['65.9', '61.9', '72.5'], ['35.4', '39.1', '36.6'], ['-', '63.6', '-'], ['-', '62.9', '-'], ['-', '62.9', '-'], ['67.6', '-', '-'], ['66', '-', '-'], ['62', '59.6', '71.2'], ['61.5', '-', '-'], ['60.5', '57.6', '71.2'], ['60.5', '-', '-'], ['58.2', '59.4', '70.8'], ['62.4', '-', '-'], ['61.1'... | column | ['accuracy', 'accuracy', 'accuracy'] | ['PRADO', 'PRADO 8-bit Quantized'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dataset || Yelp</th> <th>Dataset || Amazon</th> <th>Dataset || Yahoo</th> </tr> </thead> <tbody> <tr> <td>PRADO</td> <td>64.7</td> <td>61.2</td> <td>72.3</td> </tr> <tr> ... | Table 3 | table_3 | D19-1506 | 6 | emnlp2019 | We train a PRADO model variant with 8-bit quantization as described in (Jacob et al., 2018). This procedure simulates the quantization process during training by nudging the weights and activations towards a grid of discrete levels (2N levels, where N=8 is the number of bits). We estimate the activation ranges f... | [2, 2, 2, 2, 1, 1, 2, 2, 2, 2] | ['We train a PRADO model variant with 8-bit quantization as described in (Jacob et al., 2018).', 'This procedure simulates the quantization process during training by nudging the weights and activations towards a grid of discrete levels (2N levels, where N=8 is the number of bits).', 'We estimate the activation ... 
| [['PRADO 8-bit Quantized'], None, None, None, ['PRADO 8-bit Quantized'], ['PRADO 8-bit Quantized', 'Yelp'], ['Yelp'], ['PRADO 8-bit Quantized'], ['PRADO'], ['PRADO']] | 1 |
D19-1510table_2 | Sentence fusion results on DfWiki. | 2 | [['Model', 'Transformer (Geva et al., 2019)'], ['Model', 'SEQ2SEQBERT'], ['Model', 'LASERTAGGERAR (no SWAP)'], ['Model', 'LASERTAGGERFF'], ['Model', 'LASERTAGGERAR']] | 1 | [['Exact'], ['SARI']] | [[' 51.1', ' 84.5'], ['53.6', '85.3'], [' 46.4', ' 80.4'], [' 52.2', ' 84.1'], [' 53.8', ' 85.5']] | column | ['Exact', 'SARI'] | ['LASERTAGGERAR (no SWAP)', 'LASERTAGGERFF', 'LASERTAGGERAR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact</th> <th>SARI</th> </tr> </thead> <tbody> <tr> <td>Model || Transformer (Geva et al., 2019)</td> <td>51.1</td> <td>84.5</td> </tr> <tr> <td>Model || SEQ2SEQBERT</td> ... | Table 2 | table_2 | D19-1510 | 6 | emnlp2019 | Comparison against Baselines. Table 2 lists the results for the DfWiki dataset. We obtain new SOTA results with LASERTAGGERAR, outperforming the previous SOTA 7-layer Transformer model from Geva et al. (2019) by 2.7% Exact score and 1.0% SARI score. We also find that the pretrained SEQ2SEQBERT model yields nearly as go... | [2, 1, 1, 1, 2] | ['Comparison against Baselines.', 'Table 2 lists the results for the DfWiki dataset.', 'We obtain new SOTA results with LASERTAGGERAR, outperforming the previous SOTA 7-layer Transformer model from Geva et al. (2019) by 2.7% Exact score and 1.0% SARI score.', 'We also find that the pretrained SEQ2SEQBERT model yields n... | [None, None, ['LASERTAGGERAR', 'Transformer (Geva et al., 2019)', 'Exact', ' SARI'], ['SEQ2SEQBERT'], None] | 1 |
D19-1510table_5 | Results on grammatical-error correction. Note that Grundkiewicz et al. (2019) augment the training dataset of 4,384 examples by 100 million synthetic examples and 2 million Wikipedia edits. | 2 | [['Model', 'Grundkiewicz et al. (2019)'], ['Model', 'SEQ2SEQBERT'], ['Model', 'LASERTAGGER FF'], ['Model', 'LASERTAGGER AR']] | 1 | [['P'], ['R'], ['F0.5']] | [['70.19', '47.99', '64.24'], ['6.13', '14.14', '6.91'], ['44.17', '24', '37.82'], ['47.46', '25.58', '40.52']] | column | ['P', 'R', 'F0.5'] | ['LASERTAGGER FF', 'LASERTAGGER AR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F0.5</th> </tr> </thead> <tbody> <tr> <td>Model || Grundkiewicz et al. (2019)</td> <td>70.19</td> <td>47.99</td> <td>64.24</td> </tr> <tr> <td>... | Table 5 | table_5 | D19-1510 | 8 | emnlp2019 | Table 5 compares our taggers against two baselines. Again, the tagging approach clearly outperforms the BERT-based seq2seq model, here by being more than seven times as accurate in the prediction of corrections. This can be accounted to the seq2seq model's much richer generation capacity, which the model can not proper... | [1, 1, 1, 1] | ['Table 5 compares our taggers against two baselines.', 'Again, the tagging approach clearly outperforms the BERT-based seq2seq model, here by being more than seven times as accurate in the prediction of corrections.', "This can be accounted to the seq2seq model's much richer generation capacity, which the model can no... | [None, ['SEQ2SEQBERT', 'LASERTAGGER AR', 'LASERTAGGER FF'], ['SEQ2SEQBERT'], ['LASERTAGGER AR', 'LASERTAGGER FF']] | 1 |
D19-1512table_5 | Model ablation results | 4 | [['Dataset', 'Tencent', 'Metrics', 'METEOR'], ['Dataset', 'Tencent', 'Metrics', 'W-METEOR'], ['Dataset', 'Tencent', 'Metrics', 'Rouge_L'], ['Dataset', 'Tencent', 'Metrics', 'W-Rouge_L'], ['Dataset', 'Tencent', 'Metrics', 'CIDEr'], ['Dataset', 'Tencent', 'Metrics', 'W-CIDEr'], ['Dataset', 'Tencent', 'Metrics', 'BLEU-1']... | 1 | [['No Reading'], ['No Prediction'], ['No Sampling'], ['Full Model']] | [['0.096', '0.171', '0.171', '0.181'], ['0.072', '0.129', '0.131', '0.138'], ['0.282', '0.307', '0.303', '0.317'], ['0.219', '0.241', '0.239', '0.250'], ['0.012', '0.024', '0.026', '0.029'], ['0.009', '0.019', '0.021', '0.023'], ['0.426', '0.674', '0.667', '0.721'], ['0.388', '0.614', '0.607', '0.656'], ['0.081', '0.09... | row | ['METEOR', 'W-METEOR', 'Rouge_L', 'W-Rouge_L', 'CIDEr', 'W-CIDEr', 'BLEU-1', 'W-BLEU-1', 'METEOR', 'Rouge_L', 'CIDEr', 'BLEU-1'] | ['No Reading', 'No Prediction', 'No Sampling', 'Full Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>No Reading</th> <th>No Prediction</th> <th>No Sampling</th> <th>Full Model</th> </tr> </thead> <tbody> <tr> <td>Dataset || Tencent || Metrics || METEOR</td> <td>0.096</td> <td>0... | Table 5 | table_5 | D19-1512 | 8 | emnlp2019 | 4.5 Discussions. Ablation study: We compare the full model of DeepCom with the following variants: (1) No Reading: the entire reading network is replaced by a TF-IDF based keyword extractor, and top 40 keywords (tuned on validation sets) are fed to the generation network; (2) No Prediction: the prediction layer of the... | [0, 2, 1, 1, 2] | ['4.5 Discussions.', ' Ablation study: We compare the full model of DeepCom with the following variants: (1) No Reading: the entire reading network is replaced by a TF-IDF based keyword extractor, and top 40 keywords (tuned on validation sets) are fed to the generation network; (2) No Prediction: the prediction layer o... 
| [None, None, None, ['No Reading'], None] | 1 |
D19-1515table_1 | Performance of different phrase grounding methods on Flickr30k Entities (test set). Our CRF models has transition scores conditioned on features of context in between the two phrases (“M” in Table 2). Our methods, unless explicitly specified, uses ELMo (Peters et al., 2018) as word embeddings. | 5 | [['Method', 'Compared Methods', 'Structured Matching (Wang et al. 2016)', 'Vision Backbone', 'Fast R-CNN (Girshick 2015)'], ['Method', 'Compared Methods', 'Phrase-Region CCA (Plummer et al. 2017a)', 'Vision Backbone', 'Fast R-CNN (Girshick 2015)'], ['Method', 'Compared Methods', 'QRC Net (Chen et al. 2017b)', 'Vision B... | 1 | [['Grounding Accuracy (%)']] | [['42.08'], ['55.85'], ['65.14'], ['69.69'], ['73.3'], ['71.88'], ['72.21'], ['74.29'], ['72.26'], ['74.69']] | column | ['Grounding Accuracy (%)'] | ['Our methods'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Grounding Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Method || Compared Methods || Structured Matching (Wang et al. 2016) || Vision Backbone || Fast R-CNN (Girshick 2015)</td> <td>42.08</td>... | Table 1 | table_1 | D19-1515 | 8 | emnlp2019 | Table 1 shows the performance of previous structured prediction models, current state-of-theart models, our baseline models and the SoftLabel Chain CRF model. For a fair comparison with BAN (Kim et al., 2018), we also report result of the hard-label baseline with GloVe (Pennington et al., 2014) embeddings, while we obt... | [1, 2, 1, 1, 1, 1] | ['Table 1 shows the performance of previous structured prediction models, current state-of-theart models, our baseline models and the SoftLabel Chain CRF model.', 'For a fair comparison with BAN (Kim et al., 2018), we also report result of the hard-label baseline with GloVe (Pennington et al., 2014) embeddings, while w... | [None, ['BAN (Kim et al. 
2018)'], ['Soft-Label (SL)', 'Hard-Label (HL)'], ['Soft-Label Chain CRF (SL-CCRF)', 'Soft-Label (SL)'], ['Hard-Label Chain CRF (HL-CCRF)', 'Hard-Label (HL)'], ['Soft-Label Chain CRF (SL-CCRF)', 'Hard-Label (HL)', 'Structured Matching (Wang et al. 2016)', 'Phrase-Region CCA (Plummer et al. 2017a... | 1 |
D19-1521table_7 | Performance of BLING-KPE ablations. Italic marks statistically significant worse performances than Full Model. | 1 | [['No ELMo'], ['No Transformer'], ['No Position'], ['No Visual'], ['No Pretraining'], ['Full Model']] | 3 | [['OpenKP', 'Method', 'P@1'], ['OpenKP', 'Method', 'R@1'], ['OpenKP', 'Method', 'P@3'], ['OpenKP', 'Method', 'R@3'], ['OpenKP', 'Method', 'P@5'], ['OpenKP', 'Method', 'R@5'], ['Query Prediction', 'Method', 'P@1'], ['Query Prediction', 'Method', 'R@1'], ['Query Prediction', 'Method', 'P@3'], ['Query Prediction', 'Method... | [['0.27', '0.145', '0.172', '0.271', '0.132', '0.347', '0.323', '0.274', '0.189', '0.45', '0.136', '0.527'], ['0.389', '0.211', '0.247', '0.385', '0.189', '0.481', '0.489', '0.407', '0.258', '0.618', '0.178', '0.698'], ['0.394', '0.213', '0.247', '0.386', '0.187', '0.475', '0.543', '0.452', '0.281', '0.666', '0.191', '... | column | ['P@1', 'R@1', 'P@3', 'R@3', 'P@5', 'R@5', 'P@1', 'R@1', 'P@3', 'R@3', 'P@5', 'R@5'] | ['No Visual', 'No Pretraining', 'No ELMo', 'No Transformer', 'No Position'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OpenKP || Method || P@1</th> <th>OpenKP || Method || R@1</th> <th>OpenKP || Method || P@3</th> <th>OpenKP || Method || R@3</th> <th>OpenKP || Method || P@5</th> <th>OpenKP || Method || R@5</... | Table 7 | table_7 | D19-1521 | 7 | emnlp2019 | 6.2 Ablation Study. Table 7 shows ablation results on BLING-KPE’s variations. Each variation removes a component and keeps all others unchanged. ELMo Embedding. We first verify the effectiveness of using ELMo embedding by replacing ELMo with the WordPiece token embedding (Wu et al., 2016). The accuracy of this ... 
| [2, 1, 1, 2, 1, 1, 1, 1, 2, 1, 2, 2, 1, 2, 1, 1, 1, 1, 1] | ['6.2 Ablation Study.', 'Table 7 shows ablation results on BLING-KPE’s variations.', 'Each variation removes a component and keeps all others unchanged.', 'ELMo Embedding.', 'We first verify the effectiveness of using ELMo embedding by replacing ELMo with the WordPiece token embedding (Wu et al., 2016).', 'The ac... | [None, None, None, None, ['No ELMo'], ['No ELMo'], None, None, None, ['No Transformer', 'No Position'], ['No Transformer'], None, ['No Position'], None, ['No Visual', 'No Pretraining'], ['No Visual', 'No Pretraining'], ['No Visual'], ['No ELMo'], None] | 1 |
D19-1524table_6 | The Precision@Top3 and the MAP results for the ranking list predicted by SciResREC. | 2 | [['Methods', 'RF (BoW+TFIDF)'], ['Methods', 'RF (N-grams+TFIDF)'], ['Methods', 'SciResREC'], ['Methods', '-Function feature'], ['Methods', '-Role 2nd feature'], ['Methods', '-Role 1st feature']] | 1 | [['Precision@Top3'], ['MAP']] | [['0.438', '0.275'], ['0.449', '0.306'], ['0.489', '0.597'], ['0.471', '0.569'], ['0.420', '0.539'], ['0.399', '0.497']] | column | ['Precision@Top3', 'MAP'] | ['SciResREC'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision@Top3</th> <th>MAP</th> </tr> </thead> <tbody> <tr> <td>Methods || RF (BoW+TFIDF)</td> <td>0.438</td> <td>0.275</td> </tr> <tr> <td>Methods || RF (N-grams+TFIDF)</td> ... | Table 6 | table_6 | D19-1524 | 8 | emnlp2019 | As Table 6 shows, our SciResREC framework outperforms the two baselines. An ablation test suggests that each feature component of our model contributes to the final performance, which indicates the information of role and function are helpful for understanding the scientific resources. And we can observe that the featu... | [1, 2, 1] | ['As Table 6 shows, our SciResREC framework outperforms the two baselines.', 'An ablation test suggests that each feature component of our model contributes to the final performance, which indicates the information of role and function are helpful for understanding the scientific resources.', 'And we can observe that t... | [['SciResREC', 'RF (BoW+TFIDF)', 'RF (N-grams+TFIDF)'], None, ['-Role 2nd feature']] | 1 |
D19-1526table_2 | The performances of different supervised hashing models on three datasets under different lengths of hashing codes. | 2 | [['Method', 'KSH'], ['Method', 'SHTTM'], ['Method', 'VDSH-S'], ['Method', 'NASH-DN-S'], ['Method', 'GMSH-S'], ['Method', 'BMSH-S']] | 3 | [['Datasets', 'TMC', '16bit'], ['Datasets', 'TMC', '32bit'], ['Datasets', 'TMC', '64bit'], ['Datasets', 'TMC', '128bit'], ['Datasets', '20Newsgroups', '16bit'], ['Datasets', '20Newsgroups', '32bit'], ['Datasets', '20Newsgroups', '64bit'], ['Datasets', '20Newsgroups', '128bit'], ['Datasets', 'Reuters', '16bit'], ['Datas... | [['0.6842', '0.7047', '0.7175', '0.7243', '0.5559', '0.6103', '0.6488', '0.6638', '0.8376', '0.848', '0.8537', '0.862'], ['0.6571', '0.6485', '0.6893', '0.6474', '0.3235', '0.2357', '0.1411', '0.1299', '0.852', '0.8323', '0.8271', '0.815'], ['0.7887', '0.7883', '0.7967', '0.8018', '0.6791', '0.7564', '0.685', '0.6916',... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['VDSH-S', 'NASH-DN-S', 'GMSH-S', 'BMSH-S'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Datasets || TMC || 16bit</th> <th>Datasets || TMC || 32bit</th> <th>Datasets || TMC || 64bit</th> <th>Datasets || TMC || 128bit</th> <th>Datasets || 20Newsgroups || 16bit</th> <th>Datasets |... | Table 2 | table_2 | D19-1526 | 7 | emnlp2019 | We evaluate the performance of supervised hashing in this section. Table 2 shows the performances of different supervised hashing models on three datasets under different lengths of hashing codes. We observe that all of the VAE-based generative hashing models (i.e VDSH, NASH, GMSH and BMSH) exhibit better performance, ... 
| [2, 1, 1, 1] | ['We evaluate the performance of supervised hashing in this section.', 'Table 2 shows the performances of different supervised hashing models on three datasets under different lengths of hashing codes.', 'We observe that all of the VAE-based generative hashing models (i.e VDSH, NASH, GMSH and BMSH) exhibit better perfo... | [None, None, ['VDSH-S', 'NASH-DN-S', 'GMSH-S', 'BMSH-S'], ['BMSH-S']] | 1 |
D19-1530table_2 | Word similarity Results | 2 | [['Method', 'none'], ['Method', 'CDA'], ['Method', 'gCDA'], ['Method', 'nCDA'], ['Method', 'gCDS'], ['Method', 'nCDS'], ['Method', 'WED40'], ['Method', 'WED70'], ['Method', 'nWED70']] | 2 | [['rs', 'Gigaword'], ['rs', 'Wikipedia']] | [['0.385', '0.368'], ['0.381', '0.363'], ['0.381', '0.363'], ['0.380', '0.365'], ['0.382', '0.366'], ['0.380', '0.362'], ['0.386', '0.371'], ['0.395', '0.375'], ['0.384', '0.367']] | column | ['rs', 'rs'] | ['WED40', 'WED70', 'nWED70'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>rs || Gigaword</th> <th>rs || Wikipedia</th> </tr> </thead> <tbody> <tr> <td>Method || none</td> <td>0.385</td> <td>0.368</td> </tr> <tr> <td>Method || CDA</td> <td>0.381<... | Table 2 | table_2 | D19-1530 | 7 | emnlp2019 | Word similarity . Table 2 reports the SimLex-999 Spearman rank-order correlation coefficients rs (all are significant, p < 0.01). Surprisingly, the WED40 and 70 methods outperform the unmitigated embedding, although the difference in result is small (0.386 and 0.395 vs. 0.385 on Gigaword, 0.371 and 0.367 vs. 0.368 on W... | [2, 1, 1, 1, 1, 2] | ['Word similarity .', 'Table 2 reports the SimLex-999 Spearman rank-order correlation coefficients rs (all are significant, p < 0.01).', 'Surprisingly, the WED40 and 70 methods outperform the unmitigated embedding, although the difference in result is small (0.386 and 0.395 vs. 0.385 on Gigaword, 0.371 and 0.367 vs. 0.... | [None, ['rs'], ['WED40', 'WED70', 'none', 'Gigaword', 'Wikipedia'], ['nWED70', 'none', 'Gigaword', 'Wikipedia'], ['CDA', 'gCDA', 'nCDA', 'gCDS', 'nCDS'], None] | 1 |
D19-1533table_2 | English all-words task results in F1 measure (%), averaged over three runs. SemEval 2007 Task 17 (SE07) test set is used as the development set. We show the results of nearest neighbor matching (1nn) and linear projection, by simple last layer linear projection, layer weighting (LW), and gated linear units (GLU). Apart... | 3 | [['System', 'Reported in previous papers', 'MFS baseline'], ['System', 'Reported in previous papers', 'IMS (Zhong and Ng, 2010)'], ['System', 'Reported in previous papers', 'IMS+emb (Iacobacci et al., 2016)'], ['System', 'Reported in previous papers', 'SupWSD (Papandrea et al., 2017)'], ['System', 'Reported in previous... | 1 | [['SE07'], ['SE2'], ['SE3'], ['SE13'], ['SE15'], ['Avg']] | [['54.5', '65.6', '66.0', '63.8', '67.1', '65.6'], ['61.3', '70.9', '69.3', '65.3', '69.5', '68.8'], ['60.9', '71.0', '69.3', '67.3', '71.3', '69.7'], ['60.2', '71.3', '68.8', '65.8', '70.0', '69.0'], ['63.1', '72.7', '70.6', '66.8', '71.8', '70.5'], ['63.7', '72.0', '69.4', '66.4', '72.4', '70.1'], ['–', '72.2', '70... | column | ['F1', 'F1', 'F1', 'F1', 'F1', 'F1'] | ['BERT nearest neighbor (ours)', 'BERT linear projection (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SE07</th> <th>SE2</th> <th>SE3</th> <th>SE13</th> <th>SE15</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>System || Reported in previous papers || MFS baseline</td> <td>... | Table 2 | table_2 | D19-1533 | 6 | emnlp2019 | Table 2 shows our WSD results in F1 measure. It is shown in the table that with the nearest neighbor matching model, BERT outperforms context2vec and ELMo. This shows the effectiveness of BERT’s pre-trained contextualized word representation. When we include surrounding sentences, one to the left and one to the right, ... | [1, 1, 2, 1, 1, 2, 1, 1] | ['Table 2 shows our WSD results in F1 measure.', 'It is shown in the table that with the nearest neighbor matching model, BERT outperforms context2vec and ELMo.', 'This shows the effectiveness of BERT’s pre-trained contextualized word representation.', 'When we include surrounding sentences, one to the left and one to ... | [None, ['BERT nearest neighbor (ours)', 'context2vec (Melamud et al., 2016)', 'ELMo (Peters et al., 2018)'], None, ['BERT nearest neighbor (ours)', '1nn (1sent+1sur)'], ['BERT linear projection (ours)'], None, ['BERT linear projection (ours)', 'LW (1sent)', 'LW (1sent+1sur)', 'GLU+LW (1sent)', 'GLU+LW (1sent+1sur)', 'G... | 1 |
D19-1535table_1 | SymAcc, BLEU and AnsAcc on the FollowUp dataset. Results marked † are from Liu et al. (2019). | 2 | [['Model', 'SEQ2SEQ (Bahdanau et al., 2015)'], ['Model', 'COPYNET (Gu et al., 2016)'], ['Model', 'COPY+BERT (Devlin et al., 2019)'], ['Model', 'CONCAT'], ['Model', 'E2ECR (Lee et al., 2017)'], ['Model', 'FANDA (Liu et al., 2019)'], ['Model', 'STAR']] | 2 | [['Dev', 'SymAcc (%)'], ['Dev', 'BLEU (%)'], ['Test', 'SymAcc (%)'], ['Test', 'BLEU (%)'], ['Test', 'AnsAcc (%)']] | [['0.63±0.00', '21.34±1.14', '0.50±0.22', '20.72±1.31', '-'], ['17.50±0.87', '43.36±0.54', '19.30±0.93', '43.34±0.45', '-'], ['18.63±0.61', '45.14±0.68', '22.00±0.45', '44.87±0.52', '-'], ['-', '-', '22.00±-', '52.02±-', '25.24'], ['-', '-', '27.00±-', '52.47±-', '27.18'], ['49.00±1.28', '60.14±0.98', '47.80±1.14', '59... | column | ['SymAcc (%)', 'BLEU (%)', 'SymAcc (%)', 'BLEU (%)', 'AnsAcc (%)'] | ['STAR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || SymAcc (%)</th> <th>Dev || BLEU (%)</th> <th>Test || SymAcc (%)</th> <th>Test || BLEU (%)</th> <th>Test || AnsAcc (%)</th> </tr> </thead> <tbody> <tr> <td>Model || SEQ2SEQ (Ba... | Table 1 | table_1 | D19-1535 | 6 | emnlp2019 | Answer Level. Table 1 shows AnsAcc results of competitive baselines on the test set. Compared with them, STAR achieves the highest, 65.05%, which demonstrates its superiority. Meanwhile, it verifies the feasibility of follow-up query analysis in cooperating with context-independent semantic parsing. Compared with CONCA... | [2, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1] | ['Answer Level.', 'Table 1 shows AnsAcc results of competitive baselines on the test set.', 'Compared with them, STAR achieves the highest, 65.05%, which demonstrates its superiority.', 'Meanwhile, it verifies the feasibility of follow-up query analysis in cooperating with context-independent semantic parsing.', 'Compa... | [None, ['AnsAcc (%)'], ['STAR'], None, ['CONCAT'], None, ['SymAcc (%)', 'BLEU (%)'], ['STAR'], ['STAR', 'BLEU (%)', 'Test'], ['CONCAT'], None] | 1 |
D19-1538table_3 | Semantic F1-score on CoNLL-2009 in-domain test set. The first row is the best result of CoNLL-2009 shared task (Hajič et al., 2009). The previously best published results of Catalan and Japanese is from Zhao et al. (2009a), Chinese from Cai et al. (2018), Czech from Marcheggiani et al. (2017), English from Li et al. (... | 2 | [['Model', 'CoNLL-2009 ST best system'], ['Model', 'Zhao et al. (2009a)'], ['Model', 'Roth and Lapata (2016)'], ['Model', 'Marcheggiani et al. (2017)'], ['Model', 'Li et al. (2019)'], ['Model', 'The best previously published'], ['Model', 'Our baseline']] | 1 | [['Catalan'], ['Chinese'], ['Czech'], ['English'], ['German'], ['Japanese'], ['Spanish']] | [['80.3', '78.6', '85.4', '85.6', '79.7', '78.2', '80.5'], ['80.3', '77.7', '85.2', '86.2', '76.0', '78.2', '80.5'], ['−', '79.4', '−', '87.7', '80.1', '−', '80.2'], ['−', '81.2', '86.0', '87.7', '−', '−', '80.3'], ['−', '−', '−', '90.4', '−', '−', '−'], ['80.3', '84.3', '86.0', '90.4', '80.1', '78.2', '80.5'], ['84.07... | column | ['F1-score', 'F1-score', 'F1-score', 'F1-score', 'F1-score', 'F1-score', 'F1-score'] | ['Our baseline'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Catalan</th> <th>Chinese</th> <th>Czech</th> <th>English</th> <th>German</th> <th>Japanese</th> <th>Spanish</th> </tr> </thead> <tbody> <tr> <td>Model || CoNLL-2009 ST best... | Table 3 | table_3 | D19-1538 | 6 | emnlp2019 | Table 3 presents all test results on seven languages of CoNLL-2009 datasets. So far, the best previously reported results of Catalan, Japanese and Spanish are still from CoNLL-2009 shared task. Compared with previous methods, our baseline yields strong performance on all datasets except German. Especially for Catalan, ... | [1, 1, 1, 1, 1, 1] | ['Table 3 presents all test results on seven languages of CoNLL-2009 datasets.', 'So far, the best previously reported results of Catalan, Japanese and Spanish are still from CoNLL-2009 shared task.', 'Compared with previous methods, our baseline yields strong performance on all datasets except German.', 'Especially fo... | [None, ['Catalan', 'Japanese', 'Spanish'], ['Our baseline', 'German'], ['Catalan', 'Czech', 'Japanese', 'Spanish', 'Our baseline'], None, None] | 1 |
D19-1539table_3 | CoNLL-2003 Named Entity Recognition results. Test result was evaluated on parameter set with the best dev F1. | 2 | [['Model', 'ELMoBASE'], ['Model', 'CNN Large + ELMo'], ['Model', 'CNN Large + fine-tune'], ['Model', 'BERTBASE'], ['Model', 'BERTLARGE']] | 1 | [['dev F1'], ['test F1']] | [['95.7', '92.2'], ['96.4', '93.2'], ['96.9', '93.5'], ['96.4', '92.4'], ['96.6', '92.8']] | column | ['dev F1', 'test F1'] | ['CNN Large + ELMo', 'CNN Large + fine-tune'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>dev F1</th> <th>test F1</th> </tr> </thead> <tbody> <tr> <td>Model || ELMoBASE</td> <td>95.7</td> <td>92.2</td> </tr> <tr> <td>Model || CNN Large + ELMo</td> <td>96.4</td>... | Table 3 | table_3 | D19-1539 | 6 | emnlp2019 | Table 3 shows the results, with comparison to previously published ELMoBASE results (Peters et al., 2018) and the BERT models. Both of our stacking methods outperform the previous state of the art, but fine tuning gives the biggest gain. | [1, 1] | ['Table 3 shows the results, with comparison to previously published ELMoBASE results (Peters et al., 2018) and the BERT models.', 'Both of our stacking methods outperform the previous state of the art, but fine tuning gives the biggest gain.'] | [['ELMoBASE', 'BERTBASE', 'BERTLARGE'], ['CNN Large + ELMo', 'CNN Large + fine-tune']] | 1 |
D19-1539table_4 | Penn Treebank Constituency Parsing results. Test result was evaluated on parameter set with the best dev F1. | 2 | [['Model', 'ELMoBASE'], ['Model', 'CNN Large + ELMo'], ['Model', 'CNN Large + fine-tune']] | 1 | [['dev F1'], ['test F1']] | [['95.2', '95.1'], ['95.1', '95.2'], ['95.5', '95.6']] | column | ['dev f1', 'test f1'] | ['CNN Large + fine-tune'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>dev F1</th> <th>test F1</th> </tr> </thead> <tbody> <tr> <td>Model || ELMoBASE</td> <td>95.2</td> <td>95.1</td> </tr> <tr> <td>Model || CNN Large + ELMo</td> <td>95.1</td>... | Table 4 | table_4 | D19-1539 | 6 | emnlp2019 | 6.2.2 Constituency Parsing. We also report parseval F1 for Penn Treebank constituency parsing. We adopted the current state-of-the-art architecture (Kitaev and Klein, 2018). We again used grid search for learning rates and number of layers in parsing encoder, and used 8E-04 for language model finetuning, 8E-03 for the p... | [2, 2, 2, 2, 1, 1] | ['6.2.2 Constituency Parsing.', 'We also report parseval F1 for Penn Treebank constituency parsing.', 'We adopted the current state-of-the-art architecture (Kitaev and Klein, 2018).', 'We again used grid search for learning rates and number of layers in parsing encoder, and used 8E-04 for language model finetuning, 8E-0... | [None, ['dev F1', 'test F1'], None, None, None, ['CNN Large + fine-tune', 'CNN Large + ELMo']] | 1 |
D19-1539table_5 | Different loss functions on the development sets of GLUE (cf. Table 2). Results are based on the CNN base model (Table 1) | 1 | [['cloze'], ['bilm'], ['cloze + bilm']] | 1 | [['CoLA (mcc)'], ['SST-2 (acc)'], ['MRPC (F1)'], ['STS-B (scc)'], ['QQP (F1)'], ['MNLI-m (acc)'], ['QNLI (acc)'], ['RTE (acc)'], ['Avg']] | [['55.1', '92.9', '88.3', '88.3', '87.2', '82.3', '86.5', '66.4', '80.9'], ['50', '92.4', '86.6', '87.1', '86.1', '81.7', '84', '66.4', '79.3'], ['52.6', '93.2', '88.9', '87.9', '87.2', '82.1', '86.1', '65.5', '80.4']] | column | ['loss', 'loss', 'loss', 'loss', 'loss', 'loss', 'loss', 'loss', 'loss'] | ['cloze', 'bilm', 'cloze + bilm'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CoLA (mcc)</th> <th>SST-2 (acc)</th> <th>MRPC (F1)</th> <th>STS-B (scc)</th> <th>QQP (F1)</th> <th>MNLI-m (acc)</th> <th>QNLI (acc)</th> <th>RTE (acc)</th> <th>Avg</th> </t... | Table 5 | table_5 | D19-1539 | 7 | emnlp2019 | Table 5 shows that the cloze loss performs significantly better than the bilm loss and that combining the two loss types does not improve over the cloze loss by itself. We conjecture that individual left and right context prediction tasks are too different from center word prediction and that their learning signals are... | [1, 2] | ['Table 5 shows that the cloze loss performs significantly better than the bilm loss and that combining the two loss types does not improve over the cloze loss by itself.', 'We conjecture that individual left and right context prediction tasks are too different from center word prediction and that their learning signal... | [['cloze', 'bilm', 'cloze + bilm'], None] | 1 |
D19-1540table_2 | Results on TrecQA, TwitterURL, and Quora. The best scores except for BERT are bolded. In these experiments, all our approaches use the deep encoder in Sec. 2.1. RM and SM denote that only relevance and semantic matching signals are used, respectively. HCAN denotes the complete HCAN model. | 3 | [['Model', 'Baseline', 'InferSent'], ['Model', 'Baseline', 'DecAtt'], ['Model', 'Baseline', 'ESIMseq'], ['Model', 'Baseline', 'ESIMtree'], ['Model', 'Baseline', 'ESIMseq+tree'], ['Model', 'Baseline', 'PWIM'], ['Model', 'State-of-the-Art Models', 'Rao et al. (2016)'], ['Model', 'State-of-the-Art Models', 'Gong et al. (2... | 2 | [['TrecQA', 'MAP'], ['TrecQA', 'MRR'], ['TwitterURL', 'macro-F1'], ['Quora', 'Acc']] | [['0.521', '0.559', '0.797', '0.866'], ['0.660', '0.712', '0.785', '0.845'], ['0.771', '0.795', '0.822', '0.850'], ['0.698', '0.734', '-', '0.755'], ['0.749', '0.768', '-', '0.854'], ['0.739', '0.795', '0.809', '0.834'], ['0.780', '0.834', '-', '-'], ['-', '-', '-', '0.891'], ['0.838', '0.887', '0.852', '0.892'], ['0.7... | column | ['MAP', 'MRR', 'macro-F1', 'Acc'] | ['Our Approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TrecQA || MAP</th> <th>TrecQA || MRR</th> <th>TwitterURL || macro-F1</th> <th>Quora || Acc</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline || InferSent</td> <td>0.521</td> ... | Table 2 | table_2 | D19-1540 | 6 | emnlp2019 | 4 Results. Our main results on the TrecQA, TwitterURL, and Quora datasets are shown in Table 2 and results on TREC Microblog 2013–2014 are shown in Table 3. The best numbers for each dataset (besides BERT) are bolded. We compare to three variants of our HCAN model: (1) only relevance matching signals (RM), (2) only s... | [2, 1, 2, 2, 2, 1, 1, 1, 1, 1] | ['4 Results.', 'Our main results on the TrecQA, TwitterURL, and Quora datasets are shown in Table 2 and results on TREC Microblog 2013–2014 are shown in Table 3.', 'The best numbers for each dataset (besides BERT) are bolded.', 'We compare to three variants of our HCAN model: (1) only relevance matching signals (RM),... | [None, None, None, None, None, ['TrecQA', 'TwitterURL', 'Quora', 'RM', 'SM'], ['Our Approach', 'InferSent', 'DecAtt', 'TwitterURL', 'Quora'], None, ['SM', 'TrecQA', 'TwitterURL', 'RM', 'Quora'], ['SM', 'RM', 'HCAN', 'TrecQA', 'TwitterURL', 'Quora']] | 1 |
D19-1541table_1 | Experimental results of syntax-aware methods we compare on CPB1.0 dataset. | 2 | [['Methods', 'Baseline'], ['Methods', 'Baseline + Dep (Tree-GRU)'], ['Methods', 'Baseline + Dep (FIR)'], ['Methods', 'Baseline + Dep (HPS)'], ['Methods', 'Baseline + Dep (IIR)']] | 2 | [['Dev', 'P'], ['Dev', 'R'], ['Dev', 'F1'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F1']] | [['81.52', '82.17', '81.85', '80.95', '80.01', '80.48'], ['82.35', '80.24', '81.28', '82.1', '78.11', '80.06'], ['83.56', '83.05', '83.3', '83.38', '81.93', '82.65'], ['82.58', '84.15', '83.36', '83.22', '83.81', '83.51'], ['83.12', '83.66', '83.39', '84.49', '83.34', '83.91']] | column | ['P', 'R', 'F1', 'P', 'R', 'F1'] | ['Baseline + Dep (IIR)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || P</th> <th>Dev || R</th> <th>Dev || F1</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F1</th> </tr> </thead> <tbody> <tr> <td>Methods || Baseline</td> <td>81... | Table 1 | table_1 | D19-1541 | 6 | emnlp2019 | 4.3 Main Results. Results of Syntax-aware Methods. Table 1 shows the results of these syntax-aware methods on CPB1.0 dataset. First, the first line shows the results of our baseline model, which only employs the word embeddings and char representations as the inputs of the basic SRL model. Second, the Tree-GRU method o... | [2, 2, 1, 1, 1, 2, 1, 1, 1, 2, 2] | ['4.3 Main Results.', 'Results of Syntax-aware Methods.', 'Table 1 shows the results of these syntax-aware methods on CPB1.0 dataset.', 'First, the first line shows the results of our baseline model, which only employs the word embeddings and char representations as the inputs of the basic SRL model.', 'Second, the Tre... | [None, None, None, ['Baseline'], ['Baseline + Dep (Tree-GRU)', 'Baseline', 'F1', 'Test'], None, ['Baseline + Dep (FIR)', 'Baseline', 'F1', 'Test'], ['Baseline + Dep (HPS)', 'F1', 'Test'], ['Baseline + Dep (IIR)', 'F1', 'Test', 'Baseline'], None, None] | 1 |
D19-1541table_2 | Results and comparison with previous works on CPB1.0 test set. | 3 | [['Methods', 'Previous Works', 'Sun et al. (2009)'], ['Methods', 'Previous Works', 'Wang et al. (2015b)'], ['Methods', 'Previous Works', 'Sha et al. (2016)'], ['Methods', 'Previous Works', 'Xia et al. (2017)'], ['Methods', 'Ours', 'Baseline'], ['Methods', 'Ours', 'Baseline + Dep (HPS)'], ['Methods', 'Ours', 'Baseline +... | 1 | [['F1']] | [['74.12'], ['77.59'], ['77.69'], ['79.67'], ['80.48'], ['83.51'], ['83.91'], ['86.62'], ['87.03'], ['87.54']] | column | ['F1'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Methods || Previous Works || Sun et al. (2009)</td> <td>74.12</td> </tr> <tr> <td>Methods || Previous Works || Wang et al. (2015b)</td> <td>7... | Table 2 | table_2 | D19-1541 | 6 | emnlp2019 | Results on CPB1.0. Table 2 shows the results of our baseline model and proposed framework using external dependency trees on CPB1.0, as well as the corresponding results when adding BERT representations. It is clear that adding dependency trees into the baseline SRL model can effectively improve the performance (p < 0.... | [2, 1, 1, 1, 2, 1] | ['Results on CPB1.0.', 'Table 2 shows the results of our baseline model and proposed framework using external dependency trees on CPB1.0, as well as the corresponding results when adding BERT representations.', 'It is clear that adding dependency trees into the baseline SRL model can effectively improve the performance... | [None, None, ['Baseline + Dep (HPS)'], ['Baseline + Dep (IIR)', 'Baseline + BERT + Dep (IIR)', 'Baseline + Dep (HPS)', 'Baseline + BERT + Dep (HPS)'], None, ['Ours', 'Xia et al. (2017)']] | 1 |
D19-1541table_4 | Results and comparison with previous works on CoNLL-2009 Chinese test set. | 3 | [['Methods', 'Previous Works', 'Roth and Lapata (2016)'], ['Methods', 'Previous Works', 'Marcheggiani et al. (2017)'], ['Methods', 'Previous Works', 'He et al. (2018b)'], ['Methods', 'Previous Works', 'Cai et al. (2018)'], ['Methods', 'Ours', 'Baseline'], ['Methods', 'Ours', 'Baseline + Dep (IIR)'], ['Methods', 'Ours',... | 1 | [['P'], ['R'], ['F1']] | [['83.2', '75.9', '79.4'], ['84.6', '80.4', '82.5'], ['84.2', '81.5', '82.8'], ['84.7', '84.0', '84.3'], ['83.7', '84.8', '84.2'], ['84.6', '85.7', '85.1'], ['87.8', '89.2', '88.5'], ['88.0', '89.1', '88.5']] | column | ['P', 'R', 'F1'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Methods || Previous Works || Roth and Lapata (2016)</td> <td>83.2</td> <td>75.9</td> <td>79.4</td> </tr> <tr>... | Table 4 | table_4 | D19-1541 | 7 | emnlp2019 | Results on CoNLL-2009. Table 4 shows the results of our framework and comparison with previous works on the CoNLL-2009 Chinese test data. Our baseline achieves nearly the same performance with Cai et al. (2018), which is an end-to-end neural model that consists of BiLSTM encoder and biaffine scorer. Our proposed framewo... | [2, 1, 1, 1, 1, 1] | ['Results on CoNLL-2009.', 'Table 4 shows the results of our framework and comparison with previous works on the CoNLL-2009 Chinese test data.', 'Our baseline achieves nearly the same performance with Cai et al. (2018), which is an end-to-end neural model that consists of BiLSTM encoder and biaffine scorer.', 'Our propo... | [None, ['Ours', 'Previous Works'], ['Baseline', 'Cai et al. (2018)'], ['F1', 'Cai et al. (2018)', 'Baseline + Dep (IIR)'], ['F1', 'Baseline + BERT + Dep (IIR)'], ['F1', 'P', 'Baseline + BERT', 'Baseline + BERT + Dep (IIR)']] | 1 |
D19-1542table_3 | GLUE test results scored by the GLUE evaluation server. The best scores are represented in bold and scores higher than those of BERT-base are underlined. | 2 | [['Model', 'BERT-base'], ['Model', 'BERT-large'], ['Model', 'Transfer Fine-Tuning']] | 3 | [['Task', 'Semantic Equivalence', 'MRPC'], ['Task', 'Semantic Equivalence', 'STS-B'], ['Task', 'Semantic Equivalence', 'QQP'], ['Task', 'NLI', 'MNLI (m/mm)'], ['Task', 'NLI', 'RTE'], ['Task', 'NLI', 'QNLI'], ['Task', 'Single-Sent.', 'SST'], ['Task', 'Single-Sent.', 'CoLA']] | [['88.3', '84.7', '71.2', '84.3/83.0', '59.8', '89.1', '93.3', '52.7'], ['88.6', '86.0', '72.1', '86.2/85.5', '65.5', '92.7', '94.1', '55.7'], ['89.2', '87.4', '71.2', '83.9/83.1', '64.8', '89.3', '93.1', '47.2']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Transfer Fine-Tuning'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Task || Semantic Equivalence || MRPC</th> <th>Task || Semantic Equivalence || STS-B</th> <th>Task || Semantic Equivalence || QQP</th> <th>Task || NLI || MNLI (m/mm)</th> <th>Task || NLI || RTE</t... | Table 3 | table_3 | D19-1542 | 7 | emnlp2019 | 6.1 Effect on Semantic Equivalence Assessment Tasks. Table 3 shows fine-tuning results on GLUE; our model, denoted as Transfer Fine-Tuning, is compared against BERT-base and BERT-large. The first set of columns shows the results of semantic equivalence assessment tasks. Our model outperformed BERT-base on MRPC (+0.9 po... | [2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2] | ['6.1 Effect on Semantic Equivalence Assessment Tasks.', 'Table 3 shows fine-tuning results on GLUE; our model, denoted as Transfer Fine-Tuning, is compared against BERT-base and BERT-large.', 'The first set of columns shows the results of semantic equivalence assessment tasks.', 'Our model outperformed BERT-base on MR... | [None, ['Transfer Fine-Tuning', 'BERT-base', 'BERT-large'], None, ['Transfer Fine-Tuning', 'BERT-base', 'MRPC', 'STS-B'], ['Transfer Fine-Tuning', 'BERT-large', 'MRPC', 'STS-B'], None, None, ['Transfer Fine-Tuning'], None, None, ['Transfer Fine-Tuning', 'BERT-large'], None] | 1 |