| table_id_paper | caption | row_header_level | row_headers | column_header_level | column_headers | contents | metrics_loc | metrics_type | target_entity | table_html_clean | table_name | table_id | paper_id | page_no | dir | description | class_sentence | sentences | header_mention | valid |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
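Several columns in the rows below (`row_headers`, `column_headers`, `contents`, `metrics_type`, `target_entity`) store nested Python lists serialized as strings. A minimal parsing sketch, assuming the field values are well-formed Python literals; the sample values are abbreviated copies from the P18-2015table_2 row below, not full records:

```python
import ast

# Abbreviated field values from one record (the real dataset strings are longer).
row = {
    "row_headers": "[['Method', 'K-means'], ['Method', 'Random']]",
    "column_headers": "[['Average P@50']]",
    "contents": "[['0.96'], ['0.75']]",
}

# ast.literal_eval safely parses Python-literal strings without executing code.
parsed = {name: ast.literal_eval(value) for name, value in row.items()}

print(parsed["row_headers"][0])   # ['Method', 'K-means']
print(parsed["contents"][1][0])   # '0.75'
```

The same pattern applies to `class_sentence`, `sentences`, and `header_mention`, which also serialize Python lists (the latter containing `None` entries).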
P18-2015table_2 | Performance of seed selection methods. | 2 | [['Method', 'K-means'], ['Method', 'HITS Graph1'], ['Method', 'HITS Graph2'], ['Method', 'HITS Graph3'], ['Method', 'HITS+K-means Graph1'], ['Method', 'HITS+K-means Graph2'], ['Method', 'HITS+K-means Graph3'], ['Method', 'LSA'], ['Method', 'NMF'], ['Method', 'Random']] | 1 | [['Average P@50']] | [['0.96'], ['0.90'], ['0.85'], ['0.90'], ['0.92'], ['0.85'], ['0.94'], ['0.90'], ['0.89'], ['0.75']] | column | ['Average P@50'] | ['HITS Graph1', 'HITS Graph2', 'HITS Graph3', 'HITS+K-means Graph1', 'HITS+K-means Graph2', 'HITS+K-means Graph3'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Average P@50</th> </tr> </thead> <tbody> <tr> <td>Method || K-means</td> <td>0.96</td> </tr> <tr> <td>Method || HITS Graph1</td> <td>0.90</td> </tr> <tr> <td>Method || HI... | Table 2 | table_2 | P18-2015 | 4 | acl2018 | The performances of the seed selection methods are presented in Table 2. For the HITS-based and HITS+K-means-based methods, we display the P@50 with three types of graph representation as shown in Section 4.2. We use random seed selection as the baseline for comparison. As Table 2 shows, the random method achieved a pr... | [1, 1, 1, 1, 2, 1, 2, 1, 1] | ['The performances of the seed selection methods are presented in Table 2.', 'For the HITS-based and HITS+K-means-based methods, we display the P@50 with three types of graph representation as shown in Section 4.2.', 'We use random seed selection as the baseline for comparison.', 'As Table 2 shows, the random method ac... | [None, ['HITS Graph1', 'HITS Graph2', 'HITS Graph3', 'HITS+K-means Graph1', 'HITS+K-means Graph2', 'HITS+K-means Graph3', 'Average P@50'], ['Random'], ['Random', 'Average P@50'], ['Random', 'Average P@50'], ['HITS Graph1', 'HITS Graph2', 'HITS Graph3'], ['HITS Graph1', 'HITS Graph2', 'HITS Graph3'], ['K-means', 'Averag... | 1 |
P18-2016table_3 | Ranking results of scoring functions. | 2 | [['f', 'f0'], ['f', 'f1'], ['f', 'f2'], ['f', 'f3'], ['f', 'f4']] | 1 | [['MAP'], ['P@50'], ['P@100'], ['P@200'], ['P@300']] | [['0.42', '0.40', '0.44', '0.42', '0.38'], ['0.58', '0.70', '0.60', '0.53', '0.44'], ['0.48', '0.56', '0.52', '0.49', '0.42'], ['0.59', '0.68', '0.63', '0.55', '0.44'], ['0.56', '0.40', '0.48', '0.50', '0.42']] | column | ['MAP', 'P@50', 'P@100', 'P@200', 'P@300'] | ['f3'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>P@50</th> <th>P@100</th> <th>P@200</th> <th>P@300</th> </tr> </thead> <tbody> <tr> <td>f || f0</td> <td>0.42</td> <td>0.40</td> <td>0.44</td> <td>0.4... | Table 3 | table_3 | P18-2016 | 5 | acl2018 | Table 3 shows the ranking results using Mean Average Precision (MAP) and Precision at K as the metrics. Accumulative scores (f1 and f3) generally do better. Thus, we choose f = f3 with a MAP score of 0.59 as the scoring function. | [1, 1, 2] | ['Table 3 shows the ranking results using Mean Average Precision (MAP) and Precision at K as the metrics.', 'Accumulative scores (f1 and f3) generally do better.', 'Thus, we choose f = f3 with a MAP score of 0.59 as the scoring function.'] | [['MAP', 'P@50', 'P@100', 'P@200', 'P@300'], ['f1', 'f3'], ['f3']] | 1 |
P18-2020table_1 | Evaluation on GermEval data, using the official metric (metric 1) of the GermEval 2014 task that combines inner and outer chunks. | 4 | [['Type', 'CRF', 'Model', 'StanfordNER'], ['Type', 'CRF', 'Model', 'GermaNER'], ['Type', 'RNN', 'Model', 'UKP'], ['Type', '-', 'Model', 'ExB'], ['Type', 'RNN', 'Model', 'BiLSTM-WikiEmb'], ['Type', 'RNN', 'Model', 'BiLSTM-EuroEmb']] | 1 | [['Pr'], ['R'], ['F1']] | [['80.02', '62.29', '70.05'], ['81.31', '68.00', '74.06'], ['79.54', '71.10', '75.09'], ['78.07', '74.75', '76.38'], ['81.95', '78.13', '79.99*'], ['75.50', '70.72', '73.03']] | column | ['Pr', 'R', 'F1'] | ['BiLSTM-WikiEmb'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pr</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Type || CRF || Model || StanfordNER</td> <td>80.02</td> <td>62.29</td> <td>70.05</td> </tr> <tr> <td>Ty... | Table 1 | table_1 | P18-2020 | 3 | acl2018 | Table 1 shows results on GermEval using the official metric (metric 1) for the best performing systems. This measure considers both outer and inner span annotations. Within the challenge, the ExB (Hanig et al., 2015) ensemble classifier achieved the best result with an F1 score of 76.38, followed by the RNN-based metho... | [1, 2, 1, 1, 1, 1] | ['Table 1 shows results on GermEval using the official metric (metric 1) for the best performing systems.', 'This measure considers both outer and inner span annotations.', 'Within the challenge, the ExB (Hanig et al., 2015) ensemble classifier achieved the best result with an F1 score of 76.38, followed by the RNN-bas... | [None, None, ['ExB', 'F1', 'RNN', 'UKP'], ['GermaNER', 'Pr', 'R'], ['BiLSTM-WikiEmb', 'ExB', 'F1'], ['BiLSTM-EuroEmb', 'F1']] | 1 |
P18-2020table_5 | Results for different test sets when using transfer learning. † marks results statistically significantly better than the ones reported in Table 4. | 4 | [['Train', 'CoNLL', 'Transfer', 'GermEval'], ['Train', 'CoNLL', 'Transfer', 'LFT'], ['Train', 'CoNLL', 'Transfer', 'ONB'], ['Train', 'GermEval', 'Transfer', 'CoNLL'], ['Train', 'GermEval', 'Transfer', 'LFT'], ['Train', 'GermEval', 'Transfer', 'ONB']] | 2 | [['BiLSTM-WikiEmb', 'CoNLL'], ['BiLSTM-WikiEmb', 'GermEval'], ['BiLSTM-WikiEmb', 'LFT'], ['BiLSTM-WikiEmb', 'ONB'], ['BiLSTM-EuroEmb', 'CoNLL'], ['BiLSTM-EuroEmb', 'GermEval'], ['BiLSTM-EuroEmb', 'LFT'], ['BiLSTM-EuroEmb', 'ONB']] | [['78.55', '82.93', '55.28', '64.93', '72.23', '75.78', '51.98', '61.74'], ['62.80', '58.89', '72.90', '67.96', '56.30', '51.25', '70.04', '65.65'], ['62.05', '57.19', '59.43', '76.17', '55.82', '49.14', '54.19', '73.68'], ['84.73†', '72.11', '54.21', '65.95', '78.41', '63.42', '52.02', '59.28'], ['67.77', '69.09', '74... | column | ['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1'] | ['BiLSTM-WikiEmb', 'BiLSTM-EuroEmb'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BiLSTM-WikiEmb || CoNLL</th> <th>BiLSTM-WikiEmb || GermEval</th> <th>BiLSTM-WikiEmb || LFT</th> <th>BiLSTM-WikiEmb || ONB</th> <th>BiLSTM-EuroEmb || CoNLL</th> <th>BiLSTM-EuroEmb || GermEval... | Table 5 | table_5 | P18-2020 | 5 | acl2018 | The results in Table 5 show significant improvements for the CoNLL dataset but performance drops for GermEval. Combining contemporary sources with historic target corpora yields to consistent benefits. Performance on LFT increases from 69.62 to 74.33 and on ONB from 73.31 to 78.56. Cross-domain classification scores ar... | [1, 2, 1, 2, 2, 2] | ['The results in Table 5 show significant improvements for the CoNLL dataset but performance drops for GermEval.', 'Combining contemporary sources with historic target corpora yields to consistent benefits.', 'Performance on LFT increases from 69.62 to 74.33 and on ONB from 73.31 to 78.56.', 'Cross-domain classificatio... | [['CoNLL', 'GermEval'], None, ['BiLSTM-WikiEmb', 'LFT', 'BiLSTM-EuroEmb', 'ONB'], None, ['GermEval'], ['BiLSTM-WikiEmb', 'BiLSTM-EuroEmb']] | 1 |
P18-2021table_2 | Results of 10× 10−fold cross-validation. | 2 | [['Feature Set', '# Tokens'], ['Feature Set', '# Sentences'], ['Feature Set', 'All'], ['Feature Set', 'Significant'], ['Feature Set', 'Relevant']] | 1 | [['Precision'], ['Recall'], ['F1'], ['AUC']] | [['0.793', '0.996', '0.883', '0.610'], ['0.792', '0.999', '0.884', '0.584'], ['0.829', '0.926', '0.872', '0.849'], ['0.805', '0.953', '0.871', '0.805'], ['0.802', '0.963', '0.874', '0.819']] | column | ['Precision', 'Recall', 'F1', 'AUC'] | ['All', 'Significant', 'Relevant'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> <th>AUC</th> </tr> </thead> <tbody> <tr> <td>Feature Set || # Tokens</td> <td>0.793</td> <td>0.996</td> <td>0.883</td> <td>0... | Table 2 | table_2 | P18-2021 | 4 | acl2018 | Table 2 shows the average precision, recall, F1-measure, and AUC. The classifiers trained on the linguistic features, while performing near the baselines on the first three measures, substantially outperform the baselines on AUC, with all three yielding values over 0.8. Given these results and the imbalanced nature of ... | [1, 1, 2] | ['Table 2 shows the average precision, recall, F1-measure, and AUC.', 'The classifiers trained on the linguistic features, while performing near the baselines on the first three measures, substantially outperform the baselines on AUC, with all three yielding values over 0.8.', 'Given these results and the imbalanced na... | [['Precision', 'Recall', 'F1', 'AUC'], ['All', 'Significant', 'Relevant', 'Precision', 'Recall', 'F1', 'AUC'], ['All', 'Significant', 'Relevant', '# Tokens', '# Sentences']] | 1 |
P18-2023table_4 | Performance of word representations learned under different configurations. Baidubaike is used as the training corpus. The top 1 results are in bold. | 2 | [['SGNS', 'word'], ['SGNS', 'word+ngram'], ['SGNS', 'word+char'], ['PPMI', 'word'], ['PPMI', 'word+ngram'], ['PPMI', 'word+char']] | 2 | [['CA_translated', 'Cap.'], ['CA_translated', 'Sta.'], ['CA_translated', 'Fam.'], ['CA8', 'A'], ['CA8', 'AB'], ['CA8', 'Pre.'], ['CA8', 'Suf.'], ['CA8', 'Mor.'], ['CA8', 'Geo.'], ['CA8', 'His.'], ['CA8', 'Nat.'], ['CA8', 'Peo.'], ['CA8', 'Sem.']] | [['.706', '.966', '.603', '.117', '.162', '.181', '.389', '.222', '.414', '.345', '.236', '.223', '.327'], ['.715', '.977', '.640', '.143', '.184', '.197', '.429', '.250', '.449', '.308', '.276', '.310', '.368'], ['.676', '.966', '.548', '.358', '.540', '.326', '.612', '.455', '.468', '.226', '.296', '.305', '.368'], [... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['word+ngram', 'word+char'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CA_translated || Cap.</th> <th>CA_translated || Sta.</th> <th>CA_translated || Fam.</th> <th>CA8 || A</th> <th>CA8 || AB</th> <th>CA8 || Pre.</th> <th>CA8 || Suf.</th> <th>CA8 || M... | Table 4 | table_4 | P18-2023 | 4 | acl2018 | Table 4 lists the performance of them on CA_translated and CA8 datasets under different configurations. We can observe that on CA8 dataset, SGNS representations perform better in analogical reasoning of morphological relations and PPMI representations show great advantages in semantic relations. However, Table 4 shows ... | [1, 1, 1, 1, 2, 1] | ['Table 4 lists the performance of them on CA_translated and CA8 datasets under different configurations.', 'We can observe that on CA8 dataset, SGNS representations perform better in analogical reasoning of morphological relations and PPMI representations show great advantages in semantic relations.', 'However, Table ... | [['CA_translated', 'CA8'], ['CA8', 'SGNS', 'Mor.', 'PPMI', 'Sem.'], ['CA_translated', 'word+ngram', 'word'], ['CA8', 'word+ngram', 'word+char'], ['word+char', 'Mor.'], ['SGNS', 'word+char', 'Mor.']] | 1 |
P18-2023table_5 | Performance of word representations learned upon different training corpora by SGNS with context feature of word. The top 2 results are in bold. | 1 | [['Wikipedia 1.2G'], ['Baidubaike 4.3G'], ['People Daily 4.2G'], ['Sogou News 4.0G'], ['Zhihu QA 2.2G'], ['Combination 15.9G']] | 2 | [['CA_translated', 'Cap.'], ['CA_translated', 'Sta.'], ['CA_translated', 'Fam.'], ['CA8', 'A'], ['CA8', 'AB'], ['CA8', 'Pre.'], ['CA8', 'Suf.'], ['CA8', 'Mor.'], ['CA8', 'Geo.'], ['CA8', 'His.'], ['CA8', 'Nat.'], ['CA8', 'Peo.'], ['CA8', 'Sem.']] | [['.597', '.771', '.360', '.029', '.018', '.152', '.266', '.180', '.339', '.125', '.147', '.079', '.236'], ['.706', '.966', '.603', '.117', '.162', '.181', '.389', '.222', '.414', '.345', '.236', '.223', '.327'], ['.925', '.989', '.547', '.140', '.158', '.213', '.355', '.226', '.694', '.019', '.206', '.157', '.455'], [... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Combination 15.9G'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CA_translated || Cap.</th> <th>CA_translated || Sta.</th> <th>CA_translated || Fam.</th> <th>CA8 || A</th> <th>CA8 || AB</th> <th>CA8 || Pre.</th> <th>CA8 || Suf.</th> <th>CA8 || M... | Table 5 | table_5 | P18-2023 | 5 | acl2018 | Table 5 shows that accuracies increase with the growth in corpus size, e.g .Baidubaike (an online Chinese encyclopedia) has a clear advantage over Wikipedia. Also, the domain of a corpus plays an important role in the experiments. We can observe that vectors trained on news data are beneficial to geography relations, e... | [1, 2, 1, 1, 2, 1] | ['Table 5 shows that accuracies increase with the growth in corpus size, e.g .Baidubaike (an online Chinese encyclopedia) has a clear advantage over Wikipedia.', 'Also, the domain of a corpus plays an important role in the experiments.', 'We can observe that vectors trained on news data are beneficial to geography rela... | [['Baidubaike 4.3G', 'Wikipedia 1.2G'], ['Cap.', 'Sta.', 'Fam.', 'A', 'AB', 'Pre.', 'Suf.', 'Mor.', 'Geo.', 'His.', 'Nat.', 'Peo.', 'Sem.'], ['People Daily 4.2G', 'Geo.'], ['Zhihu QA 2.2G'], None, ['Combination 15.9G', 'Mor.', 'Sem.']] | 1 |
P18-2051table_4 | Single models on Ja-En. Previous evaluation result included for comparison. | 4 | [['Architecture', 'Seq2seq (8-model ensemble)', 'Representation', 'Best WAT17 result (Morishita et al. 2017)'], ['Architecture', 'Seq2seq', 'Representation', 'Plain BPE'], ['Architecture', 'Seq2seq', 'Representation', 'Linearized derivation'], ['Architecture', 'Transformer', 'Representation', 'Plain BPE'], ['Architectu... | 1 | [['Dev BLEU'], ['Test BLEU']] | [['-', '28.4'], ['21.6', '21.2'], ['21.9', '21.2'], ['28.0', '28.9'], ['28.2', '28.4'], ['28.5', '28.7'], ['28.5', '29.1']] | column | ['Dev BLEU', 'Test BLEU'] | ['Transformer', 'Plain BPE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev BLEU</th> <th>Test BLEU</th> </tr> </thead> <tbody> <tr> <td>Architecture || Seq2seq (8-model ensemble) || Representation || Best WAT17 result (Morishita et al. 2017)</td> <td>-</td> ... | Table 4 | table_4 | P18-2051 | 5 | acl2018 | Our plain BPE baseline (Table 4) outperforms the current best system on WAT Ja-En, an 8-model ensemble (Morishita et al., 2017). Our syntax models achieve similar results despite producing much longer sequences. | [1, 2] | ['Our plain BPE baseline (Table 4) outperforms the current best system on WAT Ja-En, an 8-model ensemble (Morishita et al., 2017).', 'Our syntax models achieve similar results despite producing much longer sequences.'] | [['Transformer', 'Plain BPE', 'Seq2seq (8-model ensemble)', 'Best WAT17 result (Morishita et al. 2017)'], None] | 1 |
P18-2058table_2 | Experiment results with gold predicates. | 1 | [['Ours'], ['Tan et al. (2018)'], ['He et al. (2017)'], ['Yang and Mitchell (2017)'], ['Zhou and Xu (2015)']] | 1 | [['WSJ'], ['Brown'], ['OntoNotes']] | [['83.9', '73.7', '82.1'], ['84.8', '74.1', '82.7'], ['83.1', '72.1', '81.7'], ['81.9', '72.0', '-'], ['82.8', '69.4', '81.1']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WSJ</th> <th>Brown</th> <th>OntoNotes</th> </tr> </thead> <tbody> <tr> <td>Ours</td> <td>83.9</td> <td>73.7</td> <td>82.1</td> </tr> <tr> <td>Tan et al. (2018)</td> ... | Table 2 | table_2 | P18-2058 | 4 | acl2018 | To compare with additional previous systems, we also conduct experiments with gold predicates by constraining our predicate beam to be gold predicates only. As shown in Table 2, our model significantly out-performs He et al. (2017), but falls short of Tan et al. (2018). | [2, 1] | ['To compare with additional previous systems, we also conduct experiments with gold predicates by constraining our predicate beam to be gold predicates only.', 'As shown in Table 2, our model significantly out-performs He et al. (2017), but falls short of Tan et al. (2018).'] | [None, ['Ours', 'He et al. (2017)', 'Tan et al. (2018)']] | 1 |
P19-1001table_2 | Evaluation results on the E-commerce data. Numbers in bold mean that the improvement to the best performing baseline is statistically significant (ttest with p-value < 0.05). | 2 | [['Models', 'RNN (Lowe et al., 2015)'], ['Models', 'CNN (Lowe et al., 2015)'], ['Models', 'LSTM (Lowe et al., 2015)'], ['Models', 'BiLSTM (Kadlec et al., 2015)'], ['Models', 'DL2R (Yan et al., 2016)'], ['Models', 'MV-LSTM (Wan et al., 2016)'], ['Models', 'Match-LSTM (Wang and Jiang, 2016)'], ['Models', 'Multi-View (Zho... | 2 | [['Metrics', 'R10@1'], ['Metrics', 'R10@2'], ['Metrics', 'R10@5']] | [['0.325', '0.463', '0.775'], ['0.328', '0.515', '0.792'], ['0.365', '0.536', '0.828'], ['0.355', '0.525', '0.825'], ['0.399', '0.571', '0.842'], ['0.412', '0.591', '0.857'], ['0.41', '0.59', '0.858'], ['0.421', '0.601', '0.861'], ['0.453', '0.654', '0.886'], ['0.501', '0.7', '0.921'], ['0.526', '0.727', '0.933'], ['0.... | column | ['R10@1', 'R10@2', 'R10@5'] | ['IoI-global', 'IoI-local'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metrics || R10@1</th> <th>Metrics || R10@2</th> <th>Metrics || R10@5</th> </tr> </thead> <tbody> <tr> <td>Models || RNN (Lowe et al., 2015)</td> <td>0.325</td> <td>0.463</td> <t... | Table 2 | table_2 | P19-1001 | 7 | acl2019 | 6.4 Evaluation Results . Table 2 report evaluation results on the three data sets where IoI-global and IoI-local represent models learned with Objective (17) and Objective (18) respectively. We can see that both IoI-local and IoI-global outperform the best performing baseline, and improvements from IoI-local on all met... | [2, 1, 1, 1] | ['6.4 Evaluation Results .', 'Table 2 report evaluation results on the three data sets where IoI-global and IoI-local represent models learned with Objective (17) and Objective (18) respectively.', 'We can see that both IoI-local and IoI-global outperform the best performing baseline, and improvements from IoI-local on... | [None, ['IoI-global', 'IoI-local'], ['IoI-local', 'IoI-global'], ['IoI-local', 'IoI-global']] | 1 |
P19-1006table_1 | Experiment Result on the Ubuntu Corpus. | 2 | [['Model', 'Baseline'], ['Model', 'DAM'], ['Model', 'DAM+Fine-tune'], ['Model', 'DME'], ['Model', 'DME-SMN'], ['Model', 'STM(Transform)'], ['Model', 'STM(GRU)'], ['Model', 'STM(Ensemble)'], ['Model', 'STM(BERT)']] | 1 | [['R100@1'], ['R100@10'], ['MRR']] | [['0.083', '0.359', '-'], ['0.347', '0.663', '0.356'], ['0.364', '0.664', '0.443'], ['0.383', '0.725', '0.498'], ['0.455', '0.761', '0.558'], ['0.49', '0.764', '0.588'], ['0.503', '0.783', '0.597'], ['0.521', '0.797', '0.616'], ['0.548', '0.827', '0.614']] | column | ['R100@1', 'R100@10', 'MRR'] | ['STM(Transform)', 'STM(GRU)', 'STM(Ensemble)', 'STM(BERT)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R100@1</th> <th>R100@10</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>0.083</td> <td>0.359</td> <td>-</td> </tr> <tr> <td>Model || DAM<... | Table 1 | table_1 | P19-1006 | 4 | acl2019 | 3.5 Ablation Study. As it is shown in Table 1, we conduct an ablation study on the testset of the Ubuntu Corpus, where we aim to examine the effect of each part in our proposed model. Firstly, we verify the effectiveness of dual multi-turn encoder by comparing Baseline and DME in Table 1. Thanks to dual multi-turn en-... | [2, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 2, 1, 2, 1] | ['3.5 Ablation Study.', 'As it is shown in Table 1, we conduct an ablation study on the testset of the Ubuntu Corpus, where we aim to examine the effect of each part in our proposed model.', 'Firstly, we verify the effectiveness of dual multi-turn encoder by comparing Baseline and DME in Table 1.', 'Thanks to dual mul... | [None, None, ['Baseline', 'DME'], ['DME', 'Baseline', 'R100@10'], ['STM(Transform)', 'STM(GRU)'], ['STM(GRU)'], ['STM(GRU)'], ['STM(GRU)', 'DME-SMN'], ['STM(GRU)', 'R100@1'], ['STM(Transform)'], None, None, ['DAM'], None, ['STM(BERT)'], ['STM(BERT)']] | 1 |
P19-1013table_3 | Results on the biomedical domain dataset (§5.3). P and R represent precision and recall, respectively. The scores of C&C and EasySRL fine-tuned on the GENIA1000 is included for comparison (excerpted from Lewis et al. (2016)). | 2 | [['Method', 'C&C'], ['Method', 'EasySRL'], ['Method', 'depccg'], ['Method', '#NAME?'], ['Method', '#NAME?'], ['Method', '#NAME?']] | 1 | [['P'], ['R'], ['F1']] | [['77.8', '71.4', '74.5'], ['81.8', '82.6', '82.2'], ['83.11', '82.63', '82.87'], ['85.87', '85.34', '85.61'], ['85.45', '84.49', '84.97'], ['86.9', '86.14', '86.52']] | column | ['P', 'R', 'F1'] | ['depccg'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || C&C</td> <td>77.8</td> <td>71.4</td> <td>74.5</td> </tr> <tr> <td>Method || EasySRL</td> ... | Table 3 | table_3 | P19-1013 | 6 | acl2019 | Table 3 shows the results of the parsing experiment, where the scores of revious work (C&C (Clark and Curran, 2007) and EasySRL (Lewis et al., 2016)) are included for reference. The plain depccg already achieves higher scores than these methods, and boosts when combined with ELMo (improvement of 2.73 points in terms of... | [2, 1, 2, 1] | ['Table 3 shows the results of the parsing experiment, where the scores of revious work (C&C (Clark and Curran, 2007) and EasySRL (Lewis et al., 2016)) are included for reference.', 'The plain depccg already achieves higher scores than these methods, and boosts when combined with ELMo (improvement of 2.73 points in ter... | [None, ['depccg'], None, None] | 1 |
P19-1013table_4 | Results on question sentences (§5.3). All of baseline C&C, EasySRL and depccg parsers are retrained on Questions data. | 2 | [['Method', 'C&C'], ['Method', 'EasySRL'], ['Method', 'depccg'], ['Method', 'depccg+elmo'], ['Method', 'depccg+proposed']] | 1 | [['P'], ['R'], ['F1']] | [['-', '-', '86.8'], ['88.2', '87.9', '88'], ['90.42', '90.15', '90.29'], ['90.55', '89.86', '90.21'], ['90.27', '89.97', '90.12']] | column | ['P', 'R', 'F1'] | ['depccg', 'depccg+elmo', 'depccg+proposed'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || C&C</td> <td>-</td> <td>-</td> <td>86.8</td> </tr> <tr> <td>Method || EasySRL</td> <t... | Table 4 | table_4 | P19-1013 | 6 | acl2019 | Table 4 compares the performance of depccg fine-tuned on the QuestionBank, along with other baselines. Contrary to our expectation, the plain depccg retrained on Questions data performs the best, with neither ELMo nor the proposed method taking any effect. We hypothesize that, since the evaluation set contains sentence... | [1, 1, 2, 2] | ['Table 4 compares the performance of depccg fine-tuned on the QuestionBank, along with other baselines.', 'Contrary to our expectation, the plain depccg retrained on Questions data performs the best, with neither ELMo nor the proposed method taking any effect.', 'We hypothesize that, since the evaluation set contains ... | [None, ['depccg', 'depccg+elmo', 'depccg+proposed'], None, None] | 1 |
P19-1019table_1 | Results of the proposed method in comparison to previous work (BLEU). Overall best results are in bold, the best ones in each group are underlined. ∗Detokenized BLEU equivalent to the official mteval-v13a.pl script. The rest use tokenized BLEU with multi-bleu.perl (or similar). | 2 | [['NMT', 'Artetxe et al. (2018c)'], ['NMT', 'Lample et al. (2018a)'], ['NMT', 'Yang et al. (2018)'], ['NMT', 'Lample et al. (2018b)'], ['SMT', 'Artetxe et al. (2018b)'], ['SMT', 'Lample et al. (2018b)'], ['SMT', 'Marie and Fujita (2018)'], ['SMT', 'Proposed system'], ['SMT', '+detok. SacreBLEU'], ['SMT + NMT', 'Lample ... | 2 | [['language pair', 'fr-en'], ['language pair', 'en-fr'], ['language pair', 'de-en'], ['language pair', 'en-de'], ['language pair', 'de-en'], ['language pair', 'en-de']] | [['15.6', '15.1', '10.2', '6.6', '-', '-'], ['14.3', '15.1', '-', '-', '13.3', '9.6'], ['15.6', '17', '-', '-', '14.6', '10.9'], ['24.2', '25.1', '-', '-', '21', '17.2'], ['25.9', '26.2', '17.4', '14.1', '23.1', '18.2'], ['27.2', '28.1', '-', '-', '22.9', '17.9'], ['-', '-', '-', '-', '20.2', '15.5'], ['28.4', '30.1', ... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['Proposed system'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>language pair || fr-en</th> <th>language pair || en-fr</th> <th>language pair || de-en</th> <th>language pair || en-de</th> <th>language pair || de-en</th> <th>language pair || en-de</th> ... | Table 1 | table_1 | P19-1019 | 6 | acl2019 | Table 1 reports the results of the proposed system in comparison to previous work. As it can be seen, our full system obtains the best published results in all cases, outperforming the previous state-of-the-art by 5-7 BLEU points in all datasets and translation directions. | [1, 1] | ['Table 1 reports the results of the proposed system in comparison to previous work.', 'As it can be seen, our full system obtains the best published results in all cases, outperforming the previous state-of-the-art by 5-7 BLEU points in all datasets and translation directions.'] | [None, ['Proposed system']] | 1 |
P19-1019table_3 | Results of the proposed method in comparison to different supervised systems (BLEU). ∗Detokenized BLEU equivalent to the official mteval-v13a.pl script. The rest use tokenized BLEU with multi-bleu.perl (or similar). †Results in the original test set from WMT 2014, which slightly differs from the full test set used in a... | 2 | [['Unsupervised', 'Proposed system'], ['Unsupervised', '+detok SacreBLEU*'], ['Supervised', 'WMT best*'], ['Supervised', 'Vaswani et al. (2017)'], ['Supervised', 'Edunov et al. (2018)']] | 2 | [['WMT-14', 'fr-en'], ['WMT-14', 'en-fr'], ['WMT-14', 'de-en'], ['WMT-14', 'en-de']] | [['33.5', '36.2', '27', '22.5'], ['33.2', '33.6', '26.4', '21.2'], ['35', '35.8', '29', '20.6'], ['-', '41', '-', '28.4'], ['-', '45.6', '-', '35']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['Proposed system'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WMT-14 || fr-en</th> <th>WMT-14 || en-fr</th> <th>WMT-14 || de-en</th> <th>WMT-14 || en-de</th> </tr> </thead> <tbody> <tr> <td>Unsupervised || Proposed system</td> <td>33.5</td> ... | Table 3 | table_3 | P19-1019 | 7 | acl2019 | So as to put our results into perspective, Table 3 reports the results of different supervised systems in the same WMT 2014 test set. More concretely, we include the best results from the shared task itself, which reflect the state-of-the-art in machine translation back in 2014; those of Vaswani et al. (2017), who intr... | [1, 2, 1, 1] | ['So as to put our results into perspective, Table 3 reports the results of different supervised systems in the same WMT 2014 test set.', 'More concretely, we include the best results from the shared task itself, which reflect the state-of-the-art in machine translation back in 2014; those of Vaswani et al. (2017), who... | [None, ['Vaswani et al. (2017)', 'Edunov et al. (2018)'], ['Unsupervised', 'Proposed system', 'en-de'], ['Unsupervised', 'Supervised']] | 1 |
P19-1021table_4 | Korean→English results. Mean and standard deviation of three training runs reported. | 2 | [['system', '(Gu et al., 2018b) (supervised Transformer)'], ['system', 'phrase-based SMT'], ['system', 'NMT baseline (2)'], ['system', 'NMT optimized (8)']] | 1 | [['BLEU']] | [['5.97'], ['6.57 ± 0.17'], ['2.93 ± 0.34'], ['10.37 ± 0.29']] | column | ['BLEU'] | ['NMT optimized (8)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>system || (Gu et al., 2018b) (supervised Transformer)</td> <td>5.97</td> </tr> <tr> <td>system || phrase-based SMT</td> <td>6.57 ± 0.17</td... | Table 4 | table_4 | P19-1021 | 5 | acl2019 | Table 4 shows results for Korean - English, using the same configurations (1, 2 and 8) as for German - English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported... | [1, 1] | ['Table 4 shows results for Korean - English, using the same configurations (1, 2 and 8) as for German - English.', 'Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU rep... | [None, ['NMT optimized (8)', 'BLEU', '(Gu et al., 2018b) (supervised Transformer)']] | 1 |
P19-1023table_3 | Experiments result. | 3 | [['Model', 'Existing Models', 'MinIE (+AIDA)'], ['Model', 'Existing Models', 'MinIE (+NeuralEL)'], ['Model', 'Existing Models', 'ClausIE (+AIDA)'], ['Model', 'Existing Models', 'ClausIE (+NeuralEL)'], ['Model', 'Existing Models', 'CNN (+AIDA)'], ['Model', 'Existing Models', 'CNN (+NeuralEL)'], ['Model', 'Encoder-Decode... | 2 | [['WIKI', 'Precision'], ['WIKI', 'Recall'], ['WIKI', 'F1'], ['GEO', 'Precision'], ['GEO', 'Recall'], ['GEO', 'F1']] | [['0.3672', '0.4856', '0.4182', '0.3574', '0.3901', '0.373'], ['0.3511', '0.3967', '0.3725', '0.3644', '0.3811', '0.3726'], ['0.3617', '0.4728', '0.4099', '0.3531', '0.3951', '0.3729'], ['0.3445', '0.3786', '0.3607', '0.3563', '0.3791', '0.3673'], ['0.4035', '0.3503', '0.375', '0.3715', '0.3165', '0.3418'], ['0.3689', ... | column | ['Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1'] | ['N-gram Attention (+beam)', 'N-gram Attention (+triple classifier)', 'N-gram Attention (+pre-trained)', 'N-gram Attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WIKI || Precision</th> <th>WIKI || Recall</th> <th>WIKI || F1</th> <th>GEO || Precision</th> <th>GEO || Recall</th> <th>GEO || F1</th> </tr> </thead> <tbody> <tr> <td>Model || E... | Table 3 | table_3 | P19-1023 | 8 | acl2019 | 4.3 Result. Table 3 shows that the end-to-end models outper-form the existing model. In particular, our proposed n-gram attention model achieves the best results in terms of precision, recall, and F1 score. Our proposed model outperforms the best existing model (MinIE) by 33.39% and 34.78% in terms of F1 score on th... | [0, 1, 1, 1, 2, 1, 2, 2, 2, 1, 1, 1, 1, 1, 2, 2, 1, 2] | ['4.3 Result.', 'Table 3 shows that the end-to-end models outper-form the existing model.', 'In particular, our proposed n-gram attention model achieves the best results in terms of precision, recall, and F1 ... | [None, ['Proposed'], ['Proposed', 'N-gram Attention', 'Precision', 'Recall', 'F1'], ['Proposed', 'N-gram Attention', 'MinIE (+AIDA)', 'MinIE (+NeuralEL)', 'F1', 'WIKI', 'GEO'], ['MinIE (+AIDA)', 'MinIE (+NeuralEL)'], ['MinIE (+AIDA)', 'F1', 'Precision', 'MinIE (+NeuralEL)'], None, ['CNN (+AIDA)', 'CNN (+NeuralEL)'], ['... | 1 |
P19-1029table_3 | Evaluation of models at early stopping points. Results for three random seeds on IWSLT are averaged, reporting the standard deviation in the subscript. The translation of the dev set is obtained by greedy decoding (as during validation) and of the test set with beam search of width five. The costs are measured in chara... | 2 | [['Model', 'Baseline'], ['Model', 'Full'], ['Model', 'Weak'], ['Model', 'Self'], ['Model', 'Reg4'], ['Model', 'Reg3'], ['Model', 'Reg2']] | 2 | [['IWSLT dev', 'BLEU'], ['IWSLT dev', 'Cost'], ['IWSLT test', 'BLEU'], ['IWSLT test', 'TER']] | [['28.28', '-', '24.84', '62.42'], ['28.93±0.02', ' 417k', '25.60±0.02', '61.86±0.03'], ['28.65±0.01', '32k', '25.10±0.09', '62.12±0.12'], ['28.58±0.02', '-', '25.33±0.06', '61.96±0.05'], ['28.57±0.04', '68k', '25.23±0.05', '62.02±0.12'], ['28.61±0.03', '18k', '25.23±0.09', '62.07±0.06'], ['28.66±0.06', '88k', '25.27±0... | column | ['BLEU', 'cost', 'BLEU', 'TER'] | ['Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IWSLT dev || BLEU</th> <th>IWSLT dev || Cost</th> <th>IWSLT test || BLEU</th> <th>IWSLT test || TER</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>28.28</td> <t... | Table 3 | table_3 | P19-1029 | 12 | acl2019 | A.3 Offline Evaluation on IWSLT. Table 3 reports the offline held-out set evaluations for the early stopping points selected on the dev set for all feedback modes. All models notably improve over the baseline, only using full feedback leads to the overall best model on IWSLT (+0.6 BLEU / -0.6 TER), but costs a massive... | [2, 1, 1, 1, 2, 1, 1, 2] | ['A.3 Offline Evaluation on IWSLT.', ' Table 3 reports the offline held-out set evaluations for the early stopping points selected on the dev set for all feedback modes.', 'All models notably improve over the baseline, only using full feedback leads to the overall best model on IWSLT (+0.6 BLEU / -0.6 TER), but costs a... | [None, None, ['Full', 'Baseline', 'BLEU', 'TER', 'Cost'], ['Self', 'BLEU', 'TER'], None, ['Self'], ['Weak', 'Self', 'Baseline'], None] | 1 |
P19-1035table_1 | Results of the difficulty prediction approaches. SVM (original) has been taken from Beinborn (2016) | 2 | [['Model', 'SVM (original)'], ['Model', 'SVM (reproduced)'], ['Model', 'MLP'], ['Model', 'BiLSTM']] | 2 | [['Original data', 'rho'], ['Original data', 'RMSE'], ['Original data', 'qw k'], ['New data', 'rho'], ['New data', 'RMSE'], ['New data', 'qw k']] | [['0.5', '0.23', '0.44', '–', '–', '–'], ['0.49', '0.24', '0.47', '0.5', '0.21', '0.39'], ['0.42', '0.25', '0.31', '0.41', '0.22', '0.25'], ['0.49', '0.24', '0.35', '0.39', '0.24', '0.27']] | column | ['rho', 'RMSE', 'qw k', 'rho', 'RMSE', 'qw k'] | ['SVM (original)', 'SVM (reproduced)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Original data || rho</th> <th>Original data || RMSE</th> <th>Original data || qw k</th> <th>New data || rho</th> <th>New data || RMSE</th> <th>New data || qw k</th> </tr> </thead> <tbod... | Table 1 | table_1 | P19-1035 | 4 | acl2019 | The right-hand side of table 1 shows the performance of our SVM and the two neural methods. The results indicate that the SVM setup is wellsuited for the difficulty prediction task and that it successfully generalizes to new data. | [1, 1] | ['The right-hand side of table 1 shows the performance of our SVM and the two neural methods.', 'The results indicate that the SVM setup is wellsuited for the difficulty prediction task and that it successfully generalizes to new data.'] | [['SVM (original)', 'SVM (reproduced)'], ['SVM (original)', 'SVM (reproduced)']] | 1 |
P19-1036table_4 | Performance of our Method on the Operational Risk Text Classification Task | 2 | [['Taxonomy level', 'Level 1'], ['Taxonomy level', 'Level 2'], ['Taxonomy level', 'Level 3']] | 1 | [['Precision'], ['Recall'], ['F1-Score']] | [['91.8', '89.37', '90.45'], ['86.08', '74.8', '78.1'], ['34.98', '19.88', '22.95']] | column | ['Precision', 'Recall', 'F1-score'] | ['Taxonomy level', 'Level 1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1-Score</th> </tr> </thead> <tbody> <tr> <td>Taxonomy level || Level 1</td> <td>91.8</td> <td>89.37</td> <td>90.45</td> </tr> <tr> ... | Table 4 | table_4 | P19-1036 | 8 | acl2019 | 5.2 Result . For the purpose of experiment, operational teams (not experts) were asked to provide manual tags for a sample of 989 operational incidents. Table 4 provide the classification results of our approach when compared to those manual annotations, considering all three levels of the taxonomy. In a second step in... | [2, 2, 1, 2, 1] | ['5.2 Result .', 'For the purpose of experiment, operational teams (not experts) were asked to provide manual tags for a sample of 989 operational incidents.', 'Table 4 provide the classification results of our approach when compared to those manual annotations, considering all three levels of the taxonomy.', 'In a sec... | [None, None, ['Taxonomy level'], None, ['Level 1']] | 1 |
P19-1041table_3 | Manual evaluation on the Yelp dataset. | 2 | [['Model', 'Fu et al. (2018)'], ['Model', 'Shen et al. (2017)'], ['Model', 'Zhao et al. (2018)'], ['Model', 'Ours (DAE)'], ['Model', 'Ours (VAE)']] | 1 | [['TS'], ['CP'], ['LQ'], ['GM']] | [['1.67', '3.84', '3.66', '2.86'], ['3.63', '3.07', '3.08', '3.25'], ['3.55', '3.09', '3.77', '3.46'], ['3.67', '3.64', '4.19', '3.83'], ['4.32', '3.73', '4.48', '4.16']] | column | ['TS', 'CP', 'LQ', 'GM'] | ['Ours (DAE)', 'Ours (VAE)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TS</th> <th>CP</th> <th>LQ</th> <th>GM</th> </tr> </thead> <tbody> <tr> <td>Model || Fu et al. (2018)</td> <td>1.67</td> <td>3.84</td> <td>3.66</td> <td>2.86</td> <... | Table 3 | table_3 | P19-1041 | 8 | acl2019 | Table 3 presents the results of human evaluation on selected methods. Again, we see that the style embedding model (Fu et al., 2018) is ineffective as it has a very low transfer strength, and that our method outperforms other baselines in all aspects. The results are consistent with Table 2. This also implies that the ... | [1, 1, 2, 2] | ['Table 3 presents the results of human evaluation on selected methods.', 'Again, we see that the style embedding model (Fu et al., 2018) is ineffective as it has a very low transfer strength, and that our method outperforms other baselines in all aspects.', 'The results are consistent with Table 2.', 'This also implie... | [None, ['Fu et al. (2018)', 'Ours (DAE)', 'Ours (VAE)'], None, None] | 1 |
P19-1042table_4 | Average single model results comparing different strategies to model cross-sentence context. ‘aux (+gate)’ is used in our CROSENT model. | 1 | [['BASELINE'], ['concat'], ['aux (no gate)'], ['aux (+gate)']] | 2 | [['Dev', 'F 0.5'], ['CoNLL-2013', 'P'], ['CoNLL-2013', 'R'], ['CoNLL-2013', 'F 0.5']] | [['33.21', '54.51', '15.16', '35.88'], ['33.41', '55.14', '15.28', '36.23'], ['32.99', '55.1', '14.83', '35.69'], ['35.68', '55.65', '16.93', '38.17']] | column | ['F 0.5', 'P', 'R', 'F 0.5'] | ['BASELINE', 'concat', 'aux (no gate)', 'aux (+gate)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || F 0.5</th> <th>CoNLL-2013 || P</th> <th>CoNLL-2013 || R</th> <th>CoNLL-2013 || F 0.5</th> </tr> </thead> <tbody> <tr> <td>BASELINE</td> <td>33.21</td> <td>54.51</td> ... | Table 4 | table_4 | P19-1042 | 6 | acl2019 | 5.1 Modeling Cross-Sentence Context . We investigate different mechanisms of integrating cross-sentence context. Table 4 shows the average single model results of our sentence-level BASELINE compared to two different strategies of integrating cross-sentence context. concat' refers to simply prepending the previous sour... | [2, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1] | ['5.1 Modeling Cross-Sentence Context .', 'We investigate different mechanisms of integrating cross-sentence context.', 'Table 4 shows the average single model results of our sentence-level BASELINE compared to two different strategies of integrating cross-sentence context.', "concat' refers to simply prepending the pr... | [None, None, ['BASELINE'], ['concat'], None, None, ['aux (no gate)'], ['aux (+gate)'], ['BASELINE', 'aux (no gate)', 'concat'], ['BASELINE'], None, None] | 1 |
P19-1046table_5 | Comparison of Efficiency. | 2 | [['Methods', 'BC-LSTM'], ['Methods', 'TFN'], ['Methods', 'HFFN']] | 1 | [['FLOPs'], ['Number of Parameters']] | [['1322024', '1383902'], ['8491845', '4245986'], ['16665', '8301']] | column | ['FLOPs', 'Number of Parameters'] | ['HFFN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FLOPs</th> <th>Number of Parameters</th> </tr> </thead> <tbody> <tr> <td>Methods || BC-LSTM</td> <td>1322024</td> <td>1383902</td> </tr> <tr> <td>Methods || TFN</td> <td>8... | Table 5 | table_5 | P19-1046 | 8 | acl2019 | Table 5 shows that in terms of the number of parameters, TFN is around 511 times larger than our HFFN, even under the situation where we adopt a more complex module after tensor fusion, demonstrating the high efficiency of HFFN. Note that if TFN adopts the original setting as stated in (Zadeh et al., 2017) where the FC... | [1, 2, 1, 1, 1] | ['Table 5 shows that in terms of the number of parameters, TFN is around 511 times larger than our HFFN, even under the situation where we adopt a more complex module after tensor fusion, demonstrating the high efficiency of HFFN.', 'Note that if TFN adopts the original setting as stated in (Zadeh et al., 2017) where t... | [['TFN', 'HFFN'], ['TFN'], ['HFFN', 'FLOPs'], ['TFN', 'FLOPs'], ['TFN', 'BC-LSTM', 'HFFN']] | 1 |
P19-1048table_4 | F1-I scores of different model variants. Average results over 5 runs are reported. | 2 | [['Model variants', 'Vanilla model'], ['Model variants', '+Opinion transmission'], ['Model variants', '+Message passing-a (IMN -d)'], ['Model variants', '+DS'], ['Model variants', '+DD'], ['Model variants', '+Message passing-d (IMN)']] | 1 | [['D1'], ['D2'], ['D3']] | [['66.66', '55.63', '56.24'], ['66.98', '56.03', '56.65'], ['68.32', '57.66', '57.91'], ['68.48', '57.86', '58.03'], ['68.65', '57.5', '58.26'], ['69.54', '58.37', '59.18']] | column | ['F1-I', 'F1-I', 'F1-I'] | ['+Message passing-d (IMN)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>D1</th> <th>D2</th> <th>D3</th> </tr> </thead> <tbody> <tr> <td>Model variants || Vanilla model</td> <td>66.66</td> <td>55.63</td> <td>56.24</td> </tr> <tr> <td>Model... | Table 4 | table_4 | P19-1048 | 8 | acl2019 | Ablation study. To investigate the impact of different components, we start with a vanilla model which consists of f θs , f θae , and f θas only without any informative message passing, and add other components one at a time. Table 4 shows the results of different model variants. +Opinion transmission denotes the opera... | [2, 2, 1, 2, 2, 2, 2, 1, 1, 2, 1] | ['Ablation study.', 'To investigate the impact of different components, we start with a vanilla model which consists of f θs , f θae , and f θas only without any informative message passing, and add other components one at a time.', 'Table 4 shows the results of different model variants.', '+Opinion transmission denote... | [None, None, None, ['+Opinion transmission'], ['+Message passing-a (IMN -d)'], ['+DS', '+DD'], ['+Message passing-d (IMN)'], ['+Message passing-a (IMN -d)', '+Message passing-d (IMN)'], ['+DS', '+DD'], ['+DS', '+DD'], ['+Message passing-d (IMN)']] | 1 |
P19-1048table_7 | Model comparison in a setting without opinion term labels. Average results over 5 runs with random initialization are reported. ∗ indicates the proposed method is significantly better than the other baselines (p < 0.05) based on one-tailed unpaired t-test. | 2 | [['Methods', 'DECNN-ALSTM'], ['Methods', 'DECNN-dTrans'], ['Methods', 'PIPELINE'], ['Methods', 'MNN'], ['Methods', 'INABSA'], ['Methods', 'IMN -d'], ['Methods', 'IMN']] | 2 | [['D1', 'F1-a'], ['D1', 'acc-s'], ['D1', 'F1-s'], ['D1', 'F1-I'], ['D2', 'F1-a'], ['D2', 'acc-s'], ['D2', 'F1-s'], ['D2', 'F1-I'], ['D3', 'F1-a'], ['D3', 'acc-s'], ['D3', 'F1-s'], ['D3', 'F1-I']] | [['83.33', '77.63', '70.09', '64.32', '80.28', '69.98', '66.2', '55.92', '68.72', '79.22', '54.4', '54.22'], ['83.33', '79.45', '73.08', '66.15', '80.28', '71.51', '68.03', '57.28', '68.72', '82.09', '68.35', '56.08'], ['83.33', '79.39', '69.45', '65.96', '80.28', '72.12', '68.56', '57.29', '68.72', '81.85', '58.74', '... | column | ['F1-a', 'acc-s', 'F1-s', 'F1-I', 'F1-a', 'acc-s', 'F1-s', 'F1-I', 'F1-a', 'acc-s', 'F1-s', 'F1-I'] | ['IMN -d', 'IMN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>D1 || F1-a</th> <th>D1 || acc-s</th> <th>D1 || F1-s</th> <th>D1 || F1-I</th> <th>D2 || F1-a</th> <th>D2 || acc-s</th> <th>D2 || F1-s</th> <th>D2 || F1-I</th> <th>D3 || F1-a</t... | Table 7 | table_7 | P19-1048 | 12 | acl2019 | Both IMN -d and IMN still significantly outperform other baselines in most cases under this setting. In addition, when compare the results in Table 7 and Table 3, we observe that IMN -d and IMN consistently yield better F1-I scores on all datasets in Table 3, when opinion term extraction is also considered. Consistent ... | [2, 1, 2, 2, 2] | ['Both IMN -d and IMN still significantly outperform other baselines in most cases under this setting.', 'In addition, when compare the results in Table 7 and Table 3, we observe that IMN -d and IMN consistently yield better F1-I scores on all datasets in Table 3, when opinion term extraction is also considered.', 'Con... | [['IMN -d', 'IMN'], ['F1-a', 'F1-s', 'F1-I'], None, None, ['IMN']] | 1 |
P19-1056table_3 | F1 score (%) comparison only for aspect term extraction. | 2 | [['Model', 'DE-CNN'], ['Model', 'DOER*'], ['Model', 'DOER']] | 1 | [['SL'], ['SR'], ['ST']] | [['81.26', '78.98', '63.23'], ['82.11', '79.98', '68.99'], ['82.61', '81.06', '71.35']] | column | ['F1', 'F1', 'F1'] | ['DOER'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SL</th> <th>SR</th> <th>ST</th> </tr> </thead> <tbody> <tr> <td>Model || DE-CNN</td> <td>81.26</td> <td>78.98</td> <td>63.23</td> </tr> <tr> <td>Model || DOER*</td> ... | Table 3 | table_3 | P19-1056 | 8 | acl2019 | Results on ATE. Table 3 shows the results of aspect term extraction only. DE-CNN is the current state-of-the-art model on ATE as mentioned above. Comparing with it, DOER achieves new state-of-the-art scores. DOER* denotes the DOER without ASC part. As the table shows, DOER achieves better performance than DOER*, which ... | [2, 1, 2, 1, 2, 1] | ['Results on ATE.', 'Table 3 shows the results of aspect term extraction only.', 'DE-CNN is the current state-of-the-art model on ATE as mentioned above.', 'Comparing with it, DOER achieves new state-of-the-art scores.', 'DOER* denotes the DOER without ASC part.', 'As the table shows, DOER achieves better performance t... | [None, None, ['DE-CNN'], ['DOER'], ['DOER*'], ['DOER', 'DOER*']] | 1 |
P19-1069table_5 | Results of IMS trained on different corpora on the English all-words WSD tasks. † marks statistical significance between OneSeC and its competitors. | 2 | [['Dataset', 'Senseval-2'], ['Dataset', 'Senseval-3'], ['Dataset', 'SemEval-07'], ['Dataset', 'SemEval-13'], ['Dataset', 'SemEval-15'], ['Dataset', 'ALL']] | 1 | [['OneSeC'], ['TOM'], ['OMSTI'], ['SemCor'], ['MFS']] | [['73.2', '70.5', '74.1', '76.8', '72.1'], ['68.2', '67.4', '67.2', '73.8', '72'], ['63.5', '59.8', '62.3', '67.3', '65.4'], ['66.5', '65.5', '62.8', '65.5', '63'], ['70.8', '68.6', '63.1', '66.1', '66.3'], ['69', '67.3', '66.4', '70.4', '67.6']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['OneSeC'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OneSeC</th> <th>TOM</th> <th>OMSTI</th> <th>SemCor</th> <th>MFS</th> </tr> </thead> <tbody> <tr> <td>Dataset || Senseval-2</td> <td>73.2</td> <td>70.5</td> <td>74.1</t... | Table 5 | table_5 | P19-1069 | 6 | acl2019 | In Table 5 we compare the results of IMS when trained on different corpora. As one can see, OneSeC achieves the best results on ALL when compared to automatic and semi-automatic approaches, and ranks second only with respect to SemCor. Interestingly enough, OneSeC beats its manual competitor on SemEval-2013 by 1 point ... | [1, 1, 1, 1, 2, 2, 2, 2, 1] | ['In Table 5 we compare the results of IMS when trained on different corpora.', 'As one can see, OneSeC achieves the best results on ALL when compared to automatic and semi-automatic approaches, and ranks second only with respect to SemCor.', 'Interestingly enough, OneSeC beats its manual competitor on SemEval-2013 by ... | [None, ['OneSeC', 'ALL', 'SemCor'], ['OneSeC', 'SemEval-13', 'SemEval-15'], ['OneSeC', 'ALL'], ['OneSeC', 'SemCor'], ['OneSeC', 'SemCor'], None, ['SemCor'], ['TOM']] | 1 |
P19-1074table_7 | Performance of joint relation and supporting evidence prediction in F1 measurement (%). | 2 | [['Method', 'Heuristic predictor'], ['Method', 'Neural predictor']] | 1 | [['Dev'], ['Test']] | [['36.21', '36.76'], ['44.07', '43.85']] | column | ['F1', 'F1'] | ['Neural predictor', 'Heuristic predictor'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Method || Heuristic predictor</td> <td>36.21</td> <td>36.76</td> </tr> <tr> <td>Method || Neural predictor</td> <td>... | Table 7 | table_7 | P19-1074 | 8 | acl2019 | Supporting Evidence Prediction. We propose a new task to predict the supporting evidence for relation instances. On the one hand, jointly predicting the evidence provides better explainability. On the other hand, identifying supporting evidence and reasoning relational facts from text are naturally dual tas... | [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1] | ['Supporting Evidence Prediction.', 'We propose a new task to predict the supporting evidence for relation instances.', 'On the one hand, jointly predicting the evidence provides better explainability.', 'On the other hand, identifying supporting evidence and reasoning relational facts from text are natural... | [None, None, None, None, ['Heuristic predictor'], None, ['Neural predictor'], None, None, None, ['Neural predictor', 'Heuristic predictor']] | 1 |
P19-1079table_5 | Results on benchmark dataset (ATIS and subsets of Snips). | 2 | [['Model', 'JOINT-SF-IC'], ['Model', 'PARALLEL[UNIV]'], ['Model', 'PARALLEL[UNIV+TASK]'], ['Model', 'PARALLEL[UNIV+GROUP+TASK]'], ['Model', 'SERIAL'], ['Model', 'SERIAL+HIGHWAY'], ['Model', 'SERIAL+HIGHWAY+SWAP']] | 2 | [['ATIS', 'Intent Acc.'], ['ATIS', 'Slot F1'], ['Snips-location', 'Intent Acc.'], ['Snips-location', 'Slot F1'], ['Snips-music', 'Intent Acc.'], ['Snips-music', 'Slot F1'], ['Snips-creative', 'Intent Acc.'], ['Snips-creative', 'Slot F1']] | [['96.1', '95.4', '99.7', '96.3', '100', '93.1', '100', '96.6'], ['96.4', '95.4', '99.7', '95.8', '100', '92.1', '100', '95.8'], ['96.2', '95.5', '99.7', '96', '100', '93.4', '100', '97.2'], ['96.9', '95.4', '99.7', '96.5', '99.5', '94.4', '100', '97.3'], ['97.2', '95.8', '100', '96.5', '100', '93.8', '100', '97.2'], [... | column | ['Intent Acc.', 'Slot F1', 'Intent Acc.', 'Slot F1', 'Intent Acc.', 'Slot F1', 'Intent Acc.', 'Slot F1'] | ['SERIAL', 'SERIAL+HIGHWAY', 'SERIAL+HIGHWAY+SWAP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ATIS || Intent Acc.</th> <th>ATIS || Slot F1</th> <th>Snips-location || Intent Acc.</th> <th>Snips-location || Slot F1</th> <th>Snips-music || Intent Acc.</th> <th>Snips-music || Slot F1</th... | Table 5 | table_5 | P19-1079 | 7 | acl2019 | Table 5 shows results on ATIS and our split version of Snips. We now have four tasks: ATIS, Snips-location, Snips-music, and Snips-creative. JOINT-SF-IC is our baseline that treats these four tasks independently. All other models process the four tasks together in the MTL setup. For the models introduced in this paper,... | [1, 1, 2, 2, 1, 1] | ['Table 5 shows results on ATIS and our split version of Snips.', 'We now have four tasks: ATIS, Snips-location, Snips-music, and Snips-creative.', 'JOINT-SF-IC is our baseline that treats these four tasks independently.', 'All other models process the four tasks together in the MTL setup.', 'For the models introduced ... | [['ATIS', 'Snips-location', 'Snips-music', 'Snips-creative'], ['ATIS', 'Snips-location', 'Snips-music', 'Snips-creative'], ['JOINT-SF-IC'], None, ['ATIS', 'Snips-location', 'Snips-music', 'Snips-creative'], ['SERIAL', 'SERIAL+HIGHWAY', 'SERIAL+HIGHWAY+SWAP', 'PARALLEL[UNIV]', 'PARALLEL[UNIV+TASK]']] | 1 |
P19-1079table_6 | Results on the Alexa dataset. Best results on mean intent accuracy and slot F1 values, and results that are not statistically different from the best model are marked in bold. | 2 | [['Model', 'JOINT-SF-IC'], ['Model', 'PARALLEL[UNIV]'], ['Model', 'PARALLEL[UNIV+TASK]'], ['Model', 'PARALLEL[UNIV+GROUP+TASK]'], ['Model', 'SERIAL'], ['Model', 'SERIAL+HIGHWAY'], ['Model', 'SERIAL+HIGHWAY+SWAP']] | 2 | [['Intent Acc.', 'Mean'], ['Intent Acc.', 'Median'], ['Slot F1', 'Mean'], ['Slot F1', 'Median']] | [['93.36', '95.9', '79.97', '85.23'], ['93.44', '95.5', '80.76', '86.18'], ['93.78', '96.35', '80.49', '85.81'], ['93.87', '96.31', '80.84', '86.21'], ['93.83', '96.24', '80.84', '86.14'], ['93.81', '96.28', '80.73', '85.71'], ['94.02', '96.42', '80.8', '86.44']] | column | ['Intent Acc.', 'Intent Acc.', 'Slot F1', 'Slot F1'] | ['Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Intent Acc. || Mean</th> <th>Intent Acc. || Median</th> <th>Slot F1 || Mean</th> <th>Slot F1 || Median</th> </tr> </thead> <tbody> <tr> <td>Model || JOINT-SF-IC</td> <td>93.36</td> ... | Table 6 | table_6 | P19-1079 | 7 | acl2019 | 4.2 Alexa data. Table 6 shows the results of the single-domain model and the MTL models on the Alexa dataset. The trend is clearly visible in these results compared to the results on the benchmark data. As Alexa data has more domains, there might not be many features that are common across all the domains. Capturing th... | [2, 1, 2, 2, 2, 1, 1, 1] | ['4.2 Alexa data.', 'Table 6 shows the results of the single-domain model and the MTL models on the Alexa dataset.', 'The trend is clearly visible in these results compared to the results on the benchmark data.', 'As Alexa data has more domains, there might not be many features that are common across all the domains.',... | [None, None, None, None, None, ['SERIAL+HIGHWAY+SWAP'], ['PARALLEL[UNIV+GROUP+TASK]', 'SERIAL+HIGHWAY'], ['Mean', 'Slot F1']] | 1 |
P19-1081table_3 | Cross-domain (train/test on the different domain) response generation performance on the OpenDialKG dataset (metric: recall@k). E: entities, S: sentence, D: dialog contexts. | 4 | [['Input', 'E+S+D', 'Model', 'seq2seq (Sutskever et al.,2014)'], ['Input', 'E+S', 'Model', 'Tri-LSTM (Young et al.,2018)'], ['Input', 'E+S', 'Model', 'Ext-ED (Parthasarathi and Pineau,2018)'], ['Input', 'E', 'Model', 'DialKG Walker (ablation)'], ['Input', 'E+S', 'Model', 'DialKG Walker (ablation)'], ['Input', 'E+S+D', ... | 2 | [['Movie®Book', 'r@1'], ['Movie®Book', 'r@3'], ['Movie®Book', 'r@5'], ['Movie®Book', 'r@10'], ['Movie®Book', 'r@25'], ['Movie®Music', 'r@1'], ['Movie®Music', 'r@3'], ['Movie®Music', 'r@5'], ['Movie®Music', 'r@10'], ['Movie®Music', 'r@25']] | [['2.9', '21.3', '35.1', '50.6', '64.2', '1.5', '12.1', '19.7', '34.9', '49.4'], ['2.3', '17.9', '29.7', '44.9', '61', '1.9', '8.7', '12.9', '25.8', '44.4'], ['2', '7.9', '11.2', '16.4', '22.4', '1.3', '2.6', '3.8', '4.1', '8.3'], ['8.2', '15.7', '22.8', '31.8', '48.9', '4.5', '16.7', '21.6', '25.8', '33'], ['12.6', '2... | column | ['r@1', 'r@3', 'r@5', 'r@10', 'r@25', 'r@1', 'r@3', 'r@5', 'r@10', 'r@25'] | ['DialKG Walker (proposed)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Movie®Book || r@1</th> <th>Movie®Book || r@3</th> <th>Movie®Book || r@5</th> <th>Movie®Book || r@10</th> <th>Movie®Book || r@25</th> <th>Movie®Music || r@1</th> <th>Movie®Music || r@3</... | Table 3 | table_3 | P19-1081 | 7 | acl2019 | Cross-domain evaluation: Table 3 demonstrates that the DialKG Walker model can generalize to multiple domains better than the baseline approaches (train: movie & test: book / train: movie & test: music). This result indicates that our method also allows for zeroshot pruning by relations based on their proximity in the ... 
| [1, 1, 2] | ['Cross-domain evaluation: Table 3 demonstrates that the DialKG Walker model can generalize to multiple domains better than the baseline approaches (train: movie & test: book / train: movie & test: music).', 'This result indicates that our method also allows for zeroshot pruning by relations based on their proximity in... | [['DialKG Walker (proposed)'], None, None] | 1 |
P19-1085table_4 | Sentence selection evaluation and average label accuracy of GEAR with different thresholds on dev set (%). | 2 | [['threshold', '0'], ['threshold', '10^-4'], ['threshold', '10^-3'], ['threshold', '10^-2'], ['threshold', '10^-1']] | 1 | [['OFEVER'], ['Precision'], ['Recall'], ['F1'], ['GEAR LA']] | [['91.1', '24.08', '86.72', '37.69', '74.84'], ['91.04', '30.88', '86.63', '45.53', '74.86'], ['90.86', '40.6', '86.36', '55.23', '74.91'], ['90.27', '53.12', '85.47', '65.52', '74.89'], ['87.7', '70.61', '81.64', '75.72', '74.81']] | column | ['OFEVER', 'Precision', 'Recall', 'F1', 'GEAR LA'] | ['GEAR LA', 'threshold'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OFEVER</th> <th>Precision</th> <th>Recall</th> <th>F1</th> <th>GEAR LA</th> </tr> </thead> <tbody> <tr> <td>threshold || 0</td> <td>91.1</td> <td>24.08</td> <td>86.72<... | Table 4 | table_4 | P19-1085 | 6 | acl2019 | The rightmost column of Table 4 shows the results of our GEAR frameworks with different sentence selection thresholds. We choose the model with threshold 10^-3, which has the highest label accuracy, as our final model. When the threshold increases from 0 to 10^-3, the label accuracy increases due to less noisy informat... | [1, 1, 1, 1] | ['The rightmost column of Table 4 shows the results of our GEAR frameworks with different sentence selection thresholds.', 'We choose the model with threshold 10^-3, which has the highest label accuracy, as our final model.', 'When the threshold increases from 0 to 10^-3, the label accuracy increases due to less noisy ... | [['GEAR LA'], ['threshold', '10^-3'], ['threshold', '0', '10^-3'], ['threshold', '10^-3', '10^-1']] | 1 |
P19-1085table_7 | Evaluations of the full pipeline. The results of our pipeline are chosen from the model which has the highest dev FEVER score (%). | 2 | [['Model', 'Athene'], ['Model', 'UCL MRG'], ['Model', 'UNC NLP'], ['Model', 'BERT Pair'], ['Model', 'BERT Concat'], ['Model', 'Our pipeline']] | 2 | [['Dev', 'LA'], ['Dev', 'FEVER'], ['Test', 'LA'], ['Test', 'FEVER']] | [['68.49', '64.74', '65.46', '61.58'], ['69.66', '65.41', '67.62', '62.52'], ['69.72', '66.49', '68.21', '64.21'], ['73.3', '68.9', '69.75', '65.18'], ['73.67', '68.89', '71.01', '65.64'], ['74.84', '70.69', '71.6', '67.1']] | column | ['LA', 'FEVER', 'LA', 'FEVER'] | ['Our pipeline'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || LA</th> <th>Dev || FEVER</th> <th>Test || LA</th> <th>Test || FEVER</th> </tr> </thead> <tbody> <tr> <td>Model || Athene</td> <td>68.49</td> <td>64.74</td> <td>65.46... | Table 7 | table_7 | P19-1085 | 7 | acl2019 | Table 7 presents the evaluations of the full pipeline. We find the test FEVER score of BERT fine-tuning systems outperform other shared task models by nearly 1%. Furthermore, our full pipeline outperforms the BERT-Concat baseline by 1.46% and achieves significant improvements. | [1, 1, 1] | ['Table 7 presents the evaluations of the full pipeline.', 'We find the test FEVER score of BERT fine-tuning systems outperform other shared task models by nearly 1%.', 'Furthermore, our full pipeline outperforms the BERT-Concat baseline by 1.46% and achieves significant improvements.'] | [None, ['FEVER', 'BERT Pair', 'BERT Concat'], None] | 1 |
P19-1087table_4 | The comparison of Seq2Seq model performance using Transformer (Xformer) and LSTM encoders. Both encoders were pre-trained. | 3 | [['Encoder', 'Unweighted F1', 'Xformer'], ['Encoder', 'Unweighted F1', 'LSTM'], ['Encoder', 'Weighted F1', 'Xformer'], ['Encoder', 'Weighted F1', 'LSTM']] | 1 | [['Sx'], ['Sx + Status']] | [['0.67', '0.51'], ['0.70', '0.55'], ['0.76', '0.61'], ['0.79', '0.64']] | row | ['Unweighted F1', 'Unweighted F1', 'Weighted F1', 'Weighted F1'] | ['LSTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sx</th> <th>Sx + Status</th> </tr> </thead> <tbody> <tr> <td>Encoder || Unweighted F1 || Xformer</td> <td>0.67</td> <td>0.51</td> </tr> <tr> <td>Encoder || Unweighted F1 || LST... | Table 4 | table_4 | P19-1087 | 6 | acl2019 | Next, the Transformer encoder was compared against the LSTM encoder, using pre-training in both cases. Based on the performance on the development set, the best encoder was chosen which consists of two layers, each with 1024 hidden dimension and 16 attention heads. The results in Table 4 show that the LSTM-encoder outp... | [1, 2, 1, 1] | ['Next, the Transformer encoder was compared against the LSTM encoder, using pre-training in both cases.', 'Based on the performance on the development set, the best encoder was chosen which consists of two layers, each with 1024 hidden dimension and 16 attention heads.', 'The results in Table 4 show that the LSTM-enco... | [['Xformer', 'LSTM'], None, ['LSTM', 'Xformer'], ['LSTM']] | 1 |
P19-1088table_5 | Overall prediction results and F-scores for counseling quality using linguistic feature sets | 2 | [['Feature set', 'Baseline'], ['Feature set', 'N-grams'], ['Feature set', 'Semantic'], ['Feature set', 'Metafeatures'], ['Feature set', 'Sentiment'], ['Feature set', 'Alignment'], ['Feature set', 'Topics'], ['Feature set', 'MITI Behav'], ['Feature set', 'All features']] | 3 | [['Counseling Quality', 'Acc.', '-'], ['Counseling Quality', 'F-score', ' Low'], ['Counseling Quality', 'F-score', ' High']] | [['59.85%', '', ''], ['87.26%', '0.849', '0.89'], ['80.31%', '0.763', '0.832'], ['72.59%', '0.297', '0.83'], ['74.52%', '0.298', '0.844'], ['72.59%', '0.64', '0.779'], ['81.08%', '0.768', '0.84'], ['79.54%', '0.787', '0.808'], ['88.03%', '0.857', '0.897']] | column | ['Acc.', 'F-score', 'F-score'] | ['All features'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Counseling Quality || Acc. || -</th> <th>Counseling Quality || F-score || Low</th> <th>Counseling Quality || F-score || High</th> </tr> </thead> <tbody> <tr> <td>Feature set || Baseline</td>... | Table 5 | table_5 | P19-1088 | 8 | acl2019 | Table 5 shows the classification performance obtained when using each feature set at the time. We measure the performance of the classifiers in terms of accuracy and F-score, which provide overall and class-specific performance assessments. Compared to the majority baseline, all the feature sets demonstrate a clear imp... | [1, 1, 1, 1, 1] | ['Table 5 shows the classification performance obtained when using each feature set at the time.', 'We measure the performance of the classifiers in terms of accuracy and F-score, which provide overall and class-specific performance assessments.', 'Compared to the majority baseline, all the feature sets demonstrate a c... | [None, ['Acc.', 'F-score'], ['Baseline'], ['N-grams'], ['All features', 'Acc.']] | 1 |
P19-1109table_1 | SEQ vs. CAMB system results on words only and on words and phrases | 2 | [['Words Only', 'NEWS'], ['Words Only', 'WIKINEWS'], ['Words Only', 'WIKIPEDIA'], ['Words+Phrases', 'NEWS'], ['Words+Phrases', 'WIKINEWS'], ['Words+Phrases', 'WIKIPEDIA']] | 2 | [['Macro F-Scores', 'CAMB'], ['Macro F-Scores', 'SEQ']] | [['0.8633', '0.8763 (+1.30)'], ['0.8317', '0.8540 (+2.23)'], ['0.7780', '0.8140 (+3.60)'], ['0.8736', '0.8763 (+0.27)'], ['0.8400', '0.8505 (+1.05)'], ['0.8115', '0.8158 (+0.43)']] | column | ['Macro F-Scores', 'Macro F-Scores'] | ['SEQ', 'CAMB'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Macro F-Scores || CAMB</th> <th>Macro F-Scores || SEQ</th> </tr> </thead> <tbody> <tr> <td>Words Only || NEWS</td> <td>0.8633</td> <td>0.8763 (+1.30)</td> </tr> <tr> <td>Words ... | Table 1 | table_1 | P19-1109 | 4 | acl2019 | The results presented in Table 1 show that the SEQ system outperforms the CAMB system on all three genres on the task of binary complex word identification. The largest performance increase for words is on the WIKIPEDIA test set (+3.60%). Table 1 also shows that on the combined set of words and phrases (words+phrases) ... | [1, 1, 1, 2, 1, 1] | ['The results presented in Table 1 show that the SEQ system outperforms the CAMB system on all three genres on the task of binary complex word identification.', 'The largest performance increase for words is on the WIKIPEDIA test set (+3.60%).', 'Table 1 also shows that on the combined set of words and phrases (words+p... | [['SEQ', 'CAMB'], ['WIKIPEDIA'], ['Words+Phrases', 'SEQ', 'CAMB'], ['CAMB'], ['CAMB'], ['SEQ']] | 1 |
P19-1112table_3 | Experimental results in SemEval setting | 3 | [['Label Distribution Learning Models', 'M1', 'DL-BiLSTM+GloVe'], ['Label Distribution Learning Models', 'M2', 'DL-BiLSTM+GloVe+Att'], ['Label Distribution Learning Models', 'M3', 'DL-BiLSTM+ELMo'], ['Label Distribution Learning Models', 'M4', 'DL-BiLSTM+ELMo+Att'], ['Single Label Learning Models', 'M5', 'SL-BiLSTM+Glo... | 3 | [['Model/Eval', 'Match m', 'm=1'], ['Model/Eval', 'Match m', 'm=2'], ['Model/Eval', 'Match m', 'm=3'], ['Model/Eval', 'Match m', 'm=4']] | [['54.6', '69.2', '76.5', '81.9'], ['57.5', '69.7', '76.7', '80.7'], ['0.6', '71.7', '78.7', '84.1'], ['59.6', '72.7', '77.7', '84.6'], ['51.7', '66.7', '75.0', '81.1'], ['52.9', '66.5', '73.6', '0.8'], ['54.2', '69.0', '77.9', '83.0'], ['54.2', '70.7', '78.5', '82.8'], ['45.4', '66.0', '72.8', '80.2']] | column | ['Match m', 'Match m', 'Match m', 'Match m'] | ['M3', 'M4'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model/Eval || Match m || m=1</th> <th>Model/Eval || Match m || m=2</th> <th>Model/Eval || Match m || m=3</th> <th>Model/Eval || Match m || m=4</th> </tr> </thead> <tbody> <tr> <td>Label D... | Table 3 | table_3 | P19-1112 | 5 | acl2019 | We are organizing a SemEval shared task on emphasis selection called "Task 10: Emphasis Selection for Written Text in Visual Media". In order to set out a comparable baseline for this shared task, in this section, we report results of our models according to the SemEval setting defined for the task. After the submissio... | [2, 2, 2, 2, 2, 1, 1] | ['We are organizing a SemEval shared task on emphasis selection called "Task 10: Emphasis Selection for Written Text in Visual Media".', 'In order to set out a comparable baseline for this shared task, in this section, we report results of our models according to the SemEval setting defined for the task.', 'After the s... | [None, None, None, None, None, None, ['M3', 'M4']] | 1 |
P19-1119table_1 | Results of UNMT | 1 | [['Baseline'], ['Baseline-fix']] | 1 | [['Fr-En'], ['En-Fr'], ['Ja-En'], ['En-Ja']] | [['24.5', '25.37', '14.09', '21.63'], ['24.22', '25.26', '13.88', '21.93']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Baseline-fix'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fr-En</th> <th>En-Fr</th> <th>Ja-En</th> <th>En-Ja</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>24.5</td> <td>25.37</td> <td>14.09</td> <td>21.63</td> </t... | Table 1 | table_1 | P19-1119 | 3 | acl2019 | 3.3 Analysis. The empirical results in this section show that the quality of pre-trained UBWE is important to UNMT. However, the quality of UBWE decreases significantly during UNMT training. We hypothesize that maintaining the quality of UBWE may enhance the performance of UNMT. In this subsection, we analyze some pos... | [2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 2] | ['3.3 Analysis.', ' The empirical results in this section show that the quality of pre-trained UBWE is important to UNMT.', 'However, the quality of UBWE decreases significantly during UNMT training.', 'We hypothesize that maintaining the quality of UBWE may enhance the performance of UNMT.', 'In this subsection, we an... | [None, None, None, None, None, None, None, ['Baseline-fix'], ['Baseline-fix', 'Baseline'], ['Baseline-fix'], None] | 1 |
P19-1120table_2 | Translation results of different transfer learning setups. | 1 | [['Baseline'], ['Multilingual (Johnson et al. 2017)'], ['Transfer (Zoph et al. 2016)'], [' + Cross-lingual word embedding'], [' + Artificial noises'], [' + Synthetic data']] | 2 | [['BLEU (%)', 'eu-en'], ['BLEU (%)', 'sl-en'], ['BLEU (%)', 'be-en'], ['BLEU (%)', 'az-en'], ['BLEU (%)', 'tr-en']] | [['1.7', '10.1', '3.2', '3.1', '0.8'], ['5.1', '16.7', '4.2', '4.5', '8.7'], ['4.9', '19.2', '8.9', '5.3', '7.4'], ['7.4', '20.6', '12.2', '7.4', '9.4'], ['8.2', '21.3', '12.8', '8.1', '10.1'], ['9.7', '22.1', '14', '9', '11.3']] | column | ['BLEU (%)', 'BLEU (%)', 'BLEU (%)', 'BLEU (%)', 'BLEU (%)'] | [' + Cross-lingual word embedding', ' + Artificial noises', ' + Synthetic data'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU (%) || eu-en</th> <th>BLEU (%) || sl-en</th> <th>BLEU (%) || be-en</th> <th>BLEU (%) || az-en</th> <th>BLEU (%) || tr-en</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> ... | Table 2 | table_2 | P19-1120 | 6 | acl2019 | Table 2 presents the results. Plain transfer learning already gives a boost but is still far from a satisfying quality, especially for Basque→English and Azerbaijani→English. On top of that, each of our three techniques offers clear, incremental improvements in all child language pairs with a maximum of 5.1% BLEU in t... | [1, 1, 1] | ['Table 2 presents the results.', 'Plain transfer learning already gives a boost but is still far from a satisfying quality, especially for Basque→English and Azerbaijani→English.', 'On top of that, each of our three techniques offers clear, incremental improvements in all child language pairs with a maximum of 5.1% B... | [None, ['Transfer (Zoph et al. 2016)', 'be-en', 'az-en'], [' + Cross-lingual word embedding', ' + Artificial noises', ' + Synthetic data', 'BLEU (%)']] | 1 |
P19-1120table_5 | Translation results with different sizes of the source vocabulary. | 2 | [['BPE merges', '10k'], ['BPE merges', '20k'], ['BPE merges', '50k'], ['BPE merges', '70k']] | 2 | [['BLEU (%)', 'sl-en'], ['BLEU (%)', 'be-en']] | [['21', '11.2'], ['20.6', '12.2'], ['20.2', '10.9'], ['20', '10.9']] | column | ['BLEU (%)', 'BLEU (%)'] | ['BPE merges'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU (%) || sl-en</th> <th>BLEU (%) || be-en</th> </tr> </thead> <tbody> <tr> <td>BPE merges || 10k</td> <td>21</td> <td>11.2</td> </tr> <tr> <td>BPE merges || 20k</td> <t... | Table 5 | table_5 | P19-1120 | 7 | acl2019 | Table 5 estimates how large the vocabulary should be for the language-switching side in NMT transfer. We varied the number of BPE merges on the source side, fixing the target vocabulary to 50k merges. The best results are with 10k or 20k of BPE merges, which shows that the source vocabulary should be reasonably small t... | [1, 2, 1, 2] | ['Table 5 estimates how large the vocabulary should be for the language-switching side in NMT transfer.', 'We varied the number of BPE merges on the source side, fixing the target vocabulary to 50k merges.', 'The best results are with 10k or 20k of BPE merges, which shows that the source vocabulary should be reasonably... | [None, ['50k'], ['10k', '20k'], None] | 1 |
P19-1127table_1 | Relation extraction manual evaluation results: Precision of top 1000 predictions. | 2 | [['Precision@N', 'PCNN+ATT'], ['Precision@N', 'PCNN+ATT+GloRE'], ['Precision@N', 'PCNN+ATT+GloRE+'], ['Precision@N', 'PCNN+ATT+GloRE++']] | 1 | [['100'], ['300'], ['500'], ['700'], ['900'], ['1000']] | [['97', '93.7', '92.8', '89.1', '85.2', '83.9'], ['97', '97.3', '94.6', '93.3', '90.1', '89.3'], ['98', '98.7', '96.6', '93.1', '89.9', '88.8'], ['98', '97.3', '96', '93.6', '91', '89.8']] | column | ['Precision', 'Precision', 'Precision', 'Precision', 'Precision', 'Precision'] | ['PCNN+ATT+GloRE', 'PCNN+ATT+GloRE+', 'PCNN+ATT+GloRE++'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>100</th> <th>300</th> <th>500</th> <th>700</th> <th>900</th> <th>1000</th> </tr> </thead> <tbody> <tr> <td>Precision@N || PCNN+ATT</td> <td>97</td> <td>93.7</td> ... | Table 1 | table_1 | P19-1127 | 4 | acl2019 | Same as (Su et al., 2018), we use PCNN+ATT (Lin et al., 2016) as our base model. GloRE++ improves its best F1-score from 42.7% to 45.2%, slightly outperforming the previous state-of-the-art (GloRE, 44.7%). As shown in previous work (Su et al., 2018), on NYT dataset, due to a significant amount of false negatives, the PR... | [2, 1, 2, 2, 2, 2, 2, 2, 1, 1, 1] | ['Same as (Su et al., 2018), we use PCNN+ATT (Lin et al., 2016) as our base model.', 'GloRE++ improves its best F1-score from 42.7% to 45.2%, slightly outperforming the previous state-of-the-art (GloRE, 44.7%).', 'As shown in previous work (Su et al., 2018), on NYT dataset, due to a significant amount of false negatives... | [['PCNN+ATT'], ['PCNN+ATT+GloRE++', 'PCNN+ATT+GloRE'], None, None, ['1000'], None, None, None, None, ['PCNN+ATT+GloRE+', 'PCNN+ATT+GloRE++', 'PCNN+ATT+GloRE'], ['PCNN+ATT+GloRE++', '700', '900', '1000']] | 1 |
P19-1130table_1 | Comparison between our model and the state-of-the-art models using ACE 2005 English corpus. F1-scores higher than the state-of-the-art are in bold. | 2 | [['Model', 'SPTree'], ['Model', 'Walk-based'], ['Model', 'Baseline'], ['Model', 'Baseline+Tag'], ['Model', 'Baseline+MTL'], ['Model', 'Baseline+MTL+Tag']] | 1 | [['P%'], ['R%'], ['F1%']] | [['70.1', '61.2', '65.3'], ['69.7', '59.5', '64.2'], ['58.8', '57.3', '57.2'], ['61.3', '76.7', '67.4'], ['63.8', '56.1', '59.5'], ['66.5', '71.8', '68.9']] | column | ['P%', 'R%', 'F1%'] | ['Baseline+MTL+Tag'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P%</th> <th>R%</th> <th>F1%</th> </tr> </thead> <tbody> <tr> <td>Model || SPTree</td> <td>70.1</td> <td>61.2</td> <td>65.3</td> </tr> <tr> <td>Model || Walk-based</td... | Table 1 | table_1 | P19-1130 | 6 | acl2019 | Results. From Table 1, we can see both BIO tag embeddings and multi-task learning can improve the performance of the baseline model. Baseline+Tag can outperform the state-of-the-art models on both the Chinese and English corpus. Compared to the baseline model, BIO tag embeddings lead to an absolute increase of about 10%... | [2, 1, 1, 1, 1] | ['Results.', 'From Table 1, we can see both BIO tag embeddings and multi-task learning can improve the performance of the baseline model.', 'Baseline+Tag can outperform the state-of-the-art models on both the Chinese and English corpus.', 'Compared to the baseline model, BIO tag embeddings lead to an absolute increase o... | [None, ['Baseline+MTL+Tag'], ['Baseline+Tag'], None, ['Baseline+MTL+Tag', 'F1%']] | 1 |
P19-1136table_1 | Results for both NYT and WebNLG datasets. | 2 | [['Method', 'NovelTagging'], ['Method', 'OneDecoder'], ['Method', 'MultiDecoder'], ['Method', 'GraphRel1p'], ['Method', 'GraphRel2p']] | 2 | [['NYT', 'Precision'], ['NYT', 'Recall'], ['NYT', 'F1'], ['WebNLG', 'Precision'], ['WebNLG', 'Recall'], ['WebNLG', 'F1']] | [['62.40%', '31.70%', '42.00%', '52.50%', '19.30%', '28.30%'], ['59.40%', '53.10%', '56.00%', '32.20%', '28.90%', '30.50%'], ['61.00%', '56.60%', '58.70%', '37.70%', '36.40%', '37.10%'], ['62.90%', '57.30%', '60.00%', '42.30%', '39.20%', '40.70%'], ['63.90%', '60.00%', '61.90%', '44.70%', '41.10%', '42.90%']] | column | ['precision', 'recall', 'F1', 'precision', 'recall', 'F1'] | ['GraphRel1p', 'GraphRel2p'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NYT || Precision</th> <th>NYT || Recall</th> <th>NYT || F1</th> <th>WebNLG || Precision</th> <th>WebNLG || Recall</th> <th>WebNLG || F1</th> </tr> </thead> <tbody> <tr> <td>Meth... | Table 1 | table_1 | P19-1136 | 6 | acl2019 | 5.3 Quantitative Results . Table 1 presents the precision, recall, and F1 score of NovelTagging, MultiDecoder, and GraphRel for both the NYT and WebNLG datasets. OneDecoder, proposed in MultiDecoder's original paper, uses only a single decoder to extract relation triplets. GraphRel1p is the proposed method but only 1st... | [2, 1, 2, 2, 1, 1, 1] | ['5.3 Quantitative Results .', 'Table 1 presents the precision, recall, and F1 score of NovelTagging, MultiDecoder, and GraphRel for both the NYT and WebNLG datasets.', "OneDecoder, proposed in MultiDecoder's original paper, uses only a single decoder to extract relation triplets.", 'GraphRel1p is the proposed method b... | [None, ['NovelTagging', 'MultiDecoder', 'GraphRel1p', 'GraphRel2p', 'Precision', 'Recall', 'F1'], ['OneDecoder', 'MultiDecoder'], ['GraphRel1p', 'GraphRel2p'], ['NYT', 'GraphRel1p', 'NovelTagging', 'OneDecoder', 'MultiDecoder'], ['GraphRel1p', 'Precision', 'Recall', 'F1'], ['GraphRel2p', 'MultiDecoder', 'GraphRel1p']] | 1 |
P19-1137table_3 | Total diagnostic results, where columns contain the precision, recall and accuracy of DS-generated labels evaluated on 200 human-annotated labels as well as the number of positive and negative patterns preserved after the pattern-refinement stage, and we underline some cases in which DS performs poorly. | 1 | [['R0'], ['R1'], ['R2'], ['R3'], ['R4'], ['R5'], ['R6'], ['R7'], ['R8'], ['R9'], ['R6u'], ['R7u'], ['R8u'], ['R9u']] | 1 | [['Prec.'], ['Recall'], ['Acc.'], ['#Pos.'], ['#Neg.']] | [['100', '81.8', '82', '20', '0'], ['93.9', '33.5', '36.2', '18', '0'], ['75.7', '88', '76.5', '9', '5'], ['100', '91.4', '92', '20', '0'], ['93.3', '72.4', '80.9', '10', '2'], ['93.8', '77.3', '86.5', '15', '0'], ['88.3', '76.9', '75.1', '14', '0'], ['91.9', '64.6', '64', '20', '0'], ['29.3', '30.4', '60', '4', '10'],... | column | ['Prec.', 'Recall', 'Acc.', '#Pos.', '#Neg.'] | None | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Recall</th> <th>Acc.</th> <th>#Pos.</th> <th>#Neg.</th> </tr> </thead> <tbody> <tr> <td>R0</td> <td>100</td> <td>81.8</td> <td>82</td> <td>20</td> ... | Table 3 | table_3 | P19-1137 | 7 | acl2019 | 4.3 Pattern-based Diagnostic Results . Besides for improving the extraction performance, DIAG-NRE can interpret different noise effects caused by DS via refined patterns, as Table 3 shows. Next, we elaborate these diagnostic results and the corresponding performance degradation of NRE models from two perspectives: fals... | [1, 1, 1] | ['4.3 Pattern-based Diagnostic Results .', 'Besides for improving the extraction performance, DIAG-NRE can interpret different noise effects caused by DS via refined patterns, as Table 3 shows.', 'Next, we elaborate these diagnostic results and the corresponding performance degradation of NRE models from two perspectiv... | [None, None, None] | 1 |
P19-1140table_3 | Overall performance. | 1 | [['MTransE'], ['JAPE'], ['AlignEA'], ['GCN-Align'], ['MuGNN w/o Asr'], ['MuGNN']] | 3 | [['Methods', 'DBPZH-EN', 'H@1'], ['Methods', 'DBPZH-EN', 'H@10'], ['Methods', 'DBPZH-EN', 'MRR'], ['Methods', 'DBPJA-EN', 'H@1'], ['Methods', 'DBPJA-EN', 'H@10'], ['Methods', 'DBPJA-EN', 'MRR'], ['Methods', 'DBPFR-EN', 'H@1'], ['Methods', 'DBPFR-EN', 'H@10'], ['Methods', 'DBPFR-EN', 'MRR'], ['Methods', 'DBP-WD', 'H@1']... | [['0.308', '0.614', '0.364', '0.279', '0.575', '0.349', '0.244', '0.556', '0.335', '0.281', '0.52', '0.363', '0.252', '0.493', '0.334'], ['0.412', '0.745', '0.49', '0.363', '0.685', '0.476', '0.324', '0.667', '0.43', '0.318', '0.589', '0.411', '0.236', '0.484', '0.32'], ['0.472', '0.792', '0.581', '0.448', '0.789', '0.... | column | ['H@1', 'H@10', 'MRR', 'H@1', 'H@10', 'MRR', 'H@1', 'H@10', 'MRR', 'H@1', 'H@10', 'MRR', 'H@1', 'H@10', 'MRR'] | ['MuGNN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Methods || DBPZH-EN || H@1</th> <th>Methods || DBPZH-EN || H@10</th> <th>Methods || DBPZH-EN || MRR</th> <th>Methods || DBPJA-EN || H@1</th> <th>Methods || DBPJA-EN || H@10</th> <th>Methods ... | Table 3 | table_3 | P19-1140 | 7 | acl2019 | 5.2 Overall Performance. Results on Table 3 shows DBP15K and DWY100K. In general, MuGNN significantly outperforms all baselines regarding all metrics, mainly because it reconciles the structural differences by two different schemes for KG completion and pruning, which are thus well modeled in multi-channel GNN. | [2, 1, 1] | ['5.2 Overall Performance.', 'Results on Table 3 shows DBP15K and DWY100K.', 'In general, MuGNN significantly outperforms all baselines regarding all metrics, mainly because it reconciles the structural differences by two different schemes for KG completion and pruning, which are thus well modeled in multi-channel GNN.... | [None, None, ['MuGNN']] | 1 |
P19-1147table_3 | Comparison of LSTM and BERT models under human evaluations against GS-EC attack. Readability is a relative quality score between models, and Human Accuracy is the percentage that human raters correctly identify the adversarial examples. | 1 | [['LSTM'], ['BERT']] | 1 | [['Readability'], ['Human Accuracy']] | [['0.6', '52.10%'], ['1', '68.80%']] | column | ['Readability', 'Human Accuracy'] | ['BERT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Readability</th> <th>Human Accuracy</th> </tr> </thead> <tbody> <tr> <td>LSTM</td> <td>0.6</td> <td>52.10%</td> </tr> <tr> <td>BERT</td> <td>1</td> <td>68.80%</td> ... | Table 3 | table_3 | P19-1147 | 5 | acl2019 | The human accuracy metric is the percentage of human responses that matches the true label. Table 3 is a comparison of LSTM and BERT models using the GS-EC attack. It shows that the distance in embeddings space of BERT can better reflect semantic similarity and contribute to more natural adversarial examples. And, in T... | [2, 1, 1, 0, 0] | ['The human accuracy metric is the percentage of human responses that matches the true label.', 'Table 3 is a comparison of LSTM and BERT models using the GS-EC attack.', 'It shows that the distance in embeddings space of BERT can better reflect semantic similarity and contribute to more natural adversarial examples.',... | [None, ['LSTM', 'BERT'], ['BERT'], None, None] | 1 |
P19-1153table_2 | SST-5 and SST-2 performance on all and root nodes respectively. Model results in the first section are from the Stanford Treebank paper (2013). GenSen and BERTBASE results are from (Subramanian et al., 2018) and (Devlin et al., 2018) respectively. | 1 | [['NB'], ['SVM'], ['BiNB'], ['VecAvg'], ['RNN'], ['MV-RNN'], ['RNTN'], ['RAE'], ['GenSen'], ['RAE + GenSen'], ['BERTBASE']] | 1 | [['SST-5(All)'], ['SST-2(Root)']] | [['67.2', '81.8'], ['64.3', '79.4'], ['71', '83.1'], ['73.3', '80.1'], ['79', '82.4'], ['78.7', '82.9'], ['80.7', '85.4'], ['81.07', '83'], ['-', '84.5'], ['-', '86.43'], ['-', '93.5']] | column | ['accuracy', 'accuracy'] | ['BERTBASE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-5(All)</th> <th>SST-2(Root)</th> </tr> </thead> <tbody> <tr> <td>NB</td> <td>67.2</td> <td>81.8</td> </tr> <tr> <td>SVM</td> <td>64.3</td> <td>79.4</td> </tr> ... | Table 2 | table_2 | P19-1153 | 5 | acl2019 | We present in Table 2 results for fine-grained sentiment analysis on all nodes as well as comparison with recent state-of-the-art methods on binary sentiment classification of the root node. For the five class sentiment task, we compare our model with the original Sentiment Treebank results and beat all the models. In ... | [2, 2, 2, 2, 1, 2, 2, 2, 1, 1] | ['We present in Table 2 results for fine-grained sentiment analysis on all nodes as well as comparison with recent state-of-the-art methods on binary sentiment classification of the root node.', 'For the five class sentiment task, we compare our model with the original Sentiment Treebank results and beat all the models... | [None, None, None, None, ['GenSen', 'BERTBASE'], ['RAE'], None, ['RAE', 'GenSen', 'BERTBASE'], ['GenSen', 'RNTN', 'SST-2(Root)'], ['BERTBASE']] | 1 |
P19-1173table_3 | The results (FEATS) of the learning curve over the EGY training dataset, for the EGY dataset alone, multitask learning (MTL), and the adversarial training (ADV). We do not use morphological analyzers here, so the results are not comparable to Table 2. | 2 | [['EGY Train Size', '2K (1.5%)'], ['EGY Train Size', '8K (6%)'], ['EGY Train Size', '16K (12%)'], ['EGY Train Size', '33K (25%)'], ['EGY Train Size', '67K (50%)'], ['EGY Train Size', '134K (100%)']] | 2 | [['EGY', 'None'], ['MSA-EGY', 'MTL'], ['MSA-EGY', 'ADV']] | [['29.7', '61.9', '71.1'], ['62.5', '73.5', '78.3'], ['74.7', '78.1', '81.5'], ['80.7', '81.6', '83.5'], ['83.3', '82', '84'], ['84.5', '85.4', '85.6']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['MSA-EGY'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EGY || None</th> <th>MSA-EGY || MTL</th> <th>MSA-EGY || ADV</th> </tr> </thead> <tbody> <tr> <td>EGY Train Size || 2K (1.5%)</td> <td>29.7</td> <td>61.9</td> <td>71.1</td> </... | Table 3 | table_3 | P19-1173 | 8 | acl2019 | Table 3 shows the results. Multitask learning with MSA consistently outperforms the models that use EGY data only. The accuracy almost doubles in the 2K model. We also notice that the accuracy gap increases as the EGY training dataset size decreases, highlighting the importance of joint modeling with MSA in low-resourc... | [1, 1, 1, 1, 1, 1, 1] | ['Table 3 shows the results.', 'Multitask learning with MSA consistently outperforms the models that use EGY data only.', 'The accuracy almost doubles in the 2K model.', 'We also notice that the accuracy gap increases as the EGY training dataset size decreases, highlighting the importance of joint modeling with MSA in ... | [None, ['MTL', 'MSA-EGY', 'EGY'], ['2K (1.5%)'], ['EGY Train Size', 'MSA-EGY'], ['ADV'], None, ['ADV']] | 1 |
P19-1175table_6 | Test results EN-NL (all sentences). | 2 | [['System', 'Baseline NMT'], ['System', 'Baseline SMT'], ['System', 'Baseline TM-SMT'], ['System', 'Google Translate'], ['System', 'Best NFR + NMT backoff'], ['System', 'Best NFR unified']] | 1 | [['BLEU'], ['TER'], ['MET.']] | [['51.45', '36.21', '69.83'], ['54.21', '35.99', '71.28'], ['55.72', '34.96', '72.25'], ['44.37', '41.51', '65.07'], ['58.91', '31.36', '74.12'], ['58.6', '31.57', '73.96']] | column | ['bleu', 'ter', 'met.'] | ['Best NFR + NMT backoff'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>TER</th> <th>MET.</th> </tr> </thead> <tbody> <tr> <td>System || Baseline NMT</td> <td>51.45</td> <td>36.21</td> <td>69.83</td> </tr> <tr> <td>System ||... | Table 6 | table_6 | P19-1175 | 6 | acl2019 | 5.3 Test set evaluation Table 6 contains the results for EN-NL for the entire test set (3207 sentences). The dedicated NFR + NMT backoff approach outperforms all baseline systems, scoring +3.19 BLEU, -3.6 TER and +1.87 METEOR points compared to the best baseline (TM-SMT). Compared to the NMT baseline, the difference is... | [1, 1, 1, 1, 1, 1] | ['5.3 Test set evaluation Table 6 contains the results for EN-NL for the entire test set (3207 sentences).', 'The dedicated NFR + NMT backoff approach outperforms all baseline systems, scoring +3.19 BLEU, -3.6 TER and +1.87 METEOR points compared to the best baseline (TM-SMT).', 'Compared to the NMT baseline, the diffe... | [None, ['Best NFR + NMT backoff', 'BLEU', 'TER', 'MET.'], ['Baseline NMT', 'BLEU'], ['Best NFR unified', 'Best NFR + NMT backoff'], ['Best NFR + NMT backoff', 'Best NFR unified'], ['Baseline SMT', 'Baseline NMT']] | 1 |
P19-1183table_4 | Performance comparisons on the Person-sentence dataset (Yamaguchi et al., 2017). | 2 | [['Methods', 'Random'], ['Methods', 'Proposal upper bound'], ['Methods', 'DVSA+Avg'], ['Methods', 'DVSA+NetVLAD'], ['Methods', 'DVSA+LSTM'], ['Methods', 'GroundeR+Avg'], ['Methods', 'GroundeR+NetVLAD'], ['Methods', 'GroundeR+LSTM'], ['Methods', 'Ours w/o L div'], ['Methods', 'Ours']] | 2 | [['Accuracy', '0.4'], ['Accuracy', '0.5'], ['Accuracy', '0.6'], ['Accuracy', 'Average']] | [['15.1', '7.2', '3.5', '8.6'], ['89.8', '79.9', '64.1', '77.9'], ['39.8', '30.3', '19.7', '29.9'], ['34.1', '25', '18.3', '25.8'], ['42.7', '30.2', '20', '31'], ['45.5', '32.2', '21.7', '33.1'], ['22.1', '16.1', '8.6', '15.6'], ['39.9', '28.2', '17.7', '28.6'], ['57.9', '47.7', '35.6', '47.1'], ['62.5', '52', '38.4', ... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Ours', 'Ours w/o L div', 'Proposal upper bound'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || 0.4</th> <th>Accuracy || 0.5</th> <th>Accuracy || 0.6</th> <th>Accuracy || Average</th> </tr> </thead> <tbody> <tr> <td>Methods || Random</td> <td>15.1</td> <td>7.2<... | Table 4 | table_4 | P19-1183 | 8 | acl2019 | Table 4 shows the results. Similarly, the proposed attentive interactor model (without the diversity loss) outperforms all the baselines. Moreover, the diversity loss further improves the performance. Note that the improvement of our model on this dataset is more significant than that on the VID-sentence dataset. The r... | [1, 1, 1, 1, 1, 2] | ['Table 4 shows the results.', 'Similarly, the proposed attentive interactor model (without the diversity loss) outperforms all the baselines.', 'Moreover, the diversity loss further improves the performance.', 'Note that the improvement of our model on this dataset is more significant than that on the VID-sentence dat... | [None, ['Ours'], ['Ours w/o L div'], ['Ours'], ['Proposal upper bound'], None] | 1 |
P19-1185table_1 | Test results on GLUE tasks for various models: Baseline, ENAS, and CAS (continual architecture search). The CAS results maintain statistical equality across each step. | 3 | [['Models', 'PREVIOUS WORK', 'BiLSTM+ELMo (2018)'], ['Models', 'PREVIOUS WORK', 'BiLSTM+ELMo+Attn (2018)'], ['Models', 'BASELINES', 'Baseline (with ELMo)'], ['Models', 'BASELINES', 'ENAS (Architecture Search)'], ['Models', 'CAS RESULTS', 'CAS Step-1 (QNLI training)'], ['Models', 'CAS RESULTS', 'CAS Step-2 (RTE training... | 1 | [['QNLI'], ['RTE'], ['WNLI']] | [['69.4', '50.1', '65.1'], ['61.1', '50.3', '65.1'], ['73.2', '52.3', '65.1'], ['74.5', '52.9', '65.1'], ['73.8', 'N/A', 'N/A'], ['73.6', '54.1', 'N/A'], ['73.3', '54.0', '64.4']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['BASELINES', 'ENAS (Architecture Search)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>QNLI</th> <th>RTE</th> <th>WNLI</th> </tr> </thead> <tbody> <tr> <td>Models || PREVIOUS WORK || BiLSTM+ELMo (2018)</td> <td>69.4</td> <td>50.1</td> <td>65.1</td> </tr> <tr... | Table 1 | table_1 | P19-1185 | 7 | acl2019 | 7.1 Continual Learning on GLUE Tasks Baseline Models: We use bidirectional LSTM-RNN encoders with max-pooling (Conneau et al., 2017) as our baseline. Further, we used the ELMo embeddings (Peters et al., 2018) as input to the encoders, where we allowed to train the weights on each layer of ELMo to get a final representat... | [2, 2, 1, 2, 1, 2, 1, 2, 1, 2, 2, 2, 2] | ['7.1 Continual Learning on GLUE Tasks Baseline Models: We use bidirectional LSTM-RNN encoders with max-pooling (Conneau et al., 2017) as our baseline.', 'Further, we used the ELMo embeddings (Peters et al., 2018) as input to the encoders, where we allowed to train the weights on each layer of ELMo to get a final repres... | [None, None, ['Baseline (with ELMo)'], ['ENAS (Architecture Search)'], ['ENAS (Architecture Search)', 'Baseline (with ELMo)'], ['ENAS (Architecture Search)'], ['CAS Step-1 (QNLI training)', 'CAS Step-2 (RTE training)', 'CAS Step-3 (WNLI training)'], ['CAS Step-1 (QNLI training)', 'CAS Step-2 (RTE training)', 'CAS Step-... | 1 |
P19-1186table_3 | Accuracy [%] of CSDA w. Dirichlet trained with different configurations of F and Y. Y on its own is only a little worse than only F, showing that target labels y are more important for learning than the domain d. The Y configuration fully domain unsupervised training still results in decent performance, boding well for... | 1 | [['F'], ['F+Y'], ['Y']] | 1 | [['B'], ['D'], ['E'], ['K'], ['Average']] | [['77.9', '80.6', '84.4', '86.5', '82.3'], ['80.0', '84.3', '86.2', '87.0', '84.4'], ['77.6', '81.5', '83.7', '85.2', '82.0']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['F+Y'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B</th> <th>D</th> <th>E</th> <th>K</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>F</td> <td>77.9</td> <td>80.6</td> <td>84.4</td> <td>86.5</td> <td>8... | Table 3 | table_3 | P19-1186 | 7 | acl2019 | Next, we consider the impact of using different combinations of F and Y. Table 3 shows the performance of difference configurations. Overall, F + Y gives excellent performance. Interestingly, Y on its own is only a little worse than only F, showing that target labels y are more important for learning than the domain d.... | [1, 1, 1, 1, 2] | ['Next, we consider the impact of using different combinations of F and Y.', 'Table 3 shows the performance of difference configurations.', 'Overall, F + Y gives excellent performance.', 'Interestingly, Y on its own is only a little worse than only F, showing that target labels y are more important for learning than th... | [['F+Y'], None, ['F+Y'], ['Y', 'F'], None] | 1 |
P19-1193table_2 | Results of human evaluation. The best performance is highlighted in bold and “*” indicates the best result achieved by baselines. We calculate the Pearson correlation to show the inter-annotator agreement. | 2 | [['Methods', 'SC-LSTM'], ['Methods', 'PNN'], ['Methods', 'MTA'], ['Methods', 'CVAE'], ['Methods', 'Plan&Write'], ['Methods', 'Proposal']] | 1 | [['Consistency'], ['Novelty'], ['Diversity'], ['Coherence']] | [['1.67', '2.04', '1.39', '1.16'], ['2.52', '1.96', '1.95', '2.84'], ['3.17', '2.56', '2.43', '3.28'], ['3.42*', '2.87*', '2.74*', '2.63'], ['3.27', '2.81', '2.56', '3.36*'], ['3.84', '3.24', '3.16', '3.61']] | column | ['Consistency', 'Novelty', 'Diversity', 'Coherence'] | ['Proposal'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Consistency</th> <th>Novelty</th> <th>Diversity</th> <th>Coherence</th> </tr> </thead> <tbody> <tr> <td>Methods || SC-LSTM</td> <td>1.67</td> <td>2.04</td> <td>1.39</td> ... | Table 2 | table_2 | P19-1193 | 6 | acl2019 | Table 2 presents the human evaluation results, from which we can draw similar conclusions. It is obvious that our approach can outperform the baselines by a large margin, especially in terms of diversity and topic-consistency. For example, the proposed model achieves improvements of 15.33% diversity score and 12.28% co... | [1, 1, 1, 2, 2, 2] | ['Table 2 presents the human evaluation results, from which we can draw similar conclusions.', 'It is obvious that our approach can outperform the baselines by a large margin, especially in terms of diversity and topic-consistency.', 'For example, the proposed model achieves improvements of 15.33% diversity score and 1... | [None, ['Proposal', 'Diversity', 'Consistency'], ['Proposal', 'Diversity', 'Consistency'], None, None, None] | 1 |
P19-1193table_3 | Automatic evaluations of ablation study. “w/o Dynamic” means that we use static memory mechanism. | 2 | [['Methods', 'Full Model'], ['Methods', 'w/o Adversarial Training'], ['Methods', 'w/o Memory'], ['Methods', 'w/o Dynamic']] | 1 | [['BLEU'], ['Consistency'], ['Novelty'], ['Dist-1'], ['Dist-2']] | [['9.72', '39.42', '75.71', '5.19', '20.49'], ['7.74', '31.74', '74.13', '5.22', '20.43'], ['8.4', '33.95', '71.86', '4.16', '17.59'], ['8.46', '36.18', '73.62', '4.18', '18.49']] | column | ['BLEU', 'Consistency', 'Novelty', 'Dist-1', 'Dist-2'] | ['w/o Adversarial Training', 'w/o Memory'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>Consistency</th> <th>Novelty</th> <th>Dist-1</th> <th>Dist-2</th> </tr> </thead> <tbody> <tr> <td>Methods || Full Model</td> <td>9.72</td> <td>39.42</td> ... | Table 3 | table_3 | P19-1193 | 7 | acl2019 | Memory mechanism. We find that the memory mechanism can significantly improve the novelty and diversity. As is shown in Table 3, compared to the removal of the adversarial training, the model exhibits larger degradation in terms of novelty and diversity when the memory mechanism is removed. This shows that with the hel... | [2, 2, 1, 1] | ['Memory mechanism.', 'We find that the memory mechanism can significantly improve the novelty and diversity.', 'As is shown in Table 3, compared to the removal of the adversarial training, the model exhibits larger degradation in terms of novelty and diversity when the memory mechanism is removed.', 'This shows that w... | [None, None, ['w/o Adversarial Training', 'w/o Memory', 'Novelty', 'Dist-1', 'Dist-2'], None] | 1 |
P19-1195table_7 | Results on ROTOWIRE (RW) and MLB development sets using relation generation (RG) count (#) and precision (P%), content selection (CS) precision (P%) and recall (R%), content ordering (CO) in normalized Damerau-Levenshtein distance (DLD%), and BLEU. | 2 | [['RW', 'TEMPL'], ['RW', 'WS-2017'], ['RW', 'ED+CC'], ['RW', 'NCP+CC'], ['RW', 'ENT']] | 2 | [['RG', '#'], ['RG', 'P%'], ['CS', 'P%'], ['CS', 'R%'], ['CO', 'DLD%'], ['-', 'BLEU']] | [['54.29', '99.92', '26.61', '59.16', '14.42', '8.51'], ['23.95', '75.1', '28.11', '35.86', '15.33', '14.57'], ['22.68', '79.4', '29.96', '34.11', '16', '14'], ['33.88', '87.51', '33.52', '51.21', '18.57', '16.19'], ['31.84', '91.97', '36.65', '48.18', '19.68', '15.97']] | column | ['#', 'P%', 'P%', 'R%', 'DLD%', 'BLEU'] | ['ENT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RG || #</th> <th>RG || P%</th> <th>CS || P%</th> <th>CS || R%</th> <th>CO || DLD%</th> <th>- || BLEU</th> </tr> </thead> <tbody> <tr> <td>RW || TEMPL</td> <td>54.29</td> ... | Table 7 | table_7 | P19-1195 | 12 | acl2019 | Results on the Development Set . Table 7 (top) shows results on the ROTOWIRE development set for our dynamic entity memory model (ENT), the best system of Wiseman et al. (2017) (WS-2017) which is an encoder-decoder model with conditional copy, the template generator (TEMPL), our implementation of encoder-decoder model ... | [2, 1, 1] | ['Results on the Development Set .', 'Table 7 (top) shows results on the ROTOWIRE development set for our dynamic entity memory model (ENT), the best system of Wiseman et al. (2017) (WS-2017) which is an encoder-decoder model with conditional copy, the template generator (TEMPL), our implementation of encoder-decoder m... | [None, ['TEMPL', 'WS-2017', 'ED+CC', 'NCP+CC', 'ENT'], ['ENT']] | 1 |
P19-1197table_1 | Results of our model and the baselines. Above is the performance of the key fact prediction component (F1: F1 score, P: precision, R: recall). Middle is the comparison between models under the Vanilla Seq2Seq framework. Below is the models implemented with the transformer framework. | 2 | [['Model', 'Vanilla Seq2Seq'], ['Model', 'Structure-S2S'], ['Model', 'PretrainedMT'], ['Model', 'SemiMT'], ['Model', 'PIVOT-Vanilla'], ['Model', 'Transformer'], ['Model', 'PretrainedMT'], ['Model', 'SemiMT'], ['Model', 'PIVOT-Trans']] | 1 | [['BLEU'], ['NIST'], ['ROUGE']] | [['2.14', '0.2809', '0.47'], ['3.27', '0.9612', '0.71'], ['4.35', '1.9937', '0.91'], ['6.76', '3.5017', '2.04'], ['20.09', '6.5130', '18.31'], ['5.48', '1.9873', '1.26'], ['6.43', '2.1019', '1.77'], ['9.71', '2.7019', '3.31'], ['27.34', '6.8763', '19.3']] | column | ['BLEU', 'NIST', 'ROUGE'] | ['PIVOT-Trans', 'PIVOT-Vanilla'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>NIST</th> <th>ROUGE</th> </tr> </thead> <tbody> <tr> <td>Model || Vanilla Seq2Seq</td> <td>2.14</td> <td>0.2809</td> <td>0.47</td> </tr> <tr> <td>Model ... | Table 1 | table_1 | P19-1197 | 6 | acl2019 | 3.5 Results. We compare our PIVOT model with the above baseline models. Table 1 summarizes the results of these models. It shows that our PIVOT model achieves 87.92% F1 score, 92.59% precision, and 83.70% recall at the stage of key fact prediction, which provides a good foundation for the stage of surface realization. ... | [2, 0, 1, 0, 1, 1] | ['3.5 Results.', 'We compare our PIVOT model with the above baseline models.', 'Table 1 summarizes the results of these models.', 'It shows that our PIVOT model achieves 87.92% F1 score, 92.59% precision, and 83.70% recall at the stage of key fact prediction, which provides a good foundation for the stage of surface re... | [None, ['PIVOT-Vanilla', 'PIVOT-Trans'], None, None, ['PIVOT-Vanilla', 'BLEU', 'NIST', 'ROUGE', 'PIVOT-Trans'], ['PIVOT-Trans', 'PIVOT-Vanilla']] | 1 |
P19-1204table_2 | ROUGE scores on the CL-SciSumm 2016 test benchmark. *: results from Yasunaga et al. (2019). | 2 | [['Model', 'TALKSUMM-HYBRID'], ['Model', 'TALKSUMM-ONLY'], ['Model', 'GCN HYBRID2*'], ['Model', 'GCN CITED TEXT SPANS*'], ['Model', 'ABSTRACT*']] | 1 | [['2-R'], ['2-F'], ['3-F'], ['SU4-F']] | [['35.05', '34.11', '27.19', '24.13'], ['22.77', '21.94', '15.94', '12.55'], ['32.44', '30.08', '23.43', '23.77'], ['25.16', '24.26', '18.79', '17.67'], ['29.52', '29.4', '23.16', '23.34']] | column | ['2-R', '2-F', '3-F', 'SU4-F'] | ['TALKSUMM-HYBRID', 'TALKSUMM-ONLY'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>2-R</th> <th>2-F</th> <th>3-F</th> <th>SU4-F</th> </tr> </thead> <tbody> <tr> <td>Model || TALKSUMM-HYBRID</td> <td>35.05</td> <td>34.11</td> <td>27.19</td> <td>24.13<... | Table 2 | table_2 | P19-1204 | 4 | acl2019 | Automatic Evaluation . Table 2 summarizes the results: both GCN CITED TEXT SPANS and TALKSUMM-ONLY models, are not able to obtain better performance than ABSTRACT. However, for the Hybrid approach, where the abstract is augmented with sentences from the summaries emitted by the models, our TALKSUMM-HYBRID outperforms ... | [2, 1, 1, 1] | ['Automatic Evaluation .', 'Table 2 summarizes the results: both GCN CITED TEXT SPANS and TALKSUMM-ONLY models, are not able to obtain better performance than ABSTRACT.', 'However, for the Hybrid approach, where the abstract is augmented with sentences from the summaries emitted by the models, our TALKSUMM-HYBRID outp... | [None, ['GCN CITED TEXT SPANS*', 'TALKSUMM-ONLY'], ['TALKSUMM-HYBRID', 'GCN HYBRID2*', 'ABSTRACT*'], ['TALKSUMM-HYBRID', 'TALKSUMM-ONLY']] | 1 |
P19-1206table_2 | ROUGE F1 score of the evaluation set (%). R-1, R-2 and R-L denote ROUGE-1, ROUGE-2, and ROUGE-L, respectively. The best performing model among unsupervised approaches is shown in boldface. | 2 | [['Unsupervised Approach', 'TextRank'], ['Unsupervised Approach', 'Opinosis'], ['Unsupervised Approach', 'MeanSum-single'], ['Unsupervised Approach', 'StrSum'], ['Unsupervised Approach', 'StrSum+DiscourseRank'], ['Supervised baselines', 'Seq-Seq'], ['Supervised baselines', 'Seq-Seq-att']] | 4 | [['Domain', 'Toys & Games', 'Metric', 'R-1'], ['Domain', 'Toys & Games', 'Metric', 'R-2'], ['Domain', 'Toys & Games', 'Metric', 'R-L'], ['Domain', 'Sports & Outdoors', 'Metric', 'R-1'], ['Domain', 'Sports & Outdoors', 'Metric', 'R-2'], ['Domain', 'Sports & Outdoors', 'Metric', 'R-L'], ['Domain', 'Movies & TV', 'Metric'... | [['8.63', '1.24', '7.26', '7.16', '0.89', '6.39', '8.27', '1.44', '7.35'], ['8.25', '1.51', '7.52', '7.04', '1.42', '6.45', '7.8', '1.2', '7.11'], ['8.12', '0.58', '7.3', '5.42', '0.47', '4.97', '6.96', '0.35', '6.08'], ['11.61', '1.56', '11.04', '9.15', '1.38', '8.79', '7.38', '1.03', '6.94'], ['11.87', '1.63', '11.4'... | column | ['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L'] | ['StrSum+DiscourseRank'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Domain || Toys & Games || Metric || R-1</th> <th>Domain || Toys & Games || Metric || R-2</th> <th>Domain || Toys & Games || Metric || R-L</th> <th>Domain || Sports & Outdoors || Metric... | Table 2 | table_2 | P19-1206 | 6 | acl2019 | 4.4 Evaluation of Summary Generation . Table 2 shows the ROUGE scores of our models and the baselines for the evaluation sets. With regards to Toys & Games and Sports & Outdoors, our full model (StrSum + DiscourseRank) achieves the best ROUGE scores among the unsupervised approaches. | [0, 1, 1] | ['4.4 Evaluation of Summary Generation .', 'Table 2 shows the ROUGE scores of our models and the baselines for the evaluation sets.', 'With regards to Toys & Games and Sports & Outdoors, our full model (StrSum + DiscourseRank) achieves the best ROUGE scores among the unsupervised approaches.'] | [None, ['R-1', 'R-2', 'R-L'], ['StrSum+DiscourseRank', 'R-1', 'R-2', 'R-L']] | 1 |
P19-1209table_2 | Instance selection results; evaluated for primary, secondary, and all ground-truth sentences. Our BERTSingPairMix method achieves strong performance owing to its capability of building effective representations for both singletons and pairs. | 3 | [['System', 'CNN/Daily Mail', 'LEAD-Baseline'], ['System', 'CNN/Daily Mail', 'SumBasic (Vanderwende et al., 2007)'], ['System', 'CNN/Daily Mail', 'KL-Summ (Haghighi et al., 2009)'], ['System', 'CNN/Daily Mail', 'LexRank (Erkan and Radev, 2004)'], ['System', 'CNN/Daily Mail', 'VSM-SingOnly (This work)'], ['System', 'CNN... | 2 | [['Primary', 'P'], ['Primary', 'R'], ['Primary', 'F'], ['Secondary', 'P'], ['Secondary', 'R'], ['Secondary', 'F'], ['All', 'P'], ['All', 'R'], ['All', 'F']] | [['31.9', '38.4', '34.9', '10.7', '34.3', '16.3', '39.9', '37.3', '38.6'], ['15.2', '17.3', '16.2', '5.3', '15.8', '8', '19.6', '16.9', '18.1'], ['15.7', '17.9', '16.7', '5.4', '15.9', '8', '20', '17.4', '18.6'], ['22', '25.9', '23.8', '7.2', '21.4', '10.7', '27.5', '24.7', '26'], ['30.8', '36.9', '33.6', '9.8', '34.4'... | column | ['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F'] | ['VSM-SingOnly (This work)', 'VSM-SingPairMix (This work)', 'BERT-SingOnly (This work)', 'BERT-SingPairMix (This work)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Primary || P</th> <th>Primary || R</th> <th>Primary || F</th> <th>Secondary || P</th> <th>Secondary || R</th> <th>Secondary || F</th> <th>All || P</th> <th>All || R</th> <th>A... | Table 2 | table_2 | P19-1209 | 7 | acl2019 | Extraction Results . In Table 2 we present instance selection results for the CNN/DM, XSum, and DUC-04 datasets. Our method builds representations for instances using either BERT or VSM (§3.1). To ensure a thorough comparison, we experiment with selecting a mixed set of singletons and pairs (“SingPairMix”) as well as se... | [2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 2, 1, 1, 2, 2] | ['Extraction Results .', 'In Table 2 we present instance selection results for the CNN/DM, XSum, and DUC-04 datasets.', 'Our method builds representations for instances using either BERT or VSM (§3.1).', 'To ensure a thorough comparison, we experiment with selecting a mixed set of singletons and pairs (“SingPairMix”) as... | [None, ['CNN/Daily Mail', 'XSum', 'DUC-04'], ['VSM-SingOnly (This work)', 'VSM-SingPairMix (This work)', 'BERT-SingOnly (This work)', 'BERT-SingPairMix (This work)'], ['VSM-SingOnly (This work)', 'VSM-SingPairMix (This work)', 'BERT-SingOnly (This work)', 'BERT-SingPairMix (This work)'], ['CNN/Daily Mail', 'XSum', 'BER... | 1 |
P19-1212table_4 | ROUGE scores on three large datasets. The best results for non-baseline systems are in bold. Except for SentRewriting on CNN/DM and NYT, for all abstractive models, we truncate input and summaries at 400 and 100. | 2 | [['Models', 'LEAD-3'], ['Models', 'ORACLEFRAG (Grusky et al. 2018)'], ['Models', 'ORACLEEXT'], ['Models', 'TEXTRANK (Mihalcea and Tarau 2004)'], ['Models', 'LEXRANK (Erkan and Radev 2004)'], ['Models', 'SUMBASIC (Nenkova and Vanderwende 2005)'], ['Models', 'RNN-EXT RL (Chen and Bansal 2018)'], ['Models', 'SEQ2SEQ (Suts... | 2 | [['CNN/DM', 'R-1'], ['CNN/DM', 'R-2'], ['CNN/DM', 'R-L'], ['NYT', 'R-1'], ['NYT', 'R-2'], ['NYT', 'R-L'], ['BIGPATENT', 'R-1'], ['BIGPATENT', 'R-2'], ['BIGPATENT', 'R-L']] | [['40.23', '17.52', '36.34', '32.93', '17.69', '29.58', '31.27', '8.75', '26.18'], ['93.36', '83.19', '93.36', '88.15', '74.74', '88.15', '91.85', '78.66', '91.85'], ['49.35', '27.96', '46.24', '42.62', '26.39', '39.5', '43.56', '16.91', '36.52'], ['37.72', '15.59', '33.81', '28.57', '14.29', '23.79', '35.99', '11.14',... | column | ['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L'] | ['SENTREWRITING (Chen and Bansal 2018)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CNN/DM || R-1</th> <th>CNN/DM || R-2</th> <th>CNN/DM || R-L</th> <th>NYT || R-1.1</th> <th>NYT || R-2</th> <th>NYT || R-L</th> <th>BIGPATENT || R-1</th> <th>BIGPATENT || R-2</th> ... | Table 4 | table_4 | P19-1212 | 5 | acl2019 | Table 4 reports F1 scores of ROUGE-1, 2, and L (Lin and Hovy, 2003) for all models. For BIGPATENT, almost all models outperform the LEAD-3 baseline due to the more uniform distribution of salient content in BIGPATENT’s input articles. Among extractive models, TEXTRANK and LEXRANK outperform RNN-EXT RL which was trained... | [1, 1, 1, 1] | ['Table 4 reports F1 scores of ROUGE-1, 2, and L (Lin and Hovy, 2003) for all models.', 'For BIGPATENT, almost all models outperform the LEAD-3 baseline due to the more uniform distribution of salient content in BIGPATENT’s input articles.', 'Among extractive models, TEXTRANK and LEXRANK outperform RNN-EXT RL which was... | [['R-1', 'R-2', 'R-L'], ['BIGPATENT', 'LEAD-3'], ['TEXTRANK (Mihalcea and Tarau 2004)', 'LEXRANK (Erkan and Radev 2004)', 'RNN-EXT RL (Chen and Bansal 2018)'], ['SENTREWRITING (Chen and Bansal 2018)', 'BIGPATENT']] | 1 |
P19-1216table_2 | Results for the Giga-MSC dataset. | 2 | [['Model', 'Ground truth'], ['Model', '#1 WG (Filippova, 10)'], ['Model', '#2 KWG (Boudin+, 13)'], ['Model', '#3 Hard Para.'], ['Model', '#4 Seq2seq with attention'], ['Model', '#5 Our rewriter (RWT)']] | 1 | [['METEOR'], ['NN-1'], ['NN-2'], ['NN-3'], ['NN-4'], ['Comp. rate']] | [['-', '8.6', '28', '40', '49.1', '0.5'], ['0.29', '0', '0', '2.8', '6.8', '0.34'], ['0.36', '0', '0', '1.1', '3.1', '0.52'], ['0.35', '10.1', '19.7', '29.1', '38', '0.51'], ['0.33', '12.7', '24', '34.7', '44.4', '0.49'], ['0.36', '9', '17.4', '25.7', '33.8', '0.5']] | column | ['METEOR', 'NN-1', 'NN-2', 'NN-3', 'NN-4', 'Comp. rate'] | ['#5 Our rewriter (RWT)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>METEOR</th> <th>NN-1</th> <th>NN-2</th> <th>NN-3</th> <th>NN-4</th> <th>Comp. rate</th> </tr> </thead> <tbody> <tr> <td>Model || Ground truth</td> <td>-</td> <td>8.6</... | Table 2 | table_2 | P19-1216 | 4 | acl2019 | 5 Results and Analysis . METEOR metric (n-gram overlap with synonyms) was used for automatic evaluation. The novel n-gram rate (i.e., NN-1, NN-2, NN-3, and NN-4) was also computed to investigate the number of novel words that could be introduced by the models. Table 2 and Table 3 present the results and below are our o... | [2, 1, 1, 1, 1, 1, 1] | ['5 Results and Analysis .', 'METEOR metric (n-gram overlap with synonyms) was used for automatic evaluation.', 'The novel n-gram rate (i.e., NN-1, NN-2, NN-3, and NN-4) was also computed to investigate the number of novel words that could be introduced by the models.', 'Table 2 and Table 3 present the results and belo... | [None, ['METEOR'], ['NN-1', 'NN-2', 'NN-3', 'NN-4'], ['METEOR', '#2 KWG (Boudin+, 13)'], ['METEOR', '#5 Our rewriter (RWT)'], None, ['#3 Hard Para.', '#4 Seq2seq with attention', '#5 Our rewriter (RWT)', 'METEOR']] | 1 |
P19-1216table_4 | Human evaluation for informativeness and grammaticality. † stands for significantly better than KWG with 0.95 confidence. | 2 | [['Method', 'KWG'], ['Method', 'RWT']] | 1 | [['Informativeness'], ['Grammaticality']] | [['1.06', '1.19'], ['1.02', '1.40†']] | column | ['Informativeness', 'Grammaticality'] | ['RWT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Informativeness</th> <th>Grammaticality</th> </tr> </thead> <tbody> <tr> <td>Method || KWG</td> <td>1.06</td> <td>1.19</td> </tr> <tr> <td>Method || RWT</td> <td>1.02</td>... | Table 4 | table_4 | P19-1216 | 4 | acl2019 | Human Evaluation . As METEOR metric cannot measure the grammaticality of compression, we asked two human raters to assess 50 compressed sentences out of the Giga-MSC test dataset in terms of informativeness and grammaticality. We used 0-2 point scale (2 pts: excellent; 1 pts: good; 0 pts: poor), similar to previous w... | [2, 2, 2, 1, 1] | ['Human Evaluation .', 'As METEOR metric cannot measure the grammaticality of compression, we asked two human raters to assess 50 compressed sentences out of the Giga-MSC test dataset in terms of informativeness and grammaticality.', 'We used 0-2 point scale (2 pts: excellent; 1 pts: good; 0 pts: poor), similar to pr... | [None, None, None, ['Informativeness', 'Grammaticality'], ['RWT', 'Grammaticality']] | 1 |
P19-1220table_2 | Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); f Wu et al. (2018). Whether the competing ... | 2 | [['Model', 'BiDAFa'], ['Model', 'Deep Cascade Qab'], ['Model', 'S-Net+CES2Sc'], ['Model', 'BERT+Multi-PGNetd'], ['Model', 'Selector+CCGe'], ['Model', 'VNETf'], ['Model', 'Masque (NLG single)'], ['Model', 'Masque (NLG ensemble)'], ['Model', 'Masque (Q&A single)'], ['Model', 'Masque (Q&A ensemble)'], ['Model', 'Human Per... | 2 | [['NLG', 'R-L'], ['NLG', 'B-1'], ['Q&A', 'R-L'], ['Q&A', 'B-1']] | [['16.91', '9.3', '23.96', '10.64'], ['35.14', '37.35', '52.01', '54.64'], ['45.04', '40.62', '44.96', '46.36'], ['47.37', '45.09', '48.14', '52.03'], ['47.39', '45.26', '50.63', '52.03'], ['48.37', '46.75', '51.63', '54.37'], ['49.19', '49.63', '48.42', '48.68'], ['49.61', '50.13', '48.92', '48.75'], ['25.66', '36.62'... | column | ['R-L', 'B-1', 'R-L', 'B-1'] | ['Masque (NLG ensemble)', 'Masque (Q&A ensemble)', 'Masque (NLG single)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NLG || R-L</th> <th>NLG || B-1</th> <th>Q&A || R-L</th> <th>Q&A || B-1</th> </tr> </thead> <tbody> <tr> <td>Model || BiDAFa</td> <td>16.91</td> <td>9.3</td> <td>23.... | Table 2 | table_2 | P19-1220 | 6 | acl2019 | 4.2 Results . Does our model achieve state-of-the-art on the two tasks with different styles?. Table 2 shows the performance of our model and competing models on the leaderboard. Our ensemble model of six training runs, where each model was trained with the two answer styles, achieved state-of-the-art performance on bot... | [2, 2, 1, 1, 1] | ['4.2 Results .', 'Does our model achieve state-of-the-art on the two tasks with different styles?.', 'Table 2 shows the performance of our model and competing models on the leaderboard.', 'Our ensemble model of six training runs, where each model was trained with the two answer styles, achieved state-of-the-art perform... | [None, None, ['Masque (NLG ensemble)', 'Masque (Q&A ensemble)', 'Masque (NLG single)', 'Masque (Q&A single)'], ['Masque (NLG ensemble)', 'Masque (Q&A ensemble)'], ['Masque (NLG single)']] | 1 |
P19-1220table_5 | Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). f Results on the NarrativeQA validation set. | 2 | [['Model', 'BiDAFa'], ['Model', 'DECAPROP'], ['Model', 'MHPGM+NOIC'], ['Model', 'ConZNet'], ['Model', 'eRMR+A2D'], ['Model', 'Masque (NQA)'], ['Model', 'w/o multi-style learning'], ['Model', 'Masque (NLG)'], ['Model', 'fMasque (NQA; valid.)']] | 1 | [['B-1'], ['B-4'], ['M'], ['R-L']] | [['33.72', '15.53', '15.38', '36.3'], ['42', '23.42', '23.42', '40.07'], ['43.63', '21.07', '19.03', '44.16'], ['42.76', '22.49', '19.24', '46.67'], ['50.4', '26.5', 'N/A', '53.3'], ['54.11', '30.43', '26.13', '59.87'], ['48.7', '20.98', '21.95', '54.74'], ['39.14', '18.11', '24.62', '50.09'], ['52.78', '28.72', '25.38... | column | ['B-1', 'B-4', 'M', 'R-L'] | ['Masque (NQA)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B-1</th> <th>B-4</th> <th>M</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Model || BiDAFa</td> <td>33.72</td> <td>15.53</td> <td>15.38</td> <td>36.3</td> </tr> ... | Table 5 | table_5 | P19-1220 | 8 | acl2019 | 5.2 Results . Does our model achieve state-of-the-art performance?. Table 5 shows that our single model, trained with two styles and controlled with the NQA style, pushed forward the state-of-the-art by a significant margin. The evaluation scores of the model controlled with the NLG style were low because the two style... | [2, 2, 1, 1, 1, 1] | ['5.2 Results .', 'Does our model achieve state-of-the-art performance?.', 'Table 5 shows that our single model, trained with two styles and controlled with the NQA style, pushed forward the state-of-the-art by a significant margin.', 'The evaluation scores of the model controlled with the NLG style were low because th... | [None, None, ['Masque (NQA)'], ['Masque (NLG)'], ['Masque (NQA)', 'R-L'], ['Masque (NQA)', 'Masque (NLG)']] | 1 |
P19-1221table_4 | Results on the SQuAD-document dev set. | 2 | [['Model', 'S-Norm (Clark and Gardner, 2018)'], ['Model', 'RE3QABASE'], ['Model', 'RE3QALARGE']] | 1 | [['EM'], ['F1']] | [['64.08', '72.37'], ['77.9', '84.81'], ['80.71', '87.2']] | column | ['EM', 'F1'] | ['RE3QALARGE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || S-Norm (Clark and Gardner, 2018)</td> <td>64.08</td> <td>72.37</td> </tr> <tr> <td>Model || RE3QABASE</td> <td... | Table 4 | table_4 | P19-1221 | 7 | acl2019 | We also report the performance on document-level SQuAD in Table 4 to assess our approach in single-document setting. We find our approach adapts well: the best model achieves 87.2 F1. Note that the BERTLARGE model has obtained 90.9 F1 on the original SQuAD dataset (single-paragraph setting), which is only 3.7% ahead of ... | [1, 1, 2] | ['We also report the performance on document-level SQuAD in Table 4 to assess our approach in single-document setting.', 'We find our approach adapts well: the best model achieves 87.2 F1.', 'Note that the BERTLARGE model has obtained 90.9 F1 on the original SQuAD dataset (single-paragraph setting), which is only 3.7% a... | [None, ['F1', 'RE3QALARGE'], ['F1']] | 1 |
P19-1225table_5 | Performance of our model and the baseline in evidence extraction on the development set in the distractor setting. The correlation is the Kendall tau correlation of the number of predicted evidence sentences and that of gold evidence. | 1 | [['baseline'], ['QFE']] | 1 | [['Precision'], ['Recall'], ['Correlation']] | [['79', '82.4', '0.259'], ['88.4', '83.2', '0.375']] | column | ['Precision', 'Recall', 'Correlation'] | ['QFE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>Correlation</th> </tr> </thead> <tbody> <tr> <td>baseline</td> <td>79</td> <td>82.4</td> <td>0.259</td> </tr> <tr> <td>QFE</td> ... | Table 5 | table_5 | P19-1225 | 6 | acl2019 | What are the characteristics of our evidence extraction? . Table 5 shows the evidence extraction performance in the distractor setting. Our model improves both precision and recall, and the improvement in precision is larger. Figure 4 reveals the reason for the high EM and precision scores; QFE rarely extracts too much... | [2, 1, 1, 0, 0, 1, 2, 2] | ['What are the characteristics of our evidence extraction? .', 'Table 5 shows the evidence extraction performance in the distractor setting.', 'Our model improves both precision and recall, and the improvement in precision is larger.', 'Figure 4 reveals the reason for the high EM and precision scores; QFE rarely extrac... | [None, None, ['Precision', 'Recall', 'QFE', 'baseline'], None, None, ['Correlation', 'QFE', 'baseline'], None, ['baseline']] | 1 |
P19-1227table_6 | Overall results on the XQA dataset. | 2 | [['Languages', 'English'], ['Languages', 'Chinese'], ['Languages', 'French'], ['Languages', 'German'], ['Languages', 'Polish'], ['Languages', 'Portuguese'], ['Languages', 'Russian'], ['Languages', 'Tamil'], ['Languages', 'Ukrainian']] | 3 | [['Translate-Test', 'DocQA', 'EM'], ['Translate-Test', 'DocQA', 'F1'], ['Translate-Test', 'BERT', 'EM'], ['Translate-Test', 'BERT', 'F1'], ['Translate-Train', 'DocQA', 'EM'], ['Translate-Train', 'DocQA', 'F1'], ['Translate-Train', 'BERT', 'EM'], ['Translate-Train', 'BERT', 'F1'], ['Zero-shot', 'Multilingual BERT', 'EM'... | [['32.32', '38.29', '33.72', '40.51', '32.32', '38.29', '33.72', '40.51', '30.85', '38.11'], ['7.17', '17.2', '9.81', '23.05', '7.45', '18.73', '18.93', '31.5', '25.88', '39.53'], ['11.19', '18.97', '15.42', '26.13', '-', '-', '-', '-', '23.34', '31.08'], ['12.98', '19.15', '16.84', '23.65', '11.23', '15.08', '19.06', ... | column | ['EM', 'F1', 'EM', 'F1', 'EM', 'F1', 'EM', 'F1', 'EM', 'F1'] | ['Multilingual BERT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Translate-Test || DocQA || EM</th> <th>Translate-Test || DocQA || F1</th> <th>Translate-Test || BERT || EM</th> <th>Translate-Test || BERT || F1</th> <th>Translate-Train || DocQA || EM</th> ... | Table 6 | table_6 | P19-1227 | 6 | acl2019 | 5.3 Overall Results. Table 6 shows the overall results for different methods in different languages. There is a large gap between the performance of English and that of other target languages, which implies that the task of cross-lingual OpenQA is difficult. In the English test set, the performance of the multilingual ... | [2, 1, 1, 1, 1, 1, 2, 1, 2, 2, 2, 2] | ['5.3 Overall Results.', 'Table 6 shows the overall results for different methods in different languages.', 'There is a large gap between the performance of English and that of other target languages, which implies that the task of cross-lingual OpenQA is difficult.', 'In the English test set, the performance of the mu... | [None, ['Languages'], ['English'], ['English', 'Multilingual BERT', 'BERT'], ['Languages', 'Multilingual BERT'], ['DocQA', 'BERT', 'English', 'Translate-Train', 'Translate-Test'], None, ['Translate-Train', 'Translate-Test', 'DocQA', 'German'], ['DocQA'], ['German'], ['Translate-Train', 'German'], ['BERT']] | 1 |
P19-1227table_9 | Performance with respect to language distance and percentage of “easy” questions. | 2 | [['Languages', 'German'], ['Languages', 'Chinese'], ['Languages', 'Portuguese'], ['Languages', 'French'], ['Languages', 'Polish'], ['Languages', 'Ukrainian'], ['Languages', 'Russian'], ['Languages', 'Tamil']] | 1 | [['Genetic dist.'], ['Pct. of easy'], ['EM']] | [['30.8', '19.09', '36.67'], ['82.4', '33.24', '35.93'], ['59.8', '29.03', '33.68'], ['48.7', '23.37', '31.21'], ['66.9', '17.7', '31.17'], ['60.3', '21.18', '24.26'], ['60.3', '18.56', '21.11'], ['96.5', '17.63', '16.95']] | column | ['Genetic dist.', 'Pct. of easy', 'EM'] | ['Languages'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Genetic dist.</th> <th>Pct. of easy</th> <th>EM</th> </tr> </thead> <tbody> <tr> <td>Languages || German</td> <td>30.8</td> <td>19.09</td> <td>36.67</td> </tr> <tr> <... | Table 9 | table_9 | P19-1227 | 7 | acl2019 | The results in Table 9 verify our assumption. The performance of different languages generally decreases as the genetic distance grows. The exceptions are Chinese and Portuguese since the percentages of "easy" questions in them are significantly higher than those in other languages. For languages that have similar gene... | [1, 1, 1, 1] | ['The results in Table 9 verify our assumption.', 'The performance of different languages generally decreases as the genetic distance grows.', 'The exceptions are Chinese and Portuguese since the percentages of "easy" questions in them are significantly higher than those in other languages.', 'For languages that have s... | [None, ['Pct. of easy', 'EM', 'German', 'French', 'Polish', 'Ukrainian', 'Russian', 'Tamil', 'Genetic dist.'], ['Chinese', 'Portuguese', 'Pct. of easy'], ['Genetic dist.', 'Russian', 'Ukrainian', 'Portuguese', 'Pct. of easy']] | 1 |
P19-1229table_3 | Performance on dev data of models trained on a single-domain training data. | 2 | [['Trained on', 'BC'], ['Trained on', 'PB'], ['Trained on', 'ZX']] | 2 | [['BC', 'UAS'], ['BC', 'LAS'], ['PB', 'UAS'], ['PB', 'LAS'], ['ZX', 'UAS'], ['ZX', 'LAS']] | [['82.77', '77.66', '68.73', '61.93', '69.34', '61.32'], ['62.1', '55.2', '75.85', '70.12', '51.5', '41.92'], ['56.15', '48.34', '52.56', '43.76', '69.54', '63.65']] | column | ['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS'] | ['PB', 'BC', 'ZX'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BC || UAS</th> <th>BC || LAS</th> <th>PB || UAS</th> <th>PB || LAS</th> <th>ZX || UAS</th> <th>ZX || LAS</th> </tr> </thead> <tbody> <tr> <td>Trained on || BC</td> <td>82.7... | Table 3 | table_3 | P19-1229 | 5 | acl2019 | 4.1 Single-domain Training Results . Table 3 presents parsing accuracy on the dev data when training each parser on a single-domain training data. We can see that although PB-train is much smaller than BC-train, the PB-trained parser outperforms the BC-trained parser by about 8% on PB-dev, indicating the usefulness and... | [2, 1, 1, 1, 1, 1, 1] | ['4.1 Single-domain Training Results .', 'Table 3 presents parsing accuracy on the dev data when training each parser on a single-domain training data.', 'We can see that although PB-train is much smaller than BC-train, the PB-trained parser outperforms the BC-trained parser by about 8% on PB-dev, indicating the useful... | [None, None, ['PB', 'BC'], ['ZX', 'BC'], ['ZX'], ['BC'], None] | 1 |
P19-1229table_5 | Final results on the test data. | 2 | [['Trained on single-domain data', 'BC-train'], ['Trained on single-domain data', 'PB-train'], ['Trained on single-domain data', 'ZX-train'], ['Trained on source- and target-domain data', 'MTL'], ['Trained on source- and target-domain data', 'CONCAT'], ['Trained on source- and target-domain data', 'DOEMB'], ['Trained o... | 2 | [['PB', 'UAS'], ['PB', 'LAS'], ['ZX', 'UAS'], ['ZX', 'LAS']] | [['67.55', '61.01', '68.44', '59.55'], ['74.52', '69.02', '51.62', '40.36'], ['52.24', '42.76', '68.14', '61.71'], ['75.39', '69.69', '72.11', '65.66'], ['77.49', '72.16', '76.8', '70.85'], ['78.24', '72.81', '77.96', '72.04'], ['77.62', '72.35', '78.5', '72.49'], ['82.05', '77.16', '80.44', '75.11']] | column | ['UAS', 'LAS', 'UAS', 'LAS'] | ['DOEMB', '+ ELMo', '+ Fine-tuning'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PB || UAS</th> <th>PB || LAS</th> <th>ZX || UAS</th> <th>ZX || LAS</th> </tr> </thead> <tbody> <tr> <td>Trained on single-domain data || BC-train</td> <td>67.55</td> <td>61.01</... | Table 5 | table_5 | P19-1229 | 7 | acl2019 | 4.4 Final Results . Table 5 shows the final results on the test data, which are consistent with the previous observations. First, when constrained on single-domain training data, using the target-domain data is the most effective. Second, using source-domain data as extra training data is helpful, and the DOEMB method ... | [2, 1, 1, 1, 1] | ['4.4 Final Results .', 'Table 5 shows the final results on the test data, which are consistent with the previous observations.', 'First, when constrained on single-domain training data, using the target-domain data is the most effective.', 'Second, using source-domain data as extra training data is helpful, and the DO... | [None, None, ['Trained on single-domain data'], ['Trained on source- and target-domain data', 'DOEMB'], ['+ ELMo', '+ Fine-tuning']] | 1 |
P19-1231table_3 | Model performance by F1 on the testing set of each dataset. The first group of models are all fully-supervised, which use manual fine-grained annotations, while the second group of models use only named entity dictionaries to perform the NER task. | 2 | [['CoNLL (en)', 'PER'], ['CoNLL (en)', 'LOC'], ['CoNLL (en)', 'ORG'], ['CoNLL (en)', 'MISC'], ['CoNLL (en)', 'Overall'], ['CoNLL (sp)', 'PER'], ['CoNLL (sp)', 'LOC'], ['CoNLL (sp)', 'ORG'], ['CoNLL (sp)', 'Overall'], ['MUC', 'PER'], ['MUC', 'LOC'], ['MUC', 'ORG'], ['MUC', 'Overall'], ['Twitter', 'PER'], ['Twitter', 'LO... | 1 | [['MEMM'], ['CRF'], ['BiLSTM'], ['BiLSTM+CRF'], ['Matching'], ['uPU'], ['buPU'], ['bnPU'], ['AdaPU']] | [['91.61', '93.12', '94.21', '95.71', '6.7', '74.22', '85.01', '87.21', '90.17'], ['89.72', '91.15', '91.76', '93.02', '67.16', '69.88', '81.27', '83.37', '85.62'], ['80.6', '81.91', '83.21', '88.45', '46.65', '73.64', '74.72', '75.29', '76.03'], ['77.45', '79.35', '76', '79.86', '53.98', '68.9', '68.9', '66.88', '69... | column | ['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1'] | ['PER', 'LOC', 'ORG', 'Overall'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MEMM</th> <th>CRF</th> <th>BiLSTM</th> <th>BiLSTM+CRF</th> <th>Matching</th> <th>uPU</th> <th>buPU</th> <th>bnPU</th> <th>AdaPU</th> </tr> </thead> <tbody> <tr> <... | Table 3 | table_3 | P19-1231 | 7 | acl2019 | General Performance. Table 3 shows model performance by entity type and the overall performance on the four tested datasets. From the table, we can observe: 1) The performance of the Matching model is quite poor compared to other models. We found out that it mainly resulted from low recall values. | [2, 1, 1, 2] | ['General Performance.', 'Table 3 shows model performance by entity type and the overall performance on the four tested datasets.', 'From the table, we can observe: 1) The performance of the Matching model is quite poor compared to other models.', 'We found out that it mainly resulted from low recall values.'] | [None, ['CoNLL (en)', 'CoNLL (sp)', 'MUC', 'Twitter'], ['Matching'], None] | 1 |
P19-1236table_3 | F1-scores on 13PC and 13CG. † indicates that the FINAL results are statistically significant compared to all transfer baselines and ablation baselines with p < 0.01 by t-test. | 1 | [['Crichton et al. (2017)'], ['STM-TARGET'], ['MULTITASK(NER+LM)'], ['MULTITASK(NER)'], ['FINETUNE'], ['STM+ELMO'], ['CO-LM'], ['CO-NER'], ['MIX-DATA'], ['FINAL']] | 2 | [['Datasets', '13PC'], ['Datasets', '13CG']] | [['81.92', '78.9'], ['82.59', '76.55'], ['81.33', '75.27'], ['83.09', '77.73'], ['82.55', '76.73'], ['82.76', '78.24'], ['84.43', '78.6'], ['83.87', '78.43'], ['83.88', '78.7'], ['85.54', '79.86']] | column | ['F1-scores', 'F1-scores'] | ['FINAL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Datasets || 13PC</th> <th>Datasets || 13CG</th> </tr> </thead> <tbody> <tr> <td>Crichton et al. (2017)</td> <td>81.92</td> <td>78.9</td> </tr> <tr> <td>STM-TARGET</td> <td... | Table 3 | table_3 | P19-1236 | 6 | acl2019 | Our method outperforms all baselines significantly, which shows the importance of using rich data. A contrast between our method and MIXDATA shows the effectiveness of using two different language models across domains. Even through MIX-DATA uses more data for training language models on both the source and target doma... | [1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2] | ['Our method outperforms all baselines significantly, which shows the importance of using rich data.', 'A contrast between our method and MIXDATA shows the effectiveness of using two different language models across domains.', 'Even through MIX-DATA uses more data for training language models on both the source and tar... | [['FINAL'], ['FINAL', 'MIX-DATA'], ['MIX-DATA'], ['FINAL'], None, ['13PC', '13CG', 'MULTITASK(NER+LM)', 'MULTITASK(NER)'], ['FINAL'], None, None, None, None, None, None, None] | 1 |
P19-1240table_6 | Comparison results of our ablation models on three datasets (SE: StackExchange) — separate train: our model with pre-trained latent topics; w/o topic-attn: decoder attention without topics (Eq. 7); w/o topicstate: decoder hidden states without topics (Eq. 5). We report F1@1 for Twitter and Weibo, F1@3 for StackExchange... | 1 | [['SEQ2SEQ-COPY'], ['Our model (separate train)'], ['Our model (w/o topic-attn)'], ['Our model (w/o topic-state)'], ['Our full model']] | 1 | [['Twitter'], ['Weibo'], ['SE']] | [['36.6', '32.01', '31.53'], ['36.75', '32.75', '31.78'], ['37.24', '32.42', '32.34'], ['37.44', '33.48', '31.98'], ['38.49', '34.99', '33.41']] | column | ['F1', 'F1', 'F1'] | ['Our full model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter</th> <th>Weibo</th> <th>SE</th> </tr> </thead> <tbody> <tr> <td>SEQ2SEQ-COPY</td> <td>36.6</td> <td>32.01</td> <td>31.53</td> </tr> <tr> <td>Our model (separa... | Table 6 | table_6 | P19-1240 | 8 | acl2019 | 5.3 Further Discussions . Ablation Study. We compare the results of our full model and its four ablated variants to analyze the relative contributions of topics on different components. The results in Table 6 indicate the competitive effect of topics on decoder attention and that on hidden states, but combining them bo... | [2, 2, 2, 1, 1, 2] | ['5.3 Further Discussions .', 'Ablation Study.', 'We compare the results of our full model and its four ablated variants to analyze the relative contributions of topics on different components.', 'The results in Table 6 indicate the competitive effect of topics on decoder attention and that on hidden states, but combin... | [None, None, None, ['Our full model'], ['Our model (separate train)', 'SEQ2SEQ-COPY'], None] | 1 |
P19-1241table_3 | Performance Comparisons on the SHR Dataset | 1 | [['DAD Model'], ['SDA Model'], ['Word-CNN'], ['LSTM'], ['RNN'], ['CL-CNN'], ['fastText-BOT'], ['HATT'], ['Bi-LSTM'], ['RCNN'], ['CNN-LSTM'], ['Attentional Bi-LSTM'], ['A-CNN-LSTM'], ['openAI-Transformer'], ['SMLM']] | 1 | [['Accuracy'], ['Precision'], ['Recall'], ['F1']] | [['0.91', '0.9', '0.91', '0.9'], ['0.9', '0.87', '0.9', '0.88'], ['0.92', '0.68', '0.95', '0.79'], ['0.92', '0.7', '0.98', '0.81'], ['0.93', '0.86', '0.95', '0.9'], ['0.92', '0.7', '0.91', '0.79'], ['0.87', '0.7', '0.8', '0.74'], ['0.93', '0.93', '0.95', '0.93'], ['0.93', '0.86', '0.98', '0.91'], ['0.9', '0.86', '0.9',... | column | ['Accuracy', 'Precision', 'Recall', 'F1'] | ['SMLM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>DAD Model</td> <td>0.91</td> <td>0.9</td> <td>0.91</td> <td>0.9</td> </... | Table 3 | table_3 | P19-1241 | 6 | acl2019 | 6.1 Performance . Table 3 describes the performance of the baseline classifiers as well as the deep learning models based on four evaluation metrics. The Social Media Language Model outperforms all baseline models, including RNNs, LSTMs, CNNs, and the linear DAD and SDA models. The A-CNN-LSTM and the Hierarchical Atten... | [2, 2, 1, 1, 2] | ['6.1 Performance .', 'Table 3 describes the performance of the baseline classifiers as well as the deep learning models based on four evaluation metrics.', 'The Social Media Language Model outperforms all baseline models, including RNNs, LSTMs, CNNs, and the linear DAD and SDA models.', 'The A-CNN-LSTM and the Hierarc... | [None, None, ['SMLM', 'RNN', 'LSTM', 'DAD Model', 'SDA Model'], ['A-CNN-LSTM', 'HATT', 'Recall'], None] | 1 |
P19-1244table_5 | Results of different claim verification models on FEVER dataset (Dev set). The columns correspond to the predicted label accuracy, the evidence precision, recall, F1 score, and the FEVER score. | 1 | [['Fever-base'], ['NSMN'], ['HAN-nli'], ['HAN-nli*'], ['HAN*']] | 1 | [['Acc.'], ['Prec.'], ['Rec.'], ['F1'], ['FEVER']] | [['0.521', '-', '-', '-', '0.326'], ['0.697', '0.286', '0.87', '0.431', '0.665'], ['0.642', '0.34', '0.484', '0.4', '0.464'], ['0.72', '0.447', '0.536', '0.488', '0.571'], ['0.475', '0.356', '0.471', '0.406', '0.365']] | column | ['Acc.', 'Prec.', 'Rec.', 'F1', 'FEVER'] | ['HAN-nli', 'HAN-nli*', 'HAN*'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>Prec.</th> <th>Rec.</th> <th>F1</th> <th>FEVER</th> </tr> </thead> <tbody> <tr> <td>Fever-base</td> <td>0.521</td> <td>-</td> <td>-</td> <td>-</td> ... | Table 5 | table_5 | P19-1244 | 8 | acl2019 | Table 5 shows that HAN-nli* is much better than the two baselines in terms of label accuracy and evidence F1 score. There are two reasons: 1) apart from the retrieval module, our model optimizes all the parameters end-to-end, while the two pipeline systems may result in error propagation; and 2) our evidence embedding ... | [1, 2, 1, 2] | ['Table 5 shows that HAN-nli* is much better than the two baselines in terms of label accuracy and evidence F1 score.', 'There are two reasons: 1) apart from the retrieval module, our model optimizes all the parameters end-to-end, while the two pipeline systems may result in error propagation; and 2) our evidence embed... | [['HAN-nli*', 'Fever-base', 'NSMN', 'Acc.', 'F1'], None, ['HAN-nli', 'Fever-base'], ['FEVER']] | 1 |
P19-1249table_5 | Accuracy of (top) the state of the art gender prediction approaches on their respective datasets and transfer performance to celebrities, and (bottom) our baseline deep learning approach, with and without retraining on the PAN datasets. | 1 | [['alvarezcamona15 (2015)'], ['nissim16 (2016)'], ['nissim17 (2017)'], ['danehsvar18 (2018)'], ['CNN (Celeb)'], ['CNN (Celeb + PAN15)'], ['CNN (Celeb + PAN16)'], ['CNN (Celeb + PAN17)'], ['CNN (Celeb + PAN18)']] | 2 | [['Model', 'PAN15'], ['Model', 'PAN16'], ['Model', 'PAN17'], ['Model', 'PAN18'], ['Model', 'Celeb']] | [['0.859', '-', '-', '-', '0.723'], ['-', '0.641', '-', '-', '0.74'], ['-', '-', '0.823', '-', '0.855'], ['-', '-', '-', '0.822', '0.817'], ['0.747', '0.59', '0.747', '0.756', '0.861'], ['0.793', '-', '-', '-', '-'], ['-', '0.69', '-', '-', '-'], ['-', '-', '0.768', '-', '-'], ['-', '-', '-', '0.759', '-']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['alvarezcamona15 (2015)', 'nissim16 (2016)', 'PAN15', 'PAN16'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model || PAN15</th> <th>Model || PAN16</th> <th>Model || PAN17</th> <th>Model || PAN18</th> <th>Model || Celeb</th> </tr> </thead> <tbody> <tr> <td>alvarezcamona15 (2015)</td> <... | Table 5 | table_5 | P19-1249 | 5 | acl2019 | Table 5 shows all models' transfer performance between populations on gender. In general, all models generalize well to the respectively unseen datasets but perform best on the data they have been specifically trained for. The largest difference can be observed on the sub-1,000 author dataset PAN15, where the model of ... | [1, 1, 1, 2, 1, 2] | ["Table 5 shows all models' transfer performance between populations on gender.", 'In general, all models generalize well to the respectively unseen datasets but perform best on the data they have been specifically trained for.', 'The largest difference can be observed on the sub-1,000 author dataset PAN15, where the m... | [None, None, ['PAN15', 'PAN16', 'nissim16 (2016)'], None, ['PAN15', 'PAN16'], None] | 1 |
P19-1251table_3 | The overall classification performance of the baseline methods and our approach. All of the improvements of our approach (ours) over PAQI are significant with a paired t-test at a 99% significance level. | 4 | [['Dataset', 'ID', 'Method', 'BOW'], ['Dataset', 'ID', 'Method', 'PAQI'], ['Dataset', 'ID', 'Method', 'Ours'], ['Dataset', 'IN', 'Method', 'BOW'], ['Dataset', 'IN', 'Method', 'PAQI'], ['Dataset', 'IN', 'Method', 'Ours'], ['Dataset', 'IL', 'Method', 'BOW'], ['Dataset', 'IL', 'Method', 'PAQI'], ['Dataset', 'IL', 'Method'... | 2 | [['MicroAverage', 'Prec.'], ['MicroAverage', 'Rec.'], ['MicroAverage', 'F1'], ['MacroAverage', 'Prec'], ['MacroAverage', 'Rec.'], ['MacroAverage', 'F1']] | [['0.807', '0.829', '0.809', '0.687', '0.619', '0.631'], ['0.816', '0.728', '0.757', '0.611', '0.677', '0.617'], ['0.863', '0.811', '0.828', '0.691', '0.776', '0.714'], ['0.792', '0.786', '0.786', '0.508', '0.508', '0.501'], ['0.847', '0.682', '0.737', '0.567', '0.649', '0.548'], ['0.855', '0.849', '0.852', '0.64', '0.... | column | ['Prec.', 'Rec.', 'F1', 'Prec', 'Rec.', 'F1'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MicroAverage || Prec.</th> <th>MicroAverage || Rec.</th> <th>MicroAverage || F1</th> <th>MacroAverage || Prec</th> <th>MacroAverage || Rec.</th> <th>MacroAverage || F1</th> </tr> </thead... | Table 3 | table_3 | P19-1251 | 5 | acl2019 | 3.2 Experimental Results. For evaluation, micro and macro-F1 scores are selected the evaluation metrics. Table 3 demonstrates the performance of the three methods. Micro-F1 scores are generally better than macro-F1 scores because the trivial cases like the class of good air quality are the majority of datasets with high... | [2, 2, 1, 1, 1, 2, 1, 1, 2] | ['3.2 Experimental Results.', 'For evaluation, micro and macro-F1 scores are selected the evaluation metrics.', 'Table 3 demonstrates the performance of the three methods.', 'Micro-F1 scores are generally better than macro-F1 scores because the trivial cases like the class of good air quality are the majority of dataset... | [None, None, None, ['MicroAverage', 'F1', 'MacroAverage'], ['PAQI', 'BOW'], ['BOW'], ['Ours'], ['PAQI', 'MacroAverage', 'F1'], None] | 1 |
P19-1254table_3 | Accuracy at choosing the correct reference string for a mention, discriminating against 10, 50 and 100 random distractors. We break out results for the first mention of an entity (requiring novelty to produce an appropriate name in the context) and subsequent references (typically pronouns, nominal references, or short... | 2 | [['Model', 'Word-Based'], ['Model', 'BPE'], ['Model', 'Character-level'], ['Model', 'No story'], ['Model', 'Left story context'], ['Model', 'Full story']] | 2 | [['First Mentions', 'Rank 10'], ['First Mentions', 'Rank 50'], ['First Mentions', 'Rank 100'], ['Subsequent Mentions', 'Rank 10'], ['Subsequent Mentions', 'Rank 50'], ['Subsequent Mentions', 'Rank 100']] | [['42.3', '25.4', '17.2', '48.1', '38.4', '28.8'], ['48.1', '20.3', '25.5', '52.5', '50.7', '48.8'], ['64.2', '51', '35.6', '66.1', '55', '51.2'], ['50.3', '40', '26.7', '54.7', '51.3', '30.4'], ['59.1', '49.6', '33.3', '62.9', '53.2', '49.4'], ['64.2', '51', '35.6', '66.1', '55', '51.2']] | column | ['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy', 'Accuracy', 'Accuracy'] | ['Full story'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>First Mentions || Rank 10</th> <th>First Mentions || Rank 50</th> <th>First Mentions || Rank 100</th> <th>Subsequent Mentions || Rank 10</th> <th>Subsequent Mentions || Rank 50</th> <th>Subs... | Table 3 | table_3 | P19-1254 | 8 | acl2019 | We compare three models ability to fill (using coreferenceentities based on context anonymization): a model that does not receive the story, a model that uses only leftward context (as in Clark et al. (2018)), and a model with access to the full story. We show in Table 3 that having access to the full story provides th... | [2, 1, 1, 1, 2] | ['We compare three models ability to fill (using coreferenceentities based on context anonymization): a model that does not receive the story, a model that uses only leftward context (as in Clark et al. (2018)), and a model with access to the full story.', 'We show in Table 3 that having access to the full story provid... | [None, ['Full story'], ['No story'], ['Left story context'], ['Full story']] | 1 |
P19-1256table_2 | Automatic evaluation results of different NLU models on both training and test sets | 1 | [['Original data'], ['NLU refined data'], ['w/o self-training']] | 1 | [['Train Err (%)'], ['Test Err (%)']] | [['35.5', '37.59'], ['16.31', '14.26'], ['25.14', '22.69']] | column | ['Train Err (%)', 'Test Err (%)'] | ['NLU refined data'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train Err (%)</th> <th>Test Err (%)</th> </tr> </thead> <tbody> <tr> <td>Original data</td> <td>35.5</td> <td>37.59</td> </tr> <tr> <td>NLU refined data</td> <td>16.31</td... | Table 2 | table_2 | P19-1256 | 4 | acl2019 | 3.2 Main Results . NLU Results. One challenge in E2E dataset is the need to account for the noise in the corpus as some of the MR-text pairs are not semantically equivalent due to the data collection process (Dusek et al., 2018). We examine the performance of the NLU module by comparing noise reduction of the reconstru... | [2, 2, 2, 2, 1, 1] | ['3.2 Main Results .', 'NLU Results.', 'One challenge in E2E dataset is the need to account for the noise in the corpus as some of the MR-text pairs are not semantically equivalent due to the data collection process (Dusek et al., 2018).', 'We examine the performance of the NLU module by comparing noise reduction of th... | [None, None, None, None, None, ['NLU refined data', 'Original data', 'Test Err (%)']] | 1 |
P19-1256table_3 | Human evaluation results for NLU on test set (inter-annotator agreement: Fleiss’ kappa = 0.855) | 1 | [['Original data'], ['NLU refined data'], ['w/o self-training']] | 1 | [['E (%)'], ['M (%)'], ['A (%)'], ['C (%)']] | [['71.93', '0', '24.13', '3.95'], ['88.62', '5.45', '2.48', '3.47'], ['73.53', '13.23', '8.33', '4.91']] | column | ['E (%)', 'M (%)', 'A (%)', 'C (%)'] | ['NLU refined data'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>E (%)</th> <th>M (%)</th> <th>A (%)</th> <th>C (%)</th> </tr> </thead> <tbody> <tr> <td>Original data</td> <td>71.93</td> <td>0</td> <td>24.13</td> <td>3.95</td> </... | Table 3 | table_3 | P19-1256 | 4 | acl2019 | Human evaluation in Table 3 shows that our proposed method achieves 16.69% improvement on information equivalence between MR-text pairs. These results confirm the effectiveness of our method in reducing the unaligned data noise, and the large improvement (i.e, 15.09%) on exact match when applying self-training algorith... | [1, 1] | ['Human evaluation in Table 3 shows that our proposed method achieves 16.69% improvement on information equivalence between MR-text pairs.', 'These results confirm the effectiveness of our method in reducing the unaligned data noise, and the large improvement (i.e, 15.09%) on exact match when applying self-training alg... | [['NLU refined data', 'Original data'], ['NLU refined data', 'w/o self-training', 'Original data']] | 1 |
P19-1257table_2 | Quality evaluation results of the testing set. Flue., Rele. and Info. denotes fluency, relevance, and informativeness, respectively. | 2 | [['Evaluation', 'Score'], ['Evaluation', 'Pearson']] | 1 | [['Flue.'], ['Rele.'], ['Info.'], ['Overall']] | [['9.2', '6.7', '6.4', '7.6'], ['0.74', '0.76', '0.66', '0.68']] | column | ['Flue.', 'Rele.', 'Info.', 'Overall'] | ['Overall'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Flue.</th> <th>Rele.</th> <th>Info.</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>Evaluation || Score</td> <td>9.2</td> <td>6.7</td> <td>6.4</td> <td>7.6</td> ... | Table 2 | table_2 | P19-1257 | 2 | acl2019 | Data Analysis . High-quality testing set is necessary for faithful automatic evaluation. Therefore, we randomly selected 200 samples from the testing set for quality evaluation. Three annotators with linguistic background are required to score comments and readers can refer to Section 4.3 for the evaluation details. Ta... | [2, 2, 2, 2, 1, 1] | ['Data Analysis .', 'High-quality testing set is necessary for faithful automatic evaluation.', 'Therefore, we randomly selected 200 samples from the testing set for quality evaluation.', 'Three annotators with linguistic background are required to score comments and readers can refer to Section 4.3 for the evaluation ... | [None, None, None, None, None, ['Overall']] | 1 |
P19-1266table_4 | Results summary over the 510 comparisons of Reimers and Gurevych (2017a). | 1 | [['Case A'], ['Case B'], ['Case C']] | 1 | [['% of comparisons'], ['Avg. e'], ['e std']] | [['0.98%', '0', '0'], ['48.04%', '0.072', '0.108'], ['50.98%', '0.202', '0.143']] | column | ['% of comparisons', 'Avg. e', 'e std'] | ['Case A'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>% of comparisons</th> <th>Avg. e</th> <th>e std</th> </tr> </thead> <tbody> <tr> <td>Case A</td> <td>0.98%</td> <td>0</td> <td>0</td> </tr> <tr> <td>Case B</td> ... | Table 4 | table_4 | P19-1266 | 8 | acl2019 | Results: Summary. We now turn to a summary of our analysis across the 510 comparisons of Reimers and Gurevych (2017a). Table 4 presents the percentage of comparisons that fall into each category, along with the average and std of the e value of ASO for each case (all ASO results are significant with p <= 0.01). Figure ... | [2, 2, 1, 2, 1, 2] | ['Results: Summary.', 'We now turn to a summary of our analysis across the 510 comparisons of Reimers and Gurevych (2017a).', 'Table 4 presents the percentage of comparisons that fall into each category, along with the average and std of the e value of ASO for each case (all ASO results are significant with p <= 0.01).... | [None, None, ['Avg. e'], None, ['% of comparisons', 'Case A'], None] | 1 |
P19-1276table_4 | Overall performance of schema matching. | 2 | [['Method', 'Nguyen et al. (2015)'], ['Method', 'Clustering'], ['Method', 'ODEE-F'], ['Method', 'ODEE-FE'], ['Method', 'ODEE-FER']] | 2 | [['Schema Matching (%)', 'P'], ['Schema Matching (%)', 'R'], ['Schema Matching (%)', 'F1']] | [['41.5', '53.4', '46.7'], ['41.2', '50.6', '45.4'], ['41.7', '53.2', '46.8'], ['42.4', '56.1', '48.3'], ['43.4', '58.3', '49.8']] | column | ['P', 'R', 'F1'] | ['ODEE-F', 'ODEE-FE', 'ODEE-FER'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Schema Matching (%) || P</th> <th>Schema Matching (%) || R</th> <th>Schema Matching (%) || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Nguyen et al. (2015)</td> <td>41.5</td> <... | Table 4 | table_4 | P19-1276 | 7 | acl2019 | Schemas Matching. Table 4 shows the overall performance of schema matching on GNBusiness-Test. From the table, we can see that ODEE-FER achieves the best F1 scores among all the methods. By comparing Nguyen et al. (2015) and ODEE-F (p= 0.01), we can see that using continuous contextual features gives better per... | [2, 1, 1, 1, 2, 1, 2, 2, 1, 1] | ['Schemas Matching.', 'Table 4 shows the overall performance of schema matching on GNBusiness-Test.', 'From the table, we can see that ODEE-FER achieves the best F1 scores among all the methods.', 'By comparing Nguyen et al. (2015) and ODEE-F (p= 0.01), we can see that using continuous contextual features gives... | [None, None, ['ODEE-FER', 'F1'], ['Nguyen et al. (2015)', 'ODEE-F'], None, ['ODEE-F', 'ODEE-FE', 'ODEE-FER'], None, None, ['ODEE-F', 'F1'], ['F1', 'ODEE-FER', 'ODEE-FE']] | 1 |
P19-1284table_3 | SNLI results (accuracy). | 2 | [['Model', ' LSTM (Bowman et al. 2016)'], ['Model', ' DA (Parikh et al. 2016)'], ['Model', ' DA (reimplementation)'], ['Model', ' DA with HardKuma attention']] | 1 | [[' Dev'], [' Test']] | [[' –', '80.6'], [' –', '86.3'], ['86.9', '86.5'], ['86', '85.5']] | column | ['accuracy', 'accuracy'] | [' DA with HardKuma attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM (Bowman et al. 2016)</td> <td>–</td> <td>80.6</td> </tr> <tr> <td>Model || DA (Parikh et al. 2016)</td> ... | Table 3 | table_3 | P19-1284 | 7 | acl2019 | Results. With a target rate of 10%, the HardKuma model achieved 8.5% non-zero attention. Table 3 shows that, even with so many zeros in the attention matrices, it only does about 1% worse compared to the DA baseline. Figure 6 shows an example of HardKuma attention, with additional examples in Appendix B. We leave furth... | [2, 1, 1, 2, 2] | ['Results.', 'With a target rate of 10%, the HardKuma model achieved 8.5% non-zero attention.', 'Table 3 shows that, even with so many zeros in the attention matrices, it only does about 1% worse compared to the DA baseline.', 'Figure 6 shows an example of HardKuma attention, with additional examples in Appendix B.', '... | [None, [' DA with HardKuma attention'], [' DA with HardKuma attention', ' DA (reimplementation)'], None, None] | 1 |
P19-1296table_1 | Translation results for Chinese-English and English-German translation task. “†”: indicates statistically better than Transformer(Base/Big) (ρ < 0.01). | 3 | [['Model', 'Existing NMT Systems', 'EDR (Tu et al., 2017)'], ['Model', 'Existing NMT Systems', '(Kuang et al., 2018)'], ['Model', 'Our NMT Systems', 'Transformer(Base)'], ['Model', 'Our NMT Systems', '+lossmse'], ['Model', 'Our NMT Systems', '+lossmse + enhanced'], ['Model', 'Our NMT Systems', 'Transformer(Big)'], ['Mo... | 2 | [['NIST', '3'], ['NIST', '4'], ['NIST', '5'], ['NIST', '6'], ['NIST', 'Avg'], ['WMT', '14']] | [['N/A', 'N/A', '33.73', '34.15', 'N/A', 'N/A'], ['38.02', '40.83', 'N/A', 'N/A', 'N/A', 'N/A'], ['45.57', '46.4', '46.11', '44.92', '45.75', '27.28'], ['46.71†', '47.23†', '47.12†', '45.78†', '46.71', '28.11†'], ['46.94†', '47.52†', '47.43†', '46.04†', '46.98', '28.38†'], ['46.73', '47.36', '47.15', '46.82', '47.01', ... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['Our NMT Systems'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NIST || 3</th> <th>NIST || 4</th> <th>NIST || 5</th> <th>NIST || 6</th> <th>NIST || Avg</th> <th>WMT || 14</th> </tr> </thead> <tbody> <tr> <td>Model || Existing NMT Systems || ... | Table 1 | table_1 | P19-1296 | 4 | acl2019 | 4.2 Performance . Table 1 shows the performances measured in terms of BLEU score. On ZH-EN task, Transformer(Base) outperforms existing systems EDR (Tu et al., 2017) and DB (Kuang et al., 2018) by 11.5 and 6.5 BLEU points. With respect to BLEU scores, all the proposed models consistently outperform Transformer(base) by 0.96 and 1.... | [2, 1, 1, 1, 1, 2, 2, 1] | ['4.2 Performance .', 'Table 1 shows the performances measured in terms of BLEU score.', 'On ZH-EN task, Transformer(Base) outperforms existing systems EDR (Tu et al., 2017) and DB (Kuang et al., 2018) by 11.5 and 6.5 BLEU points.', 'With respect to BLEU scores, all the proposed models consistently outperform Transformer(base) by ... | [None, None, ['EDR (Tu et al., 2017)', '(Kuang et al., 2018)'], ['Our NMT Systems', 'Transformer(Base)'], ['+lossmse', '+lossmse + enhanced'], None, None, None] | 1 |
P19-1305table_3 | Comparison on the cross-lingual ASSUM performances. | 2 | [['System', 'Transformerbpe'], ['System', 'Pipeline-TS'], ['System', 'Pipeline-ST'], ['System', 'Pseudo-Summary (Ayana et al., 2018)'], ['System', 'Pivot-based (Cheng et al., 2017)'], ['System', 'Pseudo-Chinese'], ['System', 'Teaching Generation'], ['System', 'Teaching Attention'], ['System', 'Teaching Generation+Atten... | 2 | [['Gigaword', 'ROUGE-1'], ['Gigaword', 'ROUGE-2'], ['Gigaword', 'ROUGE-L'], ['DUC2004', 'ROUGE-1'], ['DUC2004', 'ROUGE-2'], ['DUC2004', 'ROUGE-L']] | [['38.1', '19.1', '35.2', '31.2', '10.7', '27.1'], ['25.8', '9.7', '23.6', '23.7', '6.8', '20.9'], ['22', '7', '20.9', '20.9', '5.3', '18.3'], ['21.5', '6.6', '19.6', '19.3', '4.3', '17'], ['26.7', '10.2', '24.3', '24', '7', '21.3'], ['27.9', '10.9', '25.6', '24.4', '6.6', '21.4'], ['29.6', '12.1', '27.3', '25.6', '7.9... | column | ['ROUGE-1', 'ROUGE-2', 'ROUGE-L', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L'] | ['Teaching Generation', 'Teaching Attention', 'Teaching Generation+Attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Gigaword || ROUGE-1</th> <th>Gigaword || ROUGE-2</th> <th>Gigaword || ROUGE-L</th> <th>DUC2004 || ROUGE-1</th> <th>DUC2004 || ROUGE-2</th> <th>DUC2004 || ROUGE-L</th> </tr> </thead> <tb... | Table 3 | table_3 | P19-1305 | 7 | acl2019 | Our Systems VS. the Baselines . The bottom part of Table 3 lists the performances of our methods. It manifests that both teaching summary word generation and teaching attention weights are able to improve the performance over the baselines. When the summary word generation and attention weights are taught simultaneousl... | [2, 1, 1, 1] | ['Our Systems VS. the Baselines .', 'The bottom part of Table 3 lists the performances of our methods.', 'It manifests that both teaching summary word generation and teaching attention weights are able to improve the performance over the baselines.', 'When the summary word generation and attention weights are taught si... | [None, ['Teaching Generation', 'Teaching Attention', 'Teaching Generation+Attention'], ['Teaching Generation', 'Teaching Attention'], ['Teaching Generation+Attention', 'DUC2004']] | 1 |
P19-1306table_2 | Test set result on English to Swahili and English to Tagalog. We report the TREC ad-hoc retrieval evaluation metrics (MAP, P@20, NDCG@20) and the Actual Query Weighted Value (AQWV). | 2 | [['Query Translation and Document Translation with Indri', 'Dictionary-Based Query Translation (DBQT)'], ['Query Translation and Document Translation with Indri', 'Probabilistic Structured Query (PSQ)'], ['Query Translation and Document Translation with Indri', 'Statistical MT (SMT)'], ['Query Translation and Document ... | 2 | [['EN->SW', 'MAP'], ['EN->SW', 'P@20'], ['EN->SW', 'NDCG@20'], ['EN->SW', 'AQWV'], ['EN->TL', 'MAP'], ['EN->TL', 'P@20'], ['EN->TL', 'NDCG@20'], ['EN->TL', 'AQWV']] | [['20.93', '4.86', '28.65', '6.5', '20.01', '5.42', '27.01', '5.93'], ['27.16', '5.81', '36.03', '12.56', '35.2', '8.18', '44.04', '19.81'], ['26.3', '5.28', '34.6', '13.77', '37.31', '8.77', '46.77', '21.9'], ['26.54', '5.26', '34.83', '15.7', '33.83', '8.2', '43.17', '18.56'], ['24.69', '5.24', '32.85', '11.73', '32.... | column | ['MAP', 'P@20', 'NDCG@20', 'AQWV', 'MAP', 'P@20', 'NDCG@20', 'AQWV'] | ['Probabilistic Structured Query (PSQ)', 'POSIT-DRMM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN->SW || MAP</th> <th>EN->SW || P@20</th> <th>EN->SW || NDCG@20</th> <th>EN->SW || AQWV</th> <th>EN->TL || MAP</th> <th>EN->TL || P@20</th> <th>EN->TL || NDCG@20</... | Table 2 | table_2 | P19-1306 | 4 | acl2019 | Table 2 shows the result on EN->SW and EN->TL where we train and test on the same language pair. Performance of Baselines. For query translation, PSQ is better than DBQT because PSQ uses a weighted alternative to translate query terms and does not limit to the fixed translation from the dictionary as in DBQT. For docume... | [1, 2, 1, 1, 2, 1, 2, 2, 2, 2, 1, 1, 2] | ['Table 2 shows the result on EN->SW and EN->TL where we train and test on the same language pair.', 'Performance of Baselines.', 'For query translation, PSQ is better than DBQT because PSQ uses a weighted alternative to translate query terms and does not limit to the fixed translation from the dictionary as in DBQT.', ... | [['EN->SW', 'EN->TL'], None, ['Probabilistic Structured Query (PSQ)', 'Dictionary-Based Query Translation (DBQT)'], ['Statistical MT (SMT)', 'Neural MT (NMT)', 'Probabilistic Structured Query (PSQ)'], ['Probabilistic Structured Query (PSQ)', 'EN->SW', 'EN->TL'], ['Probabilistic Structured Query (PSQ)', 'Statistical MT ... | 1 |
P19-1308table_2 | The accuracy of different methods in various language pairs. Bold indicates the best supervised and unsupervised results, respectively. “-” means that the model fails to converge and hence the result is omitted. | 3 | [['Methods', 'Supervised', 'Mikolov et al. (2013a)'], ['Methods', 'Supervised', 'Xing et al. (2015)'], ['Methods', 'Supervised', 'Shigeto et al. (2015)'], ['Methods', 'Supervised', 'Artetxe et al. (2016)'], ['Methods', 'Supervised', 'Artetxe et al. (2017)'], ['Methods', 'Unsupervised', 'Zhang et al. (2017a)'], ['Method... | 1 | [['DE-EN'], ['EN-DE'], ['ES-EN'], ['EN-ES'], ['FR-EN'], ['EN-FR'], ['IT-EN'], ['EN-IT']] | [['61.93', '73.07', '74', '80.73', '71.33', '82.2', '68.93', '77.6'], ['67.73', '69.53', '77.2', '78.6', '76.33', '78.67', '72', '73.33'], ['71.07', '63.73', '81.07', '74.53', '79.93', '73.13', '76.47', '68.13'], ['69.13', '72.13', '78.27', '80.07', '77.73', '79.2', '73.6', '74.47'], ['68.07', '69.2', '75.6', '78.2', '... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DE-EN</th> <th>EN-DE</th> <th>ES-EN</th> <th>EN-ES</th> <th>FR-EN</th> <th>EN-FR</th> <th>IT-EN</th> <th>EN-IT</th> </tr> </thead> <tbody> <tr> <td>Methods || Supervis... | Table 2 | table_2 | P19-1308 | 4 | acl2019 | 3.2 Experimental Results . Table 2 presents the results of different systems, showing that our proposed model achieves the best performance on all test language pairs under unsupervised settings. In addition, our approach is able to achieve completely comparable or even better performance than supervised systems. This ... | [2, 1, 1, 2, 2, 2] | ['3.2 Experimental Results .', 'Table 2 presents the results of different systems, showing that our proposed model achieves the best performance on all test language pairs under unsupervised settings.', 'In addition, our approach is able to achieve completely comparable or even better performance than supervised system... | [None, ['Ours', 'Unsupervised'], ['Ours', 'Supervised'], None, None, None] | 1 |
P19-1309table_2 | BUCC results (precision, recall and F1) on the training set, used to optimize the filtering threshold. | 4 | [['Func.', 'Abs. (cos)', 'Retrieval', 'Forward'], ['Func.', 'Abs. (cos)', 'Retrieval', 'Backward'], ['Func.', 'Abs. (cos)', 'Retrieval', 'Intersection'], ['Func.', 'Abs. (cos)', 'Retrieval', 'Max. score'], ['Func.', 'Dist.', 'Retrieval', 'Forward'], ['Func.', 'Dist.', 'Retrieval', 'Backward'], ['Func.', 'Dist.', 'Retri... | 2 | [['EN-DE', 'P'], ['EN-DE', 'R'], ['EN-DE', 'F1'], ['EN-FR', 'P'], ['EN-FR', 'R'], ['EN-FR', 'F1']] | [['78.9', '75.1', '77', '82.1', '74.2', '77.9'], ['79', '73.1', '75.9', '77.2', '72.2', '74.7'], ['84.9', '80.8', '82.8', '83.6', '78.3', '80.9'], ['83.1', '77.2', '80.1', '80.9', '77.5', '79.2'], ['94.8', '94.1', '94.4', '91.1', '91.8', '91.4'], ['94.8', '94.1', '94.4', '91.5', '91.4', '91.4'], ['94.9', '94.1', '94.5'... | column | ['P', 'R', 'F1', 'P', 'R', 'F1'] | ['Intersection', 'Max. score'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-DE || P</th> <th>EN-DE || R</th> <th>EN-DE || F1</th> <th>EN-FR || P</th> <th>EN-FR || R</th> <th>EN-FR || F1</th> </tr> </thead> <tbody> <tr> <td>Func. || Abs. (cos) || Retr... | Table 2 | table_2 | P19-1309 | 4 | acl2019 | 4.1 BUCC mining task. The shared task of the workshop on Building and Using Comparable Corpora (BUCC) is a wellestablished evaluation framework for bitext mining (Zweigenbaum et al., 2017, 2018). The task is to mine for parallel sentences between English and four foreign languages: German, French, Russian and Chinese. ... | [2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 2] | ['4.1 BUCC mining task.', 'The shared task of the workshop on Building and Using Comparable Corpora (BUCC) is a wellestablished evaluation framework for bitext mining (Zweigenbaum et al., 2017, 2018).', 'The task is to mine for parallel sentences between English and four foreign languages: German, French, Russian and C... 
| [None, None, None, None, None, ['P', 'R', 'F1'], None, ['Retrieval', 'Abs. (cos)', 'Intersection'], ['Ratio', 'Dist.', 'Max. score', 'Abs. (cos)'], ['Ratio', 'Dist.'], None] | 1 |
P19-1317table_3 | Name normalization accuracy on disease (Di) and chemical (Ch) datasets. The last row group includes the results of supervised models that utilize training annotations in each specific dataset. XM denotes the use of ‘exact match’ rule to assign the corresponding concept to a mention if the mention is found in the traini... | 2 | [['Models', 'Jaccard'], ['Models', 'SG W'], ['Models', 'SG W + WMD'], ['Models', 'SG S'], ['Models', 'SG S.C'], ['Models', 'BNE + SG W'], ['Models', 'BNE + SG S.C'], ['Models', 'Wieting et al. (2015)'], ['Models', 'DSouza and Ng (2015)'], ['Models', 'Leaman and Lu (2016)'], ['Models', 'Wright et al. (2019)'], ['Models'... | 2 | [['NCBI', '(Di)'], ['BC5CDR', '(Di)'], ['BC5CDR', '(Ch)']] | [['0.843', '0.772', '0.935'], ['0.8', '0.725', '0.771'], ['0.779', '0.731', '0.919'], ['0.815', '0.79', '0.929'], ['0.838', '0.811', '0.929'], ['0.854', '0.829', '0.93'], ['0.857', '0.829', '0.934'], ['0.822', '0.813', '0.93'], ['0.847', '0.841', '-'], ['0.877*', '0.889*', '0.941'], ['0.878*', '0.880*', '-'], ['0.873',... | column | ['accuracy', 'accuracy', 'accuracy'] | ['BNE + SG W', 'BNE + SG S.C', 'BNE + SG W + XM', 'BNE + SG S.C + XM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NCBI || (Di)</th> <th>BC5CDR || (Di)</th> <th>BC5CDR || (Ch)</th> </tr> </thead> <tbody> <tr> <td>Models || Jaccard</td> <td>0.843</td> <td>0.772</td> <td>0.935</td> </tr> ... | Table 3 | table_3 | P19-1317 | 8 | acl2019 | Different from the lexical (Jaccard) and semantic matching (WMD and SGW) baselines, BNE obtains high scores in accuracy metric (see Table 3). The result indicates that BNE has encoded both lexical and semantic information of names into their embeddings. Table 3 also includes performances of other state-of-the-art base... 
| [1, 2, 1, 2, 2, 1] | ['Different from the lexical (Jaccard) and semantic matching (WMD and SGW) baselines, BNE obtains high scores in accuracy metric (see Table 3).', 'The result indicates that BNE has encoded both lexical and semantic information of names into their embeddings.', "Table 3 also includes performances of other state-of-the-... | [['BNE + SG W', 'BNE + SG S.C', 'BNE + SG W + XM', 'BNE + SG S.C + XM'], None, ['DSouza and Ng (2015)', 'Leaman and Lu (2016)', 'Wright et al. (2019)'], None, ['BNE + SG W', 'BNE + SG S.C', 'BNE + SG W + XM', 'BNE + SG S.C + XM'], ['BNE + SG W + XM', 'BNE + SG S.C + XM']] | 1 |
P19-1318table_1 | Accuracy and macro-averaged F-Measure, precision and recall on BLESS and DiffVec. Models marked with † use external resources. The results with * indicate that WordNet was used for both the development of the model and the construction of the dataset. All models concatenate their encoded representations with the baseli... | 4 | [['Encoding', 'Mult+Avg', 'RWE', '(This paper)'], ['Encoding', 'Mult+Avg', 'Pair2Vec', '(Joshi et al., 2019)'], ['Encoding', 'Mult+Avg', 'FastText', '(Bojanowski et al., 2017)'], ['Encoding', 'Mult+Avg', 'Retrofitting†', '(Faruqui et al., 2015)'], ['Encoding', 'Mult+Avg', 'Attract-Repel†', '(Mrkšić et al., 2017)'], ['E... | 2 | [['DiffVec', 'Acc.'], ['DiffVec', 'F1'], ['DiffVec', 'Prec.'], ['DiffVec', 'Rec.'], ['BLESS', 'Acc.'], ['BLESS', 'F1'], ['BLESS', 'Prec.'], ['BLESS', 'Rec.']] | [['85.3', '64.2', '65.1', '64.5', '94.3', '92.8', '93.0', '92.6'], ['85', '64.0', '65.0', '64.5', '91.2', '89.3', '88.9', '89.7'], ['84.2', '61.4', '62.6', '61.9', '92.8', '90.4', '90.7', '90.2'], ['86.1*', '64.6*', '66.6*', '64.5*', '90.6', '88.3', '88.1', '88.6'], ['86.0*', '64.6*', '66.0*', '65.2*', '91.2', '89.0', ... | column | ['Acc.', 'F1', 'Prec.', 'Rec.', 'Acc.', 'F1', 'Prec.', 'Rec.'] | ['RWE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DiffVec || Acc.</th> <th>DiffVec || F1</th> <th>DiffVec || Prec.</th> <th>DiffVec || Rec.</th> <th>BLESS || Acc.</th> <th>BLESS || F1</th> <th>BLESS || Prec.</th> <th>BLESS || Rec.... | Table 1 | table_1 | P19-1318 | 6 | acl2019 | Results . Table 1 shows the results of our relational word vectors, the standard FastText embeddings and other baselines on the two relation classification datasets (i.e. BLESS and DiffVec). Our model consistently outperforms the FastText embeddings baseline and comparison systems, with the only exception being the pre... 
| [2, 1, 1, 1, 2, 1] | ['Results .', 'Table 1 shows the results of our relational word vectors, the standard FastText embeddings and other baselines on the two relation classification datasets (i.e. BLESS and DiffVec).', 'Our model consistently outperforms the FastText embeddings baseline and comparison systems, with the only exception being... | [None, ['FastText', 'BLESS', 'DiffVec'], ['FastText', 'DiffVec'], ['BLESS', 'Retrofitting†', 'Attract-Repel†'], ['DiffVec'], ['RWE']] | 1 |
P19-1318table_2 | Results on the McRae feature norms dataset (Macro F-Score) and QVEC (correlation score). Models marked with † use external resources. The results with * indicate that WordNet was used for both the development of the model and the construction of the dataset. | 2 | [['Model', 'RWE'], ['Model', 'Pair2Vec'], ['Model', 'Retrofitting'], ['Model', 'Attract-Repel'], ['Model', 'FastText']] | 2 | [['McRae Feature Norms', 'Overall'], ['McRae Feature Norms', 'metal'], ['McRae Feature Norms', 'is_small'], ['McRae Feature Norms', 'is_large'], ['McRae Feature Norms', 'animal'], ['McRae Feature Norms', 'is_edible'], ['McRae Feature Norms', 'wood'], ['McRae Feature Norms', 'is_round'], ['McRae Feature Norms', 'is_long... | [['55.2', '73.6', '46.7', '45.9', '89.2', '61.5', '38.5', '39', '46.8', '55.4'], ['55', '71.9', '49.2', '43.3', '88.9', '68.3', '37.7', '35', '45.5', '52.7'], ['50.6', '72.3', '44', '39.1', '90.6', '75.7', '15.4', '22.9', '44.4', '56.8*'], ['50.4', '73.2', '44.4', '33.3', '88.9', '71.8', '31.1', '24.2', '35.9', '55.9*'... | column | ['F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'correlation'] | ['RWE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>McRae Feature Norms || Overall</th> <th>McRae Feature Norms || metal</th> <th>McRae Feature Norms || is_small</th> <th>McRae Feature Norms || is_large</th> <th>McRae Feature Norms || animal</th> ... | Table 2 | table_2 | P19-1318 | 7 | acl2019 | Results . Table 2 shows the results on the McRae Feature Norms dataset and QVEC. In the case of the McRae Feature Norms dataset, our relational word embeddings achieve the best overall results, although there is some variation for the individual features. These results suggest that attributional information is encoded ... 
| [2, 1, 1, 1, 1, 1, 1, 1] | ['Results .', 'Table 2 shows the results on the McRae Feature Norms dataset and QVEC.', 'In the case of the McRae Feature Norms dataset, our relational word embeddings achieve the best overall results, although there is some variation for the individual features.', 'These results suggest that attributional information ... | [None, ['McRae Feature Norms', 'QVEC'], ['McRae Feature Norms', 'RWE'], ['RWE'], ['Retrofitting', 'Attract-Repel'], ['Retrofitting', 'Attract-Repel'], ['Pair2Vec', 'FastText', 'RWE'], ['Pair2Vec']] | 1 |
P19-1321table_1 | Word analogy accuracy results on different datasets. | 2 | [['Models', 'GloVe'], ['Models', 'SG'], ['Models', 'CBOW'], ['Models', 'WeMAP'], ['Models', 'CvMF'], ['Models', 'CvMF(NIG)']] | 1 | [['Gsem'], ['GSyn'], ['MSR'], ['IM'], ['DM'], ['ES'], ['LS']] | [['78.85', '62.81', '53.04', '55.21', '14.82', '10.56', '0.881'], ['71.58', '60.50', '51.71', '55.45', '13.48', '08.78', '0.671'], ['64.81', '47.39', '45.33', '50.58', '10.11', '07.02', '0.764'], ['83.52', '63.08', '55.08', '56.03', '14.95', '10.62', '0.903'], ['63.22', '67.41', '63.21', '65.94', '17.46', '9.380', '1.1... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['CvMF(NIG)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Gsem</th> <th>GSyn</th> <th>MSR</th> <th>IM</th> <th>DM</th> <th>ES</th> <th>LS</th> </tr> </thead> <tbody> <tr> <td>Models || GloVe</td> <td>78.85</td> <td>62.81... | Table 1 | table_1 | P19-1321 | 6 | acl2019 | Table 1 shows word analogy results for three datasets. First, we show results for the Google analogy dataset (Mikolov et al., 2013a) which is available from the GloVe project and covers a mix of semantic and syntactic relations. These results are shown separately in Table 1 as Gsem and Gsyn respectively. Secon... | [1, 2, 1, 1, 1, 1, 1, 1, 2, 2] | ['Table 1 shows word analogy results for three datasets.', 'First, we show results for the Google analogy dataset (Mikolov et al., 2013a) which is available from the GloVe project and covers a mix of semantic and syntactic relations.', 'These results are shown separately in Table 1 as Gsem and Gsyn respectivel... | [None, None, ['Gsem', 'GSyn'], ['MSR'], ['IM', 'DM', 'ES', 'LS'], ['CvMF(NIG)', 'GSyn', 'MSR', 'IM', 'DM'], ['CvMF(NIG)', 'Gsem'], ['ES'], None, None] | 1 |
P19-1321table_7 | Document classification results (F1). | 2 | [['Models', 'TF-IDF'], ['Models', 'LDA'], ['Models', 'HDP'], ['Models', 'movMF'], ['Models', 'GLDA'], ['Models', 'sHDP'], ['Models', 'GloVe'], ['Models', 'WeMAP'], ['Models', 'SG'], ['Models', 'CBOW'], ['Models', 'CvMF'], ['Models', 'CvMF(NIG)']] | 1 | [['20NG'], ['OHS'], ['TechTC'], ['Reu']] | [['0.852', '0.632', '0.306', '0.319'], ['0.859', '0.629', '0.305', '0.323'], ['0.862', '0.627', '0.304', '0.339'], ['0.809', '0.610', '0.302', '0.336'], ['0.862', '0.629', '0.305', '0.352'], ['0.863', '0.631', '0.304', '0.353'], ['0.852', '0.629', '0.301', '0.315'], ['0.855', '0.630', '0.306', '0.345'], ['0.853', '0.63... | column | ['F1', 'F1', 'F1', 'F1'] | ['CvMF(NIG)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>20NG</th> <th>OHS</th> <th>TechTC</th> <th>Reu</th> </tr> </thead> <tbody> <tr> <td>Models || TF-IDF</td> <td>0.852</td> <td>0.632</td> <td>0.306</td> <td>0.319</td> ... | Table 7 | table_7 | P19-1321 | 6 | acl2019 | Table 7 summarizes our document classification results. It can be seen that our model outperforms all baselines, except for the TechTC dataset, where the results are very close. Among the baselines, InterestsHDP achieves the best performance. Interestingly, this model also uses von Mishes-Fisher mixtures, but relies on... | [1, 1, 1, 2] | ['Table 7 summarizes our document classification results.', 'It can be seen that our model outperforms all baselines, except for the TechTC dataset, where the results are very close.', 'Among the baselines, InterestsHDP achieves the best performance.', 'Interestingly, this model also uses von Mishes-Fisher mixtures, bu... | [None, ['CvMF(NIG)', 'TechTC'], ['sHDP'], ['sHDP']] | 1 |
P19-1328table_2 | Ablation study results of our approach. BERT (Keep/Mask) are the baselines that uses BERT unmasking/masking the target word to propose candidates and rank by the proposal scores. Remember that our approach is a linear combination of proposal score sp and validation score sv, as in Eq (3). In the baselines “w/o sp”, we ... | 2 | [['LS07', 'our approach'], ['LS07', ' - w/o sp (Keep)'], ['LS07', ' - w/o sp (Mask)'], ['LS07', ' - w/o sp (WordNet)'], ['LS07', ' - w/o sv'], ['LS07', 'BERT (Keep)'], ['LS07', 'BERT (Mask)'], ['LS14', 'our approach'], ['LS14', ' - w/o sp (Keep)'], ['LS14', ' - w/o sp (Mask)'], ['LS14', ' - w/o sp (WordNet)'], ['LS14',... | 2 | [['Method', 'best'], ['Method', 'best-m'], ['Method', 'oot'], ['Method', 'oot-m'], ['Method', 'P@1']] | [['20.3', '34.2', '55.4', '68.4', '51.1'], ['18.9', '32.6', '51.7', '63.5', '48.6'], ['16.2', '27.5', '46.4', '57.9', '43.3'], ['15.9', '27.1', '45.9', '57.1', '42.8'], ['12.1', '20.2', '40.8', '56.9', '13.1'], ['9.2', '16.3', '37.3', '52.2', '9.2'], ['8.6', '14.2', '33.2', '48.9', '5.7'], ['14.5', '33.9', '45.9', '69.... | column | ['best', 'best-m', 'oot', 'oot-m', 'P@1'] | ['our approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Method || best</th> <th>Method || best-m</th> <th>Method || oot</th> <th>Method || oot-m</th> <th>Method || P@1</th> </tr> </thead> <tbody> <tr> <td>LS07 || our approach</td> <t... | Table 2 | table_2 | P19-1328 | 4 | acl2019 | For understanding the improvement, we conduct an ablation test and show the result in Table 2. According to Table 2, we observe that the original BERT cannot perform as well as the previous state-of-the-art approaches by its own. When we further add our candidate valuation method in Section 2.2 to validate the candidat... 
| [1, 1, 1, 1] | ['For understanding the improvement, we conduct an ablation test and show the result in Table 2.', 'According to Table 2, we observe that the original BERT cannot perform as well as the previous state-of-the-art approaches by its own.', 'When we further add our candidate valuation method in Section 2.2 to validate the ... | [None, ['BERT (Keep)', 'BERT (Mask)'], ['our approach'], ['our approach', ' - w/o sp (WordNet)']] | 1 |
P19-1335table_5 | Performance of the Full-Transformer (UWB) model evaluated on seen and unseen entities from the training and validation worlds. | 2 | [['Evaluation', 'Training worlds seen'], ['Evaluation', 'Training worlds unseen'], ['Evaluation', 'Validation worlds unseen']] | 1 | [['Accuracy']] | [['87.74'], ['82.96'], ['76']] | column | ['Accuracy'] | ['Evaluation'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Evaluation || Training worlds seen</td> <td>87.74</td> </tr> <tr> <td>Evaluation || Training worlds unseen</td> <td>82.96</td> </tr>... | Table 5 | table_5 | P19-1335 | 7 | acl2019 | To analyze the impact of unseen entities and domain shift in zero-shot entity linking, we evaluate performance on a more standard in-domain entity linking setting by making predictions on held out mentions from the training worlds. Table 5 compares entity linking performance for different entity splits. Seen entities f... | [2, 1, 1, 1] | ['To analyze the impact of unseen entities and domain shift in zero-shot entity linking, we evaluate performance on a more standard in-domain entity linking setting by making predictions on held out mentions from the training worlds.', 'Table 5 compares entity linking performance for different entity splits.', 'Seen en... | [None, None, ['Training worlds seen'], ['Training worlds unseen']] | 1 |
P19-1338table_1 | Parsing performance with and without punctuation. Mean F indicates mean parsing F -score against the Stanford Parser (early stopping by F -score). Self-/RB-agreement indicates self-agreement and agreement with the right-branching baseline across multiple runs. † indicates a statistical difference from the corresponding... | 2 | [['Model', 'Left-Branching'], ['Model', 'Right-Branching'], ['Model', 'Balanced-Tree'], ['Model', 'ST-Gumbel'], ['Model', 'PRPN'], ['Model', 'Imitation (SbS only)'], ['Model', 'Imitation (SbS + refine)']] | 2 | [['w/o Punctuation', 'Mean F'], ['w/o Punctuation', 'Self-agreement'], ['w/o Punctuation', 'RB-agreement'], ['w/ Punctuation', 'Mean F'], ['w/ Punctuation', 'Self-agreement'], ['w/ Punctuation', 'RB-agreement']] | [['20.7', '-', '-', '18.9', '-', '-'], ['58.5', '-', '-', '18.5', '-', '-'], ['39.5', '-', '-', '22', '-', '-'], ['36.4', '57', '33.8', '21.9', '56.8', '38.1'], ['46', '48.9', '51.2', '51.6', '65', '27.4'], ['45.9', '49.5', '62.2', '52', '70.8', '20.6'], ['53.3', '58.2', '64.9', '53.7', '67.4', '21.1']] | column | ['Mean F', 'Self-agreement', 'RB-agreement', 'Mean F', 'Self-agreement', 'RB-agreement'] | ['Imitation (SbS only)', 'Imitation (SbS + refine)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>w/o Punctuation || Mean F</th> <th>w/o Punctuation || Self-agreement</th> <th>w/o Punctuation || RB-agreement</th> <th>w/ Punctuation || Mean F</th> <th>w/ Punctuation || Self-agreement</th> ... | Table 1 | table_1 | P19-1338 | 5 | acl2019 | Table 1 shows the parsing F -scores against the Stanford Parser. The ST-Gumbel Tree-LSTM model and the PRPN were run five times with different initializations, each known as a trajectory. For imitation learning, given a PRPN trajectory, we perform SbS training once and then policy refinement for five runs. Left-/right-... 
| [1, 2, 2, 2, 2, 1, 2, 2] | ['Table 1 shows the parsing F -scores against the Stanford Parser.', 'The ST-Gumbel Tree-LSTM model and the PRPN were run five times with different initializations, each known as a trajectory.', 'For imitation learning, given a PRPN trajectory, we perform SbS training once and then policy refinement for five runs.', 'L... | [None, ['ST-Gumbel', 'PRPN'], ['Imitation (SbS only)'], ['Left-Branching', 'Right-Branching', 'Balanced-Tree'], None, ['Imitation (SbS only)', 'Imitation (SbS + refine)', 'PRPN'], ['Imitation (SbS only)', 'Imitation (SbS + refine)', 'PRPN'], ['Imitation (SbS only)', 'Imitation (SbS + refine)', 'PRPN']] | 1 |
P19-1341table_6 | Correlation (τ) of generic DA lexicons with gold standard lexicons. ORTH results are from Rothe et al. (2016). The other columns use REG (§3.4). Training words for lexicon induction are from Rothe et al. (2016) (GEN) and from PBC+ ZS lexicons. | 1 | [['CZ web'], ['DE web'], ['ES web'], ['FR web'], ['EN Tw.'], ['EN Ne.'], ['JA Wiki']] | 2 | [['ORTH', 'GEN'], ['REG', 'GEN'], ['REG', 'PBC+/T'], ['REG', 'PBC+/NT']] | [['0.58', '0.576', '0.529', '0.524'], ['0.654', '0.654', '0.634', '0.634'], ['0.563', '0.568', '0.524', '0.514'], ['0.544', '0.54', '0.514', '0.474'], ['0.654', '0.629', '0.583', '0.583'], ['0.622', '0.582', '0.562', '0.557'], ['-', '0.628', '0.571', '0.558']] | column | ['correlation', 'correlation', 'correlation', 'correlation'] | ['REG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ORTH || GEN</th> <th>REG || GEN</th> <th>REG || PBC+/T</th> <th>REG || PBC+/NT</th> </tr> </thead> <tbody> <tr> <td>CZ web</td> <td>0.58</td> <td>0.576</td> <td>0.529</td> ... | Table 6 | table_6 | P19-1341 | 7 | acl2019 | Columns (i) and (ii) of Table 6 show that REG (§3.4) delivers results comparable to Densifier (ORTH) when using the same set of generic training words (GEN) in lexicon induction. However, our method is more efficient - no need to compute the expensive SVD after every batch update. | [1, 1] | ['Columns (i) and (ii) of Table 6 show that REG (§3.4) delivers results comparable to Densifier (ORTH) when using the same set of generic training words (GEN) in lexicon induction.', 'However, our method is more efficient - no need to compute the expensive SVD after every batch update.'] | [['GEN', 'ORTH', 'REG'], ['REG', 'GEN']] | 1 |
P19-1342table_6 | Sentence-level phrase accuracy (SPAcc) and phrase error deviation (PEDev) comparison on SST-5 between bi-tree-LSTM and TCM. | 2 | [['Metrics', 'SPAcc alpha=1'], ['Metrics', 'SPAcc alpha=0.9'], ['Metrics', 'SPAcc alpha=0.8'], ['Metrics', 'PEDev-mean'], ['Metrics', 'PEDev-median']] | 1 | [['BTL'], ['TCM'], ['Diff.']] | [['3.2', '3.7', '0.5'], ['20', '21.2', '1.2'], ['70.7', '71.4', '0.7'], ['36.4', '35.7', '-0.7'], ['37.6', '37', '-0.6']] | row | ['SPAcc alpha=1', 'SPAcc alpha=0.9', 'SPAcc alpha=0.8', 'PEDev-mean', 'PEDev-median'] | ['TCM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BTL</th> <th>TCM</th> <th>Diff.</th> </tr> </thead> <tbody> <tr> <td>Metrics || SPAcc alpha=1</td> <td>3.2</td> <td>3.7</td> <td>0.5</td> </tr> <tr> <td>Metrics || SP... | Table 6 | table_6 | P19-1342 | 8 | acl2019 | Table 6 shows the sentence-level phrase accuracy (SPAcc) and phrase error deviation (PEDev) comparison on SST-5 between bi-tree-LSTM and TCM, respectively. TCM outperforms bi-treeLSTM on all the metrics, which demonstrates that TCM gives more consistent predictions of sentiments over different phrases in a tree, compar... | [1, 1, 1] | ['Table 6 shows the sentence-level phrase accuracy (SPAcc) and phrase error deviation (PEDev) comparison on SST-5 between bi-tree-LSTM and TCM, respectively.', 'TCM outperforms bi-treeLSTM on all the metrics, which demonstrates that TCM gives more consistent predictions of sentiments over different phrases in a tree, c... | [['BTL', 'TCM'], ['BTL', 'TCM'], None] | 1 |
P19-1346table_3 | Comparison of oracles, baselines, retrieval, extractive, and abstractive models on the full proposed answers. | 2 | [['Model', 'Support Document'], ['Model', 'Nearest Neighbor'], ['Model', 'Extractive (TFIDF)'], ['Model', 'Extractive (BidAF)'], ['Model', 'Oracle support doc'], ['Model', 'Oracle web sources'], ['Model', 'LM Q + A'], ['Model', 'LM Q + D + A'], ['Model', 'Seq2Seq Q to A'], ['Model', 'Seq2Seq Q + D to A'], ['Model', 'Se... | 1 | [['PPL'], ['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L']] | [['-', '16.8', '2.3', '10.2'], ['-', '16.7', '2.3', '12.5'], ['-', '20.6', '2.9', '17'], ['-', '23.5', '3.1', '17.5'], ['-', '27.4', '2.8', '19.9'], ['-', '54.8', '8.6', '40.3'], ['42.2', '27.8', '4.7', '23.1'], ['33.9', '26.4', '4', '20.5'], ['52.9', '28.3', '5.1', '22.7'], ['55.1', '28.3', '5.1', '22.8'], ['32.7', '2... | column | ['PPL', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L'] | ['Seq2Seq Q to A', 'Seq2Seq Q + D to A'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL</th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Model || Support Document</td> <td>-</td> <td>16.8</td> <td>2.3</td> <td>10... | Table 3 | table_3 | P19-1346 | 6 | acl2019 | 6.1 Overview of Model Performance . Full answer ROUGE. Table 3 shows that the nearest neighbor baseline performs similarly to simply returning the support document which indicates that memorizing answers from the training set is insufficient. For extractive models, the oracle provides an approximate upper bound of 27.4... | [2, 2, 1, 1, 1, 1, 1, 1, 0] | ['6.1 Overview of Model Performance .', 'Full answer ROUGE.', 'Table 3 shows that the nearest neighbor baseline performs similarly to simply returning the support document which indicates that memorizing answers from the training set is insufficient.', 'For extractive models, the oracle provides an approximate upper bo... 
| [None, None, ['Nearest Neighbor'], ['Oracle support doc', 'ROUGE-1'], ['Extractive (BidAF)', 'Extractive (TFIDF)'], ['Oracle web sources'], None, ['Seq2Seq Q to A', 'Seq2Seq Q + D to A'], None] | 1 |