Columns (left to right): table_id_paper (string), caption (string), row_header_level (int32), row_headers (string), column_header_level (int32), column_headers (string), contents (string), metrics_loc (string), metrics_type (string), target_entity (string), table_html_clean (string), table_name (string), table_id (string), paper_id (string), page_no (int32), dir (string), description (string), class_sentence (string), sentences (string), header_mention (string), valid (int32)

| table_id_paper | caption | row_header_level | row_headers | column_header_level | column_headers | contents | metrics_loc | metrics_type | target_entity | table_html_clean | table_name | table_id | paper_id | page_no | dir | description | class_sentence | sentences | header_mention | valid |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
D18-1486table_2 | Single-task results. Row “Orphan category” denotes a variant of CapsNet-2 without orphan category | 1 | [['LSTM'], ['BiLSTM'], ['LR-LSTM'], ['VD-CNN'], ['DCNN'], ['CNN-MC'], ['CapsNet-1'], ['CapsNet-2'], ['- Orphan']] | 2 | [['Dataset', 'MR'], ['Dataset', 'SST-1'], ['Dataset', 'SST-2'], ['Dataset', 'Subj'], ['Dataset', 'TREC'], ['Dataset', 'AG’s']] | [['75.9', '45.9', '80.6', '89.3', '86.8', '86.1'], ['79.3', '46.2', '83.2', '90.5', '89.6', '88.2'], ['81.5', '48.2', '87.5', '89.9', '-', '-'], ['-', '-', '-', '-', '-', '91.3'], ['-', '48.5', '86.8', '-', '93.0', '-'], ['81.1', '47.4', '88.1', '93.2', '92.2', '-'], ['81.5', '48.1', '86.4', '93.3', '91.8', '91.1'], ['... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['CapsNet-1', 'CapsNet-2'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dataset || MR</th> <th>Dataset || SST-1</th> <th>Dataset || SST-2</th> <th>Dataset || Subj</th> <th>Dataset || TREC</th> <th>Dataset || AG’s</th> </tr> </thead> <tbody> <tr> <td... | Table 2 | table_2 | D18-1486 | 6 | emnlp2018 | 4.3 Single-Task Learning Results. We first test our approach on six datasets for text classification under the scheme of single-task. As Table 2 shows, our single-task network enhanced by capsules is already a strong model. CapsNet1 that has one kernel size obtains the best accuracy on 2 out of 6 datasets, and gets com... | [2, 2, 1, 1, 1, 2, 1, 2, 2, 1, 2] | ['4.3 Single-Task Learning Results.', 'We first test our approach on six datasets for text classification under the scheme of single-task.', 'As Table 2 shows, our single-task network enhanced by capsules is already a strong model.', 'CapsNet1 that has one kernel size obtains the best accuracy on 2 out of 6 datasets, a... | [None, ['CapsNet-1', 'CapsNet-2', 'MR', 'SST-1', 'SST-2', 'Subj', 'TREC', 'AG’s'], ['CapsNet-1', 'CapsNet-2'], ['CapsNet-1', 'MR', 'SST-1', 'SST-2', 'Subj', 'TREC', 'AG’s'], ['CapsNet-2', 'MR', 'SST-1', 'Subj', 'AG’s'], ['CapsNet-1', 'CapsNet-2'], ['CapsNet-2', 'VD-CNN', 'DCNN', 'CNN-MC'], None, None, ['CapsNet-2', '- ... | 1 |
D18-1490table_5 | Comparison of the ACNN model to the stateof-the-art methods on the Switchboard test set. The other models listed have used richer inputs and/or rely on the output of other systems, as well as pattern match features, as indicated by the following symbols: (cid:5) dependency parser, † hand-crafted constraints/rules, (cid... | 2 | [['model', 'Yoshikawa et al.(2016)'], ['model', 'Georgila et al. (2010)'], ['model', 'Tran et al. (2018)'], ['model', 'Kahn et al. (2005)'], ['model', 'Johnson et al. (2004)'], ['model', 'Georgila (2009)'], ['model', 'Johnson et al. (2004)'], ['model', 'Rasooli et al. (2013)'], ['model', 'Zwarts et al. (2011)'], ['mode... | 1 | [['P'], ['R'], ['F']] | [['67.9', '57.9', '62.5'], ['77.4', '64.6', '70.4'], ['-', '-', '77.5'], ['-', '-', '78.2'], ['82.0', '77.8', '79.7'], ['-', '-', '80.1'], ['-', '-', '81.0'], ['85.1', '77.9', '81.4'], ['-', '-', '83.8'], ['-', '-', '84.1'], ['-', '-', '84.1'], ['89.5', '80.0', '84.5'], ['90.0', '81.2', '85.4'], ['91.8', '80.6', '85.9'... | column | ['P', 'R', 'F'] | ['ACNN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>model || Yoshikawa et al.(2016)</td> <td>67.9</td> <td>57.9</td> <td>62.5</td> </tr> <tr> <td>model || Ge... | Table 5 | table_5 | D18-1490 | 7 | emnlp2018 | Finally, we compare the ACNN model to stateof-the-art methods from the literature, evaluated on the Switchboard test set. Table 5 shows that the ACNN model is competitive with recent models from the literature. The three models that score more highly than the ACNN all rely on handcrafted features, additional informatio... | [2, 1, 2, 2] | ['Finally, we compare the ACNN model to stateof-the-art methods from the literature, evaluated on the Switchboard test set.', 'Table 5 shows that the ACNN model is competitive with recent models from the literature.', 'The three models that score more highly than the ACNN all rely on handcrafted features, additional in... | [['ACNN'], ['ACNN'], ['Ferguson et al. (2015)', 'Zayats et al. (2016)', 'Jamshid Lou et al. (2017)'], ['ACNN']] | 1 |
D18-1494table_7 | Word similarity results on ISEAR. | 1 | [['W2V'], ['siW2V'], ['SSPMI'], ['SLTM']] | 1 | [['MEN'], ['SimLex'], ['Rare']] | [['0.002', '-0.008', '-0.119'], ['0.002', '0.017', '0.062'], ['0.023', '0.028', '-0.004'], ['0.169', '0.037', '0.089']] | column | ['similarity', 'similarity', 'similarity'] | ['SLTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MEN</th> <th>SimLex</th> <th>Rare</th> </tr> </thead> <tbody> <tr> <td>W2V</td> <td>0.002</td> <td>-0.008</td> <td>-0.119</td> </tr> <tr> <td>siW2V</td> <td>0.00... | Table 7 | table_7 | D18-1494 | 8 | emnlp2018 | We train W2V, siW2V, and SSPMI over each corpus by setting the number of context window size to 5. Furthermore, the dimension of word embeddings generated from all models is set to 50 according to (Lai et al., 2016). The values of word similarity on ISEAR and YouTube are respectively shown in Table 7 and Table 8, where... | [2, 2, 1, 1, 1] | ['We train W2V, siW2V, and SSPMI over each corpus by setting the number of context window size to 5.', 'Furthermore, the dimension of word embeddings generated from all models is set to 50 according to (Lai et al., 2016).', 'The values of word similarity on ISEAR and YouTube are respectively shown in Table 7 and Table ... | [['W2V', 'siW2V', 'SSPMI'], ['W2V', 'siW2V', 'SSPMI'], None, ['SLTM'], ['SLTM']] | 1 |
D18-1495table_3 | Results for different sampling numbers in different setting for the two datasets. Score denotes the topic coherence score. | 4 | [['Datasets(# Topics)', '20 News (k=50)', '# Samples', '1'], ['Datasets(# Topics)', '20 News (k=50)', '# Samples', '3'], ['Datasets(# Topics)', '20 News (k=50)', '# Samples', '10'], ['Datasets(# Topics)', '20 News (k=100)', '# Samples', '1'], ['Datasets(# Topics)', '20 News (k=100)', '# Samples', '3'], ['Datasets(# Top... | 1 | [['Score']] | [['0.24'], ['0.28'], ['0.25'], ['0.21'], ['0.26'], ['0.25'], ['0.27'], ['0.22'], ['0.20'], ['0.26'], ['0.20'], ['0.17']] | column | ['Score'] | ['# Samples'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Score</th> </tr> </thead> <tbody> <tr> <td>Datasets(# Topics) || 20 News (k=50) || # Samples || 1</td> <td>0.24</td> </tr> <tr> <td>Datasets(# Topics) || 20 News (k=50) || # Samples || 3... | Table 3 | table_3 | D18-1495 | 6 | emnlp2018 | Effect of the mini-corpus. To study the effect of our sampling strategy which has been discussed in section 3.3. Table 3 shows the performance of our model with different sample size for a mini-corpus. For the 20 Newsgroups dataset, the best performance is achieved when the sample size is 3. When we do not use our samp... | [2, 2, 1, 1, 1, 2, 2, 2, 1, 2, 1, 2] | ['Effect of the mini-corpus.', 'To study the effect of our sampling strategy which has been discussed in section 3.3.', 'Table 3 shows the performance of our model with different sample size for a mini-corpus.', 'For the 20 Newsgroups dataset, the best performance is achieved when the sample size is 3.', 'When we do no... | [None, None, ['20 News (k=50)', '20 News (k=100)', 'All news (k=50)', 'All news (k=100)', '# Samples'], ['20 News (k=50)', '20 News (k=100)', '# Samples', '3'], ['# Samples', '1', '3'], ['20 News (k=50)', '20 News (k=100)'], ['20 News (k=50)', '20 News (k=100)'], None, ['# Samples', 'Score'], None, ['20 News (k=50)', '... | 1 |
D18-1497table_6 | Cross AUC results for different representations on the BeerAdvocate data. Row: Embedding used. Column: Aspect evaluated against. | 1 | [['Look'], ['Aroma'], ['Palate'], ['Taste']] | 1 | [['Look'], ['Aroma'], ['Palate'], ['Taste']] | [['0.92', '0.89', '0.88', '0.87'], ['0.90', '0.93', '0.91', '0.92'], ['0.89', '0.92', '0.94', '0.95'], ['0.90', '0.94', '0.95', '0.96']] | column | ['AUC', 'AUC', 'AUC', 'AUC'] | ['Look', 'Aroma', 'Palate', 'Taste'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Look</th> <th>Aroma</th> <th>Palate</th> <th>Taste</th> </tr> </thead> <tbody> <tr> <td>Look</td> <td>0.92</td> <td>0.89</td> <td>0.88</td> <td>0.87</td> </tr> <... | Table 6 | table_6 | D18-1497 | 6 | emnlp2018 | In Table 6 we present cross AUC evaluations. Rows correspond to the embedding used and columns to the aspect evaluated against. As expected, aspect-embeddings perform better w.r.t. the aspects for which they code, suggesting some disentanglement. However, the reduction in performance when using one aspect representatio... | [1, 1, 1, 1, 2, 2] | ['In Table 6 we present cross AUC evaluations.', 'Rows correspond to the embedding used and columns to the aspect evaluated against.', 'As expected, aspect-embeddings perform better w.r.t. the aspects for which they code, suggesting some disentanglement.', 'However, the reduction in performance when using one aspect re... | [None, ['Look', 'Aroma', 'Palate', 'Taste'], None, None, ['Aroma', 'Taste'], ['Look', 'Aroma', 'Palate', 'Taste']] | 1 |
D18-1498table_4 | POS tagging results on SANCL data. Source domains include Web, Emails, Twitter. † indicates the unified multi-source model trained without Twitter, thus can be considered as the oracle performance (upper-bound) of uni-MS. | 2 | [['TARGET', 'Answers'], ['TARGET', 'Reviews'], ['TARGET', 'Newsgroup'], ['TARGET', 'Average']] | 2 | [['NON-ADVERSARIAL', 'best-SS'], ['NON-ADVERSARIAL', 'uni-MS'], ['NON-ADVERSARIAL', 'uni-MS†'], ['NON-ADVERSARIAL', 'MoE'], ['ADVERSARIAL', 'best-SS-A'], ['ADVERSARIAL', 'uni-MS-A'], ['ADVERSARIAL', 'uni-MS-A†'], ['ADVERSARIAL', 'MoE-A']] | [['88.16', '88.89', '89.88', '90.26', '88.47', '89.04', '89.99', '89.80'], ['87.15', '87.45', '88.91', '89.37', '87.26', '87.90', '88.94', '89.40'], ['89.14', '89.95', '90.70', '91.03', '89.54', '90.20', '90.70', '91.13'], ['88.15', '88.76', '89.83', '90.22', '88.42', '89.05', '89.88', '90.11']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['MoE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NON-ADVERSARIAL || best-SS</th> <th>NON-ADVERSARIAL || uni-MS</th> <th>NON-ADVERSARIAL || uni-MS†</th> <th>NON-ADVERSARIAL || MoE</th> <th>ADVERSARIAL || best-SS-A</th> <th>ADVERSARIAL || un... | Table 4 | table_4 | D18-1498 | 9 | emnlp2018 | 5.2 Part-of-Speech Tagging. Table 4 summarizes our results on POS tagging. Again, our approach consistently achieves the best performance across different settings and tasks. Adding Twitter as a source leads to a drop in performance for the unified model, as a result of negative transfer. Our method, however, robustly ... | [2, 1, 1, 1, 2] | ['5.2 Part-of-Speech Tagging.', 'Table 4 summarizes our results on POS tagging.', 'Again, our approach consistently achieves the best performance across different settings and tasks.', 'Adding Twitter as a source leads to a drop in performance for the unified model, as a result of negative transfer.', 'Our method, howe... | [None, None, ['MoE'], ['uni-MS', 'uni-MS†'], ['MoE']] | 1 |
D18-1508table_1 | Experimental results of the two baselines, as well as single and label-wise attention modifications to the “vanilla” 2-BiLSTM model. | 4 | [['Lab', '20*', 'Syst', 'FastText'], ['Lab', '20*', 'Syst', '2-BiLSTM'], ['Lab', '20*', 'Syst', '2-BiLSTMa'], ['Lab', '20*', 'Syst', '2-BiLSTMl'], ['Lab', '50', 'Syst', 'FastText'], ['Lab', '50', 'Syst', '2-BiLSTM'], ['Lab', '50', 'Syst', '2-BiLSTMa'], ['Lab', '50', 'Syst', '2-BiLSTMl'], ['Lab', '100', 'Syst', 'FastTex... | 1 | [['F1'], ['A@1'], ['A@5'], ['CE']] | [['30.97', '42.57', '72.45', '4.56'], ['33.52', '45.76', '75.54', '3.88'], ['34.11', '46.11', '75.68', '3.86'], ['33.51', '45.94', '76.02', '3.82'], ['18.04', '22.33', '48.13', '14.27'], ['19.07', '25.35', '53.38', '9.37'], ['19.83', '25.52', '53.51', '9.35'], ['20.08', '25.64', '53.77', '9.26'], ['16.25', '20.29', '42... | column | ['F1', 'A@1', 'A@5', 'CE'] | ['2-BiLSTMl'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>A@1</th> <th>A@5</th> <th>CE</th> </tr> </thead> <tbody> <tr> <td>Lab || 20* || Syst || FastText</td> <td>30.97</td> <td>42.57</td> <td>72.45</td> <td>4.56... | Table 1 | table_1 | D18-1508 | 3 | emnlp2018 | Results. Table 1 shows the results of our model and the baselines in the emoji prediction task for the different evaluation splits. The evaluation metrics used are: F1, Accuracy@k (A@k, where k ∈ {1, 5}), and Coverage Error (CE) (Tsoumakas et al., 2009). We note that the latter metric is not normally used in emoji pred... | [2, 1, 1, 2, 2, 1, 2] | ['Results.', 'Table 1 shows the results of our model and the baselines in the emoji prediction task for the different evaluation splits.', 'The evaluation metrics used are: F1, Accuracy@k (A@k, where k ∈ {1, 5}), and Coverage Error (CE) (Tsoumakas et al., 2009).', 'We note that the latter metric is not normally used in... | [None, ['FastText', '2-BiLSTM', '2-BiLSTMa', '2-BiLSTMl', 'F1', 'A@1', 'A@5', 'CE'], ['F1', 'A@1', 'A@5', 'CE'], None, None, ['2-BiLSTMl', 'Lab', '50', '100', '200', 'F1', 'CE'], ['2-BiLSTMl']] | 1 |
D18-1510table_1 | Results on NIST Chinese-to-English Translation Task. AVG = average BLEU scores for test sets. The bold number indicates the highest score in the column. | 2 | [['System', 'BaseNMT'], ['System', 'MRT'], ['System', 'RF'], ['System', 'P-BLEU'], ['System', 'P-GLEU'], ['System', 'P-P2']] | 1 | [['Dev(MT02)'], ['MT03'], ['MT04'], ['MT05'], ['MT06'], ['AVG']] | [['36.72', '33.95', '37.44', '33.96', '33.09', '34.61'], ['37.17', '34.89', '37.90', '34.62', '33.78', '35.30'], ['37.13', '34.66', '37.69', '34.55', '33.74', '35.16'], ['37.26', '34.54', '38.05', '34.30', '34.11', '35.25'], ['37.44', '34.67', '38.11', '34.24', '34.58', '35.40'], ['38.03', '35.45', '39.30', '35.10', '3... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['P-BLEU', 'P-GLEU', 'P-P2'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev(MT02)</th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT06</th> <th>AVG</th> </tr> </thead> <tbody> <tr> <td>System || BaseNMT</td> <td>36.72</td> <td>33.95</td... | Table 1 | table_1 | D18-1510 | 4 | emnlp2018 | Performance. Table 1 shows the translation performance on test sets measured in BLEU score. Simply training NMT model by the probabilistic 2-gram precision achieves an improvement of 1.5 BLEU points, which significantly outperforms the reinforcement-based algorithms. We also test the precision of other n-grams and thei... | [2, 1, 1, 1, 2] | ['Performance.', 'Table 1 shows the translation performance on test sets measured in BLEU score.', 'Simply training NMT model by the probabilistic 2-gram precision achieves an improvement of 1.5 BLEU points, which significantly outperforms the reinforcement-based algorithms.', 'We also test the precision of other n-gra... | [None, None, ['P-P2', 'BaseNMT', 'RF'], ['P-BLEU', 'P-GLEU', 'P-P2'], None] | 1 |
D18-1516table_3 | Results (filtered setting) of the temporal knowledge graph completion experiments for the data sets YAGO15K and WIKIDATA. The best results are written bold. | 1 | [['TTRANSE'], ['TRANSE'], ['DISTMULT'], ['TA-TRANSE'], ['TA-DISTMULT']] | 2 | [['YAGO15K', 'MRR'], ['YAGO15K', 'MR'], ['YAGO15K', 'Hits@10'], ['YAGO15K', 'Hits@1'], ['WIKIDATA', 'MRR'], ['WIKIDATA', 'MR'], ['WIKIDATA', 'Hits@10'], ['WIKIDATA', 'Hits@1']] | [['32.1', '578', '51.0', '23.0', '48.8', '80', '80.6', '33.9'], ['29.6', '614', '46.8', '22.8', '31.6', '50', '65.9', '18.1'], ['27.5', '578', '43.8', '21.5', '31.6', '77', '66.1', '18.1'], ['32.1', '564', '51.2', '23.1', '48.4', '79', '80.7', '32.9'], ['29.1', '551', '47.6', '21.6', '70.0', '198', '78.5', '65.2']] | column | ['MRR', 'MR', 'Hits@10', 'Hits@1', 'MRR', 'MR', 'Hits@10', 'Hits@1'] | ['TA-TRANSE', 'TA-DISTMULT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>YAGO15K || MRR</th> <th>YAGO15K || MR</th> <th>YAGO15K || Hits@10</th> <th>YAGO15K || Hits@1</th> <th>WIKIDATA || MRR</th> <th>WIKIDATA || MR</th> <th>WIKIDATA || Hits@10</th> <th>... | Table 3 | table_3 | D18-1516 | 4 | emnlp2018 | 4.3 Results. Table 3 and 4 list the results for the KG completion tasks. TA-TRANSE and TA-DISTMULT systematically improve TRANSE and DISTMULT in MRR, hits@10 and hits@1 in almost all cases. Mean rank is a metric that is very susceptible to outliers and hence these improvements are not consistent. TTRANSE learns indepen... | [2, 1, 1, 2, 2, 2, 1, 1, 2] | ['4.3 Results.', 'Table 3 and 4 list the results for the KG completion tasks.', 'TA-TRANSE and TA-DISTMULT systematically improve TRANSE and DISTMULT in MRR, hits@10 and hits@1 in almost all cases.', 'Mean rank is a metric that is very susceptible to outliers and hence these improvements are not consistent.', 'TTRANSE ... | [None, None, ['TA-TRANSE', 'TA-DISTMULT', 'TRANSE', 'DISTMULT', 'MRR', 'Hits@10', 'Hits@1'], ['MR'], ['TTRANSE'], ['TTRANSE'], ['TTRANSE', 'YAGO15K'], ['TTRANSE', 'TA-TRANSE', 'WIKIDATA'], ['TTRANSE']] | 1 |
D18-1525table_1 | Results of transfer learning tasks performance of the proposed autoencoder models. All the models are trained on the Yelp reviews dataset with the use of fastText pre-trained word embeddings. | 2 | [['Model', 'Cross-entropy (vanilla AE)'], ['Model', 'Soft label N = 3'], ['Model', 'Soft label N = 5'], ['Model', 'Soft label N = 10'], ['Model', 'Weighted similarity'], ['Model', 'Weighted cross-entropy']] | 2 | [['MSRP', 'F1'], ['MSRP', 'Acc'], ['SNLI', 'Acc'], ['SICK-E', 'Acc']] | [['79.0', '66.9', '44.8', '56.8'], ['77.6', '67.1', '57.8', '71.8'], ['79.1', '67.3', '57.2', '71.6'], ['77.9', '66.5', '57.9', '72.4'], ['77.5', '65.6', '69.1', '56.6'], ['79.4', '68.2', '57.2', '70.2']] | column | ['F1', 'Acc', 'Acc', 'Acc'] | ['Soft label N = 3', 'Soft label N = 5', 'Soft label N = 10', 'Weighted similarity', 'Weighted cross-entropy'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MSRP || F1</th> <th>MSRP || Acc</th> <th>SNLI || Acc</th> <th>SICK-E || Acc</th> </tr> </thead> <tbody> <tr> <td>Model || Cross-entropy (vanilla AE)</td> <td>79.0</td> <td>66.9<... | Table 1 | table_1 | D18-1525 | 3 | emnlp2018 | 4 Discussion. We find that almost all of the proposed loss functions outperform the vanilla autoencoder trained with cross-entropy on all three tasks (see Table 1). The only exception is the weighted similarity loss function. Compared to the logarithm-based losses, this loss applies softer penalties when the groundtrut... | [2, 1, 1, 2, 2, 1, 2] | ['4 Discussion.', 'We find that almost all of the proposed loss functions outperform the vanilla autoencoder trained with cross-entropy on all three tasks (see Table 1).', 'The only exception is the weighted similarity loss function.', 'Compared to the logarithm-based losses, this loss applies softer penalties when the... | [None, ['Cross-entropy (vanilla AE)', 'Soft label N = 3', 'Soft label N = 5', 'Soft label N = 10', 'Weighted cross-entropy', 'Acc'], ['Acc', 'Weighted similarity'], None, None, ['Acc', 'Weighted cross-entropy', 'Weighted similarity', 'MSRP', 'SNLI', 'SICK-E', 'Soft label N = 10'], ['Soft label N = 10']] | 1 |
D18-1529table_3 | Performance of recent neural network based models without using pretrained embeddings. Our model’s wins are statsitically significantly better than prior work (p < 0.05 bootstrap resampling), except on PKU. | 1 | [['Liu et al. (2016)'], ['Zhou et al. (2017)'], ['Cai et al. (2017)'], ['Wang and Xu (2017)'], ['Ours']] | 1 | [['AS'], ['CITYU'], ['CTB6'], ['CTB7'], ['MSR'], ['PKU'], ['UD']] | [['-', '-', '94.6', '-', '94.8', '94.9', '-'], ['-', '-', '94.9', '-', '97.2', '95.0', '-'], ['95.2', '95.4', '-', '-', '97.0', '95.4', '-'], ['-', '-', '-', '-', '96.7', '94.7', '-'], ['95.5', '95.7', '95.5', '95.6', '97.5', '95.4', '94.6']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AS</th> <th>CITYU</th> <th>CTB6</th> <th>CTB7</th> <th>MSR</th> <th>PKU</th> <th>UD</th> </tr> </thead> <tbody> <tr> <td>Liu et al. (2016)</td> <td>-</td> <td>-</... | Table 3 | table_3 | D18-1529 | 3 | emnlp2018 | Table 3 contains results achieved without using any pretrained embeddings. Our model achieves the best results among NN models on 6/7 datasets. In addition, while the majority of datasets work the best if the pretrained embedding matrix is treated as constant, the MSR dataset is an outlier: fine-tuning embeddings yield... | [1, 1, 2, 2] | ['Table 3 contains results achieved without using any pretrained embeddings.', 'Our model achieves the best results among NN models on 6/7 datasets.', 'In addition, while the majority of datasets work the best if the pretrained embedding matrix is treated as constant, the MSR dataset is an outlier: fine-tuning embeddin... | [None, ['Ours', 'AS', 'CITYU', 'CTB6', 'CTB7', 'MSR', 'UD'], ['AS', 'CITYU', 'CTB6', 'CTB7', 'PKU', 'UD'], ['MSR']] | 1 |
D18-1529table_6 | Ablation results on development data. Top row: absolute performance of our system. Other rows: difference relative to the top row. | 2 | [['System', 'This work'], ['System', '-LSTM dropout'], ['System', '-stacked bi-LSTM'], ['System', '-pretrain']] | 1 | [['AS'], ['CITYU'], ['CTB6'], ['CTB7'], ['MSR'], ['PKU'], ['UD'], ['Average']] | [['98.03', '98.22', '97.06', '97.07', '98.48', '97.95', '97.00', '97.69'], ['+0.03', '-0.33', '-0.31', '-0.24', '+0.04', '-0.29', '-0.76', '-0.35'], ['-0.13', '-0.20', '-0.15', '-0.14', '-0.17', '-0.17', '-0.39', '-0.27'], ['-0.13', '-0.23', '-0.94', '-0.74', '-0.45', '-0.27', '-2.73', '-0.78']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['This work'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AS</th> <th>CITYU</th> <th>CTB6</th> <th>CTB7</th> <th>MSR</th> <th>PKU</th> <th>UD</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>System || This work</td> ... | Table 6 | table_6 | D18-1529 | 5 | emnlp2018 | 3.2 Ablation Experiments. To see which decisions had the greatest impact on the result, we performed ablation experiments on the holdout sets of the different corpora. Starting with our proposed system, we remove one decision, perform hyperparameter tuning, and see the change in performance. The results are summarized ... | [2, 2, 2, 1, 1, 1, 1] | ['3.2 Ablation Experiments.', 'To see which decisions had the greatest impact on the result, we performed ablation experiments on the holdout sets of the different corpora.', 'Starting with our proposed system, we remove one decision, perform hyperparameter tuning, and see the change in performance.', 'The results are ... | [None, None, ['This work'], None, None, ['This work', '-LSTM dropout', '-stacked bi-LSTM', '-pretrain'], ['AS', 'MSR', '-LSTM dropout']] | 1 |
D18-1531table_2 | Results of SLM-4 incorporating ad hoc guidelines, where † represents using additional 1024 segmented setences for training data and * represents using a rule-based post-processing | 1 | [['SLM-4'], ['SLM-4*'], ['SLM-4†'], ['SLM-4†*']] | 2 | [['F1 score', 'PKU'], ['F1 score', 'MSR'], ['F1 score', 'AS'], ['F1 score', 'CityU']] | [['79.2', '79.0', '79.8', '79.7'], ['81.9', '83.0', '81.0', '81.4'], ['87.5', '84.3', '84.2', '86.0'], ['87.3', '84.8', '83.9', '85.8']] | column | ['F1 score', 'F1 score', 'F1 score', 'F1 score'] | ['SLM-4', 'SLM-4*', 'SLM-4†', 'SLM-4†*'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 score || PKU</th> <th>F1 score || MSR</th> <th>F1 score || AS</th> <th>F1 score || CityU</th> </tr> </thead> <tbody> <tr> <td>SLM-4</td> <td>79.2</td> <td>79.0</td> <td>... | Table 2 | table_2 | D18-1531 | 4 | emnlp2018 | Table 2 shows the results. We can find from the table that only 1024 guideline sentences can improve the performance of “SLM-4” significantly. While rule-based post-processing is very effective, “SLM-4†” can outperform “SLM-4*” on all the four datasets. Moreover, performance drops when applying the rule-based post-proc... | [1, 1, 1, 1, 2, 2] | ['Table 2 shows the results.', 'We can find from the table that only 1024 guideline sentences can improve the performance of “SLM-4” significantly.', 'While rule-based post-processing is very effective, “SLM-4†” can outperform “SLM-4*” on all the four datasets.', 'Moreover, performance drops when applying the rule-base... | [None, ['SLM-4†', 'SLM-4'], ['SLM-4*', 'SLM-4†', 'PKU', 'MSR', 'AS', 'CityU'], ['SLM-4†', 'SLM-4†*', 'PKU', 'AS', 'CityU'], None, ['SLM-4']] | 1 |
D18-1538table_1 | Comparison of baseline models (B) with the models trained with joint objective (J). | 2 | [['Model/Legend', 'B100'], ['Model/Legend', 'B10'], ['Model/Legend', 'B1'], ['Model/Legend', 'J100'], ['Model/Legend', 'J10'], ['Model/Legend', 'J1']] | 1 | [['Test F1'], ['Average disagreement rate (%)']] | [['84.40', '14.69'], ['78.56', '17.01'], ['67.28', '21.17'], ['84.75 (+0.35)', '14.48 (1.43%)'], ['79.09 (+0.53)', '16.25 (4.47%)'], ['68.02 (+0.74)', '20.49 (3.21%)']] | column | ['Test F1', 'Average disagreement rate (%)'] | ['J100', 'J10', 'J1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test F1</th> <th>Average disagreement rate (%)</th> </tr> </thead> <tbody> <tr> <td>Model/Legend || B100</td> <td>84.40</td> <td>14.69</td> </tr> <tr> <td>Model/Legend || B10</... | Table 1 | table_1 | D18-1538 | 3 | emnlp2018 | Does training with joint objective help?. We trained 3 models with random 1%, 10% and whole 100% of the training set with joint objective (α1 = α2 = 0.5). For comparison, we trained 3 SOTA models with the same training sets. All models were trained for max 150 epochs and with a patience of 20 epochs. Table 1 reports th... | [2, 2, 2, 2, 1, 1, 1, 2] | ['Does training with joint objective help?.', 'We trained 3 models with random 1%, 10% and whole 100% of the training set with joint objective (α1 = α2 = 0.5).', 'For comparison, we trained 3 SOTA models with the same training sets.', 'All models were trained for max 150 epochs and with a patience of 20 epochs.', 'Tabl... | [None, ['J100', 'J10', 'J1'], ['B100', 'B10', 'B1'], ['B100', 'B10', 'B1', 'J100', 'J10', 'J1'], None, ['B100', 'B10', 'B1', 'J100', 'J10', 'J1', 'Test F1', 'Average disagreement rate (%)'], ['J100', 'J10', 'J1'], ['J100', 'J10', 'J1']] | 1 |
D18-1544table_2 | Language modeling performance (perplexity) on the WSJ test set, broken down by training data used and by whether early stopping is done using the parsing objective (UP) or the language modeling objective (LM). | 9 | [['Model', 'a', 'PRPN-LM', 'Training Data', 'WSJ Train', 'Stopping Criterion', 'LM', 'Vocab Size', '10k'], ['Model', 'b', 'PRPN-LM', 'Training Data', 'WSJ Train', 'Stopping Criterion', 'UP', 'Vocab Size', '10k'], ['Model', 'c', 'PRPN-UP', 'Training Data', 'WSJ Train', 'Stopping Criterion', 'LM', 'Vocab Size', '10k'], [... | 1 | [['PPL Median']] | [['61.4'], ['81.6'], ['92.8'], ['112.1'], ['112.8'], ['797.5'], ['848.9']] | column | ['PPL Median'] | ['PRPN-LM', 'PRPN-UP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL Median</th> </tr> </thead> <tbody> <tr> <td>Model || a || PRPN-LM || Training Data || WSJ Train || Stopping Criterion || LM || Vocab Size || 10k</td> <td>61.4</td> </tr> <tr> <td>Mod... | Table 2 | table_2 | D18-1544 | 4 | emnlp2018 | 3 Experimental Results. Table 2 shows our results for language modeling. PRPN-UP, configured as-is with parsing criterion and language modeling criterion, performs dramatically worse than the standard PRPN-LM (a vs. d and e). However, this is not a fair comparison as the larger vocabulary gives PRPN-UP a harder task to... | [2, 1, 1, 2, 1, 1, 2, 1, 2, 2, 2] | ['3 Experimental Results.', 'Table 2 shows our results for language modeling.', 'PRPN-UP, configured as-is with parsing criterion and language modeling criterion, performs dramatically worse than the standard PRPN-LM (a vs. d and e).', 'However, this is not a fair comparison as the larger vocabulary gives PRPN-UP a har... | [None, ['PRPN-LM', 'PRPN-UP'], ['PRPN-UP', 'a', 'd', 'e', 'PPL Median'], ['PRPN-UP'], ['a', 'c', 'd', 'PRPN-LM', 'PRPN-UP', 'Vocab Size', '10k'], ['a', 'b', 'd', 'e', 'Stopping Criterion', 'LM', 'UP'], ['Stopping Criterion', 'LM', 'UP'], ['Training Data', 'AllNLI Train', 'f', 'g'], ['WSJ Train', 'AllNLI Train'], ['PRPN... | 1 |
D18-1544table_3 | Unlabeled parsing F1 on the MultiNLI development set for models trained on AllNLI. F1 wrt. shows F1 with respect to strictly rightand left-branching (LB/RB) trees and with respect to the Stanford Parser (SP) trees supplied with the corpus; The evaluations of SPINN, RL-SPINN, and ST-Gumbel are from Williams et al. (2018... | 4 | [['Model', '300D SPINN', 'Stopping Criterion', 'NLI'], ['Model', 'w/o Leaf GRU', 'Stopping Criterion', 'NLI'], ['Model', '300D SPINN-NC', 'Stopping Criterion', 'NLI'], ['Model', 'w/o Leaf GRU', 'Stopping Criterion', 'NLI'], ['Model', '300D ST-Gumbel', 'Stopping Criterion', 'NLI'], ['Model', 'w/o Leaf GRU', 'Stoppi... | 2 | [['F1 wrt.', 'LB'], ['F1 wrt.', 'RB'], ['F1 wrt.', 'SP'], ['F1 wrt.', 'Depth']] | [['19.3', '36.9', '70.2', '6.2'], ['21.2', '39.0', '63.5', '6.4'], ['19.2', '36.2', '70.5', '6.1'], ['20.6', '38.9', '64.1', '6.3'], ['32.6', '37.5', '23.7', '4.1'], ['30.8', '35.6', '27.5', '4.6'], ['95.0', '13.5', '18.8', '8.6'], ['99.1', '10.7', '18.1', '8.6'], ['25.6', '26.9', '45.7', '4.9'], ['19.4', '41.0', '46.3... | column | ['F1 wrt.', 'F1 wrt.', 'F1 wrt.', 'F1 wrt.'] | ['PRPN-LM', 'PRPN-UP', 'PRPN-UP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 wrt. || LB</th> <th>F1 wrt. || RB</th> <th>F1 wrt. || SP</th> <th>F1 wrt. || Depth</th> </tr> </thead> <tbody> <tr> <td>Model || 300D SPINN || Stopping Criterion || NLI</td> <td>... | Table 3 | table_3 | D18-1544 | 4 | emnlp2018 | In addition, Table 3 shows that the PRPN-UP models achieve the median parsing F1 scores of 46.3 and 48.6 respectively on the MultiNLI dev set while PRPN-LM performs the median F1 of 45.7; setting the state of the art in parsing performance on this dataset among latent tree models by a large margin. We conclude tha... | [1, 2] | ['In addition, Table 3 shows that the PRPN-UP models achieve the median parsing F1 scores of 46.3 and 48.6 respectively on the MultiNLI dev set while PRPN-LM performs the median F1 of 45.7; setting the state of the art in parsing performance on this dataset among latent tree models by a large margin.', 'We conclude tha... | [['PRPN-LM', 'PRPN-UP', 'F1 wrt.', 'SP'], ['PRPN-LM', 'PRPN-UP']] | 1 |
D18-1547table_4 | Performance comparison of two different model architectures using a corpus-based evaluation. | 1 | [['Inform (%)'], ['Success (%)'], ['BLEU']] | 2 | [['Cam676', 'w/o attention'], ['Cam676', 'w/ attention'], ['MultiWOZ', 'w/o attention'], ['MultiWOZ', 'w/ attention']] | [['99.17', '99.58', '71.29', '71.33'], ['75.08', '73.75', '60.29', '60.96'], ['0.219', '0.204', '0.188', '0.189']] | row | ['Inform (%)', 'Success (%)', 'BLEU'] | ['MultiWOZ'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Cam676 || w/o attention</th> <th>Cam676 || w/ attention</th> <th>MultiWOZ || w/o attention</th> <th>MultiWOZ || w/ attention</th> </tr> </thead> <tbody> <tr> <td>Inform (%)</td> <td>... | Table 4 | table_4 | D18-1547 | 9 | emnlp2018 | We trained the same neural architecture (taking into account different number of domains) on both MultiWOZ and Cam676 datasets. The best results on the Cam676 corpus were obtained with bidirectional GRU cell. In the case of MultiWOZ dataset, the LSTM cell serving as a decoder and an encoder achieved the highest score w... | [2, 2, 2, 1, 1, 1, 1, 1, 2, 1] | ['We trained the same neural architecture (taking into account different number of domains) on both MultiWOZ and Cam676 datasets.', 'The best results on the Cam676 corpus were obtained with bidirectional GRU cell.', 'In the case of MultiWOZ dataset, the LSTM cell serving as a decoder and an encoder achieved the highest... | [['Cam676', 'MultiWOZ'], ['Cam676'], ['MultiWOZ'], None, ['Cam676', 'Inform (%)'], ['Inform (%)'], ['w/o attention', 'w/ attention', 'Success (%)'], ['Cam676', 'MultiWOZ', 'Inform (%)', 'Success (%)'], None, ['Cam676', 'MultiWOZ', 'BLEU']] | 1 |
D19-1001table_1 | Results on the SHARC test set, averaged over 3 independent runs for GPT2 and BISON, reporting micro accuracy and macro accuracy in terms of the classification task and BLEU-1 and BLEU-4 on instances for which a clarification question was generated. E&D uses no language model pre-training. | 2 | [['Model', 'E&D'], ['Model', 'E&D+B'], ['Model', 'GPT2'], ['Model', 'BISON']] | 1 | [['Micro Acc.'], ['Macro Acc.'], ['B-1'], ['B-4']] | [['31.9', '38.9', '17.1', '1.9'], ['54.7', '60.4', '24.3', '4.3'], ['60.4', '65.1', '53.7', '33.9'], ['64.9', '68.8', '61.8', '46.2']] | column | ['Micro Acc.', 'Macro Acc.', 'B-1', 'B-4'] | ['BISON'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Micro Acc.</th> <th>Macro Acc.</th> <th>B-1</th> <th>B-4</th> </tr> </thead> <tbody> <tr> <td>Model || E&D</td> <td>31.9</td> <td>38.9</td> <td>17.1</td> <td>1.9</... | Table 1 | table_1 | D19-1001 | 5 | emnlp2019 | We submitted the best BISON model out of the random three of Table 1 to be evaluated on the hidden test set and report results in comparison to the best model on the leaderboard, E3 (Zhong and Zettlemoyer, 2019) in Table 2. BISON outperforms E3 by 5.6 BLEU-4 points, while it is only slightly worse than E3 in terms of ... | [1, 1] | ['We submitted the best BISON model out of the random three of Table 1 to be evaluated on the hidden test set and report results in comparison to the best model on the leaderboard, E3 (Zhong and Zettlemoyer, 2019) in Table 2.', 'BISON outperforms E3 by 5.6 BLEU-4 points, while it is only slightly worse than E3 in term... | [['BISON'], ['BISON']] | 1 |
D19-1005table_1 | Comparison of masked LM perplexity, Wikidata probing MRR, and number of parameters (in millions) in the masked LM (word piece embeddings, transformer layers, and output layers), KAR, and entity embeddings for BERT and KnowBert. The table also includes the total time to run one forward and backward pass (in seconds) on ... | 2 | [['System', 'BERTBASE'], ['System', 'BERTLARGE'], ['System', 'KnowBert-Wiki'], ['System', 'KnowBert-WordNet'], ['System', 'KnowBert-W+W']] | 2 | [['PPL', '-'], ['MRR', 'Wikidata'], ['# params', 'masked LM'], ['# params', 'KAR'], ['# params', 'entity embed.'], ['time', 'Fwd. / Bwd.']] | [['5.5', '0.09', '110', '0', '0', '0.25'], ['4.5', '0.11', '336', '0', '0', '0.75'], ['4.3', '0.26', '110', '2.4', '141', '0.27'], ['4.1', '0.22', '110', '4.9', '265', '0.31'], ['3.5', '0.31', '110', '7.3', '406', '0.33']] | column | ['PPL', 'MRR', '# params', '# params', '# params', 'time'] | ['KnowBert-Wiki', 'KnowBert-WordNet', 'KnowBert-W+W', 'PPL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL || -</th> <th>MRR || Wikidata</th> <th># params || masked LM</th> <th># params || KAR</th> <th># params || entity embed.</th> <th>time || Fwd. / Bwd.</th> </tr> </thead> <tbody> ... | Table 1 | table_1 | D19-1005 | 6 | emnlp2019 | Perplexity. Table 1 compares masked LM perplexity for KnowBert with BERTBASE and BERTLARGE. To rule out minor differences due to our data preparation, the BERT models are finetuned on our training data before being evaluated. Overall, KnowBert improves the masked LM perplexity, with all KnowBert models outperforming BE... | [2, 1, 2, 1] | ['Perplexity.', 'Table 1 compares masked LM perplexity for KnowBert with BERTBASE and BERTLARGE.', 'To rule out minor differences due to our data preparation, the BERT models are finetuned on our training data before being evaluated.', 'Overall, KnowBert improves the masked LM perplexity, with all KnowBert models outpe... 
| [None, ['PPL', 'KnowBert-Wiki', 'KnowBert-WordNet', 'KnowBert-W+W', 'BERTBASE', 'BERTLARGE'], None, ['KnowBert-Wiki', 'KnowBert-WordNet', 'KnowBert-W+W', 'masked LM', 'BERTLARGE', 'BERTBASE']] | 1 |
D19-1009table_4 | WSDGα results on the SICK dataset. | 1 | [['sense'], ['word']] | 1 | [['Pearson'], ['Spearman'], ['MSE']] | [['46.5', '43.9', '7.9'], ['39.8', '39.9', '8.6']] | column | ['Pearson', 'Spearman', 'MSE'] | ['sense'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pearson</th> <th>Spearman</th> <th>MSE</th> </tr> </thead> <tbody> <tr> <td>sense</td> <td>46.5</td> <td>43.9</td> <td>7.9</td> </tr> <tr> <td>word</td> <td>39.8... | Table 4 | table_4 | D19-1009 | 10 | emnlp2019 | Sentence similarity . We used the SICK dataset (Marelli et al., 2014) for this task. It consists of 9841 sentence pairs that had been annotated with relatedness scores on a 5-point rating scale. We used the test split of this dataset that contains 4906 sentence pairs. The aim of this experiment was to test if disambigu... | [2, 2, 2, 2, 2, 2, 2, 1, 1, 1] | ['Sentence similarity .', 'We used the SICK dataset (Marelli et al., 2014) for this task.', 'It consists of 9841 sentence pairs that had been annotated with relatedness scores on a 5-point rating scale.', 'We used the test split of this dataset that contains 4906 sentence pairs.', 'The aim of this experiment was to tes... | [None, None, None, None, None, None, None, ['Pearson', 'Spearman', 'MSE'], None, ['sense', 'word']] | 1 |
D19-1010table_3 | Performance of different dialog agents on the multi-domain dialog corpus by interacting with the agenda-based user simulator. All the results except “dialog turns” are shown in percentage terms. Real human-human performance computed from the test set (i.e. the last row) serves as the upper bounds. | 2 | [['Method', 'GP-MBCM'], ['Method', 'ACER'], ['Method', 'PPO'], ['Method', 'ALDM'], ['Method', 'GDPL-sess'], ['Method', 'GDPL-discr'], ['Method', 'GDPL'], ['Method', 'Human']] | 2 | [['Agenda', 'Turns'], ['Agenda', 'Inform'], ['Agenda', 'Match'], ['Agenda', 'Success']] | [['2.99', '19.04', '44.29', '28.9'], ['10.49', '77.98', '62.83', '50.8'], ['9.83', '83.34', '69.09', '59.1'], ['12.47', '81.20', '62.60', '61.2'], ['7.49', '88.39', '77.56', '76.4'], ['7.86', '93.21', '80.43', '80.5'], ['7.64', '94.97', '83.90', '86.5'], ['7.37', '66.89', '95.29', '75.0']] | column | ['F1', 'F1', 'F1', 'F1'] | ['GDPL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Agenda || Turns</th> <th>Agenda || Inform</th> <th>Agenda || Match</th> <th>Agenda || Success</th> </tr> </thead> <tbody> <tr> <td>Method || GP-MBCM</td> <td>2.99</td> <td>19.04... | Table 3 | table_3 | D19-1010 | 6 | emnlp2019 | The performance of each approach that interacts with the agenda-based user simulator is shown in Table 3. GDPL achieves extremely high performance in the task success on account of the substantial improvement in inform F1 and match rate over the baselines. Since the reward estimator of GDPL evaluates state-action pairs... | [1, 1, 2, 1, 1, 1, 2] | ['The performance of each approach that interacts with the agenda-based user simulator is shown in Table 3.', 'GDPL achieves extremely high performance in the task success on account of the substantial improvement in inform F1 and match rate over the baselines.', ' Since the reward estimator of GDPL evaluates state-acti... 
| [None, ['GDPL', 'Success'], ['GDPL'], ['GDPL', 'Human', 'Match'], ['Human'], ['Human'], None] | 1 |
D19-1020table_3 | Main results on PGR testset. † denotes previous numbers rounded into 3 significant digits. * and ** indicate significance over DEPTREE at p < 0.05 and p < 0.01 with 1000 bootstrap tests. | 2 | [['Model', 'BO-LSTM (Lamurias et al., 2019)†'], ['Model', 'BioBERT (Lee et al., 2019)†'], ['Model', 'TEXTONLY'], ['Model', 'DEPTREE'], ['Model', 'KBESTEISNERPS'], ['Model', 'EDGEWISEPS']] | 1 | [['F1 score']] | [['52.3'], ['67.2'], ['76.0'], ['78.9'], ['83.6*'], ['85.7**']] | column | ['F1 Score'] | ['EDGEWISEPS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 score</th> </tr> </thead> <tbody> <tr> <td>Model || BO-LSTM (Lamurias et al., 2019)†</td> <td>52.3</td> </tr> <tr> <td>Model || BioBERT (Lee et al., 2019)†</td> <td>67.2</td> ... | Table 3 | table_3 | D19-1020 | 8 | emnlp2019 | 7.8 Main results on PGR . Table 3 shows the comparison with previous work on the PGR testset, where our models are significantly better than the existing models. This is likely because the previous models do not utilize all the information from inputs: BO-LSTM only takes the words (without arc labels) along the shortes... | [2, 1, 2] | ['7.8 Main results on PGR .', 'Table 3 shows the comparison with previous work on the PGR testset, where our models are significantly better than the existing models.', 'This is likely because the previous models do not utilize all the information from inputs: BO-LSTM only takes the words (without arc labels) along the... | [None, ['EDGEWISEPS'], ['BO-LSTM (Lamurias et al., 2019)†', 'BioBERT (Lee et al., 2019)†']] | 1 |
D19-1021table_1 | Precision, recall and F1 results (%) for different models. The first two models are baselines. The next five models are different variants of our model. | 2 | [['Approach', 'VAE'], ['Approach', 'RW-HAC'], ['Approach', 'SN-HAC'], ['Approach', 'SN-L'], ['Approach', 'SN-L+V'], ['Approach', 'SN-L+C'], ['Approach', 'SN-L+CV1']] | 1 | [['P'], ['R'], ['F1'], ['P'], ['R'], ['F1']] | [['17.9', '69.7', '28.5', '17.9', '69.7', '28.5'], ['31.8', '46', '37.6', '31.8', '46.0', '37.6'], ['36.2', '53.3', '43.1', '34.5', '53.3', '41.5'], ['36.5', '69.2', '47.8', '34.6', '59.8', '43.9'], ['46.1', '77.3', '57.8', '40.7', '52.4', '45.8'], ['47.1', '78.1', '58.8', '42.3', '66.0', '51.5'], ['48.9', '77.5', '59.... | column | ['P', 'R', 'F1', 'P', 'R', 'F1'] | ['SN-HAC', 'SN-L', 'SN-L+V', 'SN-L+C', 'SN-L+CV1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Approach || VAE</td> <td>17.9</td> <td>69.7</td> <td>28.5</td> ... | Table 1 | table_1 | D19-1021 | 7 | emnlp2019 | Experimental Result Analysis. Table 1 shows the experimental results, from which we can observe that:. (1) RSN models outperform all baseline models on precision, recall, and F1-score, among which Weakly-supervised RSN (SN-L+CV) achieves state-of-the-art performances. This indicates that RSN is capable of understanding... | [2, 1, 1, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2] | ['Experimental Result Analysis.', 'Table 1 shows the experimental results, from which we can observe that:.', '(1) RSN models outperform all baseline models on precision, recall, and F1-score, among which Weakly-supervised RSN (SN-L+CV) achieves state-of-the-art performances.', 'This indicates that RSN is capable of un... 
| [None, None, ['SN-HAC', 'SN-L', 'SN-L+V', 'SN-L+C', 'SN-L+CV1', 'P', 'R', 'F1'], None, None, ['RW-HAC', 'SN-HAC'], None, ['RW-HAC'], None, ['SN-HAC', 'SN-L'], None, ['SN-HAC'], ['SN-L']] | 1 |
D19-1022table_1 | Micro-averaged precision (P), recall (R) and F1 score on TACRED dataset. †, ‡ and †† mark the results reported in (Zhang et al., 2017), (Zhang et al., 2018) and (Bilan and Roth, 2018) respectively. ∗ marks statistically significant improvements over Self-attn with p < 0.01 under one-tailed t-test. | 2 | [['Model', 'CNN'], ['Model', 'CNN-PE'], ['Model', 'GCN'], ['Model', 'LSTM'], ['Model', 'PA-LSTM'], ['Model', 'C-GCN'], ['Model', 'Self-attn'], ['Model', 'Knwl-attn'], ['Model', 'Knwl+Self (MCA)'], ['Model', 'Knwl+Self (SI)'], ['Model', 'Knwl+Self (KISA)']] | 1 | [['P'], ['R'], ['F1']] | [['72.1', '50.3', '59.2'], ['68.2', '55.4', '61.1'], ['69.8', '59', '64'], ['61.4', '61.7', '61.5'], ['65.7', '64.5', '65.1'], ['69.9', '63.3', '66.4'], ['64.6', '68.6', '66.5'], ['70', '63.1', '66.4'], ['68.4', '66.1', '67.3*'], ['67.1', '68.4', '67.8*'], ['69.4', '66', '67.7*']] | column | ['P', 'R', 'F1'] | ['Knwl-attn'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || CNN</td> <td>72.1</td> <td>50.3</td> <td>59.2</td> </tr> <tr> <td>Model || CNN-PE</td> <td... | Table 1 | table_1 | D19-1022 | 7 | emnlp2019 | 5.3 Results and Analysis. 5.3.1 Results on TACRED dataset . Table 1 shows the results of baseline as well as our proposed models on TACRED dataset. It is observed that our proposed knowledge-attention encoder outperforms all CNN-based and RNN-based models by at least 1.3 F1. Meanwhile, it achieves comparable results wi... | [2, 2, 1, 1, 1] | ['5.3 Results and Analysis.', ' 5.3.1 Results on TACRED dataset .', 'Table 1 shows the results of baseline as well as our proposed models on TACRED dataset.', 'It is observed that our proposed knowledge-attention encoder outperforms all CNN-based and RNN-based models by at least 1.3 F1.', 'Meanwhile, it achieves compara... | [None, None, None, ['Knwl-attn', 'CNN', 'LSTM', 'F1'], ['C-GCN', 'Self-attn']] | 1 |
D19-1025table_2 | Performance (%) on low-resource languages. | 1 | [['CNN-CRFs'], ['BiLSTM-CRFs'], ['Trans-CRFs'], ['BiLSTM-PCRFs'], ['Ours']] | 2 | [['CY', 'P'], ['CY', 'R'], ['CY', 'F1'], ['BN', 'P'], ['BN', 'R'], ['BN', 'F1'], ['YO', 'P'], ['YO', 'R'], ['YO', 'F1'], ['MN', 'P'], ['MN', 'R'], ['MN', 'F1'], ['ARZ', 'P'], ['ARZ', 'R'], ['ARZ', 'F1']] | [['84.4', '76.2', '80.1', '92', '89.1', '90.5', '80.9', '68.9', '74.4', '87.3', '85.5', '86.3', '88.6', '86.7', '87.6'], ['86', '77.8', '81.6', '93.3', '91.5', '92.3', '74.1', '68.9', '71.3', '89', '85.5', '87.1', '89.5', '88.5', '89'], ['83.7', '73.2', '78.1', '93', '85.9', '89.3', '80.2', '60.5', '69', '88', '80', '8... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CY || P</th> <th>CY || R</th> <th>CY || F1</th> <th>BN || P</th> <th>BN || R</th> <th>BN || F1</th> <th>YO || P</th> <th>YO || R</th> <th>YO || F1</th> <th>MN || P</th> ... | Table 2 | table_2 | D19-1025 | 7 | emnlp2019 | 6.2 Results on Low-Resource Languages . Table 2 shows the overall performance of our proposed model as well as the baseline methods (P and R denote Precision and Recall). We can see:. Our method consistently outperforms all baselines in five languages w.r.t F1, mainly because we greatly improve recall (2.7% to 9.34% on... | [2, 1, 2, 1, 1, 2, 2, 2] | ['6.2 Results on Low-Resource Languages .', 'Table 2 shows the overall performance of our proposed model as well as the baseline methods (P and R denote Precision and Recall).', 'We can see:.', 'Our method consistently outperforms all baselines in five languages w.r.t F1, mainly because we greatly improve recall (2.7% ... | [None, ['Ours', 'P', 'R'], None, ['Ours', 'F1'], ['BiLSTM-PCRFs', 'CNN-CRFs', 'BiLSTM-CRFs', 'Trans-CRFs'], ['CY'], None, None] | 1 |
D19-1026table_3 | Performance Comparison on Cross-domain Datasets using F1 score (%). The best results are in bold. Note that our own results all retain two decimal places. Other results with uncertain amount of decimal places are directly retrieved from their original paper. | 2 | [['System', 'AIDA (Hoffart et al., 2011)'], ['System', 'GLOW (Ratinov et al., 2011)'], ['System', 'RI (Cheng and Roth, 2013)'], ['System', 'WNED (Guo and Barbosa, 2016)'], ['System', 'Deep-ED (Ganea and Hofmann, 2017)'], ['System', 'Ment-Norm (Le and Titov, 2018)'], ['System', 'Prior (p(ejm)) (Ganea and Hofmann, 2017)'... | 1 | [['MSBNC'], ['AQUAINT'], ['ACE2004'], ['CWEB'], ['WIKI']] | [['79', '56', '80', '58.6', '63'], ['75', '83', '82', '56.2', '67.2'], ['90', '90', '86', '67.5', '73.4'], ['92', '87', '88', '77', '84.5'], ['93.7', '88.5', '88.5', '77.9', '77.5'], ['93.9', '88.3', '89.9', '77.5', '78.0'], ['89.3', '83.2', '84.4', '69.8', '64.2'], ['89.05', '80.55', '87.32', '67.97', '60.27'], ['93.38 ±... | column | ['F1', 'F1', 'F1', 'F1', 'F1'] | ['Berkeley-CNN + DCA-SL', 'Berkeley-CNN + DCA-RL', 'ETHZ-Attn + DCA-SL', 'ETHZ-Attn + DCA-RL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MSBNC</th> <th>AQUAINT</th> <th>ACE2004</th> <th>CWEB</th> <th>WIKI</th> </tr> </thead> <tbody> <tr> <td>System || AIDA (Hoffart et al., 2011)</td> <td>79</td> <td>56</td> ... | Table 3 | table_3 | D19-1026 | 6 | emnlp2019 | Table 3 shows the results on the five cross-domain datasets. As shown, none of existing methods can consistently win on all datasets. DCA-based models achieve state-of-the-art performance on the MSBNC and the ACE2004 dataset. On remaining datasets, DCA-RL achieves comparable performance with other complex global models....
| [1, 1, 1, 1, 1, 2, 2] | ['Table 3 shows the results on the five cross-domain datasets.', 'As shown, none of existing methods can consistently win on all datasets.', 'DCA-based models achieve state-of-the-art performance on the MSBNC and the ACE2004 dataset.', 'On remaining datasets, DCA-RL achieves comparable performance with other complex glo... | [None, None, ['Berkeley-CNN + DCA-SL', 'Berkeley-CNN + DCA-RL', 'ETHZ-Attn + DCA-SL', 'ETHZ-Attn + DCA-RL'], ['Berkeley-CNN + DCA-RL', 'ETHZ-Attn + DCA-RL'], ['Berkeley-CNN + DCA-RL', 'Berkeley-CNN + DCA-SL'], ['Berkeley-CNN + DCA-SL', 'Berkeley-CNN + DCA-RL', 'ETHZ-Attn + DCA-SL', 'ETHZ-Attn + DCA-RL'], None] | 1 |
D19-1026table_4 | Ablation Study on Neighbor Entities. We compare the performance of DCA with or without neighbor entities (i.e., 2-hop vs. 1-hop). | 2 | [['System', 'ETHZ-Attn (Section 2.2)'], ['System', 'ETHZ-Attn + 1-hop DCA'], ['System', 'ETHZ-Attn + 2-hop DCA']] | 2 | [['In-KB acc. (%)', 'SL'], ['In-KB acc. (%)', 'RL']] | [['90.88', '-'], ['93.69', '93.20'], ['94.47', '93.76']] | column | ['In-KB acc. (%)', 'In-KB acc. (%)'] | ['ETHZ-Attn + 1-hop DCA', 'ETHZ-Attn + 2-hop DCA'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>In-KB acc. (%) || SL</th> <th>In-KB acc. (%) || RL</th> </tr> </thead> <tbody> <tr> <td>System || ETHZ-Attn (Section 2.2)</td> <td>90.88</td> <td>-</td> </tr> <tr> <td>System ... | Table 4 | table_4 | D19-1026 | 6 | emnlp2019 | 2. Effect of neighbor entities. In contrast to traditional global models, we include both previously linked entities and their close neighbors for global signal. Table 4 shows the effectiveness of this strategy. We observe that incorporating these neighbors significantly improves the performance (compared to 1-hop) by in... | [2, 2, 1, 1, 1, 2] | ['2. Effect of neighbor entities.', 'In contrast to traditional global models, we include both previously linked entities and their close neighbors for global signal.', 'Table 4 shows the effectiveness of this strategy.', 'We observe that incorporating these neighbors significantly improves the performance (compared to 1... | [None, None, None, ['ETHZ-Attn + 1-hop DCA'], ['ETHZ-Attn + 1-hop DCA', 'ETHZ-Attn + 2-hop DCA'], None] | 1 |
D19-1028table_2 | Overall results for entity set expansion on Google Web 1T, where Ours full is the full version of our method, Ours-MCTS is our method with the MCTS disabled, and Ours-PMSN is our method but replacing the PMSN with fixed word embeddings. * indicates COB using the human feedback for seed entity selection. | 2 | [['Method', 'POS'], ['Method', 'MEB'], ['Method', 'COB*'], ['Method', 'Ours full'], ['Method', 'Ours-MCTS'], ['Method', 'Ours-PMSN']] | 1 | [['P@10'], ['P@20'], ['P@50'], ['P@100'], ['P@200'], ['MAP']] | [['0.84', '0.74', '0.55', '0.41', '0.34', '0.42'], ['0.83', '0.79', '0.68', '0.58', '0.51', '-'], ['0.97', '0.96', '0.9', '0.79', '0.66', '0.85'], ['0.97', '0.96', '0.92', '0.82', '0.69', '0.87'], ['0.85', '0.81', '0.73', '0.63', '0.52', '0.75'], ['0.63', '0.6', '0.56', '0.48', '0.42', '0.61']] | column | ['P@10', 'P@20', 'P@50', 'P@100', 'P@200', 'MAP'] | ['Ours full'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P@10</th> <th>P@20</th> <th>P@50</th> <th>P@100</th> <th>P@200</th> <th>MAP</th> </tr> </thead> <tbody> <tr> <td>Method || POS</td> <td>0.84</td> <td>0.74</td> <t... | Table 2 | table_2 | D19-1028 | 7 | emnlp2019 | 5.2 Experimental Results . Comparison with three baseline methods on Google Web 1T. Table 2 shows the performance of different bootstrapping methods on Google Web 1T. We can see that our full model outperforms three baseline methods: comparing with POS, our method achieves 41% improvement in P@100, 35% improvement in P... | [2, 2, 1, 1, 1] | ['5.2 Experimental Results .', 'Comparison with three baseline methods on Google Web 1T.', 'Table 2 shows the performance of different bootstrapping methods on Google Web 1T.', 'We can see that our full model outperforms three baseline methods: comparing with POS, our method achieves 41% improvement in P@100, 35% impro... | [None, None, None, ['Ours full', 'MAP', 'MEB', 'COB*', 'P@100', 'P@200'], ['Ours full']] | 1 |
D19-1030table_7 | Event Argument Role Labeling results (F1 %) on Chinese and Arabic using English as training data (with system generated entity mentions) | 2 | [['Target Language', 'Chinese'], ['Target Language', 'Arabic']] | 1 | [['F1 Score']] | [['56.9'], ['60.1']] | column | ['F1 Score'] | ['Chinese', 'Arabic'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 Score</th> </tr> </thead> <tbody> <tr> <td>Target Language || Chinese</td> <td>56.9</td> </tr> <tr> <td>Target Language || Arabic</td> <td>60.1</td> </tr> </tbody></table> | Table 7 | table_7 | D19-1030 | 6 | emnlp2019 | Table 7 shows the results of event argument role labeling on Chinese and Arabic entity mentions automatically extracted by Stanford CoreNLP instead of manually annotated mentions. The system extracted entity mentions introduce noise and thus decrease the performance of the model, but the overall results are still promi... | [1, 1] | ['Table 7 shows the results of event argument role labeling on Chinese and Arabic entity mentions automatically extracted by Stanford CoreNLP instead of manually annotated mentions.', 'The system extracted entity mentions introduce noise and thus decrease the performance of the model, but the overall results are still ... | [['Chinese', 'Arabic'], None] | 1 |
D19-1034table_4 | Our results on five categories compared to Ju et al. (2018) and Sohrab and Miwa (2018) on GENIA test set. | 2 | [['Category', 'DNA'], ['Category', 'RNA'], ['Category', 'protein'], ['Category', 'cell line'], ['Category', 'cell type'], ['Category', 'overall']] | 2 | [['Ours', 'P (%)'], ['Ours', 'R (%)'], ['Ours', 'F (%)'], ['Ju', 'F (%)'], ['Soh', 'F (%)']] | [['73.6', '67.8', '70.6', '70.1', '67.8'], ['82.2', '80.7', '81.5', '80.8', '75.9'], ['76.7', '76', '76.4', '72.7', '72.9'], ['77.8', '65.8', '71.3', '66.9', '63.6'], ['73.9', '71.2', '72.5', '71.3', '69.8'], ['75.8', '73.6', '74.7', '71.1', '70.7']] | column | ['P (%)', 'R (%)', 'F (%)', 'F (%)', 'F (%)'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ours || P (%)</th> <th>Ours || R (%)</th> <th>Ours || F (%)</th> <th>Ju || F (%)</th> <th>Soh || F (%)</th> </tr> </thead> <tbody> <tr> <td>Category || DNA</td> <td>73.6</td> ... | Table 4 | table_4 | D19-1034 | 7 | emnlp2019 | Table 4 describes the performances of our model on the five categories on the test dataset. Our model outperforms the model described in Ju et al. (2018) and Sohrab and Miwa (2018) with F-score value on all categories. | [1, 1] | ['Table 4 describes the performances of our model on the five categories on the test dataset.', 'Our model outperforms the model described in Ju et al. (2018) and Sohrab and Miwa (2018) with F-score value on all categories.'] | [None, ['Ours', 'Ju', 'Soh', 'F (%)']] | 1 |
D19-1034table_5 | Performance of Boundary Detection on GENIA test set. | 2 | [['Model', 'Sohrab and Miwa (2018)'], ['Model', 'Ju et al. (2018)'], ['Model', 'Our model(softmax)']] | 2 | [['Boundary Detection', 'P (%)'], ['Boundary Detection', 'R (%)'], ['Boundary Detection', 'F (%)']] | [['76.6', '69.2', '72.7'], ['79.9', '67.08', '73.4'], ['79.7', '76.9', '78.3']] | column | ['P (%)', 'R (%)', 'F (%)'] | ['Our model(softmax)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Boundary Detection || P (%)</th> <th>Boundary Detection || R (%)</th> <th>Boundary Detection || F (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Sohrab and Miwa (2018)</td> <td>76.6</t... | Table 5 | table_5 | D19-1034 | 7 | emnlp2019 | 5.2 Performance of Boundary Detection . We conduct experiments on boundary detection to illustrate that our model extract entity boundaries more precisely comparing to Sohrab and Miwa (2018) and Ju et al. (2018). Table 5 shows the results of boundary detection on GENIA test dataset. Our model locates entities more accu... | [2, 2, 1, 1, 1, 2, 2] | ['5.2 Performance of Boundary Detection .', 'We conduct experiments on boundary detection to illustrate that our model extract entity boundaries more precisely comparing to Sohrab and Miwa (2018) and Ju et al. (2018).', 'Table 5 shows the results of boundary detection on GENIA test dataset.', 'Our model locates entitie... | [None, None, ['Boundary Detection'], ['Our model(softmax)', 'Ju et al. (2018)', 'Sohrab and Miwa (2018)'], ['Our model(softmax)'], None, None] | 1 |
D19-1034table_7 | Performance Comparison of our pipeline model and multitask model on GENIA development set and test set. | 2 | [['Model', 'Pipeline'], ['Model', 'Multitask']] | 2 | [['Development Set', 'P (%)'], ['Development Set', 'R (%)'], ['Development Set', 'F (%)'], ['Test Set', 'P (%)'], ['Test Set', 'R (%)'], ['Test Set', 'F (%)']] | [['74.5', '74.8', '74.6', '75.4', '72.2', '73.8'], ['74.5', '75.6', '75', '75.9', '73.4', '74.7']] | column | ['P (%)', 'R (%)', 'F (%)', 'P (%)', 'R (%)', 'F (%)'] | ['Multitask'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Development Set || P (%)</th> <th>Development Set || R (%)</th> <th>Development Set || F (%)</th> <th>Test Set || P (%)</th> <th>Test Set || R (%)</th> <th>Test Set || F (%)</th> </tr> <... | Table 7 | table_7 | D19-1034 | 8 | emnlp2019 | 5.4 Performance of Multitask Learning . Table 7 shows the performance of our pipeline model and multitask model on GENIA development set and test set. For pipeline model, we train the boundary detection module and entity categorical label prediction module separately. Our multitask model has a higher F value both in de... | [2, 1, 2, 1] | ['5.4 Performance of Multitask Learning .', 'Table 7 shows the performance of our pipeline model and multitask model on GENIA development set and test set.', 'For pipeline model, we train the boundary detection module and entity categorical label prediction module separately.', 'Our multitask model has a higher F value... | [None, ['Pipeline', 'Multitask', 'Development Set', 'Test Set'], ['Pipeline'], ['Multitask', 'F (%)', 'Development Set', 'Test Set']] | 1 |
D19-1037table_3 | P@N results for models with internal CNNs self-attention and curriculum learning | 3 | [['P@N (%)', 'CNN-based Models', 'CNN+ONE'], ['P@N (%)', 'CNN-based Models', 'ResCNN-9'], ['P@N (%)', 'CNN-based Models', 'CNN+ONE+SelfAtt'], ['P@N (%)', 'CNN-based Models', 'CNN+ATT'], ['P@N (%)', 'CNN-based Models', 'CNN+ATT+SelfAtt'], ['P@N (%)', 'PCNN-based Models', 'PCNN+ONE'], ['P@N (%)', 'PCNN-based Models', 'PC... | 1 | [['100'], ['200'], ['300'], ['Mean']] | [['67.3', '64.7', '58.1', '63.4'], ['79', '69', '61', '69.7'], ['81.1', '75.1', '70.4', '75.5'], ['76.2', '68.6', '59.8', '68.2'], ['81.1', '74.1', '72.4', '75.9'], ['72.3', '69.7', '64.1', '68.7'], ['84.1', '75.1', '69.1', '76.1'], ['85.1', '78.6', '74.4', '79.4'], ['76.2', '73.1', '67.4', '72.2'], ['81.1', '71.6', '7... | row | ['P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)', 'P@N (%)'] | ['[NetMax+SelfAtt]+CCL-CT', '[NetAtt+SelfAtt]+CCL-CT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>100</th> <th>200</th> <th>300</th> <th>Mean</th> </tr> </thead> <tbody> <tr> <td>P@N (%) || CNN-based Models || CNN+ONE</td> <td>67.3</td> <td>64.7</td> <td>58.1</td> ... | Table 3 | table_3 | D19-1037 | 8 | emnlp2019 | From Figures 5(a) and 5(b), we can see that the CCL based models have further improvements in terms of PR-curves compared with PCNN+ATT/ONE+SelfAtt. The P@N results in Table 3 indicate that CCL further improves the model's performance when compared to PCNN+ATT/ONE+SelfAtt as well. | [0, 1] | ['From Figures 5(a) and 5(b), we can see that the CCL based models have further improvements in terms of PR-curves compared with PCNN+ATT/ONE+SelfAtt.', "The P@N results in Table 3 indicate that CCL further improves the model's performance when compared to PCNN+ATT/ONE+SelfAtt as well."] | [None, ['PCNN+ATT', 'PCNN+ONE+SelfAtt', '[NetMax+SelfAtt]+CCL-CT', '[NetAtt+SelfAtt]+CCL-CT']] | 1 |
D19-1040table_3 | Performances of entity representations on EntEval tasks. Best performing model in each task is boldfaced. CAP: coreference arc prediction, CERP: contexualized entity relationship prediction, EFP: entity factuality prediction, ET: entity typing, ESR: entity similarity and relatedness, ERT: entity relationship typing, NE... | 1 | [['GloVe'], ['BERT Base'], ['BERT Large'], ['ELMo'], ['EntELMo baseline'], ['EntELMo'], ['EntELMo w/o lctx'], ['EntELMo w/ letn']] | 1 | [['CAP'], ['CERP'], ['EFP'], ['ET'], ['ESR'], ['ERT'], ['NED'], ['Average']] | [['71.9', '52.6', '67', '10.3', '50.9', '40.8', '41.2', '47.8'], ['80.6', '65.6', '74.8', '32', '28.8', '42.2', '50.6', '53.5'], ['79.1', '66.9', '76.7', '32.3', '32.6', '48.8', '54.3', '55.8'], ['80.2', '61.2', '75.8', '35.6', '60.3', '46.8', '51.6', '58.8'], ['78', '59.6', '71.5', '31.3', '61.6', '46.5', '48.5', '56.... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['BERT Base', 'BERT Large', 'ELMo'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CAP</th> <th>CERP</th> <th>EFP</th> <th>ET</th> <th>ESR</th> <th>ERT</th> <th>NED</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>GloVe</td> <td>71.9</td> ... | Table 3 | table_3 | D19-1040 | 8 | emnlp2019 | 5.2 Results . Table 3 shows the performance of our models on the EntEval tasks. Our findings are detailed below: . Pretrained CWRs (ELMo, BERT) perform the best on EntEval overall, indicating that they capture knowledge about entities in contextual mentions or as entity descriptions. BERT performs poorly on entity simi... 
| [2, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 1, 2] | ['5.2 Results .', 'Table 3 shows the performance of our models on the EntEval tasks.', 'Our findings are detailed below: .', 'Pretrained CWRs (ELMo, BERT) perform the best on EntEval overall, indicating that they capture knowledge about entities in contextual mentions or as entity descriptions.', 'BERT performs poorly ... | [None, None, None, ['ELMo', 'BERT Base', 'BERT Large'], ['BERT Base', 'BERT Large'], None, ['BERT Large', 'BERT Base', 'ERT', 'NED'], ['ERT'], ['NED'], ['BERT Large', 'BERT Base'], ['CERP', 'EFP', 'ET', 'NED', 'EntELMo', 'EntELMo baseline'], ['NED', 'ESR'], None] | 1 |
D19-1041table_4 | Model performance breakdown for TB-Dense. “-” indicates no predictions were made for that particular label, probably due to the small size of the training sample. BEFORE (B), AFTER (A), INCLUDES (I), IS INCLUDED (II), SIMULTANEOUS (S), VAGUE (V) | 1 | [['B'], ['A'], ['I'], ['II'], ['S'], ['V'], ['Avg']] | 2 | [['CAEVO', 'P'], ['CAEVO', 'R'], ['CAEVO', 'F1'], ['Pipeline Joint', 'P'], ['Pipeline Joint', 'R'], ['Pipeline Joint', 'F1'], ['Structure Joint', 'P'], ['Structure Joint', 'R'], ['Structure Joint', 'F1']] | [['41.4', '19.5', '26.5', '59', '46.9', '52.3', '59.8', '46.9', '52.6'], ['42.1', '17.5', '24.7', '69.3', '45.3', '54.8', '71.9', '46.7', '56.6'], ['50', '3.6', '6.7', '-', '-', '-', '-', '-', '-'], ['38.5', '9.4', '15.2', '-', '-', '-', '-', '-', '-'], ['14.3', '4.5', '6.9', '-', '-', '-', '-', '-', '-'], ['44.9', '59... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['B', 'A', 'I', 'II', 'S', 'V'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CAEVO || P</th> <th>CAEVO || R</th> <th>CAEVO || F1</th> <th>Pipeline Joint || P</th> <th>Pipeline Joint || R</th> <th>Pipeline Joint || F1</th> <th>Structure Joint || P</th> <th>S... | Table 4 | table_4 | D19-1041 | 8 | emnlp2019 | In Table 4 we further show the breakdown performances for each positive relation on TB-Dense. The breakdown on MATRES is shown in Table 10 in the appendix. BEFORE, AFTER and VAGUE are the three dominant label classes in TB-Dense. We observe that the linguistic rule-based model, CAEVO, tends to have a more evenly spread... | [1, 2, 1, 1] | ['In Table 4 we further show the breakdown performances for each positive relation on TB-Dense.', 'The breakdown on MATRES is shown in Table 10 in the appendix.', 'BEFORE, AFTER and VAGUE are the three dominant label classes in TB-Dense.', 'We observe that the linguistic rule-based model, CAEVO, tends to have a more ev... | [None, None, ['B', 'A', 'V'], ['CAEVO']] | 1 |
D19-1043table_1 | Experimental results of our model compared with other models. Performance is measured in accuracy (%). Models are divided into 3 categories. The first part is baseline methods including SVM and Naive Bayes and their variations. The second part contains models about recurrent neural networks. The third part contains mod... | 2 | [['Method', 'SVM [Socher et al. 2013]'], ['Method', 'NB [Socher et al. 2013]'], ['Method', 'NBSVM-bi [Wang and Manning 2012b]'], ['Method', 'Standard-LSTM'], ['Method', 'bi-LSTM'], ['Method', 'RCNN [Lai et al. 2015]'], ['Method', 'SNN [Zhao et al. 2018]'], ['Method', 'CNN-non-static [Kim 2014]'], ['Method', 'VD-CNN [Sc... | 1 | [['SST-2'], ['SST-5'], ['MR'], ['Subj'], ['TREC'], ['AG news']] | [['79.4', '40.7', '-', '-', '-', '-'], ['81.8', '41', '-', '-', '-', '-'], ['-', '-', '79.4', '93.2', '-', '-'], ['80.6', '45.3', '75.9', '89.3', '86.8', '86.1'], ['83.2', '46.7', '79.3', '90.5', '89.6', '88.2'], ['-', '47.21', '-', '-', '-', ''], ['-', '50.4', '82.1', '93.9', '96', '-'], ['87.2', '48', '81.5', '93.4',... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['HCapsNet'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-2</th> <th>SST-5</th> <th>MR</th> <th>Subj</th> <th>TREC</th> <th>AG news</th> </tr> </thead> <tbody> <tr> <td>Method || SVM [Socher et al. 2013]</td> <td>79.4</td> ... | Table 1 | table_1 | D19-1043 | 6 | emnlp2019 | 4.3 Results and Discussions . Table 1 reports the results of our model on different datasets comparing with the widely used text classification methods and state-of-the-art approaches. We can have the following observations. Our HCapsNet achieves the best results on 5 out of 6 datasets, which verifies the effectiveness... 
| [2, 1, 1, 1, 1] | ['4.3 Results and Discussions .', 'Table 1 reports the results of our model on different datasets comparing with the widely used text classification methods and state-of-the-art approaches.', 'We can have the following observations.', 'Our HCapsNet achieves the best results on 5 out of 6 datasets, which verifies the ef... | [None, ['HCapsNet'], None, ['HCapsNet'], ['HCapsNet', 'Capsule-B [Yang et al. 2018]']] | 1 |
D19-1056table_5 | Cross-lingual (XL) system results using BLEU score on individual languages inside the Dev set. We compute BLEU on labeled sequences (F-Seq), and separately for words and only labels. We also show scores when pre-filtering on F-Seq with BLEU ≥ 10. | 2 | [['Model [Filter]', 'XL-GloVe [All]'], ['Model [Filter]', 'XL-BERT [All]'], ['Model [Filter]', 'XL-GloVe [greater than equal 10]'], ['Model [Filter]', 'XL-BERT [greater than equal 10]']] | 2 | [['German', 'F-Seq'], ['German', 'Word'], ['German', 'Label'], ['French', 'F-Seq'], ['French', 'Word'], ['French', 'Label']] | [['18.86', '17.17', '25.52', '28.99', '17.36', '32.76'], ['27.22', '27.36', '29.59', '33.59', '22.48', '37.17'], ['30.58', '36.71', '51.68', '38.99', '43.79', '61.73'], ['36.95', '41.36', '55.73', '42.66', '46.52', '65.32']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['XL-GloVe [greater than equal 10]', 'XL-BERT [greater than equal 10]'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>German || F-Seq</th> <th>German || Word</th> <th>German || Label</th> <th>French || F-Seq</th> <th>French || Word</th> <th>French || Label</th> </tr> </thead> <tbody> <tr> <td>M... | Table 5 | table_5 | D19-1056 | 7 | emnlp2019 | The bottom part of Table 5 shows the scores when restricting the evaluation to sentences with score greater than equal 10. We observed that this threshold is a good trade-off in both the amount of kept sentences (above the threshold) and average BLEU score increase (presumably sentence quality). | [1, 1] | ['The bottom part of Table 5 shows the scores when restricting the evaluation to sentences with score greater than equal 10.', 'We observed that this threshold is a good trade-off in both the amount of kept sentences (above the threshold) and average BLEU score increase (presumably sentence quality).'] | [None, ['XL-GloVe [greater than equal 10]', 'XL-BERT [greater than equal 10]']] | 1 |
D19-1057table_5 | SRL results with different incorporation methods of the syntactic information on the Chinese dev set. Experiments are conducted on the BIAFFINE parsing results. | 2 | [['INPUT', 'DEP'], ['INPUT', 'DEP&REL'], ['INPUT', 'DEP&RELPATH'], ['INPUT', 'DEPPATH&RELPATH'], ['LISA', 'DEP'], ['LISA', 'DEP&REL'], ['LISA', 'DEP&RELPATH'], ['LISA', 'DEPPATH&RELPATH9'], ['RELAWE', 'DEP'], ['RELAWE', 'DEP&REL'], ['RELAWE', 'DEP&RELPATH'], ['RELAWE', 'DEPPATH&RELPATH']] | 1 | [['P'], ['R'], ['F1']] | [['83.89', '83.61', '83.75'], ['86.21', '85', '85.6'], ['86.01', '85.38', '85.69'], ['85.84', '85.54', '85.69'], ['84.68', '85.38', '85.03'], ['85.56', '85.89', '85.73'], ['85.84', '85.64', '85.74'], ['-', '-', '-'], ['84.33', '84.47', '84.4'], ['86.04', '85.43', '85.73'], ['86.21', '85.01', '85.6'], ['86.4', '85.52', ... | column | ['P', 'R', 'F1'] | ['INPUT', 'LISA', 'RELAWE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>INPUT || DEP</td> <td>83.89</td> <td>83.61</td> <td>83.75</td> </tr> <tr> <td>INPUT || DEP&REL</td> ... | Table 5 | table_5 | D19-1057 | 7 | emnlp2019 | Firstly, results in Table 5 show that with little dependency information (DEP), LISA performs better, while incorporating richer syntactic knowledge (DEP&REL or DEP&RELPATH), three methods achieve similar performance. Overall, RELAWE achieves best results given enough syntactic knowledge. | [1, 1] | ['Firstly, results in Table 5 show that with little dependency information (DEP), LISA performs better, while incorporating richer syntactic knowledge (DEP&REL or DEP&RELPATH), three methods achieve similar performance.', 'Overall, RELAWE achieves best results given enough syntactic knowledge.'] | [['DEP', 'LISA', 'DEP&REL', 'DEP&RELPATH', 'INPUT', 'RELAWE'], ['RELAWE']] | 1 |
D19-1057table_7 | SRL results on the Chinese test set. We choose the best settings for each configuration of our model. | 3 | [['Chinese', 'NONE', 'Metric'], ['Chinese', 'Closed', 'CoNLL09 SRL Only'], ['Chinese', 'Closed', 'INPUT(DEPPATH&RELPATH)'], ['Chinese', 'Closed', 'LISA(DEP&RELPATH)'], ['Chinese', 'Closed', 'RELAWE(DEPPATH&RELPATH)'], ['Chinese', 'Open', 'Marcheggiani and Titov (2017)'], ['Chinese', 'Open', 'Cai et al. (2018)'], ['Chin... | 1 | [['P'], ['R'], ['F1']] | [['81.99', '80.65', '81.31'], ['-', '-', '78.6'], ['84.19', '83.65', '83.92'], ['83.84', '83.54', '83.69'], ['84.77', '83.68', '84.22'], ['-', '-', '82.5'], ['84.7', '84', '84.3'], ['86.89', '87.75', '87.32'], ['86.45', '87.9', '87.17'], ['86.73', '87.98', '87.35'], ['91.93', '92.36', '92.14']] | column | ['P', 'R', 'F1'] | ['Open', 'Closed', 'GOLD'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Chinese || NONE || Metric</td> <td>81.99</td> <td>80.65</td> <td>81.31</td> </tr> <tr> <td>Chinese || Cl... | Table 7 | table_7 | D19-1057 | 8 | emnlp2019 | Table 7 shows that our OPEN model achieves more than 3 points of f1-score than the stateand RELAWE with DEPof-the-art PATH&RELPATH achieves in both CLOSED and OPEN settings. Notice that our best CLOSED model can almost perform as well as the state-of-the-art model while the latter utilizes pretrained word embeddings. B... | [1, 2, 1, 2, 1] | ['Table 7 shows that our OPEN model achieves more than 3 points of f1-score than the stateand RELAWE with DEPof-the-art PATH&RELPATH achieves in both CLOSED and OPEN settings.', 'Notice that our best CLOSED model can almost perform as well as the state-of-the-art model while the latter utilizes pretrained word embeddin... | [['Open', 'Closed', 'RELAWE(DEPPATH&RELPATH) + BERT'], ['RELAWE(DEPPATH&RELPATH) + BERT', 'Closed'], ['Open'], None, ['GOLD']] | 1 |
D19-1061table_8 | Results for predicting temporal anchors with the neural network (only text, and text and image). | 2 | [['NN, only text', 'yes'], ['NN, only text', 'no'], ['NN, only text', 'Macro Avg.'], ['NN, text + img', 'yes'], ['NN, text + img', 'no'], ['NN, text + img', 'Macro Avg.']] | 2 | [['Before', 'P'], ['Before', 'R'], ['Before', 'F1'], ['During', 'P'], ['During', 'R'], ['During', 'F1'], ['After', 'P'], ['After', 'R'], ['After', 'F1']] | [['0.74', '0.96', '0.83', '0.92', '0.98', '0.95', '0.82', '0.88', '0.85'], ['0.35', '0.07', '0.11', '0', '0', '0', '0.29', '0.21', '0.24'], ['0.55', '0.52', '0.47', '0.46', '0.49', '0.48', '0.56', '0.55', '0.55'], ['0.7', '0.78', '0.74', '0.88', '0.97', '0.92', '0.84', '0.89', '0.87'], ['0.48', '0.38', '0.43', '0.25', ... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['NN, only text', 'NN, text + img'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Before || P</th> <th>Before || R</th> <th>Before || F1</th> <th>During || P</th> <th>During || R</th> <th>During || F1</th> <th>After || P</th> <th>After || R</th> <th>After |... | Table 8 | table_8 | D19-1061 | 8 | emnlp2019 | Regarding interest in the possessee, all models but the majority baseline (including logistic regression) obtain similar F1s (0.58–0.59). While there is certainly room for improvement, the current results lead to the conclusion that a few keywords are sufficient to obtain 0.58 F1: neither images nor word embeddings bri... | [0, 0, 2, 1, 1, 1] | ['Regarding interest in the possessee, all models but the majority baseline (including logistic regression) obtain similar F1s (0.58–0.59).', 'While there is certainly room for improvement, the current results lead to the conclusion that a few keywords are sufficient to obtain 0.58 F1: neither images nor word embedding... | [None, None, None, None, ['NN, only text', 'NN, text + img', 'Before', 'After', 'During'], ['F1', 'yes', 'no']] | 1 |
D19-1063table_6 | Results on TEST UNSEENALL of our model, trained with and without curiosity-encouraging loss, and an LSTM-based encoder-decoder model (both models have about 15M parameters). “Navigation mistake repeat” is the fraction of time steps on which the agent repeats a non-optimal navigation action at a previously visited locat... | 2 | [['Model', 'LSTM-ENCDEC'], ['Model', 'Our model (alpha = 0)'], ['Model', 'Our model (alpha = 1)']] | 1 | [['SR (%)'], ['Nav. mistake repeat (%)'], ['Help-request repeat (%)']] | [['19.25', '31.09', '49.37'], ['43.12', '25', '40.17'], ['47.45', '17.85', '21.1']] | column | ['SR (%)', 'Nav. mistake repeat (%)', 'Help-request repeat (%)'] | ['Our model (alpha = 1)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SR (%)</th> <th>Nav. mistake repeat (%)</th> <th>Help-request repeat (%)</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM-ENCDEC</td> <td>19.25</td> <td>31.09</td> <td>49.37</... | Table 6 | table_6 | D19-1063 | 9 | emnlp2019 | Does the proposed imitation learning algorithm achieve its goals?. The curiosity-encouraging training objective is proposed to prevent the agent from making the same mistakes at previously encountered situations. Table 6 shows that training with the curiosity-encouraging objective reduces the chance of the agent loopin... | [2, 2, 1, 1] | ['Does the proposed imitation learning algorithm achieve its goals?.', 'The curiosity-encouraging training objective is proposed to prevent the agent from making the same mistakes at previously encountered situations.', 'Table 6 shows that training with the curiosity-encouraging objective reduces the chance of the agen... | [None, None, ['Our model (alpha = 1)', 'Nav. mistake repeat (%)', 'Help-request repeat (%)'], ['Our model (alpha = 1)', 'Our model (alpha = 0)']] | 1 |
D19-1068table_2 | Experimental results in exploring different lexical mapping methods. | 2 | [['Method', 'Embbeding_Proj'], ['Method', 'CL Trans (1 cand.)'], ['Method', 'CL Trans (2 cand.)'], ['Method', 'CL Trans (3 cand.)'], ['Method', 'CL Trans (4 cand.)'], ['Method', 'CL Trans (5 cand.)']] | 1 | [['Pre.'], ['Rec.'], ['F1']] | [['26', '20', '22.6'], ['31.2', '21.4', '25.4'], ['31.7', '22.3', '26.2'], ['32', '23.4', '27'], ['30.7', '23.6', '26.7'], ['30.2', '23.6', '26.5']] | column | ['Pre.', 'Rec.', 'F1'] | ['CL Trans (1 cand.)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pre.</th> <th>Rec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Embbeding_Proj</td> <td>26</td> <td>20</td> <td>22.6</td> </tr> <tr> <td>Method || CL Tr... | Table 2 | table_2 | D19-1068 | 8 | emnlp2019 | Exploring Lexical Mapping Method. To explore our lexical mapping method, we compare the performance of several variant systems retrieving a different number of candidates (ranging from 1 to 5) and the embedding-projection method (Embedding proj). Note the system retrieving only one candidate actually takes the nearest ... | [2, 1, 2, 2, 1, 1, 2, 2, 1] | ['Exploring Lexical Mapping Method.', 'To explore our lexical mapping method, we compare the performance of several variant systems retrieving a different number of candidates (ranging from 1 to 5) and the embedding-projection method (Embedding proj).', 'Note the system retrieving only one candidate actually takes the ... | [None, ['CL Trans (1 cand.)', 'CL Trans (2 cand.)', 'CL Trans (3 cand.)', 'CL Trans (4 cand.)', 'CL Trans (5 cand.)'], None, None, None, ['CL Trans (1 cand.)', 'F1'], ['CL Trans (1 cand.)'], ['Rec.'], ['CL Trans (5 cand.)', 'Pre.', 'F1']] | 1 |
D19-1070table_2 | Performance of the rule-based baselines and the post conditioned models on the ingredient detection task of the RECIPES dataset. These models all underperform First Occ. | 3 | [['Model', 'Performance Benchmarks', 'Majority'], ['Model', 'Performance Benchmarks', 'Exact Match'], ['Model', 'Performance Benchmarks', 'First Occ'], ['Model', 'Models', 'GPTattn'], ['Model', 'Models', 'GPTindep'], ['Model', 'Models', 'ELMotoken'], ['Model', 'Models', 'ELMosent']] | 1 | [['P'], ['R'], ['F1'], ['Acc'], ['UR'], ['CR']] | [['-', '-', '-', '57.27', '-', '-'], ['84.94', '20.25', '32.7', '64.39', '73.42', '4.02'], ['65.23', '87.17', '74.6', '74.65', '84.88', '87.79'], ['63.94', '71.72', '67.6', '70.63', '54.3', '77.04'], ['67.05', '69.07', '68.04', '72.28', '47.09', '75.79'], ['64.96', '76.64', '70.32', '72.35', '69.14', '78.94'], ['69.09'... | column | ['P', 'R', 'F1', 'Acc', 'UR', 'CR'] | None | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>Acc</th> <th>UR</th> <th>CR</th> </tr> </thead> <tbody> <tr> <td>Model || Performance Benchmarks || Majority</td> <td>-</td> <td>-</t... | Table 2 | table_2 | D19-1070 | 4 | emnlp2019 | Table 2 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. Using the ground truth about ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recall distinguished by whether the ingredie... | [1, 1, 1] | ['Table 2 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance.', "Using the ground truth about ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recall distinguished by whether the ing... | [None, ['UR', 'CR'], ['Exact Match', 'First Occ', 'P', 'R']] | 1 |
D19-1070table_9 | Model’s performance degradation with input ablations. We see that the model’s major source of performance is from verbs than compared to other ingredient’s explicit mentions. | 2 | [['Input', 'Complete Process'], ['Input', 'w/o Other Ingredients'], ['Input', 'w/o Verbs'], ['Input', 'w/o Verbs & Other Ingredients']] | 1 | [['Accuracy']] | [['84.59'], ['82.71'], ['79.08'], ['77.79']] | column | ['Accuracy'] | ['Input'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Input || Complete Process</td> <td>84.59</td> </tr> <tr> <td>Input || w/o Other Ingredients</td> <td>82.71</td> </tr> <tr> <... | Table 9 | table_9 | D19-1070 | 9 | emnlp2019 | Table 9 presents these ablation studies. We only observe a minor performance drop from 84.59 to 82.71 (accuracy) when other ingredients are removed entirely. Removing verbs dropped the performance to 79.08 and further omitting both leads to 77.79. This shows the models dependence on verb semantics over tracking the oth... | [1, 1, 1, 2] | ['Table 9 presents these ablation studies.', 'We only observe a minor performance drop from 84.59 to 82.71 (accuracy) when other ingredients are removed entirely.', 'Removing verbs dropped the performance to 79.08 and further omitting both leads to 77.79.', 'This shows the models dependence on verb semantics over track... | [None, ['Accuracy', 'w/o Other Ingredients'], ['Accuracy', 'w/o Verbs', 'w/o Verbs & Other Ingredients'], None] | 1 |
D19-1076table_1 | The comparison between the proposed methods LLMap and RGP, and the MUSE supervised method. The values are average precision over 10 random 90-10 splits of the dictionaries, statistically significant results between LLMap and MUSE are shown in bold and between LLMap and RGP are underlined. | 2 | [['Language', 'Czech (CS)'], ['Language', 'Norwegian (NO)'], ['Language', 'Dutch (NL)'], ['Language', 'Chinese (ZH)'], ['Language', 'Korean (KO)'], ['Language', 'Japanese (JA)'], ['Language', 'Croatian (HR)'], ['Language', 'Indonesian (ID)'], ['Language', 'Farsi (FA)'], ['Language', 'Bulgarian (BG)'], ['Language', 'Spa... | 2 | [['P@1', 'LLMap'], ['P@1', 'MUSE'], ['P@1', 'RGP'], ['P@5', 'LLMap'], ['P@5', 'MUSE'], ['P@5', 'RGP'], ['P@10', 'LLMap'], ['P@10', 'MUSE'], ['P@10', 'RGP']] | [['28.29', '28.37', '28.37', '56.92', '55.65', '55.88', '65.99', '64.72', '64.94'], ['32.9', '31.62', '31.63', '58.74', '56.13', '56.53', '66.23', '63.57', '64.01'], ['42.3', '41.06', '41.18', '67.13', '65.43', '65.7', '73.73', '72.03', '72.49'], ['17.4', '14.19', '19.51', '43.19', '35.6', '44.05', '52.11', '44.62', '5... | column | ['P@1', 'P@1', 'P@1', 'P@5', 'P@5', 'P@5', 'P@10', 'P@10', 'P@10'] | ['LLMap'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P@1 || LLMap</th> <th>P@1 || MUSE</th> <th>P@1 || RGP</th> <th>P@5 || LLMap</th> <th>P@5 || MUSE</th> <th>P@5 || RGP</th> <th>P@10 || LLMap</th> <th>P@10 || MUSE</th> <th>P@10... | Table 1 | table_1 | D19-1076 | 7 | emnlp2019 | 4.3 Results. Table 1 shows the average of precision over the 10 random splits @k = 1, 5 and 10. The bold values are statistically significant results between LLMap and the MUSE supervised method. The RGP column refers to our model without the piecewise mapping, which we discuss later in this section. In all cases, exce... 
| [2, 1, 2, 1, 1, 1, 1, 1] | ['4.3 Results.', 'Table 1 shows the average of precision over the 10 random splits @k = 1, 5 and 10.', 'The bold values are statistically significant results between LLMap and the MUSE supervised method.', 'The RGP column refers to our model without the piecewise mapping, which we discuss later in this section.', 'In a... | [None, ['P@1', 'P@5', 'P@10'], ['LLMap', 'MUSE'], ['RGP'], ['LLMap', 'MUSE', 'Language', 'Czech (CS)', 'Bulgarian (BG)'], ['Japanese (JA)', 'Chinese (ZH)'], ['Language'], ['P@10']] | 1 |
D19-1076table_2 | The comparison between the proposed method LLMap and MUSE on pre-split train and test dictionaries for any and all senses recovery by the algorithms for precision@5. Last row shows the average improvement achieved by LLMap over MUSE | 2 | [['Lang', 'CS'], ['Lang', 'NO'], ['Lang', 'NL'], ['Lang', 'ZH'], ['Lang', 'KO'], ['Lang', 'JA'], ['Lang', 'HR'], ['Lang', 'ID'], ['Lang', 'FA'], ['Lang', 'BG'], ['Lang', 'ES'], ['Lang', 'TA'], ['Lang', 'HI'], ['Lang', 'BN'], ['Lang', 'AVG']] | 2 | [['Any Sense', 'MUSE'], ['Any Sense', 'LLMap'], ['All Senses', 'MUSE'], ['All Senses', 'LLMap']] | [['76.66', '78.86', '61.32', '62.17'], ['80', '81.85', '65.57', '67.35'], ['88.53', '89.33', '69.28', '70.52'], ['55', '64.77', '41.88', '51.5'], ['51.94', '54.23', '42.81', '44.4'], ['3.97', '17.62', '3.22', '14.51'], ['62.46', '64.7', '52.19', '53.78'], ['81.2', '85.19', '64.87', '68.2'], ['53.73', '56.03', '42.64', ... | column | ['precision@5', 'precision@5', 'precision@5', 'precision@5'] | ['MUSE', 'LLMap'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Any Sense || MUSE</th> <th>Any Sense || LLMap</th> <th>All Senses || MUSE</th> <th>All Senses || LLMap</th> </tr> </thead> <tbody> <tr> <td>Lang || CS</td> <td>76.66</td> <td>78... | Table 2 | table_2 | D19-1076 | 8 | emnlp2019 | Table 2 shows the precision@5 for the pre-split dictionaries. In all 14 languages the LLMap outperforms the MUSE algorithm for recovering both all senses and any sense of a word with significant gains in Chinese and Japanese. These results are consistent with the more comprehensive crossvalidation settings. Note that t... 
| [1, 1, 2, 2] | ['Table 2 shows the precision@5 for the pre-split dictionaries.', 'In all 14 languages the LLMap outperforms the MUSE algorithm for recovering both all senses and any sense of a word with significant gains in Chinese and Japanese.', 'These results are consistent with the more comprehensive crossvalidation settings.', '... | [None, ['LLMap', 'All Senses', 'Any Sense', 'ZH', 'JA'], None, None] | 1 |
D19-1081table_6 | Results for DocRepair trained on different amount of data. For ellipsis, we show inflection/VP scores. | 1 | [['2.5m'], ['5m'], ['30m']] | 1 | [['BLEU'], ['deixis'], ['lex. c.'], ['ellipsis']] | [['34.15', '89.2', '75.5', '81.8/71.6'], ['34.44', '90.3', '77.7', '83.6/74.0'], ['34.6', '91.8', '80.6', '86.4/75.2']] | column | ['BLEU', 'deixis', 'lex. c.', 'ellipsis'] | ['2.5m', '5m', '30m'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>deixis</th> <th>lex. c.</th> <th>ellipsis</th> </tr> </thead> <tbody> <tr> <td>2.5m</td> <td>34.15</td> <td>89.2</td> <td>75.5</td> <td>81.8/71.6</td> ... | Table 6 | table_6 | D19-1081 | 6 | emnlp2019 | 6.1 The amount of training data . Table 6 provides BLEU and consistency scores for the DocRepair model trained on different amount of data. We see that even when using a dataset of moderate size (e.g., 5m fragments) we can achieve performance comparable to the model trained on a large amount of data (30m fragments). Mo... | [2, 1, 1, 1, 2, 2] | ['6.1 The amount of training data .', 'Table 6 provides BLEU and consistency scores for the DocRepair model trained on different amount of data.', 'We see that even when using a dataset of moderate size (e.g., 5m fragments) we can achieve performance comparable to the model trained on a large amount of data (30m fragme... | [None, ['BLEU'], ['5m', '30m'], ['deixis', 'lex. c.', 'ellipsis'], None, None] | 1 |
D19-1083table_4 | Tokenized case-sensitive BLEU (BLEU) and perplexity (PPL) on training (Train) and development (newstest2013, Dev) set. We randomly select 3K sentence pairs as our training data for evaluation. Lower PPL is better. | 2 | [['ID', '1'], ['ID', '11'], ['ID', '12'], ['ID', '13'], ['ID', '14'], ['ID', '15'], ['ID', '16']] | 2 | [['BLEU', 'Train'], ['BLEU', 'Dev'], ['PPL', 'Train'], ['PPL', 'Dev']] | [['28.64', '26.16', '5.23', '4.76'], ['29.63', '26.44', '4.48', '4.38'], ['29.75', '26.16', '4.6', '4.49'], ['29.43', '26.51', '5.09', '4.71'], ['30.71', '26.52', '3.96', '4.32'], ['30.89', '26.53', '4.09', '4.41'], ['30.25', '26.56', '4.62', '4.58']] | column | ['BLEU', 'BLEU', 'PPL', 'PPL'] | ['ID'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU || Train</th> <th>BLEU || Dev</th> <th>PPL || Train</th> <th>PPL || Dev</th> </tr> </thead> <tbody> <tr> <td>ID || 1</td> <td>28.64</td> <td>26.16</td> <td>5.23</td> ... | Table 4 | table_4 | D19-1083 | 7 | emnlp2019 | Surprisingly, training deep Transformers with both DS-Init and MAtt improves not only running efficiency but also translation quality (by 0.2 BLEU), compared with DS-Init alone. To get an improved understanding, we analyze model performance on both training and development set. Results in Table 4 show that models with ... | [2, 2, 1, 1, 2] | ['Surprisingly, training deep Transformers with both DS-Init and MAtt improves not only running efficiency but also translation quality (by 0.2 BLEU), compared with DS-Init alone.', 'To get an improved understanding, we analyze model performance on both training and development set.', 'Results in Table 4 show that mode... | [['15', 'BLEU', '13'], None, ['13', 'PPL', 'Train', 'Dev', 'BLEU', '12', '14'], ['15', 'BLEU'], ['15']] | 1 |
D19-1084table_4 | F1 results on OntoNotes test for systems trained on data projected via FastAlign and DiscAlign. | 4 | [['Method', 'Zh Gold', '# train', '36K'], ['Method', 'FastAlign', '# train', '36K'], ['Method', 'FastAlign', '# train', '53K'], ['Method', 'DiscAlign', '# train', '36K'], ['Method', 'DiscAlign', '# train', '53K']] | 1 | [['P'], ['R'], ['F1']] | [['75.46', '80.55', '77.81'], ['38.99', '36.61', '37.55'], ['39.46', '36.65', '37.77'], ['51.94', '52.37', '51.76'], ['51.92', '51.93', '51.57']] | column | ['P', 'R', 'F1'] | ['DiscAlign', 'FastAlign'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Zh Gold || # train || 36K</td> <td>75.46</td> <td>80.55</td> <td>77.81</td> </tr> <tr> <td>Met... | Table 4 | table_4 | D19-1084 | 7 | emnlp2019 | 5.1 Results & Analysis. Table 4 shows that while NER systems trained on projected data do categorically worse than an NER system trained on gold-standard data, the higherquality alignments obtained from DiscAlign lead to a major improvement in F1 when compared to FastAlign. | [2, 1] | ['5.1 Results & Analysis.', 'Table 4 shows that while NER systems trained on projected data do categorically worse than an NER system trained on gold-standard data, the higherquality alignments obtained from DiscAlign lead to a major improvement in F1 when compared to FastAlign.'] | [None, ['DiscAlign', 'FastAlign']] | 1 |
D19-1085table_3 | Translation quality on Japanese–English data. As seen, the proposed models can also significantly improve translation performance, which shares the same trend with that on Chinese–English translation. | 2 | [['Model', 'Baseline'], ['Model', 'External ZP Prediction'], ['Model', 'Joint Model'], ['Model', '+ Discourse-Level Context']] | 1 | [['BLEU'], ['delta']] | [['19.94', '–'], ['20.86', '0.92'], ['21.39', '1.45'], ['22', '2.06']] | column | ['BLEU', 'delta'] | ['Joint Model', '+ Discourse-Level Context'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>delta</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>19.94</td> <td>–</td> </tr> <tr> <td>Model || External ZP Prediction</td> <td>20.86</td... | Table 3 | table_3 | D19-1085 | 6 | emnlp2019 | 4.3 Results on Japanese->English Task. Table 3 lists the results. We compare our models and the best external ZP prediction approach. As seen, our models also significantly improve translation performance, demonstrating the effectiveness and universality of the proposed approach. This improvement on Japanese->English t... | [2, 1, 1, 1, 2, 2] | ['4.3 Results on Japanese->English Task.', 'Table 3 lists the results.', 'We compare our models and the best external ZP prediction approach.', 'As seen, our models also significantly improve translation performance, demonstrating the effectiveness and universality of the proposed approach.', 'This improvement on Japan... | [None, None, ['Joint Model', '+ Discourse-Level Context', 'External ZP Prediction'], ['Joint Model', '+ Discourse-Level Context'], None, None] | 1 |
D19-1092table_4 | Comparison with previous work (UAS). | 3 | [['Model', 'TreeBank Transferring', 'This'], ['Model', 'TreeBank Transferring', 'Guo15'], ['Model', 'TreeBank Transferring', 'Guo16'], ['Model', 'TreeBank Transferring', 'TA16'], ['Model', 'Annotation Projection', 'MX14'], ['Model', 'Annotation Projection', 'RC15'], ['Model', 'Annotation Projection', 'LA16'], ['Model',... | 1 | [['DE'], ['ES'], ['FR'], ['IT'], ['PT']] | [['72.78', '81.44', '83.77', '86.13', '84.05'], ['60.35', '71.9', '72.93', '-', '-'], ['65.01', '79', '77.69', '78.49', '81.86'], ['75.27', '76.85', '79.21', '-', '-'], ['74.3', '75.53', '70.14', '77.74', '76.65'], ['79.68', '80.86', '82.72', '83.67', '82.07'], ['75.99', '78.94', '80.8', '79.39', '-'], ['82.1', '82.6',... | column | ['uas', 'uas', 'uas', 'uas', 'uas'] | ['This'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DE</th> <th>ES</th> <th>FR</th> <th>IT</th> <th>PT</th> </tr> </thead> <tbody> <tr> <td>Model || TreeBank Transferring || This</td> <td>72.78</td> <td>81.44</td> <td>8... | Table 4 | table_4 | D19-1092 | 8 | emnlp2019 | 5.5 Comparison with Previous Work . We compare our method with previous work in the literature. Table 4 shows the results, where the UAS values are reported. Our model denoted by This refers to the model of Src + Mix. Note that these models are not directly comparable due to the setting and baseline parser differences.... | [2, 2, 1, 2, 2, 1, 1, 2] | ['5.5 Comparison with Previous Work .', 'We compare our method with previous work in the literature.', 'Table 4 shows the results, where the UAS values are reported.', 'Our model denoted by This refers to the model of Src + Mix.', 'Note that these models are not directly comparable due to the setting and baseline parse... | [None, None, None, ['This'], None, ['Guo15', 'Guo16', 'TA16'], ['This', ' DE'], [' DE']] | 1 |
D19-1093table_2 | Dependency parsing results on English Penn Treebank v3.0. | 3 | [['Approach', 'Baselines', 'StackPtr (paper)'], ['Approach', 'Baselines', 'StackPtr (code)'], ['Approach', 'Proposed Model', ' H-PtrNet-PST (Gate)'], ['Approach', 'Proposed Model', ' H-PtrNet-PST (SGate)'], ['Approach', 'Proposed Model', ' H-PtrNet-PS (Gate)']] | 1 | [['UAS'], ['LAS']] | [[' 96.12±0.03', ' 95.06±0.05'], [' 95.94±0.03', ' 94.91±0.05'], [' 96.03±0.02', ' 94.99±0.02'], [' 96.04±0.05', ' 95.00±0.06'], [' 96.09±0.05', ' 95.03±0.03']] | column | ['UAS', 'LAS'] | [' H-PtrNet-PST (Gate)', ' H-PtrNet-PST (SGate)', ' H-PtrNet-PS (Gate)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> </tr> </thead> <tbody> <tr> <td>Approach || Baselines || StackPtr (paper)</td> <td>96.12±0.03</td> <td>95.06±0.05</td> </tr> <tr> <td>Approach || Baseline... | Table 2 | table_2 | D19-1093 | 7 | emnlp2019 | Results on English Penn Treebank. Table 2 presents the results on English Penn Treebank. StackPtr (paper) refer to the results reported by Ma et al.(2018), and StackPtr (code) is our run of their code in identical settings as ours. Our model H-PtrNet-PST (Gate) outperforms the baseline by 0.09 and 0.08 in terms of UAS ... | [2, 1, 2, 1, 1, 1] | ['Results on English Penn Treebank.', 'Table 2 presents the results on English Penn Treebank.', 'StackPtr (paper) refer to the results reported by Ma et al.(2018), and StackPtr (code) is our run of their code in identical settings as ours.', 'Our model H-PtrNet-PST (Gate) outperforms the baseline by 0.09 and 0.08 in te... | [None, None, ['StackPtr (paper)', 'StackPtr (code)'], [' H-PtrNet-PST (Gate)', 'UAS', ' LAS'], [' H-PtrNet-PST (SGate)', ' H-PtrNet-PST (Gate)'], [' H-PtrNet-PS (Gate)', 'UAS', ' LAS']] | 1 |
D19-1094table_4 | CoNLL-2009 results on Chinese, German, and Spanish (test sets). Differences in F1 between our models and previous systems are statistically significant (p < 0.05) using stratified shuffling (Noreen, 1989). | 2 | [['Chinese', 'Björkelund et al. (2010)'], ['Chinese', 'Roth and Lapata (2016)'], ['Chinese', 'Marcheggiani and Titov (2017)'], ['Chinese', 'He et al. (2018b)'], ['Chinese', 'Cai et al. (2018)'], ['Chinese', 'Li et al. (2018)'], ['Chinese', 'Ours (supervised training)'], ['Chinese', 'Ours (with CVT)'], ['German', 'Björk... | 1 | [['P'], ['R'], ['F 1']] | [['82.4', '75.1', '78.6'], ['83.2', '75.9', '79.4'], ['84.6', '80.4', '82.5'], ['84.2', '81.5', '82.8'], ['84.7', '84', '84.3'], ['84.8', '81.2', '83'], ['84.9', '84.3', '84.6'], ['85.4', '84.6', '85'], ['81.2', '78.3', '79.7'], ['81.8', '78.5', '80.1'], ['84.5', '82.1', '83.3'], ['84.9', '82.7', '83.8'], ['78.9', '74.... | column | ['P', 'R', 'F1'] | ['Ours (supervised training)', 'Ours (with CVT)', 'Ours(with CVT)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F 1</th> </tr> </thead> <tbody> <tr> <td>Chinese || Björkelund et al. (2010)</td> <td>82.4</td> <td>75.1</td> <td>78.6</td> </tr> <tr> <td>Chine... | Table 4 | table_4 | D19-1094 | 7 | emnlp2019 | Table 4 presents the results of our experiments (without ELMo) on Chinese, German, and Spanish. Although we have not performed detailed parameter selection in these languages (i.e., we used the same parameters as in English), our model achieves state-of-the-art performance across all three languages. | [1, 1] | ['Table 4 presents the results of our experiments (without ELMo) on Chinese, German, and Spanish.', 'Although we have not performed detailed parameter selection in these languages (i.e., we used the same parameters as in English), our model achieves state-of-the-art performance across all three languages.'] | [['Chinese', 'German', 'Spanish'], ['Ours (supervised training)', 'Ours (with CVT)', 'Ours(with CVT)', 'Chinese', 'German', 'Spanish']] | 1 |
D19-1102table_3 | LAS results on North Sámi development data. mono-base and cross-base are models without data augmentation. % improvements over mono-base shown in parentheses. | 2 | [['size', 'T100'], ['size', 'T50'], ['size', 'T10']] | 2 | [['MONOLINGUAL', 'mono-base'], ['MONOLINGUAL', 'Morph'], ['MONOLINGUAL', 'Nonce'], ['CROSS-LINGUAL', 'cross-base'], ['CROSS-LINGUAL', 'Morph'], ['CROSS-LINGUAL', 'Nonce']] | [['53.3', '56.0 (+3.3)', ' 56.3 (+3.0)', '61.3 (+8.0)', '60.9 (+7.6)', '61.7 (+8.4)'], ['42.5', '46.6 (+4.1)', ' 46.5 (+4.0)', '52.0 (+9.5)', '51.7 (+9.2)', '52.0 (+9.5)'], ['18.5', '27.1 (+8.6)', ' 27.8 (+9.3)', '34.7 (+16.2)', '37.3 (+18.8)', '35.4 (+16.9)']] | column | ['LAS', 'LAS', 'LAS', 'LAS', 'LAS', 'LAS'] | ['Morph', 'Nonce'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MONOLINGUAL || mono-base</th> <th>MONOLINGUAL || Morph</th> <th>MONOLINGUAL || Nonce</th> <th>CROSS-LINGUAL || cross-base</th> <th>CROSS-LINGUAL || Morph</th> <th>CROSS-LINGUAL || Nonce</th>... | Table 3 | table_3 | D19-1102 | 5 | emnlp2019 | We employ two baselines: a monolingual model (§3.1) and a cross-lingual model (§2.3), both without data augmentation. The monolingual model acts as a simple baseline, to resemble a situation when the target treebank does not have any source treebank (i.e., no available treebanks from related languages). The cross-lin... | [1, 2, 2, 1, 1, 1, 1, 1, 1] | ['We employ two baselines: a monolingual model (§3.1) and a cross-lingual model (§2.3), both without data augmentation.', 'The monolingual model acts as a simple baseline, to resemble a situation when the target treebank does not have any source treebank (i.e., no available treebanks from related languages).', 'The c... | [['mono-base', 'cross-base'], ['mono-base'], ['cross-base'], ['mono-base', 'cross-base', 'Morph', 'Nonce'], None, ['CROSS-LINGUAL', 'MONOLINGUAL'], ['T10', 'CROSS-LINGUAL', 'cross-base', 'MONOLINGUAL', 'mono-base'], ['T10', 'CROSS-LINGUAL', 'Morph', 'Nonce'], ['CROSS-LINGUAL', 'Morph', 'Nonce']] | 1 |
D19-1102table_7 | LAS results on development sets. zero-shot denotes results where we predict using model trained only on the source treebank. | 2 | [['Language', 'Galician'], ['Language', 'Kazakh'], ['Language', 'Kazakh (translit.)']] | 2 | [['-', 'zero-shot'], ['CROSS-LINGUAL', ' +fastText'], ['CROSS-LINGUAL', ' +Morph']] | [['51.9', '72.8', '71'], ['12.5', '27.7', '28.4'], ['21.2', '31.1', '36.7']] | column | ['LAS', 'LAS', 'LAS'] | ['CROSS-LINGUAL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>- || zero-shot</th> <th>CROSS-LINGUAL || +fastText</th> <th>CROSS-LINGUAL || +Morph</th> </tr> </thead> <tbody> <tr> <td>Language || Galician</td> <td>51.9</td> <td>72.8</td> ... | Table 7 | table_7 | D19-1102 | 7 | emnlp2019 | 5.1 Experimental results . Table 7 reports the LAS performance on the development sets. MORPH augmentation improves performance over the zero-shot baseline and achieves comparable or better LAS with a cross-lingual model trained with pre-trained word embeddings. Next, we look at the effects of transliteration (see Kaza... | [2, 1, 1, 2, 1, 1] | ['5.1 Experimental results .', 'Table 7 reports the LAS performance on the development sets.', 'MORPH augmentation improves performance over the zero-shot baseline and achieves comparable or better LAS with a cross-lingual model trained with pre-trained word embeddings.', 'Next, we look at the effects of transliteratio... | [None, None, [' +Morph', 'zero-shot', 'CROSS-LINGUAL'], ['Kazakh', 'Kazakh (translit.)'], ['zero-shot'], ['CROSS-LINGUAL', ' +Morph']] | 1 |
D19-1109table_2 | Main results for Task 1: Commonsense knowledge base completion (test F1 score) and Task 2: Wikipedia mining (quality scores out of 4). Results are included from the sentence generation methods of simple concatenation, hand-crafted templates, templates plus grammatical transformations, and coherency ranking. DNN, Factor... | 3 | [['Model', 'Unsupervised', 'CONCATENATION'], ['Model', 'Unsupervised', 'TEMPLATE'], ['Model', 'Unsupervised', 'TEMPL.+GRAMMAR'], ['Model', 'Unsupervised', 'COHERENCY RANK'], ['Model', 'Supervised', 'DNN'], ['Model', 'Supervised', 'FACTORIZED'], ['Model', 'Supervised', 'PROTOTYPICAL']] | 1 | [[' Task 1'], [' Task 2']] | [[' 68.8', ' 2.95 ± 0.11'], [' 72.2', ' 2.98 ± 0.11'], [' 74.4', ' 2.56 ± 0.13'], [' 78.8', ' 3.00 ± 0.12'], [' 89.2', ' 2.50'], [' 89.0', ' 2.61'], [' 79.4', ' 2.55']] | column | ['F1', 'F1'] | ['Unsupervised', 'COHERENCY RANK'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Task 1</th> <th>Task 2</th> </tr> </thead> <tbody> <tr> <td>Model || Unsupervised || CONCATENATION</td> <td>68.8</td> <td>2.95 ± 0.11</td> </tr> <tr> <td>Model || Unsupervised ... | Table 2 | table_2 | D19-1109 | 5 | emnlp2019 | Table 2 shows the full results. Our unsupervised approach achieves a test set F1 score of 78.8, comparable to the 79.4 F1 score found by the supervised prototypical approach. The Factorized and DNN models significantly outperformed our approach with F1 scores of 89.2 and 89.0, respectively. Our grid search found an opt... | [1, 1, 1, 2, 2] | ['Table 2 shows the full results.', 'Our unsupervised approach achieves a test set F1 score of 78.8, comparable to the 79.4 F1 score found by the supervised prototypical approach.', 'The Factorized and DNN models significantly outperformed our approach with F1 scores of 89.2 and 89.0, respectively.', 'Our grid search f... | [None, ['Unsupervised', 'COHERENCY RANK', 'PROTOTYPICAL'], ['FACTORIZED', 'DNN'], None, None] | 1 |
D19-1112table_1 | Results on GLUE test sets. Metrics differ per task (explained in Appendix A) but the best result is highlighted. | 2 | [['Model', 'BERT'], ['Model', 'MT-DNN'], ['Model', 'MAML'], ['Model', 'FOMAML'], ['Model', 'Reptile']] | 2 | [['Test Dataset', 'CoLA'], [' Test Dataset', ' MRPC'], [' Test Dataset', ' STS-B'], [' Test Dataset', ' RTE']] | [['52.1', ' 88.9/84.8', ' 87.1/85.8', '66.4'], ['51.7', ' 89.9/86.3', ' 87.6/86.8', '75.4'], ['53.4', ' 89.5/85.8', ' 88.0/87.3', '76.4'], ['51.6', ' 89.9/86.4', ' 88.6/88.0', '74.1'], ['53.2', ' 90.2/86.7', ' 88.7/88.1', '77']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Reptile'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test Dataset || CoLA</th> <th>Test Dataset || MRPC</th> <th>Test Dataset || STS-B</th> <th>Test Dataset || RTE</th> </tr> </thead> <tbody> <tr> <td>Model || BERT</td> <td>52.1</td... | Table 1 | table_1 | D19-1112 | 3 | emnlp2019 | 3.1 Results . We first use the three meta-learning algorithms with PPS sampling and present in Table 1 the experimental results on the GLUE test set. Generally, the meta-learning algorithms achieve better performance than the strong baseline models, with Reptile performing the best. Since the MT-DNN also uses PPS sampl... | [0, 1, 1, 1, 1] | ['3.1 Results .', 'We first use the three meta-learning algorithms with PPS sampling and present in Table 1 the experimental results on the GLUE test set.', 'Generally, the meta-learning algorithms achieve better performance than the strong baseline models, with Reptile performing the best.', 'Since the MT-DNN also use... | [None, None, ['Reptile'], ['MT-DNN'], ['Reptile', 'MAML']] | 1 |
D19-1114table_1 | Test results on different datasets. | 2 | [['Dataset', 'InferSent'], ['Dataset', 'SSE'], ['Dataset', 'DecAtt'], ['Dataset', 'ESIMtree'], ['Dataset', 'ESIMseq'], ['Dataset', 'ESIMseq+tree'], ['Dataset', 'PWIMour'], ['Dataset', 'mPWIMseq'], ['Dataset', 'mPWIMseq+tree']] | 2 | [['SNLI', 'Acc'], ['Quora', 'Acc'], ['Twitter', 'F1'], ['PIT-2015', 'F1'], ['STS-2014', 'Pearson’s r'], ['WikiQA', 'MAP'], ['WikiQA', 'MRR'], ['TrecQA', 'MAP'], ['TrecQA', 'MRR'], ['SICK', 'Pearson’s r'], ['SICK', 'ρ']] | [['0.846', '0.866', '0.746', '0.451', '0.715', '0.287', '0.287', '0.521', '0.559', '-'], ['0.855', '0.878', '0.65', '0.422', '0.378', '0.624', '0.638', '0.628', '0.670', '-'], ['0.856', '0.845', '0.652', '0.43', '0.317', '0.603', '0.619', '0.660', '0.712', '-'], ['0.864', '0.755', '0.74', '0.447', '0.493', '0.618', '0.... | column | ['Acc', 'Acc', 'F1', 'F1', 'Pearson’s r', 'MAP', 'MRR', 'MAP', 'MRR', 'Pearson’s r', 'ρ'] | ['mPWIMseq'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SNLI || Acc</th> <th>Quora || Acc</th> <th>Twitter || F1</th> <th>PIT-2015 || F1</th> <th>STS-2014 || Pearson’s r</th> <th>WikiQA || MAP</th> <th>WikiQA || MRR</th> <th>TrecQA || M... | Table 1 | table_1 | D19-1114 | 3 | emnlp2019 | SSE (Nie and Bansal, 2017) is a stacked BiLSTM model with shortcut connections and finetuning of word embeddings. Unlike our setting, where each word is represented by its own hidden state in the final output layer, SSE applies maxpooling over time to the output of the last BiLSTM layer to extract the final sentence fe... | [2, 2, 1, 1, 2, 2] | ['SSE (Nie and Bansal, 2017) is a stacked BiLSTM model with shortcut connections and finetuning of word embeddings.', 'Unlike our setting, where each word is represented by its own hidden state in the final output layer, SSE applies maxpooling over time to the output of the last BiLSTM layer to extract the final senten... | [['SSE'], ['SSE'], ['mPWIMseq', 'SSE', 'Twitter', 'PIT-2015', 'STS-2014', 'WikiQA', 'TrecQA'], ['SSE', 'mPWIMseq', 'SNLI', 'Quora'], ['SNLI', 'Quora', 'SSE'], ['SSE']] | 1 |
D19-1122table_7 | Accuracy of models trained and evaluated on different parts of the dataset. We report the results on simple, complex, and all sentences. | 2 | [['Test data', ' AMT training data AMT'], ['Test data', ' AMT training data GCS'], ['Test data', ' AMT training data AMT + GCS'], ['Test data', ' GCS training data AMT'], ['Test data', ' GCS training data GCS'], ['Test data', ' GCS training data AMT + GCS'], ['Test data', ' AMT & GCS training data AMT'], ['Test data', ... | 1 | [[' Simple'], [' Complex'], [' All']] | [['0.99', '0.78', '0.89'], ['0.99', '0.76', '0.79'], ['0.99', '0.77', '0.88'], ['0.96', '0.76', '0.86'], ['0.97', '0.94', '0.94'], ['0.96', '0.82', '0.87'], ['0.99', '0.9', '0.95'], ['0.98', '0.94', '0.95'], ['0.99', '0.91', '0.95']] | column | ['accuracy', 'accuracy', 'accuracy'] | [' AMT & GCS training data AMT + GCS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Simple</th> <th>Complex</th> <th>All</th> </tr> </thead> <tbody> <tr> <td>Test data || AMT training data AMT</td> <td>0.99</td> <td>0.78</td> <td>0.89</td> </tr> <tr> ... | Table 7 | table_7 | D19-1122 | 5 | emnlp2019 | Because the two subsections – AMT and GCS – are slightly different despite being obtained using the same questionnaire (see Table 3), we test whether this difference influences the relation extraction models. We evaluate models trained using training data from one source and tested using data from the other source... | [2, 2, 2, 2, 1, 1, 1] | ['Because the two subsections – AMT and GCS – are slightly different despite being obtained using the same questionnaire (see Table 3), we test whether this difference influences the relation extraction models.', 'We evaluate models trained using training data from one source and tested using data from the other s... | [None, None, None, None, None, [' GCS training data AMT', ' GCS training data GCS', ' GCS training data AMT + GCS', ' AMT training data AMT', ' AMT training data GCS', ' AMT training data AMT + GCS'], [' Simple', ' Complex', ' AMT & GCS training data AMT + GCS']] | 1 |
D19-1126table_2 | Performance (F1 Score) on ATIS Dataset | 2 | [['Approach', 'AttRNN (upper bound)'], ['Approach', 'FT-AttRNN'], ['Approach', 'FT-Lr-AttRNN'], ['Approach', 'FT-Cp-AttRNN'], ['Approach', 't-ProgModel'], ['Approach', 'c-ProgModel']] | 1 | [[' Batch 0'], ['Batch 1'], ['Batch 2'], ['Batch 3'], ['Batch 4']] | [['92.12', '92.89', '93.04', '93.56', '95.13'], ['', '91.85', '89.98', '91.25', '88.03'], ['', '91.96', '86.46', '88.03', '86.58'], ['92.12', '92.1', '90.06', '91.98', '89.67'], ['', '92.33', '92.43', '92.57', '92.58'], ['', '92.4', '92.64', '92.71', '93.91']] | column | ['F1', 'F1', 'F1', 'F1', 'F1'] | ['t-ProgModel', 'c-ProgModel'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Batch 0</th> <th>Batch 1</th> <th>Batch 2</th> <th>Batch 3</th> <th>Batch 4</th> </tr> </thead> <tbody> <tr> <td>Approach || AttRNN (upper bound)</td> <td>92.12</td> <td>92... | Table 2 | table_2 | D19-1126 | 4 | emnlp2019 | 3.2 Main Results. Table 2 show the F1 score of slot filling performance comparison results on ATIS dataset. The results show that ProgModel consistently outperforms AttRNN, where the improvement gain is up to 4.24% in ATIS. As expected, ProgModel continuously improves performance with more and more new batches of traini... | [2, 1, 1, 1, 1, 2, 1, 1, 1] | ['3.2 Main Results.', 'Table 2 show the F1 score of slot filling performance comparison results on ATIS dataset.', 'The results show that ProgModel consistently outperforms AttRNN, where the improvement gain is up to 4.24% in ATIS.', 'As expected, ProgModel continuously improves performance with more and more new batche... | [None, None, ['t-ProgModel', 'c-ProgModel', 'AttRNN (upper bound)', ' Batch 0'], ['t-ProgModel', 'c-ProgModel', ' Batch 0', 'Batch 1', 'Batch 2', 'Batch 3', 'Batch 4'], ['FT-Cp-AttRNN', 't-ProgModel', 'c-ProgModel'], ['FT-AttRNN', 'FT-Lr-AttRNN'], ['FT-Cp-AttRNN'], ['FT-Cp-AttRNN', ' Batch 0', 'Batch 1', 'Batch 2', 'Ba... | 1 |
D19-1131table_2 | Benchmark classifier results under each data condition using the oos-train (top half) and oos-threshold (bottom half) prediction methods. | 2 | [['Classifier', 'oos-train FastText'], ['Classifier', 'oos-train SVM'], ['Classifier', 'oos-train CNN'], ['Classifier', 'oos-train DialogFlow'], ['Classifier', 'oos-train Rasa'], ['Classifier', 'oos-train MLP'], ['Classifier', 'oos-train BERT'], ['Classifier', 'oos-threshold SVM'], ['Classifier', 'oos-threshold FastText... | 2 | [['In-Scope Accuracy', 'Full'], ['In-Scope Accuracy', 'Small'], ['In-Scope Accuracy', ' Imbal'], ['In-Scope Accuracy', ' OOS+'], ['Out-Of-Scope Recall', 'Full'], ['Out-Of-Scope Recall', 'Small'], ['Out-Of-Scope Recall', ' Imbal'], ['Out-Of-Scope Recall', ' OOS+']] | [['89', '84.5', '87.2', '89.2', '9.7', '23.2', '12.2', '32.2'], ['91', '89.6', '89.9', '90.1', '14.5', '18.6', '16', '29.8'], ['91.2', '88.9', '89.1', '91', '18.9', '22.2', '19', '34.2'], ['91.7', '89.4', '90.7', '91.7', '14', '14.1', '15.3', '28.5'], ['91.5', '88.9', '89.2', '90.9', '45.3', '55', '49.6', '66'], ['93.5... | column | ['In-Scope Accuracy', 'In-Scope Accuracy', 'In-Scope Accuracy', 'In-Scope Accuracy', 'Out-Of-Scope Recall', 'Out-Of-Scope Recall', 'Out-Of-Scope Recall', 'Out-Of-Scope Recall'] | ['Classifier'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>In-Scope Accuracy || Full</th> <th>In-Scope Accuracy || Small</th> <th>In-Scope Accuracy || Imbal</th> <th>In-Scope Accuracy || OOS+</th> <th>Out-Of-Scope Recall || Full</th> <th>Out-Of-Sc... | Table 2 | table_2 | D19-1131 | 4 | emnlp2019 | 4 Results . 4.1 Results with oos-train . Table 2 presents results for all models across the four variations of the dataset. First, BERT is consistently the best approach for in-scope, followed by MLP. Second, out-of-scope query performance is much lower than in-scope across all methods. Training on less data (Small and... | [0, 2, 1, 1, 1, 1, 1, 2, 2, 2] | ['4 Results .', '4.1 Results with oos-train .', 'Table 2 presents results for all models across the four variations of the dataset.', 'First, BERT is consistently the best approach for in-scope, followed by MLP.', 'Second, out-of-scope query performance is much lower than in-scope across all methods.', 'Training on les... | [None, None, None, ['oos-train BERT', 'oos-threshold BERT'], ['Out-Of-Scope Recall', 'In-Scope Accuracy'], ['In-Scope Accuracy', 'Small', ' Imbal'], ['Out-Of-Scope Recall', 'Small', ' Imbal'], None, None, None] | 1 |
D19-1131table_3 | Results of oos-binary experiments on OOS+, where we compare performance of undersampling (under) and augmentation using sentences from Wikipedia (wiki aug). The wiki aug approach was too large for the DialogFlow and Rasa classifiers. | 2 | [['Classifier', 'DialogFlow'], ['Classifier', 'Rasa'], ['Classifier', 'FastText'], ['Classifier', 'SVM'], ['Classifier', 'CNN'], ['Classifier', 'MLP'], ['Classifier', 'BERT']] | 2 | [['In-Scope Accuracy', 'under'], ['In-Scope Accuracy', 'wiki aug'], ['Out-of-Scope Recall', 'under'], ['Out-of-Scope Recall', 'wiki aug']] | [['84.7', ' —', '37.3', ' —'], ['87.5', ' —', '37.7', ' —'], ['88.1', '87', '22.7', '31.4'], ['88.4', '89.3', '32.2', '37.7'], ['89.8', '90.1', '25.6', '39.7'], ['90.1', '92.9', '52.8', '32.4'], ['94.4', '96', '46.5', '40.4']] | column | ['In-Scope Accuracy', 'In-Scope Accuracy', 'Out-of-Scope Recall', 'Out-of-Scope Recall'] | ['Out-of-Scope Recall'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>In-Scope Accuracy || under</th> <th>In-Scope Accuracy || wiki aug</th> <th>Out-of-Scope Recall || under</th> <th>Out-of-Scope Recall || wiki aug</th> </tr> </thead> <tbody> <tr> <td>Class... | Table 3 | table_3 | D19-1131 | 4 | emnlp2019 | 4.3 Results with oos-binary. Table 3 compares classifier performance using the oos-binary scheme. In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table 2. However, out-of-scope recall improves... | [2, 1, 1, 1, 1] | ['4.3 Results with oos-binary.', 'Table 3 compares classifier performance using the oos-binary scheme.', 'In-scope accuracy suffers for all models using the undersampling scheme when compared to training on the full dataset using the oos-train and oos-threshold approaches shown in Table 2.', 'However, out-of-scope reca... | [None, None, ['In-Scope Accuracy', 'Out-of-Scope Recall'], ['Out-of-Scope Recall'], ['Out-of-Scope Recall', 'wiki aug']] | 1 |
D19-1132table_1 | Activity, Entity F1 results reported by previous work (rows 1-4 from Serban et al. (2017a)), and the All-operations and AutoAugment models. | 1 | [['LSTM'], ['HRED'], ['VHRED'], ['VHRED (w/ attn.)'], ['All-operations'], ['Input-aware'], ['Input-agnostic']] | 1 | [['Activity F1'], ['Entity F1']] | [['1.18', '0.87'], ['4.34', '2.22'], ['4.63', '2.53'], ['5.94', '3.52'], ['6.53', '3.79'], ['7.04', '3.9'], ['7.02', '4']] | column | ['Activity F1', 'Entity F1'] | ['All-operations', 'Input-aware', 'Input-agnostic'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Activity F1</th> <th>Entity F1</th> </tr> </thead> <tbody> <tr> <td>LSTM</td> <td>1.18</td> <td>0.87</td> </tr> <tr> <td>HRED</td> <td>4.34</td> <td>2.22</td> </tr... | Table 1 | table_1 | D19-1132 | 4 | emnlp2019 | 5 Results and Analysis . Automatic Results: Table 1 shows that all data augmentation approaches (last 3 rows) improve statistically significantly (p < 0.01) over the strongest baseline VHRED (w/ attention). Moreover, our input-agnostic AutoAugment is statistically significantly (p < 0.01) better (on Activity and Ent... | [2, 1, 1] | ['5 Results and Analysis .', 'Automatic Results: Table 1 shows that all data augmentation approaches (last 3 rows) improve statistically significantly (p < 0.01) over the strongest baseline VHRED (w/ attention).', 'Moreover, our input-agnostic AutoAugment is statistically significantly (p < 0.01) better (on Activity... | [None, ['All-operations', 'Input-aware', 'Input-agnostic', 'VHRED (w/ attn.)'], ['Input-agnostic', 'Activity F1', 'Entity F1']] | 1 |
D19-1140table_2 | Classification results (F1) obtained with: i) automatic English translations by three models (Generic, Reinforce, MO-Reinforce), and ii) gold-standard English (English) and untranslated German/Italian (Original) tweets. | 1 | [['Generic'], ['Reinforce'], ['MO-Reinforce'], ['English'], ['Original']] | 2 | [['De - En', '5%'], ['De - En', '100%'], [' It - En', '5%'], ['It - En', '100%']] | [['79.7', '83.2', '78.2', '81.6'], ['80.4', '83.7', '77.8', '82.8'], ['80.9', '84.4', '80.3', '84.5'], ['85.1', '85.1', '85.1', '85.1'], ['74.4', '74.4', '75.6', '75.6']] | column | ['F1', 'F1', 'F1', 'F1'] | ['Reinforce', 'MO-Reinforce'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>De - En || 5%</th> <th>De - En || 100%</th> <th>It - En || 5%</th> <th>It - En || 100%</th> </tr> </thead> <tbody> <tr> <td>Generic</td> <td>79.7</td> <td>83.2</td> <td>78.... | Table 2 | table_2 | D19-1140 | 4 | emnlp2019 | 4 Results and Discussion. Table 2 shows our classification results, presenting the F1 scores obtained by the different MT-based approaches in the two training conditions. When NMT is trained on 100% of the parallel data, for both languages Reinforce produces translations that lead to classification improvements over th... | [2, 1, 1, 1] | ['4 Results and Discussion.', 'Table 2 shows our classification results, presenting the F1 scores obtained by the different MT-based approaches in the two training conditions.', 'When NMT is trained on 100% of the parallel data, for both languages Reinforce produces translations that lead to classification improvements... | [None, None, ['100%', 'Generic', 'Reinforce'], ['Original', 'English']] | 1 |
D19-1151table_2 | Models’ performance on all words and OOV words per language. For Vietnamese, Náplava et al. (2018) reports 2.45% for WER on a much larger dataset (∼25M words), which is significantly better than our model. | 3 | [['System', 'Arabic', 'Pasha et al. (2014)'], ['System', 'Arabic', 'Zalmout and Habash (2017)'], ['System', 'Arabic', 'LSTM'], ['System', 'Arabic', 'TCN'], ['System', 'Arabic', 'BiLSTM'], ['System', 'Arabic', 'A-TCN'], ['System', 'Vietnamese', 'Naplava et al. (2018)'], ['System', 'Vietnamese', 'LSTM'], ['System', 'Viet... | 1 | [['DER'], ['WER'], ['OOV']] | [['-', '12.3%', '29.8%'], ['-', '8.3%', '20.2%'], ['19.2%', '51.9%', '86.6%'], ['17.5%', '47.6%', '87.2%'], ['2.8%', '8.2%', '33.6%'], ['3.0%', '10.2%', '36.3%'], ['11.2%', '44.5%', '-'], ['13.3%', '39.5%', '33.1%'], ['11.1%', '32.9%', '32.4%'], ['2.6%', '7.8%', '15.3%'], ['2.5%', '7.7%', '15.3%'], ['-', '4.6%', '-'], ... | column | ['accuracy', 'accuracy', 'accuracy'] | ['BiLSTM', 'A-TCN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DER</th> <th>WER</th> <th>OOV</th> </tr> </thead> <tbody> <tr> <td>System || Arabic || Pasha et al. (2014)</td> <td>-</td> <td>12.3%</td> <td>29.8%</td> </tr> <tr> <t... | Table 2 | table_2 | D19-1151 | 4 | emnlp2019 | Comparison to Prior Work: Table 2 shows the performance of previous models trained on the same data. For Arabic, both A-TCN and BiLSTM provide significantly better performance than MADAMIRA (Pasha et al., 2014), which is a morphological disambiguation tool for Arabic. The performance of Zalmout and Habash (2017)’s mode... | [1, 1, 1, 2, 2, 1, 2, 1, 2, 2] | ['Comparison to Prior Work: Table 2 shows the performance of previous models trained on the same data.', 'For Arabic, both A-TCN and BiLSTM provide significantly better performance than MADAMIRA (Pasha et al., 2014), which is a morphological disambiguation tool for Arabic.', 'The performance of Zalmout and Habash (2017... | [None, ['Arabic', 'A-TCN', 'BiLSTM', 'Pasha et al. (2014)'], ['Zalmout and Habash (2017)', 'BiLSTM', 'A-TCN'], ['Pasha et al. (2014)', 'Zalmout and Habash (2017)'], None, ['Vietnamese', 'Naplava et al. (2018)', 'A-TCN', 'BiLSTM'], None, ['Yoruba', 'BiLSTM', 'A-TCN', 'Orife (2018)'], ['Orife (2018)'], None] | 1 |
D19-1168table_3 | Comparison with the Transformer model on German-English document-level translation. And the p-value between Ours and Baseline is less than 0.01. | 2 | [['Model', 'Baseline'], ['Model', 'Ours']] | 1 | [['tst13'], ['tst14'], ['Avg']] | [['27.89', '23.75', '25.82'], ['28.58', '24.85', '26.72']] | column | ['BLEU', 'BLEU', 'BLEU'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>tst13</th> <th>tst14</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>27.89</td> <td>23.75</td> <td>25.82</td> </tr> <tr> <td>Model || Our... | Table 3 | table_3 | D19-1168 | 6 | emnlp2019 | 4.2 Different Language Pairs. In this paper, we aim to propose a robust document context extraction model. To achieve this goal, we perform experiments on different language pairs to further illustrate the effectiveness of our proposed HM-GDC model. Table 3 shows the performance of our model on German-English document-... | [2, 2, 2, 1, 2, 1, 2] | ['4.2 Different Language Pairs.', 'In this paper, we aim to propose a robust document context extraction model.', 'To achieve this goal, we perform experiments on different language pairs to further illustrate the effectiveness of our proposed HM-GDC model.', 'Table 3 shows the performance of our model on German-Englis... | [None, None, None, ['Baseline', 'Ours'], None, ['Ours'], None] | 1 |
D19-1170table_1 | The performance of MTMSN and other competing approaches on DROP dev and test set. | 2 | [['Model', 'Heuristic Baseline (Dua et al., 2019)'], ['Model', 'Semantic Role Labeling (Carreras and Marquez, 2004)'], ['Model', 'BiDAF (Seo et al., 2017)'], ['Model', 'QANet+ELMo (Yu et al., 2018)'], ['Model', 'BERTBASE (Devlin et al., 2019)'], ['Model', 'NAQANet (Dua et al., 2019)'], ['Model', 'NABERTBASE'], ['Model'... | 2 | [['Dev', 'EM'], ['Dev', 'F1'], ['Test', 'EM'], ['Test', 'F1']] | [['4.28', '8.07', '4.18', '8.59'], ['11.03', '13.67', '10.87', '13.35'], ['26.06', '28.85', '24.75', '27.49'], ['27.71', '30.33', '27.08', '29.67'], ['30.10', '33.36', '29.45', '32.70'], ['46.20', '49.24', '44.07', '47.01'], ['55.82', '58.75', '-', '-'], ['64.61', '67.35', '-', '-'], ['68.17', '72.81', '-', '-'], ['76.... | column | ['EM', 'F1', 'EM', 'F1'] | ['MTMSNLARGE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || EM</th> <th>Dev || F1</th> <th>Test || EM</th> <th>Test || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Heuristic Baseline (Dua et al., 2019)</td> <td>4.28</td> <td>8... | Table 1 | table_1 | D19-1170 | 6 | emnlp2019 | Table 1 shows the performance of our model and other competitive approaches on the development and test sets. MTMSN outperforms all existing approaches by a large margin, and creates new state-of-the-art results by achieving an EM score of 75.85 and a F1 score of 79.88 on the test set. Since our best model utilizes BER... | [1, 1, 2, 1, 1] | ['Table 1 shows the performance of our model and other competitive approaches on the development and test sets.', 'MTMSN outperforms all existing approaches by a large margin, and creates new state-of-the-art results by achieving an EM score of 75.85 and a F1 score of 79.88 on the test set.', 'Since our best model util... | [None, ['EM', 'F1', 'Test', 'MTMSNLARGE'], ['MTMSNLARGE', 'NABERTLARGE'], ['MTMSNLARGE', 'NABERTLARGE', 'EM', 'F1', 'Test'], ['Human Performance (Dua et al., 2019)', 'Heuristic Baseline (Dua et al., 2019)', 'BERTBASE (Devlin et al., 2019)', 'NAQANet (Dua et al., 2019)', 'F1', 'Test']] | 1 |
D19-1170table_4 | Performance breakdown of NABERTLARGE and MTMSNLARGE by gold answer types. | 2 | [['Type', 'Date'], ['Type', 'Number'], ['Type', 'Single Span'], ['Type', 'Multi Span']] | 2 | [['Metric', '(%)'], ['NABERT', 'EM'], ['NABERT', 'F1'], ['MTMSN', 'EM'], ['MTMSN', 'F1']] | [['1.6', '55.7', '60.8', '55.7', '69'], ['61.9', '63.8', '64', '80.9', '81.1'], ['31.7', '75.9', '80.6', '77.5', '82.8'], ['4.8', '0', '22.7', '25.1', '62.8']] | column | ['(%)', 'EM', 'F1', 'EM', 'F1'] | ['Multi Span'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metric || (%)</th> <th>NABERT || EM</th> <th>NABERT || F1</th> <th>MTMSN || EM</th> <th>MTMSN || F1</th> </tr> </thead> <tbody> <tr> <td>Type || Date</td> <td>1.6</td> <td>... | Table 4 | table_4 | D19-1170 | 7 | emnlp2019 | Performance breakdown. We now provide a quantitative analysis by showing performance breakdown on the development set. Table 4 shows that our gains mainly come from the most frequent number type, which requires various types of symbolic, discrete reasoning operations. Moreover, significant improvements are also obtaine... | [2, 2, 1, 1, 1] | ['Performance breakdown.', 'We now provide a quantitative analysis by showing performance breakdown on the development set.', 'Table 4 shows that our gains mainly come from the most frequent number type, which requires various types of symbolic, discrete reasoning operations.', 'Moreover, significant improvements are a... | [None, None, ['Number', 'Metric'], ['Multi Span', 'F1'], ['Multi Span']] | 1 |
D19-1171table_5 | Answer selection performances (averaged over five datasets) when trained with question-answer pairs vs. WS-TB. | 2 | [['Model', 'BiLSTM'], ['Model', 'COALA']] | 1 | [['Supervised'], ['WS-TB'], ['WS-TB (all)']] | [['35.3', '37.5', '42.5'], ['44.7', '45.2', '44.5']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['BiLSTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Supervised</th> <th>WS-TB</th> <th>WS-TB (all)</th> </tr> </thead> <tbody> <tr> <td>Model || BiLSTM</td> <td>35.3</td> <td>37.5</td> <td>42.5</td> </tr> <tr> <td>Mode... | Table 5 | table_5 | D19-1171 | 8 | emnlp2019 | The results are given in Table 5 where we report the accuracy (P@1), averaged over the five datasets. Interestingly, we do not observe large differences between supervised training and WS-TB for both models when they use the same number of positive training instances (ranging from 2.8k to 5.8k). Thus, using title-body ... | [1, 1, 2, 1, 2, 2, 2] | ['The results are given in Table 5 where we report the accuracy (P@1), averaged over the five datasets.', 'Interestingly, we do not observe large differences between supervised training and WS-TB for both models when they use the same number of positive training instances (ranging from 2.8k to 5.8k).', 'Thus, using tit... | [None, ['Supervised', 'WS-TB'], None, ['BiLSTM', 'COALA'], ['BiLSTM'], None, ['BiLSTM']] | 1 |
D19-1172table_8 | Results of clarification question generation models. | 2 | [['Model', 'Seq2Seq'], ['Model', 'Transformer'], ['Model', 'The proposed Model']] | 1 | [['Single-Turn'], ['Multi-Turn']] | [['18.84', '31.62'], ['20.69', '44.42'], ['24.04', '45.02']] | column | ['BLEU', 'BLEU'] | ['The proposed Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Single-Turn</th> <th>Multi-Turn</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq</td> <td>18.84</td> <td>31.62</td> </tr> <tr> <td>Model || Transformer</td> <td>20.69... | Table 8 | table_8 | D19-1172 | 7 | emnlp2019 | Clarification Question Generation. Table 8 shows that Seq2Seq achieves low BLEU scores, which indicates its tendency to generate irrelevant text. Transformer achieves higher performance than Seq2Seq. Our proposed coarse-to-fine model demonstrates a new state of the art, improving the current highest baseline result by ... | [2, 1, 1, 1] | ['Clarification Question Generation.', 'Table 8 shows that Seq2Seq achieves low BLEU scores, which indicates its tendency to generate irrelevant text.', 'Transformer achieves higher performance than Seq2Seq.', 'Our proposed coarse-to-fine model demonstrates a new state of the art, improving the current highest baseline... | [None, ['Seq2Seq'], ['Seq2Seq', 'Transformer'], ['The proposed Model']] | 1 |
D19-1184table_2 | Performance on MultiWOZ. MGT is compared to a baseline dual encoder, and an ensemble of dual encoders with an identical number of parameters. All bold-face results are statistically significant to p < 0.01. | 2 | [['Model Name', 'Dual Encoder'], ['Model Name', 'Ensemble (5)'], ['Model Name', 'Multi-Granularity (5)']] | 1 | [['MRR'], ['Hits@1']] | [['79.55', '66.13%'], ['81.53', '69.47%'], ['82.74', '72.18%']] | column | ['MRR', 'Hits@1'] | ['Multi-Granularity (5)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MRR</th> <th>Hits@1</th> </tr> </thead> <tbody> <tr> <td>Model Name || Dual Encoder</td> <td>79.55</td> <td>66.13%</td> </tr> <tr> <td>Model Name || Ensemble (5)</td> <td>... | Table 2 | table_2 | D19-1184 | 6 | emnlp2019 | The results in Table 2 demonstrate the strong performance gains obtained with MGT. With L = 5 granularities, MGT outperforms a similarly sized ensemble of dual encoders. These results demonstrate that explicitly enforcing the policy that makes models learn multiple granularities of representation improves the represent... | [1, 1, 2] | ['The results in Table 2 demonstrate the strong performance gains obtained with MGT.', 'With L = 5 granularities, MGT outperforms a similarly sized ensemble of dual encoders.', 'These results demonstrate that explicitly enforcing the policy that makes models learn multiple granularities of representation improves the r... | [['Multi-Granularity (5)'], ['Multi-Granularity (5)'], None] | 1 |
D19-1184table_5 | Experimental results demonstrating performance on two downstream tasks, without any finetuning of the latent representations. All bold-face results are statistically significant to p < 0.01. | 2 | [['Model Name', 'Dual Encoder'], ['Model Name', 'Ensemble (5)'], ['Model Name', 'Multi-Granularity (5)'], ['Model Name', 'Fine-tuned']] | 2 | [['BoW', 'F-1'], ['DA', 'F-1']] | [['60.13', '19.09'], ['64.11', '22.39'], ['67.51', '22.85'], ['90.33', '28.75']] | column | ['F-1', 'F-1'] | ['Multi-Granularity (5)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BoW || F-1</th> <th>DA || F-1</th> </tr> </thead> <tbody> <tr> <td>Model Name || Dual Encoder</td> <td>60.13</td> <td>19.09</td> </tr> <tr> <td>Model Name || Ensemble (5)</td> ... | Table 5 | table_5 | D19-1184 | 8 | emnlp2019 | The results shown in Table 5 demonstrate that MGT results in more general representations of language, thereby facilitating better transfer. However, there is room for improvement when comparing to models fine-tuned on the downstream task. This suggests that additional measures can be taken to improve the representativ... | [1, 1, 2] | ['The results shown in Table 5 demonstrate that MGT results in more general representations of language, thereby facilitating better transfer.', 'However, there is room for improvement when comparing to models fine-tuned on the downstream task.', 'This suggests that additional measures can be taken to improve the repre... | [['Multi-Granularity (5)'], ['Fine-tuned'], None] | 1 |
D19-1184table_6 | Experimental results demonstrating performance on the downstream task of dialog act prediction, when the model is fine-tuned on all available data. All bold-face results are statistically significant to p < 0.01. | 2 | [['Model Name', 'Random Init'], ['Model Name', 'Dual Encoder'], ['Model Name', 'Ensemble (5)'], ['Model Name', 'Multi-Granularity (5)']] | 1 | [['DA (F-1)']] | [['28.75'], ['32.63'], ['31.71'], ['33.46']] | column | ['DA (F-1)'] | ['Multi-Granularity (5)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DA (F-1)</th> </tr> </thead> <tbody> <tr> <td>Model Name || Random Init</td> <td>28.75</td> </tr> <tr> <td>Model Name || Dual Encoder</td> <td>32.63</td> </tr> <tr> <td>M... | Table 6 | table_6 | D19-1184 | 8 | emnlp2019 | The results in Table 6 demonstrate that MGT learns general representations which effectively transfer to downstream tasks, especially more difficult tasks such as dialog act prediction. Finetuning the latent representations learned by MGT, results in improved performance on dialog act prediction. | [1, 2] | ['The results in Table 6 demonstrate that MGT learns general representations which effectively transfer to downstream tasks, especially more difficult tasks such as dialog act prediction.', 'Finetuning the latent representations learned by MGT, results in improved performance on dialog act prediction.'] | [['Multi-Granularity (5)'], ['Multi-Granularity (5)']] | 1 |
D19-1188table_1 | Quantitative evaluation results (%). | 2 | [['Models', 'SEQ2SEQ'], ['Models', 'CVAE'], ['Models', 'LAED'], ['Models', 'TA-SEQ2SEQ'], ['Models', 'DOM-SEQ2SEQ'], ['Models', 'ADAND (with context para.)'], ['Models', 'ADAND (with topic para.)'], ['Models', 'ADAND (with both)']] | 2 | [['Relevance (%)', 'BLEU'], ['Relevance (%)', 'Average'], ['Relevance (%)', 'Greedy'], ['Relevance (%)', 'Extrema'], [' Informativeness (%)', 'Distinct-1'], [' Informativeness (%)', 'Distinct-2'], [' Informativeness (%)', 'Distinct-3']] | [['0.845', '69.60', '64.94', '45.29', '0.2822', '0.5922', '0.7873'], ['1.546', '71.23', '66.67', '47.14', '0.5465', '1.716', '2.731'], ['0.7545', '69.91', '63.55', '43.12', '0.3890', '0.9165', '1.243'], ['1.465', '72.47', '65.9', '45.19', '0.3593', '0.7994', '1.016'], ['1.189', '74.42', '66.6', '48.47', '0.4977', '1.29... | column | ['BLEU', 'Average', 'Greedy', 'Extrema', 'Distinct-1', 'Distinct-2', 'Distinct-3'] | ['CVAE', 'LAED', 'SEQ2SEQ', 'TA-SEQ2SEQ'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Relevance (%) || BLEU</th> <th>Relevance (%) || Average</th> <th>Relevance (%) || Greedy</th> <th>Relevance (%) || Extrema</th> <th>Informativeness (%) || Distinct-1</th> <th>Informativeness... | Table 1 | table_1 | D19-1188 | 6 | emnlp2019 | 4.4 Overall Performance. Table 1 lists the performance of our system and the comparison systems. CVAE and LAED inject SEQ2SEQ with stochastic latent variable, resulting in more informative responses and better performance on Distinct-{1, 2, 3}. TA-SEQ2SEQ incorporates SEQ2SEQ with the outsourcing topic information from... | [0, 1, 1, 1, 1, 1, 1, 2] | ['4.4 Overall Performance.', 'Table 1 lists the performance of our system and the comparison systems.', 'CVAE and LAED inject SEQ2SEQ with stochastic latent variable, resulting in more informative responses and better performance on Distinct-{1, 2, 3}.', 'TA-SEQ2SEQ incorporates SEQ2SEQ with the outsourcing topic infor... | [None, None, ['CVAE', 'LAED', 'SEQ2SEQ'], ['TA-SEQ2SEQ', 'SEQ2SEQ'], ['TA-SEQ2SEQ'], ['DOM-SEQ2SEQ'], ['DOM-SEQ2SEQ'], None] | 1 |
D19-1192table_1 | The result of rewriting quality. | 1 | [['Last Utterance'], ['Last Utterance + Context'], ['Last Utterance + Keyword'], ['CRN'], ['CRN + RL']] | 1 | [['BLEU-4']] | [['34.2'], ['37.1'], ['49.8'], ['50.9'], ['54.2']] | column | ['BLEU-4'] | ['CRN + RL', 'CRN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>Last Utterance</td> <td>34.2</td> </tr> <tr> <td>Last Utterance + Context</td> <td>37.1</td> </tr> <tr> <td>Last Utterance + K... | Table 1 | table_1 | D19-1192 | 7 | emnlp2019 | Table 1 shows the experiment result, which indicates that our rewriting method outperforms heuristic methods. Moreover, a 54.2 BLEU-4 score means that the rewritten sentences are very similar to the human references. CRN-RL has a higher score than CRN-Pre-train on BLEU4, it proves reinforcement learning promotes our mo... | [1, 1, 1] | ['Table 1 shows the experiment result, which indicates that our rewriting method outperforms heuristic methods.', 'Moreover, a 54.2 BLEU-4 score means that the rewritten sentences are very similar to the human references.', 'CRN-RL has a higher score than CRN-Pre-train on BLEU4, it proves reinforcement learning promote... | [['CRN', 'CRN + RL', 'Last Utterance', 'Last Utterance + Keyword', 'Last Utterance + Context'], ['BLEU-4'], ['CRN + RL', 'CRN', 'BLEU-4']] | 1 |
D19-1193table_2 | the IMN model and previous methods on PERSONA-CHAT dataset without using personas. All the results except ours are copied from Zhang et al. (2018). | 1 | [['IR baseline'], ['Starspace'], ['Profile'], ['KV Profile'], ['IMN']] | 1 | [['hits@1'], ['MRR']] | [['21.4', '-'], ['31.8', '-'], ['31.8', '-'], ['34.9', '-'], ['63.8', '75.8']] | column | ['hits@1', 'MRR'] | ['IMN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>hits@1</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>IR baseline</td> <td>21.4</td> <td>-</td> </tr> <tr> <td>Starspace</td> <td>31.8</td> <td>-</td> </tr> ... | Table 2 | table_2 | D19-1193 | 8 | emnlp2019 | 6.4 Experimental Results. Table 2 presents the evaluation results of our reproduced IMN model (Gu et al., 2019) and previous methods on PERSONA-CHAT dataset without using personas. It can be seen that the IMN model outperformed other models on this dataset by a margin larger than 28.9% in terms of hits@1. As introduced... | [2, 1, 1, 2] | ['6.4 Experimental Results.', 'Table 2 presents the evaluation results of our reproduced IMN model (Gu et al., 2019) and previous methods on PERSONA-CHAT dataset without using personas.', 'It can be seen that the IMN model outperformed other models on this dataset by a margin larger than 28.9% in terms of hits@1.', 'As... | [None, ['IMN', 'IR baseline', 'Starspace', 'Profile', 'KV Profile'], ['IMN', 'IR baseline', 'Starspace', 'Profile', 'KV Profile', 'hits@1'], None] | 1 |
D19-1193table_3 | Performance of the proposed and previous methods on the PERSONA-CHAT under various persona configurations. The meanings of “Self Persona”, “Their Persona”, “Original”, and “revised” can be found in Section 6.1. All results except ours are copied from Zhang et al. (2018); Mazaré et al. (2018). Numbers in parentheses in... | 1 | [['IR baseline'], ['Starspace'], ['Profile'], ['KV Profile'], ['FT-PC'], ['IMNctx'], ['IMNutr'], ['DIM']] | 3 | [['Self Persona', 'Original', 'hits@1'], ['Self Persona', 'Original', 'MRR'], ['Self Persona', 'Revised', 'hits@1'], ['Self Persona', 'Revised', 'MRR'], ['Their Persona', 'Original', 'hits@1'], ['Their Persona', 'Original', 'MRR'], ['Their Persona', 'Revised', 'hits@1'], ['Their Persona', 'Revised', 'MRR']] | [['41.0 (+19.6)', '-', '20.7 (-0.7)', '-', '18.1 (-3.3)', '-', '18.1 (-3.3)', '-'], ['48.1 (+16.3)', '-', '32.2 (+0.4)', '-', '24.5 (-7.3)', '-', '26.1 (-5.7)', '-'], ['47.3 (+15.5)', '-', '35.4 (+3.6)', '-', '28.3 (-3.5)', '-', '29.4 (-2.4)', '-'], ['51.1 (+16.2)', '-', '35.1 (+0.2)', '-', '29.1 (-5.8)', '-', '28.9 (-... | column | ['hits@1', 'MRR', 'hits@1', 'MRR', 'hits@1', 'MRR', 'hits@1', 'MRR'] | ['DIM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Self Persona || Original || hits@1</th> <th>Self Persona || Original || MRR</th> <th>Self Persona || Revised || hits@1</th> <th>Self Persona || Revised || MRR</th> <th>Their Persona || Original |... | Table 3 | table_3 | D19-1193 | 8 | emnlp2019 | Table 3 presents the evaluation results of our proposed and previous methods on PERSONACHAT under various persona configurations. The t-test shows that the differences between our proposed models, i.e., IMNutt and DIM, and the baseline model, i.e. IMNctx, were both statistically significant with p-value < 0.01. We can ... | [1, 2, 2, 1, 1, 1, 2, 1] | ['Table 3 presents the evaluation results of our proposed and previous methods on PERSONACHAT under various persona configurations.', 'The t-test shows that the differences between our proposed models, i.e., IMNutt and DIM, and the baseline model, i.e.', 'IMNctx, were both statistically significant with p-value < 0.01.... | [None, None, None, ['IMNctx', 'IMNutr', 'Self Persona', 'Original', 'hits@1'], ['DIM', 'IMNctx', 'hits@1'], ['DIM', 'FT-PC', 'Self Persona', 'Revised', 'hits@1'], None, ['DIM', 'KV Profile', 'Self Persona', 'Original', 'hits@1']] | 1 |
D19-1194table_6 | The results of responses generation with BLEU, perplexity (PPL), distinct scores (1-gram to 4-gram). | 2 | [['Model', 'Seq2Seq'], ['Model', 'MemNet'], ['Model', 'MemNet + multi'], ['Model', 'TAware'], ['Model', 'TAware + multi'], ['Model', 'KAware'], ['Model', 'Qadpt'], ['Model', 'Qadpt + multi'], ['Model', 'Qadpt + TAware']] | 2 | [['HGZHZ', 'BLEU'], ['HGZHZ', 'PPL'], ['HGZHZ', 'dist-1'], ['HGZHZ', 'dist-2'], ['HGZHZ', 'dist-3'], ['HGZHZ', 'dist-4'], ['Friends', 'BLEU'], ['Friends', 'PPL'], ['Friends', 'dist-1'], ['Friends', 'dist-2'], ['Friends', 'dist-3'], ['Friends', 'dist-4']] | [['14.20', '94.48', '0.008', '0.039', '0.092', '0.150', '15.46', '73.23', '0.004', '0.016', '0.026', '0.032'], ['15.73', '88.29', '0.012', '0.062', '0.150', '0.240', '14.61', '67.58', '0.005', '0.023', '0.040', '0.049'], ['15.88', '86.76', '0.010', '0.058', '0.138', '0.224', '12.97', '54.67', '0.006', '0.022', '0.032',... | column | ['BLEU', 'PPL', 'dist-1', 'dist-2', 'dist-3', 'dist-4', 'BLEU', 'PPL', 'dist-1', 'dist-2', 'dist-3', 'dist-4'] | ['Qadpt', 'Qadpt + multi'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HGZHZ || BLEU</th> <th>HGZHZ || PPL</th> <th>HGZHZ || dist-1</th> <th>HGZHZ || dist-2</th> <th>HGZHZ || dist-3</th> <th>HGZHZ || dist-4</th> <th>Friends || BLEU</th> <th>Friends ||... | Table 6 | table_6 | D19-1194 | 8 | emnlp2019 | Table 6 presents the BLEU-2 scores (as recommended in the prior work (Liu et al., 2016)), perplexity (PPL), and distinct scores. The results show that all models have similar levels of BLEU2 and PPL, while Qadpt+multi has slightly better distinct scores. The results suggest the same claim as Liu et al.(2016) that BLEU ... | [1, 1, 2] | ['Table 6 presents the BLEU-2 scores (as recommended in the prior work (Liu et al., 2016)), perplexity (PPL), and distinct scores.', 'The results show that all models have similar levels of BLEU2 and PPL, while Qadpt+multi has slightly better distinct scores.', 'The results suggest the same claim as Liu et al.(2016) th... | [['BLEU', 'PPL', 'dist-1', 'dist-2', 'dist-3', 'dist-4'], ['BLEU', 'PPL', 'Qadpt + multi'], ['BLEU']] | 1 |
D19-1197table_2 | Automatic evaluation results for the task of sentiment response generation. Numbers in bold mean that the improvement over the best performing baseline is statistically significant (t-test, with p-value< 0.01). | 1 | [['Seq2Seq'], ['CVAE'], ['ECM'], ['S2S-Temp-MLE'], ['S2S-Temp-None'], ['S2S-Temp-50%'], ['S2S-Temp']] | 1 | [['BLEU-1'], ['ROUGE-L'], ['AVERAGE'], ['EXTREME'], ['GREEDY']] | [['0.065', '0.118', '0.726', '0.474', '0.582'], ['0.088', '0.081', '0.727', '0.408', '0.563'], ['0.051', '0.102', '0.708', '0.462', '0.559'], ['0.103', '0.124', '0.732', '0.458', '0.593'], ['0.078', '0.089', '0.687', '0.479', '0.501'], ['0.102', '0.121', '0.691', '0.491', '0.586'], ['0.106', '0.130', '0.738', '0.492', ... | column | ['BLEU-1', 'ROUGE-L', 'AVERAGE', 'EXTREME', 'GREEDY'] | ['S2S-Temp'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>ROUGE-L</th> <th>AVERAGE</th> <th>EXTREME</th> <th>GREEDY</th> </tr> </thead> <tbody> <tr> <td>Seq2Seq</td> <td>0.065</td> <td>0.118</td> <td>0.726</td... | Table 2 | table_2 | D19-1197 | 7 | emnlp2019 | 5.4 Evaluation Results. Table 2 report the results of automatic evaluation on sentiment response generation task. We can see that S2S-Temp outperforms all baseline models in terms of all metrics, and the improvements are statistically significant (t-test with p-value< 0.01). The results demonstrate that when only limit... | [2, 1, 1, 2, 1, 1, 2, 2] | ['5.4 Evaluation Results.', 'Table 2 report the results of automatic evaluation on sentiment response generation task.', 'We can see that S2S-Temp outperforms all baseline models in terms of all metrics, and the improvements are statistically significant (t-test with p-value< 0.01).', 'The results demonstrate that when... | [None, None, ['S2S-Temp'], ['S2S-Temp'], ['S2S-Temp-None', 'S2S-Temp-50%', 'S2S-Temp'], ['S2S-Temp-None', 'CVAE'], None, ['S2S-Temp', 'S2S-Temp-MLE']] | 1 |
D19-1199table_4 | Consistency comparison between human inference and model predictions on overlapping rate (%). ⋆ denotes p-value < 0.01 in the significance test against all the baselines. | 2 | [['Model', 'Preceding'], ['Model', 'Subsequent'], ['Model', 'DRNN'], ['Model', 'SIRNN'], ['Model', 'W2W']] | 2 | [['Len-5', 'p@1'], ['Len-5', 'p@2'], ['Len-5', 'p@3'], ['Len-5', 'Acc.'], ['Len-10', 'p@1'], ['Len-10', 'p@2'], ['Len-10', 'p@3'], ['Len-10', 'Acc.'], ['Len-15', 'p@1'], ['Len-15', 'p@2'], ['Len-15', 'p@3'], ['Len-15', 'Acc.']] | [['63.50', '90.05', '98.83', '40.46', '56.84', '80.15', '91.86', '21.06', '54.97', '77.19', '88.75', '13.08'], ['61.03', '88.86', '98.54', '40.25', '54.57', '73.60', '87.26', '20.26', '53.07', '69.85', '81.93', '12.79'], ['72.75', '93.21', '99.24', '58.18', '65.58', '85.85', '94.92', '34.47', '62.60', '82.68', '92.14',... | column | ['p@1', 'p@2', 'p@3', 'Acc.', 'p@1', 'p@2', 'p@3', 'Acc.', 'p@1', 'p@2', 'p@3', 'Acc.'] | ['W2W'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Len-5 || p@1</th> <th>Len-5 || p@2</th> <th>Len-5 || p@3</th> <th>Len-5 || Acc.</th> <th>Len-10 || p@1</th> <th>Len-10 || p@2</th> <th>Len-10 || p@3</th> <th>Len-10 || Acc.</th> ... | Table 4 | table_4 | D19-1199 | 7 | emnlp2019 | Table 4 shows the consistency between human inference and model predictions. W2W also outperforms the baselines with a larger margin on longer conversation scenarios, which is consistent with the phenomenon of automatic evaluation. The advantage on unlabeled data of our W2W model demonstrates the superiority for detect... | [1, 1, 2] | ['Table 4 shows the consistency between human inference and model predictions.', 'W2W also outperforms the baselines with a larger margin on longer conversation scenarios, which is consistent with the phenomenon of automatic evaluation.', 'The advantage on unlabeled data of our W2W model demonstrates the superiority fo... | [None, ['W2W'], ['W2W']] | 1 |
D19-1201table_2 | The results of both automatic evaluations and human evaluation. Read., Info., P-score refer to readability, informativeness, personalization scores. The kappa value between human annotators is 0.41, which indicates a moderate inter-rater. | 2 | [['Models', 'Seq2Seq'], ['Models', 'Persona'], ['Models', 'Adaptation'], ['Models', 'CVAE'], ['Models', 'RL-Persona'], ['Models', 'DiaWAE-GMD'], ['Models', 'PersonaWAE'], ['Models', 'ground-truth']] | 2 | [['Embedding Metrics', 'Extrema'], ['Embedding Metrics', 'Greedy'], ['Embedding Metrics', 'Average'], ['BLEU', 'Recall'], ['BLEU', 'Precision'], ['BLEU', 'F1'], ['Human Evaluation', 'Read.'], ['Human Evaluation', 'Info.'], ['Human Evaluation', 'P-score']] | [['0.1640', '0.4098', '0.4911', '0.1646', '0.1646', '0.1646', '2.30', '2.16', '0.49'], ['0.1631', '0.3982', '0.4871', '0.1646', '0.1646', '0.1646', '2.31', '2.15', '0.50'], ['0.1722', '0.4038', '0.5113', '0.1689', '0.1689', '0.1689', '2.29', '1.93', '0.47'], ['0.2643', '0.2911', '0.5759', '0.1931', '0.1636', '0.1771', ... | column | ['Extrema', 'Greedy', 'Average', 'Recall', 'Precision', 'F1', 'Read.', 'Info.', 'P-score'] | ['PersonaWAE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Embedding Metrics || Extrema</th> <th>Embedding Metrics || Greedy</th> <th>Embedding Metrics || Average</th> <th>BLEU || Recall</th> <th>BLEU || Precision</th> <th>BLEU || F1</th> <th>H... | Table 2 | table_2 | D19-1201 | 6 | emnlp2019 | Incorporating personalization in the conditional GMD prior is more effective than combing personalization in decoder. As shown in Table 2, Persona model only achieves comparable results with Seq2Seq in terms of BLEU scores and BOW scores. For PersonaWAE and DiaWAE-GMD, incorporating personalizations in both decoder and... | [2, 1, 1, 1] | ['Incorporating personalization in the conditional GMD prior is more effective than combing personalization in decoder.', 'As shown in Table 2, Persona model only achieves comparable results with Seq2Seq in terms of BLEU scores and BOW scores.', 'For PersonaWAE and DiaWAE-GMD, incorporating personalizations in both dec... | [None, ['Persona', 'BLEU', 'Embedding Metrics'], ['PersonaWAE', 'DiaWAE-GMD', 'Embedding Metrics'], ['PersonaWAE', 'DiaWAE-GMD', 'Recall']] | 1 |
D19-1205table_2 | Comparison of MRR when using dialogue acts of Context-only, Response-only and Crossway fashion | 1 | [['Siamese-PDA-ST'], ['Siamese-PDA-MT (Zhao et al., 2017)'], ['Siamese-ADA (Kumar et al., 2018)']] | 2 | [['DailyDialogue', 'Context-DA (Zhao et al., 2017)'], ['DailyDialogue', 'Response-DA'], ['DailyDialogue', 'Crossway'], ['SwDA', 'Context-DA (Zhao et al., 2017)'], ['SwDA', 'Response-DA'], ['SwDA', 'Crossway']] | [['0.912', '0.900', '0.900', '0.639', '0.649', '0.669'], ['0.921', '0.919', '0.946', '0.698', '0.685', '0.703'], ['0.934', '0.927', '0.956', '0.656', '0.682', '0.719']] | column | ['MRR', 'MRR', 'MRR', 'MRR', 'MRR', 'MRR'] | ['Crossway'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DailyDialogue || Context-DA (Zhao et al., 2017)</th> <th>DailyDialogue || Response-DA</th> <th>DailyDialogue || Crossway</th> <th>SwDA || Context-DA (Zhao et al., 2017)</th> <th>SwDA || Response-... | Table 2 | table_2 | D19-1205 | 7 | emnlp2019 | Crossway vs Response-DA/Context-DA: Although the dialogue acts have been shown to be useful for the response selection task, existing work has only used the dialogue acts of the context. Whereas, in our experiments, we have found that the model that uses the dialogue acts of both context and response outperforms the mo... | [2, 2, 2, 1, 1, 1, 1, 1, 2, 2] | ['Crossway vs Response-DA/Context-DA: Although the dialogue acts have been shown to be useful for the response selection task, existing work has only used the dialogue acts of the context.', 'Whereas, in our experiments, we have found that the model that uses the dialogue acts of both context and response outperforms t... | [None, None, None, None, ['Siamese-ADA (Kumar et al., 2018)', 'Siamese-PDA-ST', 'Context-DA (Zhao et al., 2017)', 'Response-DA', 'Crossway'], ['Context-DA (Zhao et al., 2017)', 'Response-DA'], ['Context-DA (Zhao et al., 2017)', 'Response-DA'], ['Context-DA (Zhao et al., 2017)', 'Response-DA', 'Crossway'], None, None] | 1 |
D19-1208table_2 | Comparison with the semi-supervised image captioning method, “Self-Retrieval” [Liu et al., 2018]. Our method shows improved performance even without Unlabeled-COCO data (denoted as w/o unlabeled) as well as with Unlabeled-COCO (with unlabeled), although our model is not originally proposed for such scenario. | 1 | [['Self-Retrieval [Liu et al., 2018] (w/o unlabeled)'], ['Self-Retrieval [Liu et al., 2018] (with unlabeled)'], ['Baseline (w/o unlabeled)'], ['Ours (w/o unlabeled)'], ['Ours (with unlabeled)']] | 1 | [['BLEU1'], ['BLEU2'], ['BLEU3'], ['BLEU4'], ['ROUGE-L'], ['SPICE'], ['METEOR'], ['CIDEr']] | [['79.8', '62.3', '47.1', '34.9', '56.6', '20.5', '27.5', '114.6'], ['80.1', '63.1', '48', '35.8', '57', '21', '27.4', '117.1'], ['77.7', '61.6', '46.9', '36.2', '56.8', '20', '26.7', '114.2'], ['80.8', '65.3', '49.9', '37.6', '58.7', '22.7', '28.4', '122.6'], ['81.2', '66', '50.9', '39.1', '59.4', '23.8', '29.4', '125... | column | ['BLEU1', 'BLEU2', 'BLEU3', 'BLEU4', 'ROUGE-L', 'SPICE', 'METEOR', 'CIDEr'] | ['Ours (w/o unlabeled)', 'Ours (with unlabeled)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU1</th> <th>BLEU2</th> <th>BLEU3</th> <th>BLEU4</th> <th>ROUGE-L</th> <th>SPICE</th> <th>METEOR</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Self-Retrieval [... | Table 2 | table_2 | D19-1208 | 8 | emnlp2019 | Table 2 shows the comparison with SelfRetrieval. For a fair comparison with them, we replace the cross entropy loss from our loss with the policy gradient method [Rennie et al., 2017] to directly optimize our model with CIDEr score as in [Liu et al., 2018]. As our baseline model (denoted as Baseline), we train a model ... | [1, 2, 1, 1, 1, 2] | ['Table 2 shows the comparison with SelfRetrieval.', 'For a fair comparison with them, we replace the cross entropy loss from our loss with the policy gradient method [Rennie et al., 2017] to directly optimize our model with CIDEr score as in [Liu et al., 2018].', 'As our baseline model (denoted as Baseline), we train ... | [None, None, None, ['Ours (w/o unlabeled)'], ['Ours (with unlabeled)'], None] | 1 |
D19-1211table_4 | Binary accuracy for different variants of CMFN and training scenarios outlined in Section 5. The best performance is achieved using all three modalities of text (T), visual (V) and acoustic (A). | 2 | [['Modality', 'C-MFN (P)'], ['Modality', 'C-MFN (C)'], ['Modality', 'C-MFN']] | 1 | [['T'], ['A+V'], ['T+A'], ['T+V'], ['T+A+V']] | [['62.85', '53.3', '63.28', '63.22', '64.47'], ['57.96', '50.23', '57.78', '57.99', '58.45'], ['64.44', '57.99', '64.47', '64.22', '65.23']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['C-MFN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>T</th> <th>A+V</th> <th>T+A</th> <th>T+V</th> <th>T+A+V</th> </tr> </thead> <tbody> <tr> <td>Modality || C-MFN (P)</td> <td>62.85</td> <td>53.3</td> <td>63.28</td> ... | Table 4 | table_4 | D19-1211 | 8 | emnlp2019 | 6 Results and Discussion. The results of our experiments are presented in Table 4. Results demonstrate that both context and punchline information are important as C-MFN outperforms C-MFN (P) and C-MFN (C) models. Punchline is the most important component for detecting humor as the performance of C-MFN (P) is significa... | [2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2] | ['6 Results and Discussion.', 'The results of our experiments are presented in Table 4.', 'Results demonstrate that both context and punchline information are important as C-MFN outperforms C-MFN (P) and C-MFN (C) models.', 'Punchline is the most important component for detecting humor as the performance of C-MFN (P) i... | [None, None, ['C-MFN', 'C-MFN (C)', 'C-MFN (P)'], ['C-MFN (P)', 'C-MFN (C)'], ['T+A+V', 'T', 'T+A', 'T+V', 'A+V'], ['T', 'A+V'], ['T', 'T+V', 'T+A'], ['C-MFN'], None, None, ['C-MFN'], ['C-MFN'], None] | 1 |
D19-1213table_2 | Performance compared with state-of-the-art methods on Youtube2Text dataset. The (−) is an unknown metric. | 2 | [['Model', 'LSTM-I'], ['Model', 'HRNE'], ['Model', 'MA'], ['Model', 'SCN'], ['Model', 'TSA'], ['Model', 'SA-LSTM'], ['Model', 'PickNet'], ['Model', 'ASGN+LNA']] | 1 | [['B-4'], ['M'], ['R'], ['C']] | [['0.446', '0.297', '-', '-'], ['0.438', '0.331', '-', '-'], ['0.504', '0.318', '-', '0.699'], ['0.511', '0.335', '-', '0.777'], ['0.517', '0.34', '-', '0.749'], ['0.523', '0.341', '0.698', '0.803'], ['0.523', '0.333', '0.696', '0.765'], ['0.547', '0.342', '0.717', '0.813']] | column | ['B-4', 'M', 'R', 'C'] | ['ASGN+LNA'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B-4</th> <th>M</th> <th>R</th> <th>C</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM-I</td> <td>0.446</td> <td>0.297</td> <td>-</td> <td>-</td> </tr> <tr> ... | Table 2 | table_2 | D19-1213 | 7 | emnlp2019 | 4.5 Quantitative Analysis In Table 2 and Table 3, we compare our ASGN+LNA model with the state-of-the-art models on the Youtube2Text and MSR-VTT datasets. Following the operation of (Gan et al., 2016; Pasunuru and Bansal, 2017), ASGN+LNA is the average ensemble of 5 ASGN+LNA (RL) models trained with different initializ... | [1, 1, 1, 1] | ['4.5 Quantitative Analysis In Table 2 and Table 3, we compare our ASGN+LNA model with the state-of-the-art models on the Youtube2Text and MSR-VTT datasets.', 'Following the operation of (Gan et al., 2016; Pasunuru and Bansal, 2017), ASGN+LNA is the average ensemble of 5 ASGN+LNA (RL) models trained with different init... | [['ASGN+LNA'], ['ASGN+LNA'], ['ASGN+LNA'], ['ASGN+LNA']] | 1 |
D19-1214table_3 | The SLU performance on baseline models compared with our Stack-Propagation model on two datasets. | 2 | [['Model', 'gate-mechanism'], ['Model', 'pipelined model'], ['Model', 'sentence intent augmented'], ['Model', 'lstm+last-hidden'], ['Model', 'lstm+token-level'], ['Model', 'without self-attention'], ['Model', 'Our model']] | 2 | [['SNIPS', 'Slot (F1)'], ['SNIPS', 'Intent (Acc)'], ['SNIPS', 'Overall (Acc)'], ['ATIS', 'Slot (F1)'], ['ATIS', 'Intent (Acc)'], ['ATIS', 'Overall (Acc)']] | [['92.2', '97.6', '82.4', '95.3', '96.2', '83.4'], ['90.8', '97.6', '81.8', '95.1', '96.1', '82.3'], ['93.7', '97.5', '86.1', '95.5', '96.7', '85.8'], ['-', '97.1', '-', '-', '95.2', '-'], ['-', '97.5', '-', '-', '96', '-'], ['94.1', '97.8', '86.6', '95.6', '96.6', '86.2'], ['94.2', '98', '86.9', '95.9', '96.9', '86.5'... | column | ['Slot (F1)', 'Intent (Acc)', 'Overall (Acc)', 'Slot (F1)', 'Intent (Acc)', 'Overall (Acc)'] | ['Our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SNIPS || Slot (F1)</th> <th>SNIPS || Intent (Acc)</th> <th>SNIPS || Overall (Acc)</th> <th>ATIS || Slot (F1)</th> <th>ATIS || Intent (Acc)</th> <th>ATIS || Overall (Acc)</th> </tr> </the... | Table 3 | table_3 | D19-1214 | 7 | emnlp2019 | Table 3 gives the result of the comparison experiment. From the result of gate-mechanism row, we can observe that without the Stack-Propagation learning and simply using the gate-mechanism to incorporate the intent information, the slot filling (F1) performance drops significantly, which demonstrates that directly leve... | [1, 1, 1, 1, 1, 1] | ['Table 3 gives the result of the comparison experiment.', 'From the result of gate-mechanism row, we can observe that without the Stack-Propagation learning and simply using the gate-mechanism to incorporate the intent information, the slot filling (F1) performance drops significantly, which demonstrates that directly... | [None, ['gate-mechanism', 'Our model', 'Slot (F1)'], ['Intent (Acc)', 'Overall (Acc)'], ['Slot (F1)', 'Intent (Acc)'], ['pipelined model'], ['Our model']] | 1 |
D19-1217table_1 | Experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets. | 2 | [['Method', 'ESA+'], ['Method', 'STAN+'], ['Method', 'CDMN+'], ['Method', 'LF+'], ['Method', 'HRE+'], ['Method', 'MN+'], ['Method', 'SFQIH+'], ['Method', 'HACRN'], ['Method', 'RICT (ours)']] | 2 | [['TACoS-MultiLevel', 'BLEU-1'], ['TACoS-MultiLevel', 'BLEU-2'], ['TACoS-MultiLevel', 'ROUGE'], ['TACoS-MultiLevel', 'METEOR'], ['YoutubeClip', 'BLEU-1'], ['YoutubeClip', 'BLEU-2'], ['YoutubeClip', 'ROUGE'], ['YoutubeClip', 'METEOR']] | [['0.356', '0.244', '0.422', '0.109', '0.268', '0.151', '0.276', '0.082'], ['0.408', '0.312', '0.449', '0.133', '0.315', '0.185', '0.306', '0.09'], ['0.429', '0.341', '0.46', '0.142', '0.293', '0.161', '0.311', '0.094'], ['0.404', '0.29', '0.465', '0.135', '0.284', '0.183', '0.307', '0.083'], ['0.438', '0.32', '0.502',... | column | ['BLEU-1', 'BLEU-2', 'ROUGE', 'METEOR', 'BLEU-1', 'BLEU-2', 'ROUGE', 'METEOR'] | ['RICT (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TACoS-MultiLevel || BLEU-1</th> <th>TACoS-MultiLevel || BLEU-2</th> <th>TACoS-MultiLevel || ROUGE</th> <th>TACoS-MultiLevel || METEOR</th> <th>YoutubeClip || BLEU-1</th> <th>YoutubeClip || B... | Table 1 | table_1 | D19-1217 | 7 | emnlp2019 | Table 1 shows the experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets, and Table 2 shows the question generation results on same datasets. Our method (RICT) outperforms all above models in almost all metrics. This fact shows the effectiveness of our overall network architecture. And w... | [1, 1, 1, 1] | ['Table 1 shows the experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets, and Table 2 shows the question generation results on same datasets.', 'Our method (RICT) outperforms all above models in almost all metrics.', 'This fact shows the effectiveness of our overall network architectur... | [None, ['RICT (ours)'], ['RICT (ours)'], ['TACoS-MultiLevel', 'YoutubeClip']] | 1 |
D19-1217table_2 | Experimental results of question generation on TACoS-MultiLevel and YoutubeClip datasets. | 2 | [['Method', 'ESA+'], ['Method', 'STAN+'], ['Method', 'CDMN+'], ['Method', 'LF+'], ['Method', 'HRE+'], ['Method', 'MN+'], ['Method', 'SFQIH+'], ['Method', 'HACRN'], ['Method', 'RICT (ours)']] | 2 | [['TACoS-MultiLevel', 'BLEU-1'], ['TACoS-MultiLevel', 'BLEU-2'], ['TACoS-MultiLevel', 'ROUGE'], ['TACoS-MultiLevel', 'METEOR'], ['YoutubeClip', 'BLEU-1'], ['YoutubeClip', 'BLEU-2'], ['YoutubeClip', 'ROUGE'], ['YoutubeClip', 'METEOR']] | [['0.693', '0.582', '0.718', '0.341', '0.497', '0.333', '0.565', '0.212'], ['0.706', '0.599', '0.73', '0.354', '0.483', '0.322', '0.559', '0.208'], ['0.707', '0.603', '0.74', '0.357', '0.507', '0.341', '0.567', '0.219'], ['0.704', '0.598', '0.728', '0.349', '0.512', '0.346', '0.574', '0.218'], ['0.694', '0.592', '0.729... | column | ['BLEU-1', 'BLEU-2', 'ROUGE', 'METEOR', 'BLEU-1', 'BLEU-2', 'ROUGE', 'METEOR'] | ['RICT (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TACoS-MultiLevel || BLEU-1</th> <th>TACoS-MultiLevel || BLEU-2</th> <th>TACoS-MultiLevel || ROUGE</th> <th>TACoS-MultiLevel || METEOR</th> <th>YoutubeClip || BLEU-1</th> <th>YoutubeClip || B... | Table 2 | table_2 | D19-1217 | 7 | emnlp2019 | Table 1 shows the experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets, and Table 2 shows the question generation results on same datasets. Our method (RICT) outperforms all above models in almost all metrics. This fact shows the effectiveness of our overall network architectur... | [1, 1, 1, 1, 2] | ['Table 1 shows the experimental results of answer generation on TACoS-MultiLevel and YoutubeClip datasets, and Table 2 shows the question generation results on same datasets.', 'Our method (RICT) outperforms all above models in almost all metrics.', 'This fact shows the effectiveness of our overall network architectur... | [None, ['RICT (ours)'], ['RICT (ours)'], ['TACoS-MultiLevel', 'YoutubeClip'], None] | 1 |
D19-1230table_2 | Performances of negative focus detection systems on the SEM’12 corpus. | 3 | [['Existing Methods', 'System', 'CLaC (Rosenberg (2012))'], ['Existing Methods', 'System', 'FOC-DET (Blanco (2013))'], ['Existing Methods', 'System', 'WTGM (Zou (2015))'], ['Our Methods', 'System', 'LSTM'], ['Our Methods', 'System', 'BiLSTM'], ['Our Methods', 'System', 'BiLSTM-CRF'], ['Our Methods', 'System', 'W-Att Bi... | 1 | [['Acc']] | [['60'], ['65.5'], ['68.4'], ['58.71'], ['60.81'], ['67.28'], ['70.22'], ['70.51'], ['69.8']] | column | ['Acc'] | ['Our Methods'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc</th> </tr> </thead> <tbody> <tr> <td>Existing Methods || System || CLaC (Rosenberg (2012))</td> <td>60</td> </tr> <tr> <td>Existing Methods || System || FOC-DET (Blanco (2013))</td> ... | Table 2 | table_2 | D19-1230 | 6 | emnlp2019 | 4.2 Results. Table 2 shows the performance comparison of various negative focus detection models. We can see that all of contextual attention based models (row 7-9 in Table 2) achieve better perfomances than existing methods (row 1-3 in Table 2) and models without contextual attention (row 4-6 in Table 2). In addition,... | [2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | ['4.2 Results.', 'Table 2 shows the performance comparison of various negative focus detection models.', 'We can see that all of contextual attention based models (row 7-9 in Table 2) achieve better perfomances than existing methods (row 1-3 in Table 2) and models without contextual attention (row 4-6 in Table 2).', 'I... | [None, None, ['CLaC (Rosenberg (2012))', 'FOC-DET (Blanco (2013))', 'WTGM (Zou (2015))', 'LSTM', 'BiLSTM', 'BiLSTM-CRF', 'W-Att BiLSTM-CRF', 'T-Att BiLSTM-CRF', 'WT-Att BiLSTM-CRF'], ['WT-Att BiLSTM-CRF', 'WTGM (Zou (2015))'], ['WT-Att BiLSTM-CRF'], None, ['W-Att BiLSTM-CRF', 'T-Att BiLSTM-CRF', 'WT-Att BiLSTM-CRF'], [... | 1 |
D19-1230table_7 | Performance comparison for different pretrained word embeddings on the T-Att BiLSTM-CRF model. | 2 | [['Word Embedding', 'Senna'], ['Word Embedding', 'Glove'], ['Word Embedding', 'Word2vec'], ['Word Embedding', 'BERT'], ['Word Embedding', 'ELMo']] | 1 | [['Dimension'], ['Acc']] | [['50', '70.08'], ['100', '69.66'], ['300', '69.1'], ['768', '70.22'], ['1024', '70.51']] | column | ['Dimension', 'Acc'] | ['ELMo'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dimension</th> <th>Acc</th> </tr> </thead> <tbody> <tr> <td>Word Embedding || Senna</td> <td>50</td> <td>70.08</td> </tr> <tr> <td>Word Embedding || Glove</td> <td>100</td... | Table 7 | table_7 | D19-1230 | 8 | emnlp2019 | Impact of Pre-trained Word Embedding. To compare the impacts of different pre-trained word embeddings for negative focus detection task, we attempt to employ other pre-trained word embeddings. Table 7 show the performances of the T-Att BiLSTM-CRF model with different pre-trained word embeddings, including Senna, Glove,... | [2, 2, 1, 1] | ['Impact of Pre-trained Word Embedding.', 'To compare the impacts of different pre-trained word embeddings for negative focus detection task, we attempt to employ other pre-trained word embeddings.', 'Table 7 show the performances of the T-Att BiLSTM-CRF model with different pre-trained word embeddings, including Senna... | [None, None, ['Word Embedding', 'Senna', 'Glove', 'Word2vec', 'BERT'], ['ELMo', 'Word Embedding']] | 1 |
D19-1231table_3 | Results in accuracy on the Local Discrimination task. * is a pre-trained model on the global discrimination task. | 4 | [['Model', 'Lex. Neural Grid (M&J)*', 'Emb.', 'word2vec'], ['Model', 'Lex. Neural Grid (M&J)', 'Emb.', 'word2vec'], ['Model', 'Dist. sentence (L&H)', 'Emb.', 'word2vec'], ['Model', 'Our Global Model', 'Emb.', 'word2vec'], ['Model', 'Our Local Model', 'Emb.', 'word2vec'], ['Model', 'Our Local Model', 'Emb.', 'ELMo'], ['... | 1 | [['Dw=1,2,3'], ['Dw=1'], ['Dw=2'], ['Dw=3']] | [['60.27', '56.11', '60.23', '62.23'], ['55.01', '53.81', '55.37', '56.16'], ['6.76', '4.28', '6.82', '9.25'], ['57.24', '53.35', '56.58', '59.67'], ['73.23', '66.21', '73.16', '77.93'], ['74.12', '65.82', '73.54', '78.16'], ['75.37', '67.29', '75.58', '80.21'], ['77.07', '64.38', '76.12', '81.23']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Our Local Model', 'Our Global Model', 'Our Full Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dw=1,2,3</th> <th>Dw=1</th> <th>Dw=2</th> <th>Dw=3</th> </tr> </thead> <tbody> <tr> <td>Model || Lex. Neural Grid (M&J)* || Emb. || word2vec</td> <td>60.27</td> <td>56.11</t... | Table 3 | table_3 | D19-1231 | 8 | emnlp2019 | 5.3 Results on Local Discrimination . Table 3 shows the results in accuracy on the local discrimination task. From the table, we see that existing models including our global model perform poorly compared to our proposed local models. They are likely to fail in distinguishing the text segments that are locally coher... | [2, 1, 1, 2, 2, 2, 1] | ['5.3 Results on Local Discrimination .', 'Table 3 shows the results in accuracy on the “local” discrimination task.', 'From the table, we see that existing models including our global model perform poorly compared to our proposed local models.', 'They are likely to fail in distinguishing the text segments th... | [None, ['Our Local Model'], ['Lex. Neural Grid (M&J)', 'Dist. sentence (L&H)', 'Our Full Model', 'Our Global Model', 'Our Local Model'], None, None, None, ['Our Local Model', 'Our Global Model', 'Lex. Neural Grid (M&J)', 'Dist. sentence (L&H)']] | 1 |
D19-1233table_4 | Test set micro-averaged F1 scores on labelled attachment decisions. We report numbers for other parsers from Morey et al. (2017)’s replication study. For each metric, the highest score for all the parsers in the comparison is shown in bold, while the highest score among parsers of that type (neural or feature-based) is... | 3 | [['Model', 'Feature-based parsers', 'Hayashi et al. (2016)'], ['Model', 'Feature-based parsers', 'Surdeanu et al. (2015)'], ['Model', 'Feature-based parsers', 'Joty et al. (2015)'], ['Model', 'Feature-based parsers', 'Feng and Hirst (2014a)'], ['Model', 'Neural parsers', 'Braud et al. (2016)'], ['Model', 'Neural parser... | 1 | [['S'], ['N'], ['R'], ['F']] | [['65.1', '54.6', '44.7', '44.1'], ['65.3', '54.2', '45.1', '44.2'], ['65.1', '55.5', '45.1', '44.3'], ['68.6', '55.9', '45.8', '44.6'], ['59.5', '47.2', '34.7', '34.3'], ['64.5', '54', '38.1', '36.6'], ['61.9', '53.4', '44.5', '44'], ['65.2', '54.9', '42.8', '42.4'], ['67.1', '57.4', '45.5', '45'], ['64.1', '54.2', '4... | column | ['F1', 'F1', 'F1', 'F1'] | ['Generative Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S</th> <th>N</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>Model || Feature-based parsers || Hayashi et al. (2016)</td> <td>65.1</td> <td>54.6</td> <td>44.7</t... | Table 4 | table_4 | D19-1233 | 9 | emnlp2019 | 5.4.2 Parsing Performance . Table 4 shows RST-DT test set labelled attachment metrics for various parsers. Our model outperforms all of the published neural models that do not use additional training data in Morey et al. (2017)'s replication study on all of the metrics. On span accuracy (S), we outperform all of the ot... | [2, 1, 1, 1, 1, 1, 1] | ['5.4.2 Parsing Performance .', 'Table 4 shows RST-DT test set labelled attachment metrics for various parsers.', "Our model outperforms all of the published neural models that do not use additional training data in Morey et al. (2017)'s replication study on all of the metrics.", "On span accuracy (S), we outperform al... | [None, None, ['Generative Model'], ['S', 'Generative Model', 'Feng and Hirst (2014a)'], ['N', 'Generative Model'], ['R', 'Generative Model', 'F'], ['Generative Model']] | 1 |
D19-1234table_2 | Evaluations of weakly supervised (Snorkel and stand alone GEN) and supervised approaches on STAC data. | 2 | [['SUPERVISED BASELINES', 'LAST'], ['SUPERVISED BASELINES', 'BiLSTM on Gold labels'], ['SUPERVISED BASELINES', 'BERT on Gold labels'], ['SUPERVISED BASELINES', 'LogReg* on Gold labels'], ['SUPERVISED BASELINES', 'BERT+LogReg* on Gold labels'], ['SNORKEL PIPELINE', 'GEN + Disc (BiLSTM)'], ['SNORKEL PIPELINE', 'GEN + Dis... | 1 | [['Precision'], ['Recall'], ['F1 score'], ['Accuracy']] | [['0.54', '0.55', '0.55', '0.84'], ['0.33', '0.8', '0.47', '0.75'], ['0.56', '0.48', '0.52', '0.88'], ['0.73', '0.52', '0.61', '0.91'], ['0.59', '0.49', '0.53', '0.89'], ['0.28', '0.59', '0.38', '0.74'], ['0.49', '0.4', '0.44', '0.86'], ['0.68', '0.65', '0.67', '0.91'], ['0.69', '0.66', '0.68', '0.92'], ['0.73', '0.71'... | column | ['Precision', 'Recall', 'F1 score', 'Accuracy'] | ['GEN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1 score</th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>SUPERVISED BASELINES || LAST</td> <td>0.54</td> <td>0.55</td> <td>0.55</t... | Table 2 | table_2 | D19-1234 | 8 | emnlp2019 | As seen in Table 2 on STAC test data, GEN dramatically outperformed our deep learning baselines, BiLSTM, BERT, and BERT + LogReg* architectures on gold labels, as well as the LAST baseline, which attaches every DU in a dialogue to the DU directly preceding it. In addition, stand alone GEN also outperformed all the coup... | [1, 1, 2] | ['As seen in Table 2 on STAC test data, GEN dramatically outperformed our deep learning baselines, BiLSTM, BERT, and BERT + LogReg* architectures on gold labels, as well as the LAST baseline, which attaches every DU in a dialogue to the DU directly preceding it.', 'In addition, stand alone GEN also outperformed all the... | [['GEN', 'BiLSTM on Gold labels', 'GEN + Disc (BiLSTM)', 'BERT on Gold labels', 'BERT+LogReg* on Gold labels'], ['GEN', 'GENERATIVE STAND ALONE', 'SNORKEL PIPELINE', 'GEN + Disc (BiLSTM)', 'F1 score'], ['SNORKEL PIPELINE', 'GEN']] | 1 |
D19-1239table_4 | Performance of the different ADR detection techniques on the Twitter and Reddit test sets. | 3 | [['Twitter', 'Technique', 'QuickUMLS'], ['Twitter', 'Technique', 'CRF'], ['Twitter', 'Technique', 'BLSTM-RNN'], ['Twitter', 'Technique', 'CRF+VAE'], ['Twitter', 'Technique', 'BLSTM-RNN+VAE'], ['Reddit', 'Technique', 'QuickUMLS.1'], ['Reddit', 'Technique', 'CRF'], ['Reddit', 'Technique', 'BLSTM-RNN'], ['Reddit', 'Techni... | 1 | [['Precision'], ['Recall'], ['Fscore']] | [['0.47', '0.34', '0.39'], ['0.67', '0.42', '0.51'], ['0.61', '0.87', '0.72'], ['0.68', '0.49', '0.57'], ['0.71', '0.85', '0.77'], ['0.14', '0.21', '17'], ['0.72', '0.47', '0.57'], ['0.67', '0.28', '0.39'], ['0.69', '0.52', '0.6'], ['0.63', '0.29', '0.4']] | column | ['Precision', 'Recall', 'Fscore'] | ['CRF+VAE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>Fscore</th> </tr> </thead> <tbody> <tr> <td>Twitter || Technique || QuickUMLS</td> <td>0.47</td> <td>0.34</td> <td>0.39</td> </tr> <tr> ... | Table 4 | table_4 | D19-1239 | 7 | emnlp2019 | 5 Results and Discussions 5.1 Comparison with ADR Detectors . In the first experiment, we compare our approach (i.e.trained with 100% of the labeled training data with 1 sample generated for each sample in the LC) against different ADR detector techniques described in Section 4.3. Table 4 reports precision, and recall ... | [2, 2, 1, 1, 2] | ['5 Results and Discussions 5.1 Comparison with ADR Detectors .', 'In the first experiment, we compare our approach (i.e.trained with 100% of the labeled training data with 1 sample generated for each sample in the LC) against different ADR detector techniques described in Section 4.3.', 'Table 4 reports precision, and... | [None, None, ['CRF+VAE', 'Precision', 'Recall', 'Fscore'], ['QuickUMLS'], None] | 1 |
D19-1247table_6 | Performances of whether using the pre-trained KB embedding by transE. | 4 | [['Model', 'Elsahar et al. (2018)', 'TransE', 'TRUE'], ['Model', 'Elsahar et al. (2018)', 'TransE', 'FALSE'], ['Model', 'Our Model ans loss', 'TransE', 'TRUE'], ['Model', 'Our Model ans loss', 'TransE', 'FALSE']] | 1 | [['BLEU4'], ['ROUGE-L'], ['METEOR']] | [['36.56', '58.09', '34.41'], ['33.67', '55.57', '33.2'], ['41.72', '69.31', '48.13'], ['41.55', '68.59', '47.52']] | column | ['BLEU4', 'ROUGE-L', 'METEOR'] | ['Our Model ans loss'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU4</th> <th>ROUGE-L</th> <th>METEOR</th> </tr> </thead> <tbody> <tr> <td>Model || Elsahar et al. (2018) || TransE || TRUE</td> <td>36.56</td> <td>58.09</td> <td>34.41</td> ... | Table 6 | table_6 | D19-1247 | 8 | emnlp2019 | 4.7.1 Without Pre-trained KB Embeddings. Pre-trained KB embeddings may provide rich structured relational information among entities. However, it heavily relies on large-scale triplets, which is time and resource-intensive. To investigate the effectiveness of pre-trained KB embedding for KBQG, we report the performance... | [2, 2, 2, 2, 1, 1, 1] | ['4.7.1 Without Pre-trained KB Embeddings.', 'Pre-trained KB embeddings may provide rich structured relational information among entities.', 'However, it heavily relies on large-scale triplets, which is time and resource-intensive.', 'To investigate the effectiveness of pre-trained KB embedding for KBQG, we report the ... | [None, None, None, ['TransE'], ['TransE'], ['Our Model ans loss', 'Elsahar et al. (2018)'], ['Our Model ans loss', 'BLEU4']] | 1 |
D19-1252table_3 | Results on the XQA. The average column is the average of fr and de result. | 2 | [['Machine translate at training (TRANSLATE-TRAIN)', 'XLM (Lample and Conneau, 2019)'], ['Machine translate at training (TRANSLATE-TRAIN)', 'Unicoder'], ['Evaluation of cross-lingual sentence encoders (Cross-lingual TEST)', 'XLM (Lample and Conneau, 2019)'], ['Evaluation of cross-lingual sentence encoders (Cross-lingua... | 1 | [['en'], ['fr'], ['de'], ['average']] | [['80.2', '65.1', '63.3', '64.2'], ['81.1', '66.2', '66.5', '66.4'], ['80.2', '62.3', '61.7', '62'], ['81.1', '64.1', '63.7', '63.9'], ['76.4', '61.6', '64.6', '63.1'], ['80.7', '67.1', '68.2', '67.7'], ['81.4', '69.3', '70.1', '69.7']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Multi-language Fine-tuning'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en</th> <th>fr</th> <th>de</th> <th>average</th> </tr> </thead> <tbody> <tr> <td>Machine translate at training (TRANSLATE-TRAIN) || XLM (Lample and Conneau, 2019)</td> <td>80.2</td> ... | Table 3 | table_3 | D19-1252 | 6 | emnlp2019 | Second, Multi-language fine-tuning is helpful to find the relation between languages, we will analyze it below. Table 3 show it can bring a significant boost in cross-lingual language understanding performance. With the help of Multi-language finetuning, Unicoder is been improved by 1.6% of accuracy on XNLI and 3.3% on... | [2, 1, 1, 1, 2] | ['Second, Multi-language fine-tuning is helpful to find the relation between languages, we will analyze it below.', 'Table 3 show it can bring a significant boost in cross-lingual language understanding performance.', 'With the help of Multi-language finetuning, Unicoder is been improved by 1.6% of accuracy on XNLI and... | [None, None, ['Multi-language Fine-tuning', 'Unicoder'], ['Multi-language Fine-tuning', 'Machine translate at training (TRANSLATE-TRAIN)'], None] | 1 |
D19-1254table_2 | Performance of AdaMRC compared with baseline models on three datasets, using SAN as the MRC model. | 3 | [['Method', 'SQuAD → NewsQA', 'SAN'], ['Method', 'SQuAD → NewsQA', 'SynNet + SAN'], ['Method', 'SQuAD → NewsQA', 'AdaMRC'], ['Method', 'SQuAD → NewsQA', 'AdaMRC with GT questions'], ['Method', 'NewsQA → SQuAD', 'SAN'], ['Method', 'NewsQA → SQuAD', 'SynNet + SAN'], ['Method', 'NewsQA → SQuAD', 'AdaMRC'], ['Method', 'New... | 1 | [['EM'], ['F1']] | [['36.68', '52.79'], ['35.19', '49.61'], ['38.46', '54.20'], ['39.37', '54.63'], ['56.83', '68.62'], ['50.34', '62.42'], ['58.20', '69.75'], ['58.82', '70.14'], ['13.06', '25.80'], ['12.52', '25.47'], ['14.09', '26.09'], ['15.59', '26.40'], ['27.06', '40.07'], ['23.67', '36.79'], ['27.92', '40.69'], ['27.79', '41.47']] | column | ['EM', 'F1'] | ['AdaMRC'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || SQuAD → NewsQA || SAN</td> <td>36.68</td> <td>52.79</td> </tr> <tr> <td>Method || SQuAD → NewsQA || SynNet + SAN</... | Table 2 | table_2 | D19-1254 | 6 | emnlp2019 | Table 2 summarizes the experimental results. We observe that the proposed method consistently outperforms SAN and the SynNet+SAN model on all datasets. in the SQuAD → NewsQA setting, where the sourcedomain dataset is SQuAD and the target-domain dataset is NewsQA, AdaMRC achieves 38.46% and 54.20% in terms of EM and F1 ... | [1, 1, 1, 1] | ['Table 2 summarizes the experimental results.', 'We observe that the proposed method consistently outperforms SAN and the SynNet+SAN model on all datasets.', 'in the SQuAD → NewsQA setting, where the sourcedomain dataset is SQuAD and the target-domain dataset is NewsQA, AdaMRC achieves 38.46% and 54.20% in terms of EM... | [None, ['AdaMRC', 'AdaMRC with GT questions'], ['SQuAD → NewsQA', 'AdaMRC', 'EM', 'F1', 'SAN', 'SynNet + SAN'], ['AdaMRC', 'EM', 'F1', 'NewsQA → SQuAD', 'SQuAD → MS MARCO', 'MS MARCO → SQuAD']] | 1 |
D19-1255table_3 | Human evaluation of KEAG and state-of-theart answer generation models. Scores range in [1, 5]. | 2 | [['Model', 'gQA'], ['Model', 'gQA w/ KBLSTM'], ['Model', 'gQA w/ CRWE'], ['Model', 'MHPGM'], ['Model', 'KEAG']] | 1 | [['Syntactic'], ['Correct']] | [['3.78', '3.54'], ['3.98', '3.62'], ['3.91', '3.69'], ['4.1', '3.81'], ['4.18', '4.03']] | column | ['Syntactic', 'Correct'] | ['KEAG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Syntactic</th> <th>Correct</th> </tr> </thead> <tbody> <tr> <td>Model || gQA</td> <td>3.78</td> <td>3.54</td> </tr> <tr> <td>Model || gQA w/ KBLSTM</td> <td>3.98</td> ... | Table 3 | table_3 | D19-1255 | 7 | emnlp2019 | Table 3 reports the human evaluation scores of KEAG and state-of-the-art answer generation models. The KEAG model surpasses all the others in generating correct answers syntactically and substantively. In terms of syntactic correctness, KEAG and MHPGM both perform well thanks to their architectures of composing answer ... | [1, 1, 2, 1] | ['Table 3 reports the human evaluation scores of KEAG and state-of-the-art answer generation models.', 'The KEAG model surpasses all the others in generating correct answers syntactically and substantively.', 'In terms of syntactic correctness, KEAG and MHPGM both perform well thanks to their architectures of composing... | [['KEAG', 'gQA', 'gQA w/ KBLSTM', 'gQA w/ CRWE', 'MHPGM'], ['KEAG', 'Correct'], ['KEAG', 'MHPGM'], ['KEAG', 'Correct']] | 1 |
D19-1256table_3 | Results on ARC Easy test | 2 | [['Model', 'Random guess'], ['Model', 'IR Solver'], ['Model', 'Reading Strategies (previous SOTA)'], ['Model', 'Attentive Ranker (ours)']] | 1 | [['Accuracy']] | [['25.00%'], ['62.55%'], ['68.90%'], ['72.30%']] | column | ['Accuracy'] | ['Attentive Ranker (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || Random guess</td> <td>25.00%</td> </tr> <tr> <td>Model || IR Solver</td> <td>62.55%</td> </tr> <tr> <td>Model || Re... | Table 3 | table_3 | D19-1256 | 7 | emnlp2019 | Second, we verified the model performance on the ARC test sets in order to check how the model generalizes on unseen data and to compare it with other top models in the ARC public leaderboard (https://leaderboard.allenai.org/arc/subm issions/public). A summary of the results is reported in Table 3 and T... | [2, 1, 1] | ['Second, we verified the model performance on the ARC test sets in order to check how the model generalizes on unseen data and to compare it with other top models in the ARC public leaderboard (https://leaderboard.allenai.org/arc/subm issions/public).', 'A summary of the results is reported in Table 3 ... | [None, None, ['Attentive Ranker (ours)', 'Reading Strategies (previous SOTA)']] | 1 |
D19-1256table_6 | Downstream model performance on | 6 | [['Dataset', 'Val.', '# docs', 'Top 1', 'Ranking', 'TF-IDF'], ['Dataset', 'Val.', '# docs', 'Top 1', 'Ranking', 'Ours'], ['Dataset', 'Val.', '# docs', 'Top 10', 'Ranking', 'TF-IDF'], ['Dataset', 'Val.', '# docs', 'Top 10', 'Ranking', 'Ours'], ['Dataset', 'Test', '# docs', 'Top 1', 'Ranking', 'TF-IDF'], ['Dataset', 'Tes... | 1 | [['Accuracy (D)']] | [['35.59%'], ['38.3%(+2.71)'], ['35.93%'], ['43.72%(+7.79)'], ['34.93%'], ['37.51%(+3.58)'], ['37.08%'], ['40%(+2.92)']] | column | ['Accuracy (D)'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (D)</th> </tr> </thead> <tbody> <tr> <td>Dataset || Val. || # docs || Top 1 || Ranking || TF-IDF</td> <td>35.59%</td> </tr> <tr> <td>Dataset || Val. || # docs || Top 1 || Rankin... | Table 6 | table_6 | D19-1256 | 8 | emnlp2019 | Table 6 shows that using documents ranked by our attentive neural network always leads to a performance increase in downstream models, compared to TF-IDF. On the validation set, the improvement is considerably higher (+7.79) due to a possible over-fitting of the hyperparameters during the At... | [1, 1] | ['Table 6 shows that using documents ranked by our attentive neural network always leads to a performance increase in downstream models, compared to TF-IDF.', "On the validation set, the improvement is considerably higher (+7.79) due to a possible over-fitting of the hyperparameters during t... | [['Ours', 'TF-IDF'], None] | 1 |
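Each row's table_html_clean field stores the source table as a flat HTML fragment in which multi-level headers are joined with ' || ' (e.g. 'Word Embedding || ELMo' in the D19-1230 table_7 row above). A minimal stdlib sketch of how such a fragment could be parsed back into rows; TableParser and the snippet are illustrative helpers, not part of the dataset's tooling:

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect cell text from a flat <table> fragment like table_html_clean."""
    def __init__(self):
        super().__init__()
        self.rows = []          # completed rows, one list of strings per <tr>
        self._row = []
        self._cell = []
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell, self._cell = True, []

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self._row.append("".join(self._cell).strip())
            self._in_cell = False
        elif tag == "tr":
            self.rows.append(self._row)

    def handle_data(self, data):
        if self._in_cell:
            self._cell.append(data)

# A tiny fragment in the same shape as the D19-1230 table_7 row above.
snippet = ("<table border='1' class='dataframe'><thead><tr>"
           "<th></th><th>Dimension</th><th>Acc</th></tr></thead><tbody>"
           "<tr><td>Word Embedding || ELMo</td><td>1024</td><td>70.51</td></tr>"
           "</tbody></table>")
parser = TableParser()
parser.feed(snippet)
print(parser.rows)
# [['', 'Dimension', 'Acc'], ['Word Embedding || ELMo', '1024', '70.51']]
```

Splitting a recovered cell on ' || ' yields the individual header levels, matching the row_header_level / column_header_level counts in each row.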