Dataset schema (column, dtype, observed value lengths or class counts):

- table_id_paper: string, lengths 15–15
- caption: string, lengths 14–1.88k
- row_header_level: int32, 1–9
- row_headers: large_string, lengths 15–1.75k
- column_header_level: int32, 1–6
- column_headers: large_string, lengths 7–1.01k
- contents: large_string, lengths 18–2.36k
- metrics_loc: string, 2 classes
- metrics_type: large_string, lengths 5–532
- target_entity: large_string, lengths 2–330
- table_html_clean: large_string, lengths 274–7.88k
- table_name: string, 9 classes
- table_id: string, 9 classes
- paper_id: string, lengths 8–8
- page_no: int32, 1–13
- dir: string, 8 classes
- description: large_string, lengths 103–3.8k
- class_sentence: string, lengths 3–120
- sentences: large_string, lengths 110–3.92k
- header_mention: string, lengths 12–1.8k
- valid: int32, 0–1

| table_id_paper | caption | row_header_level | row_headers | column_header_level | column_headers | contents | metrics_loc | metrics_type | target_entity | table_html_clean | table_name | table_id | paper_id | page_no | dir | description | class_sentence | sentences | header_mention | valid |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
P18-1097table_2 | Performance of seq2seq for GEC with different learning (row) and inference (column) methods on CoNLL-2014 dataset. (+LM) denotes decoding with the RNN language model through shallow fusion. The last 3 systems (with ★) use the additional non-public Lang-8 data for training. • Whether is fluency boost learning mech... | 2 | [['Model', 'normal seq2seq'], ['Model', 'back-boost'], ['Model', 'self-boost'], ['Model', 'dual-boost'], ['Model', 'back-boost (+native)'], ['Model', 'self-boost (+native)'], ['Model', 'dual-boost (+native)'], ['Model', 'back-boost (+native)★'], ['Model', 'self-boost (+native)★'], ['Model', 'dual-boost (+native)★']] | 2 | [['seq2seq', 'P'], ['seq2seq', 'R'], ['seq2seq', 'F0.5'], ['fluency boost', 'P'], ['fluency boost', 'R'], ['fluency boost', 'F0.5'], ['seq2seq (+LM)', 'P'], ['seq2seq (+LM)', 'R'], ['seq2seq (+LM)', 'F0.5'], ['fluency boost (+LM)', 'P'], ['fluency boost (+LM)', 'R'], ['fluency boost (+LM)', 'F0.5']] | [['61.06', '18.49', '41.81', '61.56', '18.85', '42.37', '61.75', '23.3', '46.42', '61.94', '23.7', '46.83'], ['61.66', '19.54', '43.09', '61.43', '19.61', '43.07', '61.47', '24.74', '47.4', '61.24', '25.01', '47.48'], ['61.64', '19.83', '43.35', '61.5', '19.9', '43.36', '62.13', '24.45', '47.49', '61.67', '24.76', '47.... | column | ['P', 'R', 'F0.5', 'P', 'R', 'F0.5', 'P', 'R', 'F0.5', 'P', 'R', 'F0.5'] | ['fluency boost'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>seq2seq || P</th> <th>seq2seq || R</th> <th>seq2seq || F0.5</th> <th>fluency boost || P</th> <th>fluency boost || R</th> <th>fluency boost || F0.5</th> <th>seq2seq (+LM) || P</th> ... | Table 2 | table_2 | P18-1097 | 6 | acl2018 | The effectiveness of various inference approaches can be observed by comparing the results in Table 2 by column. Compared to the normal seq2seq inference and seq2seq (+LM) baselines, fluency boost inference brings about on average 0.14 and 0.18 gain on F0.5 respectively, which is a significant improvement, demonstrati... | [1, 1, 1, 1] | ['The effectiveness of various inference approaches can be observed by comparing the results in Table 2 by column.', 'Compared to the normal seq2seq inference and seq2seq (+LM) baselines, fluency boost inference brings about on average 0.14 and 0.18 gain on F0.5 respectively, which is a significant improvement, demons... | [None, ['seq2seq', 'seq2seq (+LM)', 'fluency boost', 'fluency boost (+LM)', 'F0.5'], ['self-boost (+native)★', 'dual-boost (+native)★'], ['fluency boost (+LM)', 'F0.5', 'dual-boost (+native)★']] | 1 |
P18-1103table_1 | Experimental results of DAM and other comparison approaches on Ubuntu Corpus V1 and Douban Conversation Corpus. | 1 | [['DualEncoderlstm'], ['DualEncoderbilstm'], ['MV-LSTM'], ['Match-LSTM'], ['Multiview'], ['DL2R'], ['SMNdynamic'], ['DAM'], ['DAMfirst'], ['DAMlast'], ['DAMself'], ['DAMcross']] | 2 | [['Ubuntu Corpus', 'R2@1'], ['Ubuntu Corpus', 'R10@1'], ['Ubuntu Corpus', 'R10@2'], ['Ubuntu Corpus', 'R10@5'], ['Douban Conversation Corpus', 'MAP'], ['Douban Conversation Corpus', 'MRR'], ['Douban Conversation Corpus', 'P@1'], ['Douban Conversation Corpus', 'R10@1'], ['Douban Conversation Corpus', 'R10@2'], ['Douban ... | [['0.901', '0.638', '0.784', '0.949', '0.485', '0.527', '0.32', '0.187', '0.343', '0.72'], ['0.895', '0.63', '0.78', '0.944', '0.479', '0.514', '0.313', '0.184', '0.33', '0.716'], ['0.906', '0.653', '0.804', '0.946', '0.498', '0.538', '0.348', '0.202', '0.351', '0.71'], ['0.904', '0.653', '0.799', '0.944', '0.5', '0.53... | column | ['R2@1', 'R10@1', 'R10@2', 'R10@5', 'MAP', 'MRR', 'P@1', 'R10@1', 'R10@2', 'R10@5'] | ['DAM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ubuntu Corpus || R2@1</th> <th>Ubuntu Corpus || R10@1</th> <th>Ubuntu Corpus || R10@2</th> <th>Ubuntu Corpus || R10@5</th> <th>Douban Conversation Corpus || MAP</th> <th>Douban Conversation ... | Table 1 | table_1 | P18-1103 | 6 | acl2018 | Table 1 shows the evaluation results of DAM as well as all comparison models. As demonstrated, DAM significantly outperforms other competitors on both Ubuntu Corpus and Douban Conversation Corpus, including SMNdynamic, which is the state-of-the-art baseline, demonstrating the superior power of attention mechanism in ma... | [1, 1, 1, 1, 1, 1, 2] | ['Table 1 shows the evaluation results of DAM as well as all comparison models.', 'As demonstrated, DAM significantly outperforms other competitors on both Ubuntu Corpus and Douban Conversation Corpus, including SMNdynamic, which is the state-of-the-art baseline, demonstrating the superior power of attention mechanism ... | [['DAM'], ['DAM', 'Ubuntu Corpus', 'Douban Conversation Corpus', 'SMNdynamic'], ['DAMfirst', 'DAMself', 'DAM'], ['DAMfirst', 'DAMlast', 'DAM'], ['DAMcross'], ['DAMfirst', 'SMNdynamic'], ['DAMfirst']] | 1 |
P18-1108table_2 | Test set performance comparison on the CTB dataset | 3 | [['Model', 'Single Model', 'Charniak (2000)'], ['Model', 'Single Model', 'Zhu et al. (2013)'], ['Model', 'Single Model', 'Wang et al. (2015)'], ['Model', 'Single Model', 'Watanabe and Sumita (2015)'], ['Model', 'Single Model', 'Dyer et al. (2016)'], ['Model', 'Single Model', 'Liu and Zhang (2017b)'], ['Model', 'Single ... | 1 | [['LP'], ['LR'], ['F1']] | [['82.1', '79.6', '80.8'], ['84.3', '82.1', '83.2'], ['-', '-', '83.2'], ['-', '-', '84.3'], ['-', '-', '84.6'], ['85.9', '85.2', '85.5'], ['-', '-', '86.1'], ['86.6', '86.4', '86.5'], ['86.8', '84.4', '85.6'], ['-', '-', '86.3'], ['-', '-', '86.6'], ['83.8', '80.8', '82.3'], ['-', '-', '86.9']] | column | ['LP', 'LR', 'F1'] | ['Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LP</th> <th>LR</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Single Model || Charniak (2000)</td> <td>82.1</td> <td>79.6</td> <td>80.8</td> </tr> <tr> <td... | Table 2 | table_2 | P18-1108 | 6 | acl2018 | Table 2 reports our results compared to other benchmarks. To the best of our knowledge, we set a new state-of-the-art for single-model parsing achieving 86.5 F1 on the test set. | [1, 1] | ['Table 2 reports our results compared to other benchmarks.', 'To the best of our knowledge, we set a new state-of-the-art for single-model parsing achieving 86.5 F1 on the test set.'] | [None, ['Our Model', 'F1']] | 1 |
P18-1110table_4 | Performance of RSP on QBANKDEV. | 5 | [['Training Data', 'WSJ', '40k', 'QBANK', '0'], ['Training Data', 'WSJ', '0', 'QBANK', '2k'], ['Training Data', 'WSJ', '40k', 'QBANK', '2k'], ['Training Data', 'WSJ', '40k', 'QBANK', '50'], ['Training Data', 'WSJ', '40k', 'QBANK', '100'], ['Training Data', 'WSJ', '40k', 'QBANK', '400']] | 1 | [['Rec.'], ['Prec.'], ['F1']] | [['91.07', '88.77', '89.91'], ['94.44', '96.23', '95.32'], ['95.84', '97.02', '96.43'], ['93.85', '95.91', '94.87'], ['95.08', '96.06', '95.57'], ['94.94', '97.05', '95.99']] | column | ['Rec.', 'Prec.', 'F1'] | ['F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rec.</th> <th>Prec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Training Data || WSJ || 40k || QBANK || 0</td> <td>91.07</td> <td>88.77</td> <td>89.91</td> </tr> <tr>... | Table 4 | table_4 | P18-1110 | 5 | acl2018 | Surprisingly, with only 50 annotated questions (see Table 4), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%. This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN. | [1, 1] | ['Surprisingly, with only 50 annotated questions (see Table 4), performance on QBANKDEV jumps 5 points, from 89.9% to 94.9%.', 'This is only 1.5% below training with all of WSJTRAIN and QBANKTRAIN.'] | [['F1', 'WSJ', 'QBANK'], ['F1', 'WSJ', 'QBANK']] | 1 |
P18-1110table_5 | Performance of RSP on GENIADEV. | 5 | [['Training Data', 'WSJ', '40k', 'GENIA', '0'], ['Training Data', 'WSJ', '0', 'GENIA', '14k'], ['Training Data', 'WSJ', '40k', 'GENIA', '14k'], ['Training Data', 'WSJ', '40k', 'GENIA', '50'], ['Training Data', 'WSJ', '40k', 'GENIA', '100'], ['Training Data', 'WSJ', '40k', 'GENIA', '400']] | 1 | [['Rec.'], ['Prec.'], ['F1']] | [['72.51', '88.84', '79.85'], ['88.04', '92.3', '90.12'], ['88.24', '92.33', '90.24'], ['82.3', '90.55', '86.23'], ['83.94', '89.97', '86.85'], ['85.52', '91.01', '88.18']] | column | ['Rec.', 'Prec.', 'F1'] | ['F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rec.</th> <th>Prec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Training Data || WSJ || 40k || GENIA || 0</td> <td>72.51</td> <td>88.84</td> <td>79.85</td> </tr> <tr>... | Table 5 | table_5 | P18-1110 | 5 | acl2018 | On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005), we see a similar, if somewhat less dramatic, trend. See Table 5. With 50 annotated sentences, performance on GENIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky’s thesis (McClosky, 2010), the one th... | [1, 1, 1, 1] | ['On the more difficult GENIA corpus of biomedical abstracts (Tateisi et al., 2005), we see a similar, if somewhat less dramatic, trend.', 'See Table 5.', 'With 50 annotated sentences, performance on GENIADEV jumps from 79.5% to 86.2%, outperforming all but one parser from David McClosky’s thesis (McClosky, 2010),... | [['GENIA'], None, ['GENIA', 'F1'], ['GENIA', 'F1']] | 1 |
P18-1111table_2 | Results of the proposed method and the baselines on the SemEval 2013 task. | 3 | [['Method', 'Baselines', 'SFS (Versley 2013)'], ['Method', 'Baselines', 'IIITH (Surtani et al. 2013)'], ['Method', 'Baselines', 'MELODI (Van de Cruys et al. 2013)'], ['Method', 'Baselines', 'SemEval 2013 Baseline (Hendrickx et al. 2013)'], ['Method', 'This paper', 'Baseline'], ['Method', 'This paper', 'Our method']] | 1 | [['isomorphic'], ['non-isomorphic']] | [['23.1', '17.9'], ['23.1', '25.8'], ['13', '54.8'], ['13.8', '40.6'], ['3.8', '16.1'], ['28.2', '28.4']] | column | ['F1', 'F1'] | ['Our method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>isomorphic</th> <th>non-isomorphic</th> </tr> </thead> <tbody> <tr> <td>Method || Baselines || SFS (Versley 2013)</td> <td>23.1</td> <td>17.9</td> </tr> <tr> <td>Method || Base... | Table 2 | table_2 | P18-1111 | 7 | acl2018 | Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings. Our method outperforms all the methods in the isomorphic setting. In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but cannot compet... | [1, 1, 1, 2] | ['Table 2 displays the performance of the proposed method and the baselines in the two evaluation settings.', 'Our method outperforms all the methods in the isomorphic setting.', 'In the nonisomorphic setting, it outperforms the other two systems that score reasonably on the isomorphic setting (SFS and IIITH) but canno... | [['Our method', 'Baselines'], ['Our method', 'Baselines', 'isomorphic'], ['Our method', 'SFS (Versley 2013)', 'IIITH (Surtani et al. 2013)', 'non-isomorphic'], ['Our method', 'Baseline']] | 1 |
P18-1111table_4 | Classification results. For each dataset split, the top part consists of baseline methods and the bottom part of methods from this paper. The best performance in each part appears in bold. | 4 | [['Dataset & Split', 'Tratz fine Random', 'Method', 'Tratz and Hovy (2010)'], ['Dataset & Split', 'Tratz fine Random', 'Method', 'Dima (2016)'], ['Dataset & Split', 'Tratz fine Random', 'Method', 'Shwartz and Waterson (2018)'], ['Dataset & Split', 'Tratz fine Random', 'Method', 'distributional'], ['Dataset & Split', 'T... | 1 | [['F1']] | [['0.739'], ['0.725'], ['0.714'], ['0.677'], ['0.505'], ['0.673'], ['0.34'], ['0.334'], ['0.429'], ['0.356'], ['0.333'], ['0.37'], ['0.76'], ['0.775'], ['0.736'], ['0.689'], ['0.557'], ['0.7'], ['0.391'], ['0.372'], ['0.478'], ['0.37'], ['0.345'], ['0.393']] | column | ['F1'] | ['integrated'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Dataset & Split || Tratz fine Random || Method || Tratz and Hovy (2010)</td> <td>0.739</td> </tr> <tr> <td>Dataset & Split || Tratz fine R... | Table 4 | table_4 | P18-1111 | 8 | acl2018 | Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits. The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementary na... | [1, 1, 1, 1, 2] | ["Table 4 displays the methods' performance on the two versions of the Tratz (2011) dataset and the two dataset splits.", 'The paraphrase model on its own is inferior to the distributional model, however, the integrated version improves upon the distributional model in 3 out of 4 settings, demonstrating the complementa... | [['Tratz fine Random', 'Tratz fine Lexical', 'Tratz coarse Random', 'Tratz coarse Lexical'], ['paraphrase', 'distributional', 'integrated'], ['Tratz coarse Lexical', 'Tratz fine Lexical'], ['Shwartz and Waterson (2018)', 'Tratz coarse Lexical', 'integrated', 'Tratz fine Lexical'], ['Tratz coarse Lexical', 'integrated',... | 1 |
P18-1112table_3 | With and without sentiment | 4 | [['Corpus', 'Objective', 'Subcorpus Sentiment?', 'With'], ['Corpus', 'Objective', 'Subcorpus Sentiment?', 'Without'], ['Corpus', 'Subjective', 'Subcorpus Sentiment?', 'With'], ['Corpus', 'Subjective', 'Subcorpus Sentiment?', 'Without'], ['Random Embeddings', '-', '-', '-']] | 2 | [['Sentiment', 'Amazon'], ['Sentiment', 'RT'], ['Subjectivity', '-'], ['Topic', '-']] | [['81.8', '75.2', '90.7', '83.1'], ['76.1', '67.2', '87.8', '82.6'], ['85.5', '78.0', '90.3', '82.5'], ['79.8', '71.0', '89.1', '82.2'], ['76.1', '62.2', '80.1', '71.5']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['With', 'Without'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentiment || Amazon</th> <th>Sentiment || RT</th> <th>Subjectivity || -</th> <th>Topic || -</th> </tr> </thead> <tbody> <tr> <td>Corpus || Objective || Subcorpus Sentiment? || With</td> ... | Table 3 | table_3 | P18-1112 | 5 | acl2018 | To control for the “amount” of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004). For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement. We... | [2, 1, 2, 1, 1, 1] | ['To control for the “amount” of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004).', 'For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complemen... | [['Subjective', 'Objective'], ['Subcorpus Sentiment?', 'With', 'Without'], None, ['Random Embeddings'], ['Sentiment', 'Subjectivity', 'Topic'], ['Without', 'Subjective', 'Objective', 'Sentiment', 'Random Embeddings', 'Amazon']] | 1 |
P18-1112table_4 | Comparison of Sentiment-Infused Word Embeddings on Sentiment Classification Task | 3 | [['Corpus/Category', 'Amazon', 'Amazon Instant Video'], ['Corpus/Category', 'Amazon', 'Android Apps'], ['Corpus/Category', 'Amazon', 'Automotive'], ['Corpus/Category', 'Amazon', 'Baby'], ['Corpus/Category', 'Amazon', 'Beauty'], ['Corpus/Category', 'Amazon', 'Books'], ['Corpus/Category', 'Amazon', 'CD & Vinyl'], ['Corpu... | 3 | [['Objective Embeddings', 'Word2Vec', '-'], ['Objective Embeddings', 'Retrofitting', '-'], ['Objective Embeddings', 'Refining', '-'], ['Objective Embeddings', 'SentiVec', 'Spherical'], ['Objective Embeddings', 'SentiVec', 'Logistic'], ['Subjective Embeddings', 'Word2Vec', '-'], ['Subjective Embeddings', 'Retrofitting',... | [['84.1', '84.1', '81.9', '84.9*', '84.9*', '87.8', '87.8', '86.9', '88.1', '88.2'], ['83.0', '83.0', '80.9', '84.0*', '84.0*', '86.3', '86.3', '85.0', '86.6', '86.5'], ['80.7', '80.7', '78.8', '81.0', '81.3', '85.1', '85.1', '83.8', '84.9', '85.0'], ['80.9', '80.9', '78.6', '82.1', '82.2*', '84.2', '84.2', '82.8', '84... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['SentiVec'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Objective Embeddings || Word2Vec || -</th> <th>Objective Embeddings || Retrofitting || -</th> <th>Objective Embeddings || Refining || -</th> <th>Objective Embeddings || SentiVec || Spherical</th> ... | Table 4 | table_4 | P18-1112 | 8 | acl2018 | Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes. For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold. An asterisk indicates statistically significant8... | [1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1] | ['Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.', 'For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.', 'An asterisk indicates statistically sign... | [['Amazon', 'Rotten Tomatoes'], ['Objective Embeddings', 'Subjective Embeddings'], ['Word2Vec'], ['Word2Vec', 'SentiVec', 'Objective Embeddings', 'Subjective Embeddings'], ['SentiVec', 'Objective Embeddings', 'Subjective Embeddings'], None, ['Corpus/Category'], None, ['SentiVec', 'Retrofitting', 'Refining'], ['Retrofit... | 1 |
P18-1112table_5 | Comparison of Word Embeddings on Subjectivity and Topic Classification Tasks | 3 | [['Corpus/Category', 'Topic', 'Computers'], ['Corpus/Category', 'Topic', 'Misc'], ['Corpus/Category', 'Topic', 'Politics'], ['Corpus/Category', 'Topic', 'Recreation'], ['Corpus/Category', 'Topic', 'Religion'], ['Corpus/Category', 'Topic', 'Science'], ['Corpus/Category', 'Average', '-'], ['Corpus/Category', '-', '-']] | 3 | [['Objective Embeddings', 'Word2Vec', '-'], ['Objective Embeddings', 'Retrofitting', '-'], ['Objective Embeddings', 'Refining', '-'], ['Objective Embeddings', 'SentiVec', 'Spherical'], ['Objective Embeddings', 'SentiVec', 'Logistic'], ['Subjective Embeddings', 'Word2Vec', '-'], ['Subjective Embeddings', 'Retrofitting',... | [['79.8', '79.8', '79.6', '79.6', '79.8', '79.8', '79.8', '79.8', '79.7', '79.7'], ['89.8', '89.8', '89.7', '89.8', '90.0', '90.4', '90.4', '90.6', '90.4', '90.3'], ['84.6', '84.6', '84.4', '84.5', '84.6', '83.8', '83.8', '83.5', '83.6', '83.5'], ['83.4', '83.4', '83.1', '83.1', '83.2', '82.6', '82.6', '82.5', '82.7', ... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['SentiVec'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Objective Embeddings || Word2Vec || -</th> <th>Objective Embeddings || Retrofitting || -</th> <th>Objective Embeddings || Refining || -</th> <th>Objective Embeddings || SentiVec || Spherical</th> ... | Table 5 | table_5 | P18-1112 | 8 | acl2018 | Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods. Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sen... | [1, 1] | ['Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.', 'Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the use... | [['Computers', 'Misc', 'Politics', 'Recreation', 'Religion'], ['SentiVec', 'Retrofitting', 'Refining', 'Topic']] | 1 |
P18-1113table_1 | Metaphor identification results. NB: * denotes that our model outperforms the baseline significantly, based on two-tailed paired t-test with p < 0.001. | 3 | [['Method', 'Phrase', 'Shutova et al. (2016)'], ['Method', 'Phrase', 'Rei et al. (2017)'], ['Method', 'Phrase', 'SIM-CBOWI+O'], ['Method', 'Phrase', 'SIM-SGI+O'], ['Method', 'Sent.', 'Melamud et al. (2016)'], ['Method', 'Sent.', 'SIM-SGI'], ['Method', 'Sent.', 'SIM-SGI+O'], ['Method', 'Sent.', 'SIM-CBOWI'], ['Method', ... | 1 | [['P'], ['R'], ['F1']] | [['0.67', '0.76', '0.71'], ['0.74', '0.76', '0.74'], ['0.66', '0.78', '0.72'], ['0.68', '0.82', '0.74*'], ['0.60', '0.80', '0.69'], ['0.56', '0.95', '0.70'], ['0.62', '0.89', '0.73'], ['0.59', '0.91', '0.72'], ['0.66', '0.88', '0.75*']] | column | ['P', 'R', 'F1'] | ['SIM-SGI+O', 'SIM-CBOWI+O'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Phrase || Shutova et al. (2016)</td> <td>0.67</td> <td>0.76</td> <td>0.71</td> </tr> <tr> <td>... | Table 1 | table_1 | P18-1113 | 7 | acl2018 | Table 1 shows the performance of our model and the baselines on the task of metaphor identification. All the results for our models are based on a threshold of 0.6, which is empirically determined based on the developing set. For sentence level metaphor identification, it can be observed that all our models outperform ... | [1, 2, 1, 1, 2, 1, 1, 1, 2, 2, 1, 2] | ['Table 1 shows the performance of our model and the baselines on the task of metaphor identification.', 'All the results for our models are based on a threshold of 0.6, which is empirically determined based on the developing set.', 'For sentence level metaphor identification, it can be observed that all our models out... | [None, ['SIM-CBOWI+O', 'SIM-SGI+O', 'SIM-SGI', 'SIM-CBOWI'], ['Melamud et al. (2016)', 'Sent.', 'SIM-SGI', 'SIM-SGI+O', 'SIM-CBOWI', 'F1'], ['SIM-CBOWI+O', 'SIM-SGI+O', 'SIM-CBOWI', 'SIM-SGI'], None, ['P', 'R', 'SIM-CBOWI+O', 'SIM-SGI+O', 'SIM-SGI', 'SIM-CBOWI'], ['SIM-CBOWI+O', 'SIM-SGI+O', 'Shutova et al. (2016)', 'R... | 1 |
P18-1114table_2 | Performance comparison (%) of our LMMs and the baselines on two basic NLP tasks (word similarity & syntactic analogy) and one downstream task (text classification). The bold digits indicate the best performances. | 1 | [['Wordsim-353'], ['RW'], ['RG-65'], ['SCWS'], ['Men-3k'], ['WS-353-REL'], ['Syntactic Analogy'], ['Text Classification']] | 1 | [['CBOW'], ['Skip-gram'], ['GloVe'], ['EMM'], ['LMM-A'], ['LMM-S'], ['LMM-M']] | [['58.77', '61.94', '49.40', '60.01', '62.05', '63.13', '61.54'], ['40.58', '36.42', '33.40', '40.83', '43.12', '42.14', '40.51'], ['56.50', '62.81', '59.92', '60.85', '62.51', '62.49', '63.07'], ['63.13', '60.20', '47.98', '60.28', '61.86', '61.71', '63.02'], ['68.07', '66.30', '60.56', '66.76', '66.26', '68.36', '64.... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['LMM-A', 'LMM-S', 'LMM-M'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CBOW</th> <th>Skip-gram</th> <th>GloVe</th> <th>EMM</th> <th>LMM-A</th> <th>LMM-S</th> <th>LMM-M</th> </tr> </thead> <tbody> <tr> <td>Wordsim-353</td> <td>58.77</td> ... | Table 2 | table_2 | P18-1114 | 8 | acl2018 | Word similarity is conducted to test the semantic information which is encoded in word embeddings, and the results are listed in Table 2 (first 6 rows). We observe that our models surpass the comparative baselines on five datasets. Compared with the base model CBOW, it is remarkable that our models approximately achiev... | [1, 1, 1, 1, 2, 2, 1, 2, 1, 0, 0, 0, 1, 1, 0, 2, 1, 0, 2, 1, 1, 1] | ['Word similarity is conducted to test the semantic information which is encoded in word embeddings, and the results are listed in Table 2 (first 6 rows).', 'We observe that our models surpass the comparative baselines on five datasets.', 'Compared with the base model CBOW, it is remarkable that our models approximatel... | [None, ['LMM-A', 'LMM-S', 'LMM-M'], ['CBOW', 'Wordsim-353', 'RG-65'], ['WS-353-REL', 'CBOW', 'LMM-S'], ['LMM-A', 'LMM-S', 'LMM-M'], None, ['EMM', 'LMM-A', 'LMM-S', 'LMM-M'], ['EMM'], ['GloVe'], None, None, None, ['Syntactic Analogy', 'EMM', 'LMM-A', 'LMM-S', 'LMM-M'], ['Syntactic Analogy', 'CBOW', 'LMM-A'], None, None,... | 1 |
P18-1118table_4 | Our Memory-to-Context Source Memory NMT variants vs. S-NMT and Source context NMT baselines. bold: Best performance, †, ♠, ♣, ♦: Statistically significantly better than only S-NMT, S-NMT & Jean et al. (2017), S-NMT & Wang et al. (2017), all baselines, respectively. | 1 | [['Jean et al. (2017)'], ['Wang et al. (2017)'], ['S-NMT'], ['S-NMT + src mem'], ['S-NMT + both mems']] | 3 | [['BLEU', 'Fr→En', '-'], ['BLEU', 'De→En', 'NC-11'], ['BLEU', 'De→En', 'NC-16'], ['BLEU', 'Et→En', '-'], ['METEOR', 'Fr→En', '-'], ['METEOR', 'De→En', 'NC-11'], ['METEOR', 'De→En', 'NC-16'], ['METEOR', 'Et→En', '-']] | [['21.95', '6.04', '10.26', '21.67', '24.10', '11.61', '15.56', '25.77'], ['21.87', '5.49', '10.14', '22.06', '24.13', '11.05', '15.20', '26.00'], ['20.85', '5.24', '9.18', '20.42', '23.27', '10.90', '14.35', '24.65'], ['21.91', '6.26', '10.20', '22.10', '24.04', '11.52', '15.45', '25.92'], ['22.00', '6.57', '10.54', '... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'METEOR', 'METEOR', 'METEOR', 'METEOR'] | ['S-NMT + src mem', 'S-NMT + both mems'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU || Fr→En || -</th> <th>BLEU || De→En || NC-11</th> <th>BLEU || De→En || NC-16</th> <th>BLEU || Et→En || -</th> <th>METEOR || Fr→En || -</th> <th>METEOR || De→En || NC-11</th> <th>M... | Table 4 | table_4 | P18-1118 | 7 | acl2018 | Table 4 shows comparison of our Memory-to-Context model variants source context-NMT models (Jean et al., 2017; Wang et al., 2017). For German→English, our S-NMT+src mem model is comparable to Jean et al. (2017) but outperforms Wang et al. (2017) for one test set according to BLEU, and for both test sets according to ME... | [1, 1, 1, 2] | ['Table 4 shows comparison of our Memory-to-Context model variants source context-NMT models (Jean et al., 2017; Wang et al., 2017).', 'For German→English, our S-NMT+src mem model is comparable to Jean et al. (2017) but outperforms Wang et al. (2017) for one test set according to BLEU, and for both test sets according ... | [['S-NMT', 'S-NMT + src mem', 'S-NMT + both mems', 'Jean et al. (2017)', 'Wang et al. (2017)'], ['De→En', 'S-NMT + src mem', 'Jean et al. (2017)', 'Wang et al. (2017)', 'BLEU', 'METEOR'], ['Et→En', 'S-NMT + src mem', 'S-NMT + both mems', 'Jean et al. (2017)'], ['S-NMT + src mem', 'S-NMT + both mems']] | 1 |
P18-1119table_1 | Results on LGL, WikToR (WIK) and GeoVirus (GEO). Lower AUC and Average Error are better while higher Acc@161km is better. Figures in brackets are scores on identical subsets of each dataset. †Only the AUC decimal part shown. ‡Average Error rounded up to the nearest 100km. | 2 | [['Geocoder', 'CamCoder'], ['Geocoder', 'Edinburgh'], ['Geocoder', 'Yahoo!'], ['Geocoder', 'Population'], ['Geocoder', 'CLAVIN'], ['Geocoder', 'GeoTxt'], ['Geocoder', 'Topocluster'], ['Geocoder', 'Santos et al.']] | 2 | [['Area Under Curve', 'LGL'], ['Area Under Curve', 'WIK'], ['Area Under Curve', 'GEO'], ['Average Error', 'LGL'], ['Average Error', 'WIK'], ['Average Error', 'GEO'], ['Accuracy@161km', 'LGL'], ['Accuracy@161km', 'WIK'], ['Accuracy@161km', 'GEO']] | [['22 (18)', '33 (37)', '31 (32)', '7 (5)', '11 (9)', '3 (3)', '76 (83)', '65 (57)', '82 (80)'], ['25 (22)', '53 (58)', '33 (34)', '8 (8)', '31 (30)', '5 (4)', '76 (80)', '42 (36)', '78 (78)'], ['34 (35)', '44 (53)', '40 (44)', '6 (5)', '23 (25)', '3 (3)', '72 (75)', '52 (39)', '70 (65)'], ['27 (22)', '68 (71)', '32 (3... | column | ['Area Under Curve', 'Area Under Curve', 'Area Under Curve', 'Average Error', 'Average Error', 'Average Error', 'Accuracy@161km', 'Accuracy@161km', 'Accuracy@161km'] | ['CamCoder'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Area Under Curve || LGL</th> <th>Area Under Curve || WIK</th> <th>Area Under Curve || GEO</th> <th>Average Error || LGL</th> <th>Average Error || WIK</th> <th>Average Error || GEO</th> ... | Table 1 | table_1 | P18-1119 | 7 | acl2018 | Each system geoparses its particular majority of the dataset to obtain a representative data sample, shown in Table 1 as strongly correlated scores for subsets of different sizes, with which to assess model performance. Table 1 also shows scores in brackets for the overlapping partition of all systems in order to compa... | [1, 1, 1, 2, 2, 1, 1, 1, 2, 2, 1, 2, 1, 1, 2] | ['Each system geoparses its particular majority of the dataset to obtain a representative data sample, shown in Table 1 as strongly correlated scores for subsets of different sizes, with which to assess model performance.', 'Table 1 also shows scores in brackets for the overlapping partition of all systems in order to ... | [None, ['LGL', 'WIK', 'GEO'], ['Geocoder', 'LGL', 'WIK', 'GEO'], None, None, ['Geocoder'], ['WIK'], None, None, None, ['Edinburgh', 'GeoTxt', 'CLAVIN', 'Topocluster', 'CamCoder', 'Santos et al.', 'Yahoo!', 'Population'], None, ['CamCoder'], ['CamCoder', 'Area Under Curve', 'LGL', 'WIK', 'GEO'], ['CamCoder', 'Area Under... | 1 |
P18-1129table_2 | The dependency parsing results. Significance test (Nilsson and Nivre, 2008) shows the improvement of our Distill (both) over Baseline is statistically significant with p < 0.01. | 1 | [['Baseline'], ['Ensemble'], ['Distill (reference alpha=1.0)'], ['Distill (exploration T=1.0)'], ['Distill (both)'], ['Ballesteros et al. (2016) (dyn. oracle)'], ['Andor et al. (2016) (local B=1)'], ['Buckman et al. (2016) (local B=8)'], ['Andor et al. (2016) (local B=32)'], ['Andor et al. (2016) (global B=32)'], ['Doz... | 1 | [['LAS']] | [['90.83'], ['92.73'], ['91.99'], ['92.00'], ['92.14'], ['91.42'], ['91.02'], ['91.19'], ['91.70'], ['92.79'], ['94.08'], ['92.06'], ['94.60']] | column | ['LAS'] | ['Ensemble', 'Distill (reference alpha=1.0)', 'Distill (exploration T=1.0)', 'Distill (both)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LAS</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>90.83</td> </tr> <tr> <td>Ensemble</td> <td>92.73</td> </tr> <tr> <td>Distill (reference alpha=1.0)</td> <... | Table 2 | table_2 | P18-1129 | 6 | acl2018 | Table 2 shows our PTB experimental results. From this result, we can see that the ensemble model outperforms the baseline model by 1.90 in LAS. For our distillation from reference, when setting alpha = 1.0, best performance on development set is achieved and the test LAS is 91.99. We also compare our parser with the ot... | [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] | ['Table 2 shows our PTB experimental results.', 'From this result, we can see that the ensemble model outperforms the baseline model by 1.90 in LAS.', 'For our distillation from reference, when setting alpha = 1.0, best performance on development set is achieved and the test LAS is 91.99.', 'We also compare our parser ... | [None, ['Ensemble', 'Baseline', 'LAS'], ['Distill (reference alpha=1.0)', 'LAS'], ['Distill (reference alpha=1.0)', 'Distill (exploration T=1.0)', 'Distill (both)', 'Ballesteros et al. (2016) (dyn. 
oracle)', 'Andor et al. (2016) (local B=1)'], ['Ballesteros et al. (2016) (dyn. oracle)', 'Andor et al. (2016) (local B=1)... | 1 |
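The LAS metric in this row (and UAS in later rows) counts tokens whose predicted head (UAS), or head plus dependency label (LAS), matches gold. A small sketch, assuming gold and predicted parses are given as per-token (head, label) pairs:

```python
def attachment_scores(gold, pred):
    """UAS: % tokens with the correct head; LAS: % with correct head and label.

    gold/pred: equal-length lists of (head_index, dep_label) per token.
    """
    assert len(gold) == len(pred) and gold
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n
    las = sum(g == p for g, p in zip(gold, pred)) / n
    return 100.0 * uas, 100.0 * las
```

Note that published scores typically also exclude punctuation tokens, which this sketch omits.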
P18-1129table_3 | The machine translation results. MIXER denotes that of Ranzato et al. (2015), BSO denotes that of Wiseman and Rush (2016). Significance test (Koehn, 2004) shows the improvement of our Distill (both) over Baseline is statistically significant with p < 0.01. | 1 | [['Baseline'], ['Ensemble'], ['Distill (reference alpha=0.8)'], ['Distill (exploration T=0.1)'], ['Distill (both)'], ['MIXER'], ['BSO (local B=1)'], ['BSO (global B=1)']] | 1 | [['BLEU']] | [['22.79'], ['26.26'], ['24.76'], ['24.64'], ['25.44'], ['20.73'], ['22.53'], ['23.83']] | column | ['BLEU'] | ['Ensemble', 'Distill (reference alpha=0.8)', 'Distill (exploration T=0.1)', 'Distill (both)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>22.79</td> </tr> <tr> <td>Ensemble</td> <td>26.26</td> </tr> <tr> <td>Distill (reference alpha=0.8)</td> ... | Table 3 | table_3 | P18-1129 | 6 | acl2018 | Table 3 shows the experimental results on IWSLT 2014 dataset. Similar to the PTB parsing results, the ensemble 10 translators outperforms the baseline translator by 3.47 in BLEU score. Distilling from the ensemble by following the reference leads to a single translator of 24.76 BLEU score. Like in the parsing experimen... | [1, 1, 1, 0, 0, 1, 1, 1, 1] | ['Table 3 shows the experimental results on IWSLT 2014 dataset.', 'Similar to the PTB parsing results, the ensemble 10 translators outperforms the baseline translator by 3.47 in BLEU score.', 'Distilling from the ensemble by following the reference leads to a single translator of 24.76 BLEU score.', 'Like in the parsin... | [None, ['Ensemble', 'Baseline', 'BLEU'], ['Distill (reference alpha=0.8)', 'BLEU'], None, None, ['Distill (exploration T=0.1)', 'BLEU'], ['Distill (both)', 'BLEU'], ['MIXER', 'BSO (local B=1)', 'BSO (global B=1)'], ['Distill (reference alpha=0.8)', 'Distill (exploration T=0.1)', 'Distill (both)']] | 1 |
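The BLEU scores in this row are corpus-level in the paper; as an illustration only, here is a simplified single-reference sentence-level BLEU (clipped n-gram precisions up to 4-grams, geometric mean, brevity penalty), not the exact evaluation script behind the reported numbers:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(candidate, reference, max_n=4):
    """Simplified BLEU: clipped n-gram precisions, geometric mean, brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngram_counts(candidate, n), ngram_counts(reference, n)
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0.0:
        return 0.0  # this sketch applies no smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1.0 - len(reference) / len(candidate))
    return 100.0 * bp * geo_mean
```

Production evaluations usually add smoothing and aggregate n-gram statistics over the whole corpus before taking the geometric mean.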
P18-1129table_4 | The ranking performance of parsers’ output distributions evaluated in MAP on “problematic” states. | 1 | [['Baseline'], ['Ensemble'], ['Distill (both)']] | 1 | [['optimal-yet-ambiguous'], ['non-optimal']] | [['68.59', '89.59'], ['74.19', '90.90'], ['81.15', '91.38']] | column | ['MAP', 'MAP'] | ['Ensemble', 'Distill (both)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>optimal-yet-ambiguous</th> <th>non-optimal</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>68.59</td> <td>89.59</td> </tr> <tr> <td>Ensemble</td> <td>74.19</td> ... | Table 4 | table_4 | P18-1129 | 8 | acl2018 | The comparison in Table 4 shows that the ensemble model significantly outperforms the baseline on ambiguous and non-optimal states. This observation indicates the ensemble output distribution is more informative thus generalizes well on problematic states and achieves better performance. We also observe that the distil... | [1, 2, 1, 2] | ['The comparison in Table 4 shows that the ensemble model significantly outperforms the baseline on ambiguous and non-optimal states.', 'This observation indicates the ensemble output distribution is more informative thus generalizes well on problematic states and achieves better performance.', 'We also observe that th... | [['Ensemble', 'Baseline', 'optimal-yet-ambiguous', 'non-optimal'], ['Ensemble'], ['Distill (both)', 'Ensemble', 'Baseline', 'optimal-yet-ambiguous', 'non-optimal'], ['Distill (both)']] | 1 |
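MAP in this row averages, over the "problematic" states, the average precision of each model's ranked action list against the set of acceptable actions. A sketch under that assumption (the exact construction of states follows the paper):

```python
def average_precision(ranked, relevant):
    """AP of one ranked list: mean of precision@k taken at each relevant hit."""
    hits, ap = 0, 0.0
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            ap += hits / k
    return ap / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """queries: iterable of (ranked_list, relevant_set); returns MAP in percent."""
    queries = list(queries)
    return 100.0 * sum(average_precision(r, rel) for r, rel in queries) / len(queries)
```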
P18-1130table_1 | UAS and LAS of four versions of our model on test sets for three languages, together with topperforming parsing systems. “T” and “G” indicate transitionand graph-based models, respectively. For BIAF, we provide the original results reported in Dozat and Manning (2017) and our re-implementation. For STACKPTR and our re-... | 3 | [['System', 'Chen and Manning (2014)', 'T'], ['System', 'Ballesteros et al. (2015)', 'T'], ['System', 'Dyer et al. (2015)', 'T'], ['System', 'Bohnet and Nivre (2012)', 'T'], ['System', 'Ballesteros et al. (2016)', 'T'], ['System', 'Kiperwasser and Goldberg (2016)', 'T'], ['System', 'Weiss et al. (2015)', 'T'], ['System... | 2 | [['English', 'UAS'], ['English', 'LAS'], ['Chinese', 'UAS'], ['Chinese', 'LAS'], ['German', 'UAS'], ['German', 'LAS']] | [['91.8', '89.6', '83.9', '82.4', '-', '-'], ['91.63', '89.44', '85.30', '83.72', '88.83', '86.10'], ['93.1', '90.9', '87.2', '85.7', '-', '-'], ['93.33', '21.22', '87.3', '85.9', '91.4', '89.4'], ['93.56', '91.42', '87.65', '86.21', '-', '-'], ['93.9', '91.9', '87.6', '86.1', '-', '-'], ['94.26', '92.41', '-', '-', '-... | column | ['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS'] | ['STACKPTR: Full'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>English || UAS</th> <th>English || LAS</th> <th>Chinese || UAS</th> <th>Chinese || LAS</th> <th>German || UAS</th> <th>German || LAS</th> </tr> </thead> <tbody> <tr> <td>System ... | Table 1 | table_1 | P18-1130 | 7 | acl2018 | Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison. Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run. Our Full m... 
| [1, 2, 1, 1, 1, 1] | ['Table 1 illustrates the UAS and LAS of the four versions of our model (with decoding beam size 10) on the three treebanks, together with previous top-performing systems for comparison.', 'Note that the results of STACKPTR and our reimplementation of BIAF are the average of 5 repetitions instead of a single run.', 'Ou... | [['STACKPTR: Org', 'STACKPTR: +gpar', 'STACKPTR: +sib', 'STACKPTR: Full', 'UAS', 'LAS'], ['BIAF: re-impl', 'STACKPTR: Org', 'STACKPTR: +gpar', 'STACKPTR: +sib', 'STACKPTR: Full'], ['STACKPTR: Full', 'English', 'Chinese', 'German'], ['BIAF: re-impl', 'BIAF: Dozat and Manning (2017)'], ['UAS', 'LAS', 'Chinese', 'English'... | 1 |
P18-1130table_2 | Parsing performance on the test data of PTB with different versions of POS tags. | 2 | [['POS', 'Gold'], ['POS', 'Pred'], ['POS', 'None']] | 1 | [['UAS'], ['LAS'], ['UCM'], ['LCM']] | [['96.12±0.03', '95.06±0.05', '62.22±0.33', '55.74±0.44'], ['95.87±0.04', '94.19±0.04', '61.43±0.49', '49.68±0.47'], ['95.90±0.05', '94.21±0.04', '61.58±0.39', '49.87±0.46']] | column | ['UAS', 'LAS', 'UCM', 'LCM'] | ['Gold'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> <th>UCM</th> <th>LCM</th> </tr> </thead> <tbody> <tr> <td>POS || Gold</td> <td>96.12±0.03</td> <td>95.06±0.05</td> <td>62.22±0.33</td> <td>55.74±... | Table 2 | table_2 | P18-1130 | 7 | acl2018 | Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB. The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information. The parser with predicted (imperfect) POS tags, howev... | [1, 1, 1, 1] | ['Table 2 gives results of the parsers with different versions of POS tags on the test data of PTB.', 'The parser with gold-standard POS tags significantly outperforms the other two parsers, showing that dependency parsers can still benefit from accurate POS information.', 'The parser with predicted (imperfect) POS tag... | [['POS', 'UAS', 'LAS', 'UCM', 'LCM'], ['Gold', 'Pred', 'None'], ['Pred', 'None'], ['Pred', 'None']] | 1 |
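The "96.12±0.03"-style cells in this row (and in the UD rows below) are the mean and standard deviation over 5 repeated runs. A minimal sketch of producing such a cell, assuming the sample standard deviation is meant (the dump does not state which variant the authors used):

```python
from statistics import mean, stdev

def mean_std_cell(runs, digits=2):
    """Format repeated-run scores as a 'mean±std' table cell."""
    return f"{mean(runs):.{digits}f}±{stdev(runs):.{digits}f}"
```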
P18-1130table_4 | UAS and LAS on both the development and test datasets of 12 treebanks from UD Treebanks, together with BIAF for comparison. | 1 | [['bg'], ['ca'], ['cs'], ['de'], ['en'], ['es'], ['fr'], ['it'], ['nl'], ['no'], ['ro'], ['ru']] | 3 | [['Dev', 'BIAF', 'UAS'], ['Dev', 'BIAF', 'LAS'], ['Dev', 'STACKPTR', 'UAS'], ['Dev', 'STACKPTR', 'LAS'], ['Test', 'BIAF', 'UAS'], ['Test', 'BIAF', 'LAS'], ['Test', 'STACKPTR', 'UAS'], ['Test', 'STACKPTR', 'LAS']] | [['93.92±0.13', '89.05±0.11', '94.09±0.16', '89.17±0.14', '94.30±0.16', '90.04±0.16', '94.31±0.06', '89.96±0.07'], ['94.21±0.05', '91.97±0.06', '94.47±0.02', '92.51±0.05', '94.36±0.06', '92.05±0.07', '94.47±0.02', '92.39±0.02'], ['94.14±0.03', '90.89±0.04', '94.33±0.04', '91.24±0.05', '94.06±0.04', '90.60±0.05', '94.21... | column | ['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS'] | ['STACKPTR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || BIAF || UAS</th> <th>Dev || BIAF || LAS</th> <th>Dev || STACKPTR || UAS</th> <th>Dev || STACKPTR || LAS</th> <th>Test || BIAF || UAS</th> <th>Test || BIAF || LAS</th> <th>Test ||... | Table 4 | table_4 | P18-1130 | 9 | acl2018 | Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language. First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages: all with UAS higher than 90%. On nine languages, Catalan, Cz... | [1, 1, 1, 1, 1] | ['Table 4 summarizes the results of the STACKPTR parser, along with BIAF for comparison, on both the development and test datasets for each language.', 'First, both BIAF and STACKPTR parsers achieve relatively high parsing accuracies on all the 12 languages: all with UAS higher than 90%.', 'On nine languages, Cat...
| [['BIAF', 'STACKPTR'], ['BIAF', 'STACKPTR', 'UAS'], ['ca', 'cs', 'nl', 'de', 'en', 'fr', 'no', 'ru', 'es', 'STACKPTR', 'UAS', 'LAS'], ['bg', 'STACKPTR', 'UAS', 'BIAF', 'LAS'], ['it', 'ro', 'BIAF']] | 1 |
P18-1132table_2 | Number agreement error rates for various LSTM language models, broken down by the number of attractors. The top two rows represent the random and majority class baselines, while the next row (†) is the reported result from Linzen et al. (2016) for an LSTM language model with 50 hidden units (some entries, denoted by ≈,... | 1 | [['Random'], ['Majority'], ['LSTM H=50'], ['Our LSTM H=50'], ['Our LSTM H=150'], ['Our LSTM H=250'], ['Our LSTM H=350'], ['1B Word LSTM (repl)'], ['Char LSTM']] | 1 | [['n=0'], ['n=1'], ['n=2'], ['n=3'], ['n=4']] | [['50.0', '50.0', '50.0', '50.0', '50.0'], ['32.0', '32.0', '32.0', '32.0', '32.0'], ['6.8', '32.6', '≈50', '≈65', '≈70'], ['2.4', '8.0', '15.7', '26.1', '34.65'], ['1.5', '4.5', '9.0', '14.3', '17.6'], ['1.4', '3.3', '5.9', '9.7', '13.9'], ['1.3', '3.0', '5.7', '9.7', '13.8'], ['2.8', '8.0', '14.0', '21.8', '20.0'], [... | column | ['error', 'error', 'error', 'error', 'error'] | ['Our LSTM H=350'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>n=0</th> <th>n=1</th> <th>n=2</th> <th>n=3</th> <th>n=4</th> </tr> </thead> <tbody> <tr> <td>Random</td> <td>50.0</td> <td>50.0</td> <td>50.0</td> <td>50.0</td> ... | Table 2 | table_2 | P18-1132 | 3 | acl2018 | Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement. For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin despite ... | [1, 1, 1] | ['Table 2 indicates that, given enough capacity, LSTM language models without explicit syntactic supervision are able to perform well in number agreement.', 'For cases with multiple attractors, we observe that the LSTM language model with 50 hidden units trails behind its larger counterparts by a substantial margin des...
| [['Our LSTM H=350'], ['LSTM H=50', 'Our LSTM H=50', 'Our LSTM H=150', 'Our LSTM H=250', 'Our LSTM H=350'], ['Char LSTM', '1B Word LSTM (repl)']] | 1 |
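The number-agreement row above reports error rates broken down by attractor count n. A small sketch of that breakdown, assuming each test case carries its attractor count and a correctness flag:

```python
from collections import defaultdict

def error_rates_by_attractors(cases):
    """cases: iterable of (n_attractors, is_correct). Returns {n: % error rate}."""
    total = defaultdict(int)
    errors = defaultdict(int)
    for n, is_correct in cases:
        total[n] += 1
        errors[n] += 0 if is_correct else 1
    return {n: 100.0 * errors[n] / total[n] for n in total}
```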
P18-1136table_5 | Evaluation on In-Car Assistant. Human, rulebased and KV Retrieval Net evaluation (with *) are reported from (Eric et al., 2017), which are not directly comparable. Mem2Seq achieves highest BLEU and entity F1 score over baselines. | 1 | [['Human*'], ['Rule-Based*'], ['KV Retrieval Net*'], ['Seq2Seq'], ['+Attn'], ['Ptr-Unk'], ['Mem2Seq H1'], ['Mem2Seq H3'], ['Mem2Seq H6']] | 1 | [['BLEU'], ['Ent. F1'], ['Sch. F1'], ['Wea. F1'], ['Nav. F1']] | [['13.5', '60.7', '64.3', '61.6', '55.2'], ['6.6', '43.8', '61.3', '39.5', '40.4'], ['13.2', '48.0', '62.9', '47.0', '41.3'], ['8.4', '10.3', '09.7', '14.1', '07.0'], ['9.3', '19.9', '23.4', '25.6', '10.8'], ['8.3', '22.7', '26.9', '26.7', '14.9'], ['11.6', '32.4', '39.8', '33.6', '24.6'], ['12.6', '33.4', '49.3', '32.... | column | ['BLEU', 'Ent. F1', 'Sch. F1', 'Wea. F1', 'Nav. F1'] | ['Mem2Seq H3'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>Ent. F1</th> <th>Sch. F1</th> <th>Wea. F1</th> <th>Nav. F1</th> </tr> </thead> <tbody> <tr> <td>Human*</td> <td>13.5</td> <td>60.7</td> <td>64.3</td> ... | Table 5 | table_5 | P18-1136 | 5 | acl2018 | In Table 5, our model can achieve highest 12.6 BLEU score. In addition, Mem2Seq has shown promising results in terms of Entity F1 scores (33.4%), which are, in general, much higher than those of other baselines. Note that the numbers reported from Eric et al. (2017) are not directly comparable to ours as we mention bel... | [1, 1, 2, 1, 1, 2, 2, 2] | ['In Table 5, our model can achieve highest 12.6 BLEU score.', 'In addition, Mem2Seq has shown promising results in terms of Entity F1 scores (33.4%), which are, in general, much higher than those of other baselines.', 'Note that the numbers reported from Eric et al. (2017) are not directly comparable to ours as we men... | [['Mem2Seq H3', 'BLEU'], ['Mem2Seq H3', 'Ent. 
F1'], ['Human*', 'Rule-Based*', 'KV Retrieval Net*'], ['Seq2Seq', 'Ptr-Unk', 'Mem2Seq H1', 'Mem2Seq H3', 'Mem2Seq H6'], ['Human*', 'Ent. F1', 'Sch. F1', 'Wea. F1', 'Nav. F1'], ['Human*', 'Ent. F1', 'Sch. F1', 'Wea. F1', 'Nav. F1'], ['Human*', 'BLEU'], ['KV Retrieval Net*']] | 1 |
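The Entity F1 scores in the Mem2Seq row compare entities appearing in a generated response against those in the gold response. A minimal sketch of one common set-based way to compute it per response (the paper's exact micro-averaging over the dataset may differ):

```python
def entity_prf(gold_entities, pred_entities):
    """Set-based precision/recall/F1 between gold and predicted entity sets."""
    gold, pred = set(gold_entities), set(pred_entities)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1
```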
P18-1138table_3 | Evaluation results on factoid question answering dialogues. | 2 | [['model', 'LSTM'], ['model', 'HRED'], ['model', 'GenDS'], ['model', 'NKD-ori'], ['model', 'NKD-gated'], ['model', 'NKD-atte']] | 1 | [['accuracy (%)'], ['recall (%)']] | [['7.8', '7.5'], ['3.7', '3.9'], ['70.3', '63.1'], ['67.0', '56.2'], ['77.6', '77.3'], ['55.1', '46.6']] | column | ['accuracy (%)', 'recall (%)'] | ['NKD-ori', 'NKD-gated', 'NKD-atte'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>accuracy (%)</th> <th>recall (%)</th> </tr> </thead> <tbody> <tr> <td>model || LSTM</td> <td>7.8</td> <td>7.5</td> </tr> <tr> <td>model || HRED</td> <td>3.7</td> <td>... | Table 3 | table_3 | P18-1138 | 6 | acl2018 | Table 3 displays the accuracy and recall of entities on factoid question answering dialogues. The performance of NKD is slightly better than the specific QA solution GenDS, while LSTM and HRED, which are designed for chit-chat, almost fail in this task. All the variants of NKD models are capable of generating entities wit... | [1, 1, 1] | ['Table 3 displays the accuracy and recall of entities on factoid question answering dialogues.', 'The performance of NKD is slightly better than the specific QA solution GenDS, while LSTM and HRED, which are designed for chit-chat, almost fail in this task.', 'All the variants of NKD models are capable of generating enti... | [['accuracy (%)', 'recall (%)'], ['NKD-gated', 'GenDS', 'LSTM', 'HRED'], ['NKD-ori', 'NKD-gated', 'NKD-atte', 'accuracy (%)', 'recall (%)']] | 1 |
P18-1138table_4 | Evaluation results on entire dataset. | 2 | [['model', 'LSTM'], ['model', 'HRED'], ['model', 'GenDS'], ['model', 'NKD-ori'], ['model', 'NKD-gated'], ['model', 'NKD-atte']] | 1 | [['accuracy (%)'], ['recall (%)'], ['entity number']] | [['2.6', '2.5', '1.65'], ['1.4', '1.5', '1.79'], ['20.9', '17.4', '1.34'], ['22.9', '19.7', '2.55'], ['24.8', '25.6', '1.59'], ['18.4', '16.0', '3.41']] | column | ['accuracy (%)', 'recall (%)', 'entity number'] | ['NKD-gated'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>accuracy (%)</th> <th>recall (%)</th> <th>entity number</th> </tr> </thead> <tbody> <tr> <td>model || LSTM</td> <td>2.6</td> <td>2.5</td> <td>1.65</td> </tr> <tr> <td... | Table 4 | table_4 | P18-1138 | 6 | acl2018 | Table 4 lists the accuracy and recall of entities on the entire dataset including both the factoid QA and knowledge grounded chit-chats. Not surprisingly, both NKD-ori and NKD-gated outperform GenDS on the entire dataset, and the relative improvement over GenDS is even higher than the improvement in QA dialogues. It co... | [1, 1, 2, 1, 1, 1, 2] | ['Table 4 lists the accuracy and recall of entities on the entire dataset including both the factoid QA and knowledge grounded chit-chats.', 'Not surprisingly, both NKD-ori and NKD-gated outperform GenDS on the entire dataset, and the relative improvement over GenDS is even higher than the improvement in QA dialogues.'... | [['accuracy (%)', 'recall (%)', 'entity number'], ['NKD-ori', 'NKD-gated', 'GenDS'], ['NKD-ori', 'NKD-gated', 'GenDS'], ['NKD-ori', 'NKD-gated', 'NKD-atte', 'entity number'], ['LSTM', 'HRED', 'entity number', 'accuracy (%)', 'recall (%)'], ['NKD-gated', 'NKD-ori', 'NKD-atte', 'accuracy (%)', 'recall (%)', 'entity numbe... | 1 |
P18-1138table_5 | Human evaluation result. | 2 | [['model', 'LSTM'], ['model', 'HRED'], ['model', 'GenDS'], ['model', 'NKD-ori'], ['model', 'NKD-gated'], ['model', 'NKD-atte']] | 1 | [['Fluency'], ['Appropriateness of knowledge'], ['Entire Correctness']] | [['2.52', '0.88', '0.8'], ['2.48', '0.36', '0.32'], ['2.76', '1.36', '1.34'], ['2.42', '1.92', '1.58'], ['2.08', '1.72', '1.44'], ['2.7', '1.54', '1.38']] | column | ['Fluency', 'Appropriateness of knowledge', 'Entire Correctness'] | ['NKD-ori', 'NKD-gated', 'NKD-atte'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Appropriateness of knowledge</th> <th>Entire Correctness</th> </tr> </thead> <tbody> <tr> <td>model || LSTM</td> <td>2.52</td> <td>0.88</td> <td>0.8</td> </t... | Table 5 | table_5 | P18-1138 | 7 | acl2018 | The results of human evaluation in Table 5 also validate the superiority of the proposed model, especially on appropriateness. Responses generated by LSTM and HRED are of high fluency, but are simply repetitions, or even dull responses as “I don’t know.”, “Good.”. NKD-gated is more adept at incorporating the knowl... | [1, 1, 1, 1, 2, 2] | ['The results of human evaluation in Table 5 also validate the superiority of the proposed model, especially on appropriateness.', 'Responses generated by LSTM and HRED are of high fluency, but are simply repetitions, or even dull responses as “I don’t know.”, “Good.”.', 'NKD-gated is more adept at ... | [None, ['LSTM', 'HRED', 'Fluency'], ['NKD-gated', 'Appropriateness of knowledge', 'Entire Correctness', 'NKD-atte', 'Fluency'], ['NKD-ori', 'Appropriateness of knowledge', 'Entire Correctness'], None, ['Fluency', 'Appropriateness of knowledge', 'Entire Correctness']] | 1 |
P18-1141table_3 | Roundtrip translation (mean/median accuracy) and sentiment analysis (F1) results for wordbased (WORD) and character-based (CHAR) multilingual embeddings. N (coverage): # queries contained in the embedding space. The best result across WORD and CHAR is set in bold. | 1 | [['RTSIMPLE'], ['BOW'], ['S-ID'], ['SAMPLE'], ['CLIQUE'], ['N(t)'], ['N(t)-CLIQUE'], ['N(t)-CC'], ['N(t)-EDGE']] | 4 | [['roundtrip translation', 'WORD', 'S1', 'µ'], ['roundtrip translation', 'WORD', 'S1', 'Md'], ['roundtrip translation', 'WORD', 'R1', 'µ'], ['roundtrip translation', 'WORD', 'R1', 'Md'], ['roundtrip translation', 'WORD', 'S4', 'µ'], ['roundtrip translation', 'WORD', 'S4', 'Md'], ['roundtrip translation', 'WORD', 'S16',... | [['33', '24', '37', '36', '', '', '', '', '67', '24', '13', '32', '21', '', '', '', '', '70', '', '', '', ''], ['7', '5', '8', '7', '13', '12', '26', '28', '69', '3', '2', '3', '2', '5', '4', '10', '11', '70', '33', '81', '13', '83'], ['46', '46', '52', '55', '63', '76', '79', '91', '65', '9', '5', '9', '5', '14', '9',... | column | ['µ', 'Md', 'µ', 'Md', 'µ', 'Md', 'µ', 'Md', 'N', 'µ', 'Md', 'µ', 'Md', 'µ', 'Md', 'µ', 'Md', 'N', 'pos', 'neg', 'pos', 'neg'] | ['N(t)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>roundtrip translation || WORD || S1 || µ</th> <th>roundtrip translation || WORD || S1 || Md</th> <th>roundtrip translation || WORD || R1 || µ</th> <th>roundtrip translation || WORD || R1 || Md</th> ... | Table 3 | table_3 | P18-1141 | 7 | acl2018 | Table 3 presents evaluation results for roundtrip translation and sentiment analysis. Validity of roundtrip (RT) evaluation results. RTSIMPLE (line 1) is not competitive; e.g., its accuracy is lower by almost half compared to N(t). We also see that RT is an excellent differentiator of poor multilingual embeddings (e.g.... 
| [1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1] | ['Table 3 presents evaluation results for roundtrip translation and sentiment analysis.', 'Validity of roundtrip (RT) evaluation results.', 'RTSIMPLE (line 1) is not competitive; e.g., its accuracy is lower by almost half compared to N(t).', 'We also see that RT is an excellent differentiator of poor multilingual embed... | [['roundtrip translation', 'sentiment analysis'], ['roundtrip translation'], ['RTSIMPLE', 'N(t)'], ['RTSIMPLE', 'S-ID', 'N(t)', 'BOW'], None, ['CLIQUE', 'N(t)'], ['BOW'], ['S-ID', 'WORD', 'N(t)', 'S4', 'µ'], ['CHAR', 'S-ID'], ['sentiment analysis', 'N(t)', 'S-ID'], ['S-ID', 'BOW'], ['CLIQUE'], ['N(t)-CLIQUE', 'N(t)-CC'... | 1 |
P18-1144table_4 | Development results. | 4 | [['Input', 'Auto seg', 'Models', 'Word baseline'], ['Input', 'Auto seg', 'Models', 'Word+char LSTM'], ['Input', 'Auto seg', 'Models', 'Word+char LSTM'], ['Input', 'Auto seg', 'Models', 'Word+char+bichar LSTM'], ['Input', 'Auto seg', 'Models', 'Word+char CNN'], ['Input', 'Auto seg', 'Models', 'Word+char+bichar CNN'], ['... | 1 | [['P'], ['R'], ['F1']] | [['73.20', '57.05', '64.12'], ['71.98', '65.41', '68.54'], ['71.08', '65.83', '68.35'], ['72.63', '67.60', '70.03'], ['73.06', '66.29', '69.51'], ['72.01', '65.50', '68.60'], ['67.12', '58.42', '62.47'], ['69.30', '62.47', '65.71'], ['71.67', '64.02', '67.63'], ['72.64', '66.89', '69.64'], ['74.64', '68.83', '71.62']] | column | ['P', 'R', 'F1'] | ['Lattice'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Input || Auto seg || Models || Word baseline</td> <td>73.20</td> <td>57.05</td> <td>64.12</td> </tr> <tr> ... | Table 4 | table_4 | P18-1144 | 7 | acl2018 | As shown in Table 4, without using word segmentation, a characterbased LSTM-CRF model gives a development F1- score of 62.47%. Adding character-bigram and softword representations as described in Section 3.1 increases the F1-score to 67.63% and 65.71%, respectively, demonstrating the usefulness of both sources of infor... | [1, 1, 1, 2, 0, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1] | ['As shown in Table 4, without using word segmentation, a characterbased LSTM-CRF model gives a development F1- score of 62.47%.', 'Adding character-bigram and softword representations as described in Section 3.1 increases the F1-score to 67.63% and 65.71%, respectively, demonstrating the usefulness of both sources of ... 
| [['F1', 'Char baseline'], ['F1', 'Char+softword', 'Char+bichar'], ['F1', 'Char+bichar+softword'], ['Char+bichar+softword'], None, ['Word+char LSTM', "Word+char LSTM'", 'Word+char+bichar LSTM', 'Word+char CNN', 'Word+char+bichar CNN'], ['Auto seg', 'F1', 'Word baseline', 'Char baseline'], ['Word baseline', 'Char baselin... | 1 |
P18-1145table_3 | Performances of character-based methods on KBP2017Eval Trigger Identification task. | 2 | [['Model', 'FBRNN(Char)'], ['Model', 'NPN(IOB)'], ['Model', 'NPN(Task-specific)']] | 1 | [['P'], ['R'], ['F1']] | [['57.97', '36.92', '45.11'], ['60.96', '47.39', '53.32'], ['64.32', '53.16', '58.21']] | column | ['P', 'R', 'F1'] | ['NPN(Task-specific)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || FBRNN(Char)</td> <td>57.97</td> <td>36.92</td> <td>45.11</td> </tr> <tr> <td>Model || NPN(IOB)<... | Table 3 | table_3 | P18-1145 | 7 | acl2018 | Table 3 shows the results on KBP2017Eval. We can see that NPN(Task-specific) outperforms other methods significantly. We believe this is because: 1) FBRNN(Char) only regards tokens in the candidate table as potential trigger nuggets, which limits the choice of possible trigger nuggets and
results in a very low recall r... | [1, 1, 2, 2] | ['Table 3 shows the results on KBP2017Eval.', 'We can see that NPN(Task-specific) outperforms other methods significantly.', 'We believe this is because: 1) FBRNN(Char) only regards tokens in the candidate table as potential trigger nuggets, which limits the choice of possible trigger nuggets and\nresults in a very low... | [None, ['NPN(Task-specific)'], ['FBRNN(Char)'], ['NPN(IOB)']] | 1 |
P18-1145table_6 | Results of using different representation on Trigger Classification task on KBP2017Eval. | 2 | [['Model', 'DMCNN(Word)'], ['Model', 'NPN(Char)'], ['Model', 'NPN(Task-specific)']] | 1 | [['P'], ['R'], ['F1']] | [['54.81', '46.84', '50.51'], ['56.19', '43.88', '49.28'], ['57.63', '47.63', '52.15']] | column | ['P', 'R', 'F1'] | ['NPN(Task-specific)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || DMCNN(Word)</td> <td>54.81</td> <td>46.84</td> <td>50.51</td> </tr> <tr> <td>Model || NPN(Char)... | Table 6 | table_6 | P18-1145 | 8 | acl2018 | Table 6 shows the experiment results. We can see that neither character-level or wordlevel representation can achieve competitive results with the NPNs. This verified the necessity of hybrid representation. | [1, 1, 1] | ['Table 6 shows the experiment results.', 'We can see that neither character-level or wordlevel representation can achieve competitive results with the NPNs.', 'This verified the necessity of hybrid representation.'] | [None, ['NPN(Task-specific)', 'DMCNN(Word)', 'NPN(Char)'], ['NPN(Task-specific)']] | 1 |
P18-1150table_2 | TEST results. “(200K)”, “(2M)” and “(20M)” represent training with the corresponding number of additional sentences from Gigaword. | 2 | [['Model', 'PBMT'], ['Model', 'SNRG'], ['Model', 'Tree2Str'], ['Model', 'MSeq2seq+Anon'], ['Model', 'Graph2seq+copy'], ['Model', 'Graph2seq+charLSTM+copy'], ['Model', 'MSeq2seq+Anon (200K)'], ['Model', 'MSeq2seq+Anon (2M)'], ['Model', 'Seq2seq+charLSTM+copy (200K)'], ['Model', 'Seq2seq+charLSTM+copy (2M)'], ['Model', '... | 1 | [['BLEU']] | [['26.9'], ['25.6'], ['23.0'], ['22.0'], ['22.7'], ['23.3'], ['27.4'], ['32.3'], ['27.4'], ['31.7'], ['28.2'], ['33.0']] | column | ['BLEU'] | ['Graph2seq+charLSTM+copy (200K)', 'Graph2seq+charLSTM+copy (2M)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || PBMT</td> <td>26.9</td> </tr> <tr> <td>Model || SNRG</td> <td>25.6</td> </tr> <tr> <td>Model || Tree2Str</td> <td>... | Table 2 | table_2 | P18-1150 | 9 | acl2018 | Table 2 compares our final results with existing work. MSeq2seq+Anon (Konstas et al., 2017) is an attentional multi-layer sequence-to-sequence model trained with the anonymized data. PBMT (Pourdamghani et al., 2016) adopts a phrase-based model for machine translation (Koehn et al., 2003) on the input of linearized AMR ... | [1, 2, 2, 1, 1, 2, 2, 1, 1, 1, 2] | ['Table 2 compares our final results with existing work.', 'MSeq2seq+Anon (Konstas et al., 2017) is an attentional multi-layer sequence-to-sequence model trained with the anonymized data.', 'PBMT (Pourdamghani et al., 2016) adopts a phrase-based model for machine translation (Koehn et al., 2003) on the input of lineari... 
| [None, ['MSeq2seq+Anon'], ['PBMT', 'SNRG', 'Tree2Str'], ['BLEU', 'Graph2seq+charLSTM+copy', 'MSeq2seq+Anon'], ['BLEU', 'MSeq2seq+Anon', 'Graph2seq+copy'], ['MSeq2seq+Anon'], None, None, ['BLEU', 'Graph2seq+charLSTM+copy (200K)', 'Graph2seq+charLSTM+copy (2M)', 'MSeq2seq+Anon (200K)', 'MSeq2seq+Anon (2M)'], ['BLEU', 'Gr... | 1 |
P18-1151table_4 | Human evaluation results. | 3 | [['Model', 'Existing Models', 'BLSTM'], ['Model', 'Existing Models', 'SMT'], ['Model', 'Existing Models', 'TFF'], ['Model', 'Adapted Model', 'TLSTM'], ['Model', 'Our Proposed', 'GTR-LSTM']] | 3 | [['Dataset/Metric', 'Seen', 'Correctness'], ['Dataset/Metric', 'Seen', 'Grammar'], ['Dataset/Metric', 'Seen', 'Fluency'], ['Dataset/Metric', 'Unseen', 'Correctness'], ['Dataset/Metric', 'Unseen', 'Grammar'], ['Dataset/Metric', 'Unseen', 'Fluency'], ['Dataset/Metric', 'GKB', 'Correctness'], ['Dataset/Metric', 'GKB', 'Gr... | [['2.25', '2.33', '2.29', '1.53', '1.71', '1.68', '1.54', '1.84', '1.84'], ['2.03', '2.11', '2.07', '1.36', '1.48', '1.44', '1.81', '1.99', '1.89'], ['1.77', '1.91', '1.88', '1.44', '1.69', '1.66', '1.71', '1.99', '1.96'], ['2.53', '2.61', '2.55', '1.75', '1.93', '1.86', '2.21', '2.38', '2.35'], ['2.64', '2.66', '2.57'... | column | ['Correctness', 'Grammar', 'Fluency', 'Correctness', 'Grammar', 'Fluency', 'Correctness', 'Grammar', 'Fluency'] | ['GTR-LSTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dataset/Metric || Seen || Correctness</th> <th>Dataset/Metric || Seen || Grammar</th> <th>Dataset/Metric || Seen || Fluency</th> <th>Dataset/Metric || Unseen || Correctness</th> <th>Dataset/Metri... | Table 4 | table_4 | P18-1151 | 9 | acl2018 | Table 4 shows the results of the human evaluations. The results confirm the automatic evaluation in which our proposed model achieves the best scores. | [1, 1] | ['Table 4 shows the results of the human evaluations.', 'The results confirm the automatic evaluation in which our proposed model achieves the best scores.'] | [None, ['GTR-LSTM']] | 1 |
P18-1152table_4 | Crowd-sourced ablation evaluation of generations on TripAdvisor. Each ablation uses only one discriminative communication model, and is compared to ADAPTIVELM. | 2 | [['Ablation vs. LM', 'REPETITION ONLY'], ['Ablation vs. LM', 'ENTAILMENT ONLY'], ['Ablation vs. LM', 'RELEVANCE ONLY'], ['Ablation vs. LM', 'LEXICAL STYLE ONLY'], ['Ablation vs. LM', 'ALL']] | 1 | [['Repetition'], ['Contradiction'], ['Relevance'], ['Clarity'], ['Better'], ['Neither'], ['Worse']] | [['+0.63', '+0.30', '+0.37', '+0.42', '50%', '23%', '27%'], ['+0.01', '+0.02', '+0.05', '-0.10', '39%', '20%', '41%'], ['-0.19', '+0.09', '+0.10', '+0.060', '36%', '22%', '42%'], ['+0.11', '+0.16', '+0.20', '+0.16', '38%', '25%', '38%'], ['+0.23', '-0.02', '+0.19', '-0.03', '47%', '19%', '34%']] | column | ['Repetition', 'Contradiction', 'Relevance', 'Clarity', 'Better', 'Neither', 'Worse'] | ['Ablation vs. LM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Repetition</th> <th>Contradiction</th> <th>Relevance</th> <th>Clarity</th> <th>Better</th> <th>Neither</th> <th>Worse</th> </tr> </thead> <tbody> <tr> <td>Ablation vs. LM |... | Table 4 | table_4 | P18-1152 | 8 | acl2018 | To investigate the effect of individual discriminators on the overall performance, we report the results of ablations of our model in Table 4. For each ablation we include only one of the communication modules, and train a single mixture coefficient for combining that module and the language model. The diagonal of Tabl... | [1, 2, 2, 1] | ['To investigate the effect of individual discriminators on the overall performance, we report the results of ablations of our model in Table 4.', 'For each ablation we include only one of the communication modules, and train a single mixture coefficient for combining that module and the language model.', 'The diagonal... | [None, None, None, ['Ablation vs. LM', 'Repetition', 'Contradiction', 'Relevance', 'Clarity']] | 1 |
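Each ablation in the row above trains a single mixture coefficient that combines one discriminative communication module with the base language model. A hypothetical sketch of such a scoring mixture at decoding time (the function and weight names are illustrative, not taken from the paper):

```python
def mixture_score(lm_logprob, disc_scores, weights):
    """Linearly combine an LM log-probability with weighted discriminator scores.

    With a single discriminator and one learned coefficient, this reduces to
    lm_logprob + w * disc_score, matching the one-module ablation setting.
    """
    assert len(disc_scores) == len(weights)
    return lm_logprob + sum(w * s for w, s in zip(weights, disc_scores))
```

Candidate continuations would then be reranked by this combined score instead of the raw LM log-probability.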
P18-1154table_3 | Performance of baselines and our model with different subsets of features as per various quantitative measures. (S = Score, M = Move, T = Threat features) On all data subsets, our model outperforms the TEMP and NN baselines. Among proposed models, GAC performs better than GAC-sparse & RAW in general. For NN, GAC-spars... | 4 | [['Dataset', 'MoveDesc', 'Features', 'TEMP'], ['Dataset', 'MoveDesc', 'Features', 'NN (M+T+S)'], ['Dataset', 'MoveDesc', 'Features', 'RAW'], ['Dataset', 'MoveDesc', 'Features', 'GAC-sparse'], ['Dataset', 'MoveDesc', 'Features', 'GAC (M+T)'], ['Dataset', 'Quality', 'Features', 'TEMP'], ['Dataset', 'Quality', 'Features',... | 1 | [['BLEU'], ['BLEU-2'], ['Diversity']] | [['0.72', '20.77', '4.43'], ['1.28', '21.07', '7.85'], ['1.13', '13.74', '2.37'], ['1.76', '21.49', '4.29'], ['1.85', '23.35', '4.72'], ['16.17', '47.29', '1.16'], ['5.98', '42.97', '4.52'], ['16.92', '47.72', '1.07'], ['14.98', '51.46', '2.63'], ['16.94', '47.65', '1.01'], ['1.28', '24.49', '6.97'], ['2.80', '23.26', ... | column | ['BLEU', 'BLEU-2', 'Diversity'] | ['GAC (M+T)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>BLEU-2</th> <th>Diversity</th> </tr> </thead> <tbody> <tr> <td>Dataset || MoveDesc || Features || TEMP</td> <td>0.72</td> <td>20.77</td> <td>4.43</td> </tr> ... | Table 3 | table_3 | P18-1154 | 6 | acl2018 | Table 3 shows the BLEU and BLEU-2 scores for the proposed model under different subsets of features. Overall BLEU scores are low, likely due to the inherent variance in the language generation task (Novikova et al., 2017), although a precursory examination of the outputs for data points selected randomly from test set... | [1, 1] | ['Table 3 shows the BLEU and BLEU-2 scores for the proposed model under different subsets of features.', 'Overall BLEU scores are low, likely due to the inherent variance in the language generation task (Novikova et al., 2017), although a precursory examination of the outputs for data points selected randomly from tes... | [['BLEU', 'BLEU-2'], ['BLEU', 'BLEU-2']] | 1 |
P18-1156table_1 | Comparison between various RC datasets | 2 | [['Metrics for Comparative Analysis', 'Avg. word distance'], ['Metrics for Comparative Analysis', 'Avg. sentence distance'], ['Metrics for Comparative Analysis', 'Number of sentences for inferencing'], ['Metrics for Comparative Analysis', '% of instances where both Query&Answer entities were found in passage'], ['Metri... | 1 | [['Movie QA'], ['NarrativeQA over plot-summaries'], ['SelfRC'], ['ParaphraseRC']] | [['20.67', '24.94', '13.4', '45.3'], ['1.67', '1.95', '1.34', '2.7'], ['2.3', '1.95', '1.51', '2.47'], ['67.96', '59.4', '58.79', '12.25'], ['59.61', '61.77', '63.39', '47.05'], ['25', '26.26', '38', '21']] | row | ['Avg. word distance', 'Avg. sentence distance', 'Number of sentences for inferencing', '% of instances where both Query&Answer entities were found in passage', '% of instances where Only Query entities were found in passage', '% Length of the Longest Common sequence of non-stop words in Query (w.r.t Query Length) and ... | ['SelfRC', 'ParaphraseRC'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Movie QA</th> <th>NarrativeQA over plot-summaries</th> <th>SelfRC</th> <th>ParaphraseRC</th> </tr> </thead> <tbody> <tr> <td>Metrics for Comparative Analysis || Avg. word distance</td> ... | Table 1 | table_1 | P18-1156 | 5 | acl2018 | In Table 1, we compare various RC datasets with two embodiments of our dataset i.e. the SelfRC and ParaphraseRC. We use NER and noun phrase/verb phrase extraction over the entire dataset to identify key entities in the question, plot and answer which is in turn used to compute the metrics mentioned in the table. The me... | [1, 1, 2, 2, 1] | ['In Table 1, we compare various RC datasets with two embodiments of our dataset i.e. the SelfRC and ParaphraseRC.', 'We use NER and noun phrase/verb phrase extraction over the entire dataset to identify key entities in the question, plot and answer which is in turn used to compute the metrics mentioned in the table.',... | [['SelfRC', 'ParaphraseRC'], ['Movie QA', 'NarrativeQA over plot-summaries'], ['Avg. word distance', 'Avg. sentence distance'], ['Number of sentences for inferencing'], ['ParaphraseRC']] | 1 |
P18-1156table_3 | Performance of the SpanModel and GenModel on the Span Test subset and the Full Test Set of the Self and ParaphraseRC. | 2 | [['SelfRC', 'SpanModel'], ['SelfRC', 'GenModel (with augmented training data)'], ['ParaphraseRC', 'SpanModel'], ['ParaphraseRC', 'SpanModel with Preprocessed Data'], ['ParaphraseRC', 'GenModel (with augmented training data)']] | 2 | [['Span Test', 'Acc.'], ['Span Test', 'F1'], ['Span Test', 'BLEU'], ['Full Test', 'Acc.'], ['Full Test', 'F1'], ['Full Test', 'BLEU']] | [['46.14', '57.49', '22.98', '37.53', '50.56', '7.47'], ['16.45', '26.97', '7.61', '15.31', '24.05', '5.50'], ['17.93', '26.27', '9.39', '9.78', '16.33', '2.60'], ['27.49', '35.10', '12.78', '14.92', '21.53', '2.75'], ['12.66', '19.48', '4.41', '5.42', '9.64', '1.75']] | column | ['Acc.', 'F1', 'BLEU', 'Acc.', 'F1', 'BLEU'] | ['SelfRC', 'ParaphraseRC'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Span Test || Acc.</th> <th>Span Test || F1</th> <th>Span Test || BLEU</th> <th>Full Test || Acc.</th> <th>Full Test || F1</th> <th>Full Test || BLEU</th> </tr> </thead> <tbody> <tr> ... | Table 3 | table_3 | P18-1156 | 8 | acl2018 | SpanModel v/s GenModel:. Comparing the first two rows (SelfRC) and the last two rows (ParaphraseRC) of Table 3 we see that the SpanModel clearly outperforms the GenModel. This is not very surprising for two reasons. First, around 70% (and 50%) of the answers in SelfRC (and ParaphraseRC) respectively, match an exact spa... | [1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1] | ['SpanModel v/s GenModel:.', 'Comparing the first two rows (SelfRC) and the last two rows (ParaphraseRC) of Table 3 we see that the SpanModel clearly outperforms the GenModel.', 'This is not very surprising for two reasons.', 'First, around 70% (and 50%) of the answers in SelfRC (and ParaphraseRC) respectively, match ... | [None, ['SpanModel', 'GenModel (with augmented training data)', 'SpanModel with Preprocessed Data'], None, ['SpanModel', 'SpanModel with Preprocessed Data'], ['GenModel (with augmented training data)'], ['GenModel (with augmented training data)'], ['GenModel (with augmented training data)'], ['GenModel (with augmented ... | 1 |
P18-1157table_1 | Main results—Comparison of different answer module architectures. Note that SAN performs best in both Exact Match and F1 metrics. | 2 | [['Answer Module', 'Standard 1-step'], ['Answer Module', 'Fixed 5-step with Memory Network (prediction from final step)'], ['Answer Module', 'Fixed 5-step with Memory Network (prediction averaged from all steps)'], ['Answer Module', 'Dynamic steps (max 5) with ReasoNet'], ['Answer Module', 'Stochastic Answer Network (S... | 1 | [['EM'], ['F1']] | [['75.139', '83.367'], ['75.033', '83.327'], ['75.256', '83.215'], ['75.355', '83.360'], ['76.235', '84.056']] | column | ['EM', 'F1'] | ['Stochastic Answer Network (SAN) Fixed 5-step'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Answer Module || Standard 1-step</td> <td>75.139</td> <td>83.367</td> </tr> <tr> <td>Answer Module || Fixed 5-step with Memo... | Table 1 | table_1 | P18-1157 | 6 | acl2018 | The main results in terms of EM and F1 are shown in Table 1. We observe that SAN achieves 76.235 EM and 84.056 F1, outperforming all other models. Standard 1-step model only achieves 75.139 EM and dynamic steps (via ReasoNet) achieves only 75.355 EM. SAN also outperforms a 5-step memory net with averaging, which implie... | [1, 1, 1, 1] | ['The main results in terms of EM and F1 are shown in Table 1.', 'We observe that SAN achieves 76.235 EM and 84.056 F1, outperforming all other models.', 'Standard 1-step model only achieves 75.139 EM and dynamic steps (via ReasoNet) achieves only 75.355 EM.', 'SAN also outperforms a 5-step memory net with averaging, w... | [['EM', 'F1'], ['Stochastic Answer Network (SAN) Fixed 5-step', 'EM', 'F1'], ['Standard 1-step', 'EM', 'F1', 'Dynamic steps (max 5) with ReasoNet'], ['Stochastic Answer Network (SAN) Fixed 5-step']] | 1 |
P18-1157table_4 | Effect of number of steps: best and worst results are boldfaced. | 2 | [['SAN', '1 step'], ['SAN', '2 step'], ['SAN', '3 step'], ['SAN', '4 step'], ['SAN', '5 step'], ['SAN', '6 step'], ['SAN', '7 step'], ['SAN', '8 step'], ['SAN', '9 step'], ['SAN', '10 step']] | 1 | [['EM'], ['F1']] | [['75.38', '83.29'], ['75.43', '83.41'], ['75.89', '83.57'], ['75.92', '83.85'], ['76.24', '84.06'], ['75.99', '83.72'], ['76.04', '83.92'], ['76.03', '83.82'], ['75.95', '83.75'], ['76.04', '83.89']] | column | ['EM', 'F1'] | ['SAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>SAN || 1 step</td> <td>75.38</td> <td>83.29</td> </tr> <tr> <td>SAN || 2 step</td> <td>75.43</td> <td>83.41</td> ... | Table 4 | table_4 | P18-1157 | 7 | acl2018 | Table 4 shows the development set scores for T = 1 to T = 10. We observe that there is a gradual improvement as we increase T = 1 to T = 5, but after 5 steps the improvements have saturated. In fact, the EM/F1 scores drop slightly, but considering that the random initialization results in Table 3 show a standard deviat... | [1, 1, 1, 2] | ['Table 4 shows the development set scores for T = 1 to T = 10.', 'We observe that there is a gradual improvement as we increase T = 1 to T = 5, but after 5 steps the improvements have saturated.', 'In fact, the EM/F1 scores drop slightly, but considering that the random initialization results in Table 3 show a standar... | [['1 step', '2 step', '3 step', '4 step', '5 step', '6 step', '7 step', '8 step', '9 step', '10 step'], ['1 step', '2 step', '3 step', '4 step', '5 step'], ['EM', 'F1', '10 step', '5 step'], None] | 1 |
P18-1157table_5 | Test performance on the adversarial SQuAD dataset in F1 score. | 2 | [['Single model:', 'LR (Rajpurkar et al., 2016)'], ['Single model:', 'SEDT (Liu et al., 2017a)'], ['Single model:', 'BiDAF (Seo et al., 2016)'], ['Single model:', 'jNet (Zhang et al., 2017)'], ['Single model:', 'ReasoNet(Shen et al., 2017)'], ['Single model:', 'RaSoR(Lee et al., 2016)'], ['Single model:', 'Mnemonic(Hu ... | 1 | [['AddSent'], ['AddOneSent']] | [['23.2', '30.3'], ['33.9', '44.8'], ['34.3', '45.7'], ['37.9', '47.0'], ['39.4', '50.3'], ['39.5', '49.5'], ['46.6', '56.0'], ['45.2', '55.7'], ['45.4', '55.8'], ['46.6', '56.5']] | column | ['F1', 'F1'] | ['SAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AddSent</th> <th>AddOneSent</th> </tr> </thead> <tbody> <tr> <td>Single model: || LR (Rajpurkar et al., 2016)</td> <td>23.2</td> <td>30.3</td> </tr> <tr> <td>Single model: || S... | Table 5 | table_5 | P18-1157 | 8 | acl2018 | The results in Table 5 show that SAN achieves the new state-of-the-art performance and SAN's superior result is mainly attributed to the multi-step answer module, which leads to significant improvement in F1 score over the Standard 1-step answer module, i.e., +1.2 on AddSent and +0.7 on AddOneSent. | [1] | ['The results in Table 5 show that SAN achieves the new state-of-the-art performance and SAN's superior result is mainly attributed to the multi-step answer module, which leads to significant improvement in F1 score over the Standard 1-step answer module, i.e., +1.2 on AddSent and +0.7 on AddOneSent.'] | [['AddSent', 'AddOneSent', 'SAN', 'Standard 1-step in Table 1']] | 1 |
P18-1157table_7 | MS MARCO devset results. | 2 | [['SingleModel', 'ReasoNet++(Shen et al. 2017)'], ['SingleModel', 'V-Net(Wang et al. 2018)'], ['SingleModel', 'Standard 1-step in Table 1'], ['SingleModel', 'SAN']] | 1 | [['ROUGE'], ['BLEU']] | [['38.01', '38.62'], ['45.65', '-'], ['42.30', '42.39'], ['46.14', '43.85']] | column | ['ROUGE', 'BLEU'] | ['SAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE</th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>SingleModel || ReasoNet++(Shen et al. 2017)</td> <td>38.01</td> <td>38.62</td> </tr> <tr> <td>SingleModel || V-Net(Wang... | Table 7 | table_7 | P18-1157 | 9 | acl2018 | The results in Table 7 show that SAN outperforms V-Net (Wang et al., 2018) and becomes the new state of the art. | [1] | ['The results in Table 7 show that SAN outperforms V-Net (Wang et al., 2018) and becomes the new state of the art.'] | [['V-Net(Wang et al. 2018)', 'SAN']] | 1 |
P18-1160table_5 | Results on the dev set of SQuAD (First two) and NewsQA (Last). For Top k, we use k = 1 and k = 3 for SQuAD and NewsQA, respectively. We compare with GNR (Raiman and Miller, 2017), FusionNet (Huang et al., 2018) and FastQA (Weissenborn et al., 2017), which are the model leveraging sentence selection for question answeri... | 1 | [['FULL'], ['ORACLE'], ['MINIMAL(Top k)'], ['MINIMAL(Dyn)'], ['GNR'], ['FastQA'], ['FusionNet']] | 2 | [['SQuAD (with S-Reader)', 'F1'], ['SQuAD (with S-Reader)', 'EM'], ['SQuAD (with S-Reader)', 'Train Sp'], ['SQuAD (with S-Reader)', 'Infer Sp'], ['SQuAD (with DCN+)', 'F1'], ['SQuAD (with DCN+)', 'EM'], ['SQuAD (with DCN+)', 'Train Sp'], ['SQuAD (with DCN+)', 'Infer Sp'], ['NewsQA (with S-Reader)', 'F1'], ['NewsQA (wit... | [['79.9', '71', 'x1.0', 'x1.0', '83.1', '74.5', 'x1.0', 'x1.0', '63.8', '50.7', 'x1.0', 'x1.0'], ['84.3', '74.9', 'x6.7', 'x5.1', '85.1', '76', 'x3.0', 'x5.1', '75.5', '59.2', 'x18.8', 'x21.7'], ['78.7', '69.9', 'x6.7', 'x5.1', '79.2', '70.7', 'x3.0', 'x5.1', '62.3', '49.3', 'x15.0', 'x6.9'], ['79.8', '70.9', 'x6.7', '... | column | ['F1', 'EM', 'Train Sp', 'Infer Sp', 'F1', 'EM', 'Train Sp', 'Infer Sp', 'F1', 'EM', 'Train Sp', 'Infer Sp'] | ['MINIMAL(Top k)', 'MINIMAL(Dyn)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SQuAD (with S-Reader) || F1</th> <th>SQuAD (with S-Reader) || EM</th> <th>SQuAD (with S-Reader) || Train Sp</th> <th>SQuAD (with S-Reader) || Infer Sp</th> <th>SQuAD (with DCN+) || F1</th> <... | Table 5 | table_5 | P18-1160 | 6 | acl2018 | Table 5 shows results in the task of QA on SQuAD and NewsQA. MINIMAL is more efficient in training and inference than FULL. On SQuAD, S-Reader achieves 6.7 training and 3.6 inference speedup on SQuAD, and 15.0 training and 6.9 inference speedup on NewsQA. In addition to the speedup, MINIMAL achieves comparable result t... | [1, 1, 1, 1] | ['Table 5 shows results in the task of QA on SQuAD and NewsQA.', 'MINIMAL is more efficient in training and inference than FULL.', 'On SQuAD, S-Reader achieves 6.7 training and 3.6 inference speedup on SQuAD, and 15.0 training and 6.9 inference speedup on NewsQA.', 'In addition to the speedup, MINIMAL achieves comparab... | [['SQuAD (with S-Reader)', 'SQuAD (with DCN+)', 'NewsQA (with S-Reader)'], ['MINIMAL(Top k)', 'MINIMAL(Dyn)', 'FULL'], ['SQuAD (with S-Reader)', 'NewsQA (with S-Reader)', 'MINIMAL(Top k)', 'MINIMAL(Dyn)', 'Train Sp', 'Infer Sp'], ['SQuAD (with S-Reader)', 'FULL', 'MINIMAL(Dyn)', 'NewsQA (with S-Reader)', 'F1']] | 1 |
P18-1160table_8 | Results on the dev-full set of TriviaQA (Wikipedia) and the dev set of SQuAD-Open. Full results (including the dev-verified set on TriviaQA) are in Appendix C. For training FULL and MINIMAL on TriviaQA, we use 10 paragraphs and 20 sentences, respectively. For training FULL and MINIMAL on SQuAD-Open, we use 20 paragraph... | 2 | [['FULL', '-'], ['MINIMAL', 'TF-IDF'], ['MINIMAL', 'TF-IDF'], ['MINIMAL', 'Our Selector'], ['MINIMAL', 'Our Selector'], ['Rank 1', '-'], ['Rank 2', '-'], ['Rank 3', '-']] | 2 | [['TriviaQA (Wikipedia)', 'n sent'], ['TriviaQA (Wikipedia)', 'Acc'], ['TriviaQA (Wikipedia)', 'Sp'], ['TriviaQA (Wikipedia)', 'F1'], ['TriviaQA (Wikipedia)', 'EM'], ['SQuAD-Open', 'n sent'], ['SQuAD-Open', 'Acc'], ['SQuAD-Open', 'Sp'], ['SQuAD-Open', 'F1'], ['SQuAD-Open', 'EM']] | [['69', '95.9', 'x1.0', '59.6', '53.5', '124', '76.9', 'x1.0', '41.0', '33.1'], ['5', '73.0', 'x13.8', '51.9', '45.8', '5', '46.1', 'x12.4', '36.6', '29.6'], ['10', '79.9', 'x6.9', '57.2', '51.5', '10', '54.3', 'x6.2', '39.8', '32.5'], ['5.0', '84.9', 'x13.8', '59.5', '54.0', '5.3', '58.9', 'x11.7', '42.3', '34.6'], ['... | column | ['n sent', 'Acc', 'Sp', 'F1', 'EM', 'n sent', 'Acc', 'Sp', 'F1', 'EM'] | ['Our Selector'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TriviaQA (Wikipedia) || n sent</th> <th>TriviaQA (Wikipedia) || Acc</th> <th>TriviaQA (Wikipedia) || Sp</th> <th>TriviaQA (Wikipedia) || F1</th> <th>TriviaQA (Wikipedia) || EM</th> <th>SQuAD... | Table 8 | table_8 | P18-1160 | 8 | acl2018 | Table 8 shows results on TriviaQA (Wikipedia) and SQuAD-Open. First, MINIMAL obtains higher F1 and EM over FULL, with the inference speedup of up to 13.8. Second, the model with our sentence selector with Dyn achieves higher F1 and EM over the model with TF-IDF selector. For example, on the development-full set, with 5... | [1, 1, 1, 1, 1] | ['Table 8 shows results on TriviaQA (Wikipedia) and SQuAD-Open.', 'First, MINIMAL obtains higher F1 and EM over FULL, with the inference speedup of up to 13.8.', 'Second, the model with our sentence selector with Dyn achieves higher F1 and EM over the model with TF-IDF selector.', 'For example, on the development-full ... | [['TriviaQA (Wikipedia)', 'SQuAD-Open'], ['TriviaQA (Wikipedia)', 'SQuAD-Open', 'FULL', 'Our Selector', 'Sp', 'F1', 'EM'], ['Our Selector', 'TF-IDF', 'F1', 'EM'], ['TriviaQA (Wikipedia)', 'F1', 'Our Selector', 'TF-IDF'], ['Our Selector', 'Rank 1', 'Rank 2', 'Rank 3']] | 1 |
P18-1160table_9 | Results on the dev set of SQuADAdversarial. We compare with RaSOR (Lee et al., 2016), ReasoNet (Shen et al., 2017) and Mnemonic Reader (Hu et al., 2017), the previous state-of-the-art on SQuAD-Adversarial, where the numbers are from Jia and Liang (2017). | 3 | [['SQuAD-Adversarial', 'DCN+', 'FULL'], ['SQuAD-Adversarial', 'DCN+', 'ORACLE'], ['SQuAD-Adversarial', 'DCN+', 'MINIMAL'], ['SQuAD-Adversarial', 'S-Reader', 'FULL'], ['SQuAD-Adversarial', 'S-Reader', 'ORACLE'], ['SQuAD-Adversarial', 'S-Reader', 'MINIMAL'], ['SQuAD-Adversarial', 'RaSOR', '-'], ['SQuAD-Adversarial', 'Rea... | 2 | [['AddSent', 'F1'], ['AddSent', 'EM'], ['AddSent', 'Sp'], ['AddOneSent', 'F1'], ['AddOneSent', 'EM'], ['AddOneSent', 'Sp']] | [['52.6', '46.2', 'x0.7', '63.5', '56.8', 'x0.7'], ['84.2', '75.3', 'x4.3', '84.5', '75.8', 'x4.3'], ['59.7', '52.2', 'x4.3', '67.5', '60.1', 'x4.3'], ['57.7', '51.1', 'x1.0', '66.5', '59.7', 'x1.0'], ['82.5', '74.1', 'x6.0', '82.9', '74.6', 'x6.0'], ['58.5', '51.5', 'x6.0', '66.5', '59.5', 'x6.0'], ['39.5', '-', '-', ... | column | ['F1', 'EM', 'Sp', 'F1', 'EM', 'Sp'] | ['MINIMAL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AddSent || F1</th> <th>AddSent || EM</th> <th>AddSent || Sp</th> <th>AddOneSent || F1</th> <th>AddOneSent || EM</th> <th>AddOneSent || Sp</th> </tr> </thead> <tbody> <tr> <td>SQ... | Table 9 | table_9 | P18-1160 | 8 | acl2018 | Table 9 shows that MINIMAL outperforms FULL, achieving the new state-of-the-art by large margin (+11.1 and +11.5 F1 on AddSent and AddOneSent, respectively). | [1] | ['Table 9 shows that MINIMAL outperforms FULL, achieving the new state-of-the-art by large margin (+11.1 and +11.5 F1 on AddSent and AddOneSent, respectively).'] | [['MINIMAL', 'FULL', 'AddSent', 'AddOneSent', 'Mnemonic Reader']] | 1 |
P18-1165table_4 | Results on TED test data for training with estimated (E) and direct (D) rewards from simulation (S), humans (H) and filtered (F) human ratings. Significant (p ≤ 0.05) differences to the baseline are marked with ★. For RL experiments we show three runs with different random seeds, mean and standard deviation in s... | 5 | [['Model', 'Baseline', 'Rewards', '-', '-'], ['Model', 'RL', 'Rewards', 'D', 'S'], ['Model', 'OPL', 'Rewards', 'D', 'S'], ['Model', 'RL+MSE', 'Rewards', 'E', 'S'], ['Model', 'RL+PW', 'Rewards', 'E', 'S'], ['Model', 'OPL', 'Rewards', 'D', 'H'], ['Model', 'RL+MSE', 'Rewards', 'E', 'H'], ['Model', 'RL+PW', 'Rewards', 'E',... | 1 | [['BLEU'], ['METEOR'], ['BEER']] | [['27.0', '30.7', '59.48'], ['32.5★ ±0.01', '33.7★ ±0.01', '63.47★ ±0.10'], ['27.5★', '30.9★', '59.62★'], ['28.2★ ±0.09', '31.6★ ±0.04', '60.23★ ±0.14'], ['27.8★ ±0.01', '31.2★ ±0.01', '59.83★ ±0.04'], ['27.5★', '30.9★', '59.72★'], ['28.1★ ±0.01', '31.5★ ±0.01', '60.21★ ±0.12'], ['27.8★ ±0.09', '31.3★ ±0.09', '59.88★ ±... | column | ['BLEU', 'METEOR', 'BEER'] | ['RL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>METEOR</th> <th>BEER</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline || Rewards || - || -</td> <td>27.0</td> <td>30.7</td> <td>59.48</td> </tr> <tr> ... | Table 4 | table_4 | P18-1165 | 9 | acl2018 | Table 4 lists the results for this simulation experiment in rows 2-5 (S). If unlimited clean feedback was given (RL with direct simulated rewards), improvements of over 5 BLEU can be achieved. When limiting the amount of feedback to a log of 800 translations, the improvements over the baseline are only marginal (OPL). ... | [1, 1, 1, 1, 2, 1, 1, 1, 1] | ['Table 4 lists the results for this simulation experiment in rows 2-5 (S).', 'If unlimited clean feedback was given (RL with direct simulated rewards), improvements of over 5 BLEU can be achieved.', 'When limiting the amount of feedback to a log of 800 translations, the improvements over the baseline are only marginal... | [['Baseline', 'RL', 'OPL', 'RL+MSE', 'RL+PW'], ['BLEU', 'RL'], ['BLEU', 'OPL'], ['BLEU', 'RL+MSE', 'RL+PW'], None, ['OPL', 'RL+MSE', 'RL+PW'], ['OPL'], ['RL+MSE', 'RL+PW'], ['Baseline', 'RL+MSE']] | 1 |
P18-1166table_4 | Detokenized BLEU scores for WMT17 translation tasks. Results are reported with multi-bleu-detok.perl. “winner” denotes the translation results generated by the WMT17 winning systems. Δd indicates the difference between our model and the Transformer. | 1 | [['En→De'], ['De→En'], ['En→Fi'], ['Fi→En'], ['En→Lv'], ['Lv→En'], ['En→Ru'], ['Ru→En'], ['En→Tr'], ['Tr→En'], ['En→Cs'], ['Cs→En']] | 2 | [['Case-sensitive BLEU', 'winner'], ['Case-sensitive BLEU', 'Transformer'], ['Case-sensitive BLEU', 'Our Model'], ['Case-sensitive BLEU', 'Δd'], ['Case-insensitive BLEU', 'winner'], ['Case-insensitive BLEU', 'Transformer'], ['Case-insensitive BLEU', 'Our Model'], ['Case-insensitive BLEU', 'Δd']] | [['28.3', '27.33', '27.22', '-0.11', '28.9', '27.92', '27.80', '-0.12'], ['35.1', '32.63', '32.73', '+0.10', '36.5', '34.06', '34.13', '+0.07'], ['20.7', '21.00', '20.87', '-0.13', '21.1', '21.54', '21.47', '-0.07'], ['20.5', '25.19', '24.78', '-0.41', '21.4', '26.22', '25.74', '-0.48'], ['21.1', '16.83', '16.63', '-0.... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Case-sensitive BLEU || winner</th> <th>Case-sensitive BLEU || Transformer</th> <th>Case-sensitive BLEU || Our Model</th> <th>Case-sensitive BLEU || Δd</th> <th>Case-insensitive BLEU || winner</th... | Table 4 | table_4 | P18-1166 | 8 | acl2018 | Table 4 shows the overall results on 12 translation directions. We also provide the results from WMT17 winning systems. Notice that unlike the Transformer and our model, these winner systems typically use model ensemble, system combination and large-scale monolingual corpus. Although different languages have different... | [1, 1, 2, 1, 1, 1, 2, 1] | ['Table 4 shows the overall results on 12 translation directions.', 'We also provide the results from WMT17 winning systems.', 'Notice that unlike the Transformer and our model, these winner systems typically use model ensemble, system combination and large-scale monolingual corpus.', 'Although different languages hav... | [None, ['winner'], ['Transformer', 'Our Model', 'winner'], ['Our Model', 'Transformer'], ['Our Model', 'De→En', 'Δd', 'Transformer'], ['Our Model', 'Transformer', 'En→Tr'], ['En→Tr'], ['Our Model', 'Transformer']] | 1 |
P18-1171table_1 | Performance breakdown of each transition phase. | 1 | [['Peng et al. (2018)'], ['Soft+feats'], ['Hard+feats']] | 1 | [['ShiftOrPop'], ['PushIndex'], ['ArcBinary'], ['ArcLabel']] | [['0.87', '0.87', '0.83', '0.81'], ['0.93', '0.84', '0.91', '0.75'], ['0.94', '0.85', '0.93', '0.77']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Soft+feats', 'Hard+feats'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ShiftOrPop</th> <th>PushIndex</th> <th>ArcBinary</th> <th>ArcLabel</th> </tr> </thead> <tbody> <tr> <td>Peng et al. (2018)</td> <td>0.87</td> <td>0.87</td> <td>0.83</td> ... | Table 1 | table_1 | P18-1171 | 7 | acl2018 | Table 1 shows the phase-wise accuracy of our sequence-to-sequence model. Peng et al. (2018) use a separate feedforward network to predict each phase independently. We use the same alignment from the SemEval dataset as in Peng et al. (2018) to avoid differences resulting from the aligner. Soft+feats shows the performanc... | [1, 2, 2, 2, 1, 1, 1] | ['Table 1 shows the phase-wise accuracy of our sequence-to-sequence model.', 'Peng et al. (2018) use a separate feedforward network to predict each phase independently.', 'We use the same alignment from the SemEval dataset as in Peng et al. (2018) to avoid differences resulting from the aligner.', 'Soft+feats shows the... | [None, ['Peng et al. (2018)'], ['Peng et al. (2018)'], ['Soft+feats', 'Hard+feats'], ['Hard+feats', 'Soft+feats', 'ShiftOrPop', 'PushIndex', 'ArcBinary', 'ArcLabel'], ['Soft+feats', 'Hard+feats', 'Peng et al. (2018)', 'ShiftOrPop', 'ArcBinary'], ['Soft+feats', 'Hard+feats', 'Peng et al. (2018)', 'PushIndex', 'ArcLabel'... | 1 |
P18-1171table_4 | Comparison to other AMR parsers. *Model has been trained on the previous release of the corpus (LDC2014T12). | 2 | [['System', 'Buys and Blunsom (2017)'], ['System', 'Konstas et al. (2017)'], ['System', 'Ballesteros and Al-Onaizan (2017)*'], ['System', 'Damonte et al. (2017)'], ['System', 'Peng et al. (2018)'], ['System', 'Wang et al. (2015b)'], ['System', 'Wang et al. (2015a)'], ['System', 'Flanigan et al. (2016)'], ['System', 'Wa... | 1 | [['P'], ['R'], ['F']] | [['-', '-', '0.60'], ['0.60', '0.65', '0.62'], ['-', '-', '0.64'], ['-', '-', '0.64'], ['0.69', '0.59', '0.64'], ['0.64', '0.62', '0.63'], ['0.70', '0.63', '0.66'], ['0.70', '0.65', '0.67'], ['0.72', '0.65', '0.68'], ['0.68', '0.63', '0.65'], ['0.69', '0.64', '0.66']] | column | ['P', 'R', 'F'] | ['Ours soft attention', 'Ours hard attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>System || Buys and Blunsom (2017)</td> <td>-</td> <td>-</td> <td>0.60</td> </tr> <tr> <td>System || Konst... | Table 4 | table_4 | P18-1171 | 8 | acl2018 | Table 4 shows the comparison with other AMR parsers. The first three systems are some competitive neural models. We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017). Konstas et al. (2017) use a linearization approach that linearizes the AMR graph to a seq... | [1, 1, 1, 2, 2, 1, 1, 1, 2, 2] | ['Table 4 shows the comparison with other AMR parsers.', 'The first three systems are some competitive neural models.', 'We can see that our parser significantly outperforms the sequence-to-action-sequence model of Buys and Blunsom (2017).', 'Konstas et al. (2017) use a linearization approach that linearizes the AMR gra... | [None, ['Buys and Blunsom (2017)', 'Konstas et al. (2017)', 'Ballesteros and Al-Onaizan (2017)*'], ['Ours soft attention', 'Ours hard attention', 'Buys and Blunsom (2017)', 'F'], ['Konstas et al. (2017)'], ['Ours soft attention', 'Ours hard attention'], ['Ours soft attention', 'Ours hard attention', 'Ballesteros and Al... | 1 |
P18-1173table_2 | Test accuracy of sentiment classification on Stanford Sentiment Treebank. Bold font indicates the best performance. | 2 | [['Model', 'BILSTM'], ['Model', 'PIPELINE'], ['Model', 'STE'], ['Model', 'SPIGOT']] | 1 | [['Accuracy (%)']] | [['84.8'], ['85.7'], ['85.4'], ['86.3']] | column | ['Accuracy (%)'] | ['SPIGOT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || BILSTM</td> <td>84.8</td> </tr> <tr> <td>Model || PIPELINE</td> <td>85.7</td> </tr> <tr> <td>Model || STE</td> ... | Table 2 | table_2 | P18-1173 | 8 | acl2018 | Table 2 compares our SPIGOT method to three baselines. Pipelined semantic dependency predictions brings 0.9% absolute improvement in classification accuracy, and SPIGOT outperforms all baselines. In this task STE achieves slightly worse performance than a fixed pre-trained PIPELINE. | [1, 1, 1] | ['Table 2 compares our SPIGOT method to three baselines.', 'Pipelined semantic dependency predictions brings 0.9% absolute improvement in classification accuracy, and SPIGOT outperforms all baselines.', 'In this task STE achieves slightly worse performance than a fixed pre-trained PIPELINE.'] | [['SPIGOT', 'BILSTM', 'PIPELINE', 'STE'], ['PIPELINE', 'BILSTM', 'SPIGOT', 'Accuracy (%)'], ['STE', 'PIPELINE']] | 1 |
P18-1173table_3 | Syntactic parsing performance (in unlabeled attachment score, UAS) and DM semantic parsing performance (in labeled F1) on different groups of the development data. Both systems predict the same syntactic parses for instances from SAME, and they disagree on instances from DIFF (§5). tree, we consider three cases: (a) h(... | 6 | [['Split', 'SAME', '# Sent.', '1011', 'Model', 'PIPELINE'], ['Split', 'SAME', '# Sent.', '1011', 'Model', 'SPIGOT'], ['Split', 'DIFF', '# Sent.', '681', 'Model', 'PIPELINE'], ['Split', 'DIFF', '# Sent.', '681', 'Model', 'SPIGOT']] | 1 | [['UAS'], ['DM']] | [['97.4', '94.0'], ['97.4', '94.3'], ['91.3', '88.1'], ['89.6', '89.2']] | column | ['UAS', 'DM'] | ['SPIGOT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>DM</th> </tr> </thead> <tbody> <tr> <td>Split || SAME || # Sent. || 1011 || Model || PIPELINE</td> <td>97.4</td> <td>94.0</td> </tr> <tr> <td>Split || SAME || # Se... | Table 3 | table_3 | P18-1173 | 8 | acl2018 | Table 3 compares a pipelined system to one jointly trained using SPIGOT. We consider the development set instances where both syntactic and semantic annotations are available, and partition them based on whether the two systems' syntactic predictions agree (SAME), or not (DIFF). The second group includes sentences with... | [1, 2, 1, 1] | ['Table 3 compares a pipelined system to one jointly trained using SPIGOT.', 'We consider the development set instances where both syntactic and semantic annotations are available, and partition them based on whether the two systems' syntactic predictions agree (SAME), or not (DIFF).', 'The second group includes sen... | [['PIPELINE', 'SPIGOT'], ['PIPELINE', 'SPIGOT', 'UAS'], ['PIPELINE', 'SPIGOT', 'DM']] | 1 |
P18-1177table_2 | Evaluation results for question generation. | 2 | [['Models', 'Baseline (Du et al. 2017) (w/o answer)'], ['Models', 'Seq2seq + copy (w/ answer)'], ['Models', 'ContextNQG: Seq2seq + copy (w/ full context + answer)'], ['Models', 'CorefNQG'], ['Models', 'CorefNQG - gating'], ['Models', 'CorefNQG - mention-pair score']] | 2 | [['Training set', 'BLEU-3'], ['Training set', 'BLEU-4'], ['Training set', 'METEOR'], ['Training set w/ noisy examples', 'BLEU-3'], ['Training set w/ noisy examples', 'BLEU-4'], ['Training set w/ noisy examples', 'METEOR']] | [['17.50', '12.28', '16.62', '15.81', '10.78', '15.31'], ['20.01', '14.31', '18.50', '19.61', '13.96', '18.19'], ['20.31', '14.58', '18.84', '19.57', '14.05', '18.19'], ['20.90', '15.16', '19.12', '20.19', '14.52', '18.59'], ['20.68', '14.84', '18.98', '20.08', '14.40', '18.64'], ['20.56', '14.75', '18.85', '19.73', '1... | column | ['BLEU-3', 'BLEU-4', 'METEOR', 'BLEU-3', 'BLEU-4', 'METEOR'] | ['CorefNQG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Training set || BLEU-3</th> <th>Training set || BLEU-4</th> <th>Training set || METEOR</th> <th>Training set w/ noisy examples || BLEU-3</th> <th>Training set w/ noisy examples || BLEU-4</th> ... | Table 2 | table_2 | P18-1177 | 7 | acl2018 | Table 2 shows the BLEU-{3, 4} and METEOR scores of different models. Our CorefNQG outperforms the seq2seq baseline of Du et al. (2017) by a large margin. This shows that the copy mechanism, answer features and coreference resolution all aid question generation. In addition, CorefNQG outperforms both Seq2seq+... | [1, 1, 2, 1, 2, 1, 1, 1] | ['Table 2 shows the BLEU-{3, 4} and METEOR scores of different models.', 'Our CorefNQG outperforms the seq2seq baseline of Du et al. (2017) by a large margin.', 'This shows that the copy mechanism, answer features and coreference resolution all aid question generation.', 'In addition, CorefNQG outperforms both Seq2seq+... | [['BLEU-3', 'BLEU-4', 'METEOR'], ['CorefNQG', 'Baseline (Du et al. 2017) (w/o answer)', 'Training set'], ['CorefNQG'], ['CorefNQG', 'Seq2seq + copy (w/ answer)', 'ContextNQG: Seq2seq + copy (w/ full context + answer)', 'Training set'], ['CorefNQG'], ['Training set w/ noisy examples'], ['Training set', 'Training set w/ ... | 1 |
P18-1177table_6 | Performance of the neural machine reading comprehension model (no initialization with pretrained embeddings) on our generated corpus. | 1 | [['DocReader (Chen et al. 2017)']] | 2 | [['Exact Match', 'Dev'], ['Exact Match', 'Test'], ['F-1', 'Dev'], ['F-1', 'Test']] | [['82.33', '81.65', '88.20', '87.79']] | column | ['Exact Match', 'Exact Match', 'F-1', 'F-1'] | ['DocReader (Chen et al. 2017)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact Match || Dev</th> <th>Exact Match || Test</th> <th>F-1 || Dev</th> <th>F-1 || Test</th> </tr> </thead> <tbody> <tr> <td>DocReader (Chen et al. 2017)</td> <td>82.33</td> <t... | Table 6 | table_6 | P18-1177 | 9 | acl2018 | Table 6 shows the performance of a top-performing system for the SQuAD dataset (Document Reader (Chen et al., 2017)) when applied to the development and test set portions of our generated dataset. The system was trained on the training set portion of our dataset. We use the SQuAD evaluation scripts, which calculate exac... | [1, 2, 1, 1] | ['Table 6 shows the performance of a top-performing system for the SQuAD dataset (Document Reader (Chen et al., 2017)) when applied to the development and test set portions of our generated dataset.', 'The system was trained on the training set portion of our dataset.', 'We use the SQuAD evaluation scripts, which calcul... | [['DocReader (Chen et al. 2017)'], None, ['Exact Match', 'F-1'], ['DocReader (Chen et al. 2017)']] | 1 |
P18-1178table_3 | Performance of our method and competing models on the MS-MARCO test set | 2 | [['Model', 'FastQA Ext (Weissenborn et al. 2017)'], ['Model', 'Prediction (Wang and Jiang 2016)'], ['Model', 'ReasoNet (Shen et al. 2017)'], ['Model', 'R-Net (Wang et al. 2017c)'], ['Model', 'S-Net (Tan et al. 2017)'], ['Model', 'Our Model'], ['Model', 'S-Net (Ensemble)'], ['Model', 'Our Model (Ensemble)'], ['Model', '... | 1 | [['ROUGE-L'], ['BLEU-1']] | [['33.67', '33.93'], ['37.33', '40.72'], ['38.81', '39.86'], ['42.89', '42.22'], ['45.23', '43.78'], ['46.15', '44.47'], ['46.65', '44.78'], ['46.66', '45.41'], ['47', '46']] | column | ['ROUGE-L', 'BLEU-1'] | ['Our Model', 'Our Model (Ensemble)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-L</th> <th>BLEU-1</th> </tr> </thead> <tbody> <tr> <td>Model || FastQA Ext (Weissenborn et al. 2017)</td> <td>33.67</td> <td>33.93</td> </tr> <tr> <td>Model || Prediction... | Table 3 | table_3 | P18-1178 | 6 | acl2018 | Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set. We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002). As we can see, for both metrics, our single model outperforms all the other competing models with an evident mar... | [1, 1, 1, 1] | ['Table 3 shows the results of our system and other state-of-the-art models on the MS-MARCO test set.', 'We adopt the official evaluation metrics, including ROUGE-L (Lin, 2004) and BLEU-1 (Papineni et al., 2002).', 'As we can see, for both metrics, our single model outperforms all the other competing models with an evi... | [['Our Model', 'FastQA Ext (Weissenborn et al. 2017)', 'Prediction (Wang and Jiang 2016)', 'ReasoNet (Shen et al. 2017)', 'R-Net (Wang et al. 2017c)', 'S-Net (Tan et al. 2017)'], ['ROUGE-L', 'BLEU-1'], ['Our Model', 'ROUGE-L', 'BLEU-1'], ['Our Model (Ensemble)', 'ROUGE-L', 'BLEU-1', 'S-Net (Ensemble)']] | 1 |
P18-1181table_2 | Component evaluation for the language model (“Ppl” = perplexity), pentameter model (“Stress Acc”), and rhyme model (“Rhyme F1”). Each number is an average across 10 runs. | 2 | [['Model', 'LM'], ['Model', 'LM*'], ['Model', 'LM**'], ['Model', 'LM**-C'], ['Model', 'LM**+PM+RM'], ['Model', 'Stress-BL'], ['Model', 'Rhyme-BL'], ['Model', 'Rhyme-EM']] | 1 | [['Ppl'], ['Stress Acc'], ['Rhyme F1']] | [['90.13', '-', '-'], ['84.23', '-', '-'], ['80.41', '-', '-'], ['83.68', '-', '-'], ['80.22', '0.74', '0.91'], ['-', '0.80', '-'], ['-', '-', '0.74'], ['-', '-', '0.71']] | column | ['Ppl', 'Stress Acc', 'Rhyme F1'] | ['LM**+PM+RM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ppl</th> <th>Stress Acc</th> <th>Rhyme F1</th> </tr> </thead> <tbody> <tr> <td>Model || LM</td> <td>90.13</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || LM*</td> ... | Table 2 | table_2 | P18-1181 | 7 | acl2018 | Perplexity on the test partition is detailed in Table 2. Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM**. The inferior performance of LM**-C compared to LM** demonstrates that our approa... | [1, 1, 1, 1, 1, 1, 1, 1, 2] | ['Perplexity on the test partition is detailed in Table 2.', 'Encouragingly, we see that the incorporation of character encodings and preceding context improves performance substantially, reducing perplexity by almost 10 points from LM to LM**.', 'The inferior performance of LM**-C compared to LM** demonstrates that ou... | [['Ppl'], ['LM', 'LM**', 'Ppl'], ['LM**-C', 'LM**', 'Ppl'], ['LM**+PM+RM', 'Ppl'], ['Stress Acc'], ['LM**+PM+RM'], ['Rhyme F1'], ['LM**+PM+RM', 'Rhyme-BL', 'Rhyme-EM'], ['Rhyme-EM']] | 1 |
P18-1182table_1 | (1) Accuracy (Acc.) and String Edit Distance (SED) results in the prediction of all referring expressions; (2) Accuracy (Acc.), Precision (Prec.), Recall (Rec.) and F-Score results in the prediction of pronominal forms; and (3) Accuracy (Acc.) and BLEU score results of the texts with the generated referring expressions... | 1 | [['OnlyNames'], ['Ferreira'], ['NeuralREG+Seq2Seq'], ['NeuralREG+CAtt'], ['NeuralREG+HierAtt']] | 2 | [['All References', 'Acc.'], ['All References', 'SED'], ['Pronouns', 'Acc.'], ['Pronouns', 'Prec.'], ['Pronouns', 'Rec.'], ['Pronouns', 'F-Score'], ['Text', 'Acc.'], ['Text', 'BLEU']] | [['0.53D', '4.05D', '-', '-', '-', '-', '0.15D', '69.03D'], ['0.61C', '3.18C', '0.43B', '0.57', '0.54', '0.55', '0.19C', '72.78C'], ['0.74A,B', '2.32A,B', '0.75A', '0.77', '0.78', '0.78', '0.28B', '79.27A,B'], ['0.74A', '2.25A', '0.75A', '0.73', '0.78', '0.75', '0.30A', '79.39A'], ['0.73B', '2.36B', '0.73A', '0.74', '0... | column | ['Acc.', 'SED', 'Acc.', 'Prec.', 'Rec.', 'F-Score', 'Acc.', 'BLEU'] | ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>All References || Acc.</th> <th>All References || SED</th> <th>Pronouns || Acc.</th> <th>Pronouns || Prec.</th> <th>Pronouns || Rec.</th> <th>Pronouns || F-Score</th> <th>Text || Acc.</... | Table 1 | table_1 | P18-1182 | 9 | acl2018 | Table 1 summarizes the results for all models on all metrics on the test set and Table 2 depicts a text example lexicalized by each model. The first thing to note in the results of the first table is that the baselines in the top two rows performed quite strong on this task, generating more than half of the referring e... | [1, 1, 1, 2, 1, 1, 2, 2, 2, 1, 1] | ['Table 1 summarizes the results for all models on all metrics on the test set and Table 2 depicts a text example lexicalized by each model.', 'The first thing to note in the results of the first table is that the baselines in the top two rows performed quite strong on this task, generating more than half of the referr... | [None, ['OnlyNames', 'Ferreira'], ['OnlyNames', 'Ferreira'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt', 'Text', 'BLEU', 'All References', 'Acc.', 'SED'], ['NeuralREG+Seq2Seq', 'N... | 1 |
P18-1182table_3 | Fluency, Grammaticality and Clarity results obtained in the human evaluation. Rankings were determined by statistical significance. | 1 | [['OnlyNames'], ['Ferreira'], ['NeuralREG+Seq2Seq'], ['NeuralREG+CAtt'], ['NeuralREG+HierAtt'], ['Original']] | 1 | [['Fluency'], ['Grammar'], ['Clarity']] | [['4.74C', '4.68B', '4.90B'], ['4.74C', '4.58B', '4.93B'], ['4.95B,C', '4.82A,B', '4.97B'], ['5.23A,B', '4.95A,B', '5.26A,B'], ['5.07B,C', '4.90A,B', '5.13A,B'], ['5.41A', '5.17A', '5.42A']] | column | ['Fluency', 'Grammar', 'Clarity'] | ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Grammar</th> <th>Clarity</th> </tr> </thead> <tbody> <tr> <td>OnlyNames</td> <td>4.74C</td> <td>4.68B</td> <td>4.90B</td> </tr> <tr> <td>Ferreira</td... | Table 3 | table_3 | P18-1182 | 9 | acl2018 | Table 3 summarizes the results. Inspection of the Table reveals a clear pattern: all three neural models scored higher than the baselines on all metrics, with especially NeuralREG+CAtt approaching the ratings for the original sentences, although, again, differences between the neural models were small. Concerning the s... | [1, 1, 1, 2, 2, 1, 1, 1] | ['Table 3 summarizes the results.', 'Inspection of the Table reveals a clear pattern: all three neural models scored higher than the baselines on all metrics, with especially NeuralREG+CAtt approaching the ratings for the original sentences, although, again, differences between the neural models were small.', 'Concerni... | [None, ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt', 'Fluency', 'Grammar', 'Clarity'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], None, None, ['NeuralREG+CAtt', 'OnlyNames', 'Ferreira'], ['NeuralREG+Seq2Seq', 'NeuralREG+CAtt', 'NeuralREG+HierAtt'], ['Original', 'NeuralREG+Seq2Seq', 'Ne... | 1 |
P18-1186table_1 | NED performance on the SnapCaptionsKB dataset at Top-1, 3, 5, 10, 50 accuracies. The classification is over 1M entities. Candidates generation methods: N/A, or over a fixed number of candidates generated with methods: m→e hash list and kNN (lexical neighbors). | 6 | [['Modalities', 'W', 'Model', 'ARNN (Eshel et al. 2017)', 'Candidates Generation', 'm→e list'], ['Modalities', 'W', 'Model', 'ARNN', 'Candidates Generation', '5-NN (lexical)'], ['Modalities', 'W', 'Model', 'ARNN', 'Candidates Generation', '10-NN (lexical)'], ['Modalities', 'W', 'Model', 'sDA-NED (He et al. 2013)', 'Can... | 2 | [['Accuracy (%)', 'Top-1'], ['Accuracy (%)', 'Top-3'], ['Accuracy (%)', 'Top-5'], ['Accuracy (%)', 'Top-10'], ['Accuracy (%)', 'Top-50']] | [['51.2', '60.4', '66.5', '66.9', '66.9'], ['35.2', '43.3', '45.0', '-', '-'], ['31.9', '40.1', '44.5', '50.7', '-'], ['48.7', '57.3', '66.3', '66.9', '66.9'], ['43.6', '63.8', '67.1', '70.5', '77.2'], ['67.0', '72.7', '74.8', '76.8', '85'], ['67.8', '73.5', '74.8', '76.2', '84.6'], ['67.2', '74.6', '77.7', '80.5', '88... | column | ['Accuracy (%)', 'Accuracy (%)', 'Accuracy (%)', 'Accuracy (%)', 'Accuracy (%)'] | ['DZMNED + Modality Attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%) || Top-1</th> <th>Accuracy (%) || Top-3</th> <th>Accuracy (%) || Top-5</th> <th>Accuracy (%) || Top-10</th> <th>Accuracy (%) || Top-50</th> </tr> </thead> <tbody> <... | Table 1 | table_1 | P18-1186 | 7 | acl2018 | Table 1 shows the Top-1, 3, 5, 10, and 50 candidates retrieval accuracy results on the Snap Captions dataset. We see that the proposed approach significantly outperforms the baselines which use fixed candidates generation method. Note that m→e hash list-based methods, which retrieve as candidates the KB entities that a... | [1, 1, 2, 2, 2, 1, 2, 1, 1] | ['Table 1 shows the Top-1, 3, 5, 10, and 50 candidates retrieval accuracy results on the Snap Captions dataset.', 'We see that the proposed approach significantly outperforms the baselines which use fixed candidates generation method.', 'Note that m→e hash list-based methods, which retrieve as candidates the KB entitie... | [['Accuracy (%)', 'Top-1', 'Top-3', 'Top-5', 'Top-10', 'Top-50'], ['DZMNED'], ['m→e list'], ['5-NN (lexical)', '10-NN (lexical)'], ['Zeroshot'], ['Zeroshot'], None, ['W + C + V', 'W + C'], ['DZMNED + Modality Attention']] | 1 |
P18-1186table_2 | MNED performance (Top-1, 5, 10 accuracies) on SnapCaptionsKB with varying qualities of KB embeddings. Model: DZMNED (W+C+V) method. Note that m → e hash list-based methods, which retrieve as candidates the KB entities that appear in the training set of captions only, has upper performance limit at 66.9%, showing the li... | 2 | [['KB Embeddings', 'Trained with 1M entities'], ['KB Embeddings', 'Trained with 10K entities'], ['KB Embeddings', 'Random embeddings']] | 1 | [['Top-1'], ['Top-5'], ['Top-10']] | [['68.1', '78.2', '80.9'], ['60.3', '72.5', '75.9'], ['41.4', '45.8', '48.0']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['KB Embeddings'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Top-1</th> <th>Top-5</th> <th>Top-10</th> </tr> </thead> <tbody> <tr> <td>KB Embeddings || Trained with 1M entities</td> <td>68.1</td> <td>78.2</td> <td>80.9</td> </tr> <t... | Table 2 | table_2 | P18-1186 | 7 | acl2018 | To characterize this aspect, we provide Table 2 which shows MNED performance with varying quality of embeddings as follows: KB embeddings learned from 1M knowledge graph entities (same as in the main experiments), from 10K subset of entities (less triplets to train with in Eq.3, hence lower quality), and random embeddi... | [1, 1, 1] | ['To characterize this aspect, we provide Table 2 which shows MNED performance with varying quality of embeddings as follows: KB embeddings learned from 1M knowledge graph entities (same as in the main experiments), from 10K subset of entities (less triplets to train with in Eq.3, hence lower quality), and random embed... | [['Trained with 1M entities', 'Trained with 10K entities', 'Random embeddings'], ['Trained with 1M entities', 'Trained with 10K entities', 'Random embeddings'], ['Random embeddings']] | 1 |
P18-1188table_1 | Ablation results on the validation set. We report R1, R2, R3, R4, RL and their average (Avg.). The first block of the table presents LEAD and POINTERNET which do not use any external information. LEAD is the baseline system selecting first three sentences. POINTERNET is the sentence extraction system of Cheng and Lapat... | 3 | [['MODELS', 'LEAD', '-'], ['MODELS', 'POINTERNET', '-'], ['MODELS', 'XNET+TITLE', '-'], ['MODELS', 'XNET+CAPTION', '-'], ['MODELS', 'XNET+FS', '-'], ['MODELS', 'Combination Models (XNET+)', 'TITLE+CAPTION'], ['MODELS', 'Combination Models (XNET+)', 'TITLE+FS'], ['MODELS', 'Combination Models (XNET+)', 'CAPTION+FS'], ['... | 1 | [['R1'], ['R2'], ['R3'], ['R4'], ['RL'], ['Avg.']] | [['49.2', '18.9', '9.8', '6.0', '43.8', '25.5'], ['53.3', '19.7', '10.4', '6.4', '47.2', '27.4'], ['55.0', '21.6', '11.7', '7.5', '48.9', '28.9'], ['55.3', '21.3', '11.4', '7.2', '49.0', '28.8'], ['54.8', '21.1', '11.3', '7.2', '48.6', '28.6'], ['55.4', '21.8', '11.8', '7.5', '49.2', '29.2'], ['55.1', '21.6', '11.6', '... | column | ['R1', 'R2', 'R3', 'R4', 'RL', 'Avg.'] | ['TITLE+CAPTION'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>R3</th> <th>R4</th> <th>RL</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>MODELS || LEAD || -</td> <td>49.2</td> <td>18.9</td> <td>9.8... | Table 1 | table_1 | P18-1188 | 5 | acl2018 | We report the performance of several variants of XNET on the validation set in Table 1. We also compare them against the LEAD baseline and POINTERNET. These two systems do not use any additional information. Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET. When the title (TITLE), im... | [1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1] | ['We report the performance of several variants of XNET on the validation set in Table 1.', 'We also compare them against the LEAD baseline and POINTERNET.', 'These two systems do not use any additional information.', 'Interestingly, all the variants of XNET significantly outperform LEAD and POINTERNET.', 'When the tit... | [['XNET+TITLE', 'XNET+CAPTION', 'XNET+FS', 'TITLE+CAPTION', 'TITLE+FS', 'CAPTION+FS', 'TITLE+CAPTION+FS'], ['LEAD', 'POINTERNET'], None, ['XNET+TITLE', 'XNET+CAPTION', 'XNET+FS'], ['XNET+TITLE', 'XNET+CAPTION', 'XNET+FS'], ['XNET+TITLE'], ['XNET+TITLE', 'XNET+CAPTION', 'XNET+FS'], ['TITLE+CAPTION', 'TITLE+FS', 'CAPTION... | 1 |
P18-1188table_4 | Results (in percentage) for answer selection comparing our approaches (bottom part) to baselines (top): AP-CNN (dos Santos et al., 2016), ABCNN (Yin et al., 2016), L.D.C (Wang and Jiang, 2017), KV-MemNN (Miller et al., 2016), and COMPAGGR, a state-of-the-art system by Wang et al. (2017). (WGT) WRD CNT stands for the (w... | 1 | [['WRD CNT'], ['WGT WRD CNT'], ['AP-CNN'], ['ABCNN'], ['L.D.C'], ['KV-MemNN'], ['LOCALISF'], ['ISF'], ['PAIRCNN'], ['COMPAGGR'], ['XNET'], ['XNETTOPK'], ['LRXNET'], ['XNET+']] | 2 | [['SQuAD', 'ACC'], ['SQuAD', 'MAP'], ['SQuAD', 'MRR'], ['WikiQA', 'ACC'], ['WikiQA', 'MAP'], ['WikiQA', 'MRR'], ['NewsQA', 'ACC'], ['NewsQA', 'MAP'], ['NewsQA', 'MRR'], ['MSMarco', 'ACC'], ['MSMarco', 'MAP'], ['MSMarco', 'MRR']] | [['77.84', '27.50', '27.77', '51.05', '48.91', '49.24', '44.67', '46.48', '46.91', '20.16', '19.37', '19.51'], ['78.43', '28.10', '28.38', '49.79', '50.99', '51.32', '45.24', '48.20', '48.64', '20.50', '20.06', '20.23'], ['-', '-', '-', '-', '68.86', '69.57', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '69.21',... | column | ['ACC', 'MAP', 'MRR', 'ACC', 'MAP', 'MRR', 'ACC', 'MAP', 'MRR', 'ACC', 'MAP', 'MRR'] | ['LRXNET'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SQuAD || ACC</th> <th>SQuAD || MAP</th> <th>SQuAD || MRR</th> <th>WikiQA || ACC</th> <th>WikiQA || MAP</th> <th>WikiQA || MRR</th> <th>NewsQA || ACC</th> <th>NewsQA || MAP</th> ... | Table 4 | table_4 | P18-1188 | 8 | acl2018 | Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco. Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate in i... | [1, 1, 1, 2, 1, 1, 0, 0, 1, 2, 2, 1, 1, 1, 1, 2, 1, 2, 2, 1, 1] | ['Table 4 gives the results for the test sets of NewsQA and WikiQA, and the original validation sets of SQuAD and MSMarco.', 'Our first observation is that XNET outperforms PAIRCNN, supporting our claim that it is beneficial to read the whole document in order to make decisions, instead of only observing each candidate... | [['SQuAD', 'WikiQA', 'NewsQA', 'MSMarco'], ['XNET', 'PAIRCNN'], ['ISF', 'XNET'], ['XNET'], ['XNET+'], ['LRXNET', 'COMPAGGR'], None, None, ['XNETTOPK'], ['ISF'], ['XNET+'], ['LRXNET'], ['SQuAD', 'LRXNET', 'COMPAGGR'], ['LRXNET', 'COMPAGGR', 'WikiQA', 'NewsQA'], ['WikiQA', 'NewsQA', 'SQuAD'], None, ['MSMarco', 'LRXNET', ... | 1 |
P18-1189table_1 | Performance of our various models in an unsupervised setting (i.e., without labels or covariates) on the IMDB dataset using a 5,000-word vocabulary and 50 topics. The supplementary materials contain additional results for 20 newsgroups and Yahoo answers. | 2 | [['Model', 'LDA'], ['Model', 'SAGE'], ['Model', 'NVDM'], ['Model', 'SCHOLAR - B.G.'], ['Model', 'SCHOLAR'], ['Model', 'SCHOLAR + W.V.'], ['Model', 'SCHOLAR + REG.']] | 1 | [['Ppl.'], ['NPMI (int.)'], ['NPMI (ext.)'], ['Sparsity']] | [['1508', '0.13', '0.14', '0'], ['1767', '0.12', '0.12', '0.79'], ['1748', '0.06', '0.04', '0'], ['1889', '0.09', '0.13', '0'], ['1905', '0.14', '0.13', '0'], ['1991', '0.18', '0.17', '0'], ['2185', '0.10', '0.12', '0.58']] | column | ['Ppl.', 'NPMI (int.)', 'NPMI (ext.)', 'Sparsity'] | ['SCHOLAR', 'SCHOLAR + W.V.', 'SCHOLAR + REG.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ppl.</th> <th>NPMI (int.)</th> <th>NPMI (ext.)</th> <th>Sparsity</th> </tr> </thead> <tbody> <tr> <td>Model || LDA</td> <td>1508</td> <td>0.13</td> <td>0.14</td> <td>0... | Table 1 | table_1 | P18-1189 | 7 | acl2018 | We therefore use the same experimental setup as Srivastava and Sutton (2017) (learning rate, momentum, batch size, and number of epochs) and find the same general patterns as they reported (see Table 1 and supplementary material): our model returns more coherent topics than LDA, but at the cost of worse perplexity. SAG... | [1, 1, 1, 2, 1, 1] | ['We therefore use the same experimental setup as Srivastava and Sutton (2017) (learning rate, momentum, batch size, and number of epochs) and find the same general patterns as they reported (see Table 1 and supplementary material): our model returns more coherent topics than LDA, but at the cost of worse perplexity.',... | [['LDA'], ['SAGE', 'Sparsity'], ['NVDM', 'NPMI (int.)', 'NPMI (ext.)'], None, ['SCHOLAR + REG.'], ['SCHOLAR + W.V.']] | 1 |
P18-1191table_4 | Results for Span Detection on the dense development dataset. Span detection results are given with the cutoff threshold τ at 0.5, and at the value which maximizes F-score. The top chart lists precision, recall and F-score with exact span match, while the bottom reports matches where the intersection over union (IOU) is... | 1 | [['BIO'], ['Span (tau = 0.5)'], ['Span (tau = tau*)']] | 2 | [['Exact Match', 'P'], ['Exact Match', 'R'], ['Exact Match', 'F'], ['IOU ≥ 0.5', 'P'], ['IOU ≥ 0.5', 'R'], ['IOU ≥ 0.5', 'F']] | [['69.0', '75.9', '72.2', '80.4', '86.0', '83.1'], ['81.7', '80.9', '81.3', '87.5', '84.2', '85.8'], ['80.0', '84.7', '82.2', '83.8', '93.0', '88.1']] | column | ['P', 'R', 'F', 'P', 'R', 'F'] | ['Span (tau = 0.5)', 'Span (tau = tau*)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact Match || P</th> <th>Exact Match || R</th> <th>Exact Match || F</th> <th>IOU ≥ 0.5 || P</th> <th>IOU ≥ 0.5 || R</th> <th>IOU ≥ 0.5 || F</th> </tr> </thead> <tbody> <tr> <td... | Table 4 | table_4 | P18-1191 | 6 | acl2018 | Table 4 shows span detection results on the development set. We report results for the span-based models at two threshold values tau : tau = 0.5, and tau = tau* maximizing F1. The span-based model significantly improves over the BIO model in both precision and recall, although the difference is less pronounced under IO... | [1, 1, 1] | ['Table 4 shows span detection results on the development set.', 'We report results for the span-based models at two threshold values tau : tau = 0.5, and tau = tau* maximizing F1.', 'The span-based model significantly improves over the BIO model in both precision and recall, although the difference is less pronounced ... | [None, ['Span (tau = 0.5)', 'Span (tau = tau*)'], ['Span (tau = 0.5)', 'Span (tau = tau*)', 'P', 'R']] | 1 |
P18-1191table_5 | Question Generation results on the dense development set. EM Exact Match accuracy, PM Partial Match Accuracy, SA Slot-level accuracy | 1 | [['Local'], ['Seq.']] | 1 | [['EM'], ['PM'], ['SA']] | [['44.2', '62.0', '83.2'], ['47.2', '62.3', '82.9']] | column | ['EM', 'PM', 'SA'] | ['Seq.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>PM</th> <th>SA</th> </tr> </thead> <tbody> <tr> <td>Local</td> <td>44.2</td> <td>62.0</td> <td>83.2</td> </tr> <tr> <td>Seq.</td> <td>47.2</td> ... | Table 5 | table_5 | P18-1191 | 6 | acl2018 | Table 5 shows the results for question generation on the development set. The sequential model exact match accuracy is significantly higher, while word-level accuracy is roughly comparable, reflecting the fact that the local model learns the slot-level posteriors. | [1, 1] | ['Table 5 shows the results for question generation on the development set.', 'The sequential model exact match accuracy is significantly higher, while word-level accuracy is roughly comparable, reflecting the fact that the local model learns the slot-level posteriors.'] | [None, ['Seq.', 'EM', 'PM', 'SA']] | 1 |
P18-1192table_4 | Results on the Chinese test set. | 2 | [['System (syntax-aware)', 'Zhao et al. (2009a)'], ['System (syntax-aware)', 'Bjorkelund et al. (2009)'], ['System (syntax-aware)', 'Roth and Lapata (2016)'], ['System (syntax-aware)', 'Marcheggiani and Titov (2017)'], ['System (syntax-aware)', 'Ours'], ['System (syntax-agnostic)', 'Marcheggiani et al. (2017)'], ['Syst... | 1 | [['P'], ['R'], ['F1']] | [['80.4', '75.2', '77.7'], ['82.4', '75.1', '78.6'], ['83.2', '75.9', '79.4'], ['84.6', '80.4', '82.5'], ['84.2', '81.5', '82.8'], ['83.4', '79.1', '81.2'], ['84.5', '79.3', '81.8']] | column | ['P', 'R', 'F1'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System (syntax-aware) || Zhao et al. (2009a)</td> <td>80.4</td> <td>75.2</td> <td>77.7</td> </tr> <tr> <... | Table 4 | table_4 | P18-1192 | 5 | acl2018 | Table 4 presents the results on Chinese test set. Even though we use the same parameters as for English, our model also outperforms the best reported results by 0.3% (syntax-aware) and 0.6% (syntax-agnostic) in F1 scores. | [1, 1] | ['Table 4 presents the results on Chinese test set.', 'Even though we use the same parameters as for English, our model also outperforms the best reported results by 0.3% (syntax-aware) and 0.6% (syntax-agnostic) in F1 scores.'] | [None, ['Ours', 'Marcheggiani and Titov (2017)', 'System (syntax-aware)', 'F1', 'System (syntax-agnostic)']] | 1 |
P18-1192table_5 | SRL results without predicate sense. | 2 | [['System(without predicate sense)', '1st-order'], ['System(without predicate sense)', '2nd-order'], ['System(without predicate sense)', '3rd-order'], ['System(without predicate sense)', 'Marcheggiani and Titov (2017)']] | 1 | [['P'], ['R'], ['F1']] | [['84.4', '82.6', '83.5'], ['84.8', '83.0', '83.9'], ['85.1', '83.3', '84.2'], ['85.2', '81.6', '83.3']] | column | ['P', 'R', 'F1'] | ['1st-order', '2nd-order', '3rd-order'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System(without predicate sense) || 1st-order</td> <td>84.4</td> <td>82.6</td> <td>83.5</td> </tr> <tr> <... | Table 5 | table_5 | P18-1192 | 6 | acl2018 | Table 5 shows the results from our syntax-aware model with lower order argument pruning. Compared to the best previous model, our system still yields an increment in recall by more than 1%, leading to improvements in F1 score. It demonstrates that refining syntactic parser tree based candidate pruning does help in argu... | [1, 1, 2] | ['Table 5 shows the results from our syntax-aware model with lower order argument pruning.', 'Compared to the best previous model, our system still yields an increment in recall by more than 1%, leading to improvements in F1 score.', 'It demonstrates that refining syntactic parser tree based candidate pruning does help... | [['1st-order', '2nd-order', '3rd-order'], ['Marcheggiani and Titov (2017)', '1st-order', '2nd-order', '3rd-order', 'R', 'F1'], ['1st-order', '2nd-order', '3rd-order']] | 1 |
P18-1192table_9 | Results on English test set, in terms of labeled attachment score for syntactic dependencies (LAS), semantic precision (P), semantic recall (R), semantic labeled F1 score (Sem-F1), the ratio SemF1/LAS. A superscript * indicates LAS results from our personal communication with the authors. | 2 | [['System', 'Zhao et al. (2009c) [SRL-only]'], ['System', 'Zhao et al. (2009a) [Joint]'], ['System', 'Bjorkelund et al. (2010)'], ['System', 'Lei et al. (2015)'], ['System', 'Roth and Lapata (2016)'], ['System', 'Marcheggiani and Titov (2017)'], ['System', 'Ours + CoNLL-2009 predicted'], ['System', 'Ours + Auto syntax'... | 1 | [['LAS (%)'], ['P (%)'], ['R (%)'], ['Sem-F1 (%)'], ['Sem-F1/LAS (%)']] | [['86.0', '-', '-', '85.4', '99.3'], ['89.2', '-', '-', '86.2', '96.6'], ['89.8', '87.1', '84.5', '85.8', '95.6'], ['90.4', '-', '-', '86.6', '95.8'], ['89.8', '88.1', '85.3', '86.7', '96.5'], ['90.3*', '89.1', '86.8', '88.0', '97.5'], ['86.0', '89.7', '89.3', '89.5', '104.0'], ['90.0', '90.5', '89.3', '89.9', '99.9'],... | column | ['LAS (%)', 'P (%)', 'R (%)', 'Sem-F1 (%)', 'Sem-F1/LAS (%)'] | ['Ours + CoNLL-2009 predicted', 'Ours + Auto syntax'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LAS (%)</th> <th>P (%)</th> <th>R (%)</th> <th>Sem-F1 (%)</th> <th>Sem-F1/LAS (%)</th> </tr> </thead> <tbody> <tr> <td>System || Zhao et al. (2009c) [SRL-only]</td> <td>86.0</td... | Table 9 | table_9 | P18-1192 | 8 | acl2018 | Table 9 reports the performance of existing models in term of Sem-F1/LAS ratio on CoNLL-2009 English test set. Interestingly, even though our system has significantly lower scores than others by 3.8% LAS in syntactic components, we obtain the highest results both on Sem-F1 and the Sem-F1/LAS ratio, respectively. These ... | [1, 1, 2, 2, 2, 1] | ['Table 9 reports the performance of existing models in term of Sem-F1/LAS ratio on CoNLL-2009 English test set.', 'Interestingly, even though our system has significantly lower scores than others by 3.8% LAS in syntactic components, we obtain the highest results both on Sem-F1 and the Sem-F1/LAS ratio, respectively.'... | [['Sem-F1 (%)', 'Sem-F1/LAS (%)'], ['Ours + CoNLL-2009 predicted', 'Ours + Auto syntax', 'LAS (%)', 'Sem-F1 (%)', 'Sem-F1/LAS (%)'], None, None, None, ['Sem-F1 (%)', 'Sem-F1/LAS (%)']] | 1 |
P18-1195table_1 | MS-COCO’s test set evaluation measures. | 6 | [['Loss', 'MLE', 'Reward', '-', 'Vsub', '-'], ['Loss', 'MLE + lambda H', 'Reward', '-', 'Vsub', '-'], ['Loss', 'Tok', 'Reward', 'Glove sim', 'Vsub', '-'], ['Loss', 'Tok', 'Reward', 'Glove sim rfreq', 'Vsub', '-'], ['Loss', 'Seq', 'Reward', 'Hamming', 'Vsub', 'V'], ['Loss', 'Seq', 'Reward', 'Hamming', 'Vsub', 'Vbatch'],... | 2 | [['Without attention', 'BLEU-1'], ['Without attention', 'BLEU-4'], ['Without attention', 'CIDER'], ['With attention', 'BLEU-1'], ['With attention', 'BLEU-4'], ['With attention', 'CIDER']] | [['70.63', '30.14', '93.59', '73.40', '33.11', '101.63'], ['70.79', '30.29', '93.61', '72.68', '32.15', '99.77'], ['71.94', '31.27', '95.79', '73.49', '32.93', '102.33'], ['72.39', '31.76', '97.47', '74.01', '33.25', '102.81'], ['71.76', '31.16', '96.37', '73.12', '32.71', '101.25'], ['71.46', '31.15', '96.53', '73.26'... | column | ['BLEU-1', 'BLEU-4', 'CIDER', 'BLEU-1', 'BLEU-4', 'CIDER'] | ['Tok-Seq'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Without attention || BLEU-1</th> <th>Without attention || BLEU-4</th> <th>Without attention || CIDER</th> <th>With attention || BLEU-1</th> <th>With attention || BLEU-4</th> <th>With attenti... | Table 1 | table_1 | P18-1195 | 6 | acl2018 | For reference, we include in Table 1 baseline results obtained using MLE, and our implementation of MLE with entropy regularization (MLE + lambda H) (Pereyra et al., 2017), as well as the RAML approach of Norouzi et al. (2016) which corresponds to sequence-level smoothing based on the Hamming reward and sampling replac... | [1, 1, 2, 1, 1, 1] | ['For reference, we include in Table 1 baseline results obtained using MLE, and our implementation of MLE with entropy regularization (MLE + lambda H) (Pereyra et al., 2017), as well as the RAML approach of Norouzi et al. (2016) which corresponds to sequence-level smoothing based on the Hamming reward and sampling repl... | [['MLE', 'MLE + lambda H', 'Seq', 'Hamming', 'V'], ['MLE', 'MLE + lambda H', 'Without attention', 'With attention'], None, ['Tok', 'Without attention', 'With attention'], ['Seq', 'CIDER', 'Hamming'], ['Tok-Seq', 'CIDER', 'Without attention', 'With attention']] | 1 |
P18-1196table_3 | Test set regression evaluation for the clinical and scientific data. Mean absolute percentage error (MAPE) is scale independent and allows for comparison across data, whereas root mean square and mean absolute errors (RMSE, MAE) are scale dependent. Medians (MdAE, MdAPE) are informative of the distribution of errors. | 2 | [['Model', 'mean'], ['Model', 'median'], ['Model', 'softmax'], ['Model', 'softmax+rnn'], ['Model', 'h-softmax'], ['Model', 'h-softmax+rnn'], ['Model', 'd-RNN'], ['Model', 'MoG'], ['Model', 'combination']] | 2 | [['Clinical', 'RMSE'], ['Clinical', 'MAE'], ['Clinical', 'MdAE'], ['Clinical', 'MAPE%'], ['Clinical', 'MdAPE%'], ['Scientific', 'MdAE'], ['Scientific', 'MAPE%'], ['Scientific', 'MdAPE%']] | [['1043.68', '294.95', '245.59', '2353.11', '409.47', '∼10^20', '∼10^23', '∼10^22'], ['1036.18', '120.24', '34.52', '425.81', '52.05', '4.20', '8039.15', '98.65'], ['997.84', '80.29', '12.70', '621.78', '22.41', '3.00', '1947.44', '80.62'], ['991.38', '74.44', '13.00', '503.57', '23.91', '3.50', '15208.37', '80.00'], [... | column | ['RMSE', 'MAE', 'MdAE', 'MAPE%', 'MdAPE%', 'MdAE', 'MAPE%', 'MdAPE%'] | ['d-RNN', 'MoG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Clinical || RMSE</th> <th>Clinical || MAE</th> <th>Clinical || MdAE</th> <th>Clinical || MAPE%</th> <th>Clinical || MdAPE%</th> <th>Scientific || MdAE</th> <th>Scientific || MAPE%</th> ... | Table 3 | table_3 | P18-1196 | 6 | acl2018 | Table 3 shows evaluation results, where we also include two naive baselines of constant predictions: with the mean and median of the training data. For both datasets, RMSE and MAE were too sensitive to extreme errors to allow drawing safe conclusions, particularly for the scientific dataset, where both metrics were in 
| [1, 1, 1, 1, 1, 1, 1, 1, 2] | ['Table 3 shows evaluation results, where we also include two naive baselines of constant predictions: with the mean and median of the training data.', 'For both datasets, RMSE and MAE were too sensitive to extreme errors to allow drawing safe conclusions, particularly for the scientific dataset, where both metrics wer... | [['mean', 'median'], ['Clinical', 'Scientific', 'RMSE', 'MAE'], ['MdAE'], ['MoG', 'Clinical', 'Scientific', 'MAPE%'], ['MoG', 'Scientific', 'MdAPE%'], ['d-RNN', 'Clinical', 'Scientific'], ['d-RNN', 'Scientific', 'MdAPE%'], ['combination'], ['MoG']] | 1 |
P18-1197table_1 | Results on test dataset for SICK and MSRpar semantic relatedness task. Mean scores are presented based on 5 runs (standard deviation in parenthesis). Categories of results: (1) Previous models (2) Dependency structure (3) Constituency structure (4) Linear structure | 4 | [['Dataset', 'SICK', 'Model', 'Illinois-LH (2014)'], ['Dataset', 'SICK', 'Model', 'UNAL-NLP (2014)'], ['Dataset', 'SICK', 'Model', 'Meaning factory (2014)'], ['Dataset', 'SICK', 'Model', 'ECNU (2014)'], ['Dataset', 'SICK', 'Model', 'Dependency Tree-LSTM (2015)'], ['Dataset', 'SICK', 'Model', 'Decomp-Attn (Dependency)']... | 1 | [['Pearson r'], ['Spearman rho'], ['MSE']] | [['0.7993', '0.7538', '0.3692'], ['0.8070', '0.7489', '0.3550'], ['0.8268', '0.7721', '0.3224'], ['0.8414', '-', '-'], ['0.8676 (0.0030)', '0.8083 (0.0042)', '0.2532 (0.0052)'], ['0.8239 (0.0120)', '0.7614 (0.0103)', '0.3326 (0.0223)'], ['0.8424 (0.0042)', '0.7733 (0.0066)', '0.2963 (0.0077)'], ['0.8582 (0.0038)', '0.7... | column | ['Pearson r', 'Spearman rho', 'MSE'] | ['Progressive-Attn (Constituency)', 'Progressive-Attn (Linear)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pearson r</th> <th>Spearman rho</th> <th>MSE</th> </tr> </thead> <tbody> <tr> <td>Dataset || SICK || Model || Illinois-LH (2014)</td> <td>0.7993</td> <td>0.7538</td> <td>0.3692<... | Table 1 | table_1 | P18-1197 | 8 | acl2018 | Table 1 summarizes our results. According to (Marelli et al., 2014), we compute three evaluation metrics: Pearson r, Spearman rho and Mean Squared Error (MSE). We compare our attention models against the original Tree-LSTM (Tai et al., 2015), instantiated on both constituency trees and dependency trees. We also compare... 
| [1, 1, 1, 1, 2, 2, 1] | ['Table 1 summarizes our results.', 'According to (Marelli et al., 2014), we compute three evaluation metrics: Pearson r, Spearman rho and Mean Squared Error (MSE).', 'We compare our attention models against the original Tree-LSTM (Tai et al., 2015), instantiated on both constituency trees and dependency trees.', 'We a... | [None, ['Pearson r', 'Spearman rho', 'MSE'], ['Dependency Tree-LSTM (2015)', 'Constituency Tree-LSTM (2015)', 'Decomp-Attn (Dependency)', 'Progressive-Attn (Dependency)', 'Decomp-Attn (Constituency)', 'Progressive-Attn (Constituency)', 'Decomp-Attn (Linear)', 'Progressive-Attn (Linear)'], ['Illinois-LH (2014)', 'UNAL-N... | 1 |
P18-1197table_2 | Results on test dataset for Quora paraphrase detection task. Mean scores are presented based on 5 runs (standard deviation in parenthesis). Categories of results: (1) Dependency structure (2) Constituency structure (3) Linear structure | 2 | [['Model', 'Dependency Tree-LSTM'], ['Model', 'Decomp-Attn (Dependency)'], ['Model', 'Progressive-Attn (Dependency)'], ['Model', 'Constituency Tree-LSTM'], ['Model', 'Decomp-Attn (Constituency)'], ['Model', 'Progressive-Attn (Constituency)'], ['Model', 'Linear Bi-LSTM'], ['Model', 'Decomp-Attn (Linear)'], ['Model', 'Pr... | 1 | [['Accuracy'], ['F-1 score (class=1)'], ['Precision (class=1)'], ['Recall (class=1)']] | [['0.7897 (0.0009)', '0.7060 (0.0050)', '0.7298 (0.0055)', '0.6840 (0.0139)'], ['0.7803 (0.0026)', '0.6977 (0.0074)', '0.7095 (0.0083)', '0.6866 (0.0199)'], ['0.7896 (0.0025)', '0.7113 (0.0087)', '0.7214 (0.0117)', '0.7025 (0.0266)'], ['0.7881 (0.0042)', '0.7065 (0.0034)', '0.7192 (0.0216)', '0.6846 (0.0380)'], ['0.777... | column | ['Accuracy', 'F-1 score (class=1)', 'Precision (class=1)', 'Recall (class=1)'] | ['Progressive-Attn (Constituency)', 'Progressive-Attn (Linear)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> <th>F-1 score (class=1)</th> <th>Precision (class=1)</th> <th>Recall (class=1)</th> </tr> </thead> <tbody> <tr> <td>Model || Dependency Tree-LSTM</td> <td>0.7897 (0.000... | Table 2 | table_2 | P18-1197 | 8 | acl2018 | Table 2 summarizes our results where best results are highlighted in bold within each category. It should be noted that Quora is a new dataset and we have done our analysis on only 50,000 samples. Therefore, to the best of our knowledge, there is no published baseline result yet. For this task, we considered four stand... 
| [1, 2, 2, 1, 1] | ['Table 2 summarizes our results where best results are highlighted in bold within each category.', 'It should be noted that Quora is a new dataset and we have done our analysis on only 50,000 samples.', 'Therefore, to the best of our knowledge, there is no published baseline result yet.', 'For this task, we considered... | [None, None, None, ['Accuracy', 'F-1 score (class=1)', 'Precision (class=1)', 'Recall (class=1)'], ['Progressive-Attn (Constituency)', 'Progressive-Attn (Linear)']] | 1 |
P18-1201table_6 | Performance on Various Types Using Justice Subtypes for Training | 4 | [['Type', 'Justice', 'Subtype', 'Sentence'], ['Type', 'Justice', 'Subtype', 'Appeal'], ['Type', 'Justice', 'Subtype', 'Release-Parole'], ['Type', 'Conflict', 'Subtype', 'Attack'], ['Type', 'Transaction', 'Subtype', 'Transfer-Money'], ['Type', 'Business', 'Subtype', 'Start-Org'], ['Type', 'Movement', 'Subtype', 'Transpo... | 2 | [['Hit@k Trigger Classification', '1'], ['Hit@k Trigger Classification', '3'], ['Hit@k Trigger Classification', '5']] | [['68.3', '68.3', '69.5'], ['67.5', '97.5', '97.5'], ['73.9', '73.9', '73.9'], ['26.5', '44.5', '46.7'], ['48.4', '68.9', '79.5'], ['0', '33.3', '66.7'], ['2.6', '3.7', '7.8'], ['9.1', '50.4', '53.7'], ['60.8', '88.2', '90.2'], ['87.6', '91.0', '91.0']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['Type', 'Subtype'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Hit@k Trigger Classification || 1</th> <th>Hit@k Trigger Classification || 3</th> <th>Hit@k Trigger Classification || 5</th> </tr> </thead> <tbody> <tr> <td>Type || Justice || Subtype || Sente... | Table 6 | table_6 | P18-1201 | 7 | acl2018 | We further evaluated the performance of our transfer approach on similar and distinct unseen types. The 33 subtypes defined in ACE fall within 8 coarse-grained main types, such as Life and Justice. Each subtype belongs to one main type. Subtypes that belong to the same main type tend to have similar structures. For exa... | [2, 2, 2, 2, 2, 2, 2, 2, 1] | ['We further evaluated the performance of our transfer approach on similar and distinct unseen types.', 'The 33 subtypes defined in ACE fall within 8 coarse-grained main types, such as Life and Justice.', 'Each subtype belongs to one main type.', 'Subtypes that belong to the same main type tend to have similar structur... 
| [None, ['Justice', 'Conflict', 'Transaction', 'Business', 'Movement', 'Personnel', 'Contact', 'Life'], ['Subtype'], ['Subtype', 'Type'], ['Subtype', 'Type'], None, ['Justice', 'Sentence', 'Appeal', 'Release-Parole'], ['Subtype', 'Type'], ['Subtype', 'Type']] | 1 |
P18-1201table_7 | Event Trigger and Argument Extraction Performance (%) on Unseen ACE Types. | 2 | [['Method', 'Supervised LSTM'], ['Method', 'Supervised Joint'], ['Method', 'Transfer']] | 2 | [['Trigger Identification', 'P'], ['Trigger Identification', 'R'], ['Trigger Identification', 'F'], ['Trigger Identification + Classification', 'P'], ['Trigger Identification + Classification', 'R'], ['Trigger Identification + Classification', 'F'], ['Arg Identification', 'P'], ['Arg Identification', 'R'], ['Arg Identi... | [['94.7', '41.8', '58.0', '89.4', '39.5', '54.8', '47.8', '22.6', '30.6', '28.9', '13.7', '18.6'], ['55.8', '67.4', '61.1', '50.6', '61.2', '55.4', '36.4', '28.1', '31.7', '33.3', '25.7', '29.0'], ['85.7', '41.2', '55.6', '75.5', '36.3', '49.1', '28.2', '27.3', '27.8', '16.1', '15.6', '15.8']] | column | ['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F'] | ['Transfer'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger Identification || P</th> <th>Trigger Identification || R</th> <th>Trigger Identification || F</th> <th>Trigger Identification + Classification || P</th> <th>Trigger Identification + Class... | Table 7 | table_7 | P18-1201 | 8 | acl2018 | We first identified the candidate triggers and arguments, then mapped each of these to the target event ontology. We evaluated our model on their extracting of event mentions which were classified into 23 testing ACE types. Table 7 shows the performance. To further demonstrate the effectiveness of zero-shot learning in... | [2, 2, 1, 1, 2, 2, 0, 1] | ['We first identified the candidate triggers and arguments, then mapped each of these to the target event ontology.', 'We evaluated our model on their extracting of event mentions which were classified into 23 testing ACE types.', 'Table 7 shows the performance.', 'To further demonstrate the effectiveness of zero-shot ... 
| [None, None, None, ['Supervised LSTM'], ['Supervised LSTM'], ['Supervised LSTM'], None, ['Transfer', 'Supervised LSTM']] | 1 |
P18-1202table_2 | Comparisons with different baselines. | 2 | [['Models', 'CrossCRF'], ['Models', 'CrossCRF'], ['Models', 'RAP'], ['Models', 'RAP'], ['Models', 'Hier-Joint'], ['Models', 'Hier-Joint'], ['Models', 'RNCRF'], ['Models', 'RNCRF'], ['Models', 'RNGRU'], ['Models', 'RNGRU'], ['Models', 'RNSCN-CRF'], ['Models', 'RNSCN-CRF'], ['Models', 'RNSCN-GRU'], ['... | 2 | [['R→L', 'AS'], ['R→L', 'OP'], ['R→D', 'AS'], ['R→D', 'OP'], ['L→R', 'AS'], ['L→R', 'OP'], ['L→D', 'AS'], ['L→D', 'OP'], ['D→R', 'AS'], ['D→R', 'OP'], ['D→L', 'AS'], ['D→L', 'OP']] | [['19.72', '59.2', '21.07', '52.05', '28.19', '65.52', '29.96', '56.17', '6.59', '39.38', '24.22', '46.67'], ['-1.82', '-1.34', '-0.44', '-1.67', '-0.58', '-0.89', '-1.69', '-1.49', '-0.49', '-3.06', '-2.54', '-2.43'], ['25.92', '62.72', '22.63', '54.44', '46.9', '67.98', '34.54', '54.25', '45.44', '60.67', '28.22', '5... | column | ['AS', 'OP', 'AS', 'OP', 'AS', 'OP', 'AS', 'OP', 'AS', 'OP', 'AS', 'OP'] | ['RNSCN-GRU', 'RNSCN-CRF', 'RNSCN+-GRU'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R→L || AS</th> <th>R→L || OP</th> <th>R→D || AS</th> <th>R→D || OP</th> <th>L→R || AS</th> <th>L→R || OP</th> <th>L→D || AS</th> <th>L→D || OP</th> <th>D→R || AS</t... | Table 2 | table_2 | P18-1202 | 8 | acl2018 | The overall comparison results with the baselines are shown in Table 2 with average F1 scores and standard deviations over three random splits. Clearly, the results for aspect terms (AS) transfer are much lower than opinion terms (OP) transfer, which indicate that the aspect terms are usually quite different acros... 
| [1, 1, 2, 1, 1, 2] | ['The overall comparison results with the baselines are shown in Table 2 with average F1 scores and standard deviations over three random splits.', 'Clearly, the results for aspect terms (AS) transfer are much lower than opinion terms (OP) transfer, which indicate that the aspect terms are usually quite different acros... | [['CrossCRF', 'RAP', 'Hier-Joint', 'RNCRF', 'RNGRU', 'RNSCN-CRF', 'RNSCN-GRU', 'RNSCN+-GRU'], ['AS', 'OP'], None, ['RNSCN-CRF', 'RNSCN-GRU', 'RNSCN+-GRU'], ['RNSCN-CRF', 'RNSCN-GRU', 'RNSCN+-GRU', 'R→L', 'L→D', 'D→L', 'AS'], ['RNCRF', 'RNGRU']] | 1 |
P18-1205table_4 | Human Evaluation of various PERSONA-CHAT models, along with a comparison to human performance, and Twitter and OpenSubtitles based models (last 4 rows), standard deviation in parenthesis. | 6 | [['Method', 'Model', 'Human', '-', 'Profile', 'Self'], ['Method', 'Model', 'Generative PersonaChat Models', 'Seq2Seq', 'Profile', 'None'], ['Method', 'Model', 'Generative PersonaChat Models', 'Profile Memory', 'Profile', 'Self'], ['Method', 'Model', 'Ranking PersonaChat Models', 'KV Memory', 'Profile', 'None'], ['Metho... | 1 | [['Fluency'], ['Engagingness'], ['Consistency'], ['Persona Detection']] | [['4.31(1.07)', '4.25(1.06)', '4.36(0.92)', '0.95(0.22)'], ['3.17(1.10)', '3.18(1.41)', '2.98(1.45)', '0.51(0.50)'], ['3.08(1.40)', '3.13(1.39)', '3.14(1.26)', '0.72(0.45)'], ['3.81(1.14)', '3.88(0.98)', '3.36(1.37)', '0.59(0.49)'], ['3.97(0.94)', '3.50(1.17)', '3.44(1.30)', '0.81(0.39)'], ['3.21(1.54)', '1.75(1.04)', ... | column | ['Fluency', 'Engagingness', 'Consistency', 'Persona Detection'] | ['KV Memory', 'KV Profile Memory'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Engagingness</th> <th>Consistency</th> <th>Persona Detection</th> </tr> </thead> <tbody> <tr> <td>Method || Model || Human || - || Profile || Self</td> <td>4.31(1.07... | Table 4 | table_4 | P18-1205 | 7 | acl2018 | The results are reported in Table 4 for the best performing generative and ranking models, in both the No Persona and Self Persona categories, 100 dialogues each. We also evaluate the scores of human performance by replacing the chatbot with a human (another Turker). This effectively gives us upper bound scores which w... 
| [1, 1, 2, 1, 0, 1, 2] | ['The results are reported in Table 4 for the best performing generative and ranking models, in both the No Persona and Self Persona categories, 100 dialogues each.', 'We also evaluate the scores of human performance by replacing the chatbot with a human (another Turker).', 'This effectively gives us upper bound scores... | [['Generative PersonaChat Models', 'Ranking PersonaChat Models', 'Profile'], ['Human'], None, ['KV Memory', 'KV Profile Memory', 'Twitter LM', 'OpenSubtitles 2018 LM', 'OpenSubtitles 2009 LM', 'OpenSubtitles 2009 KV Memory'], None, ['Fluency', 'Engagingness', 'Consistency', 'Seq2Seq', 'Profile Memory', 'KV Memory', 'KV... | 1 |
P18-1209table_3 | Comparison of the training and testing speeds between TFN and LMF. The second and the third columns indicate the number of data point inferences per second (IPS) during training and testing time respectively. Both models are implemented in the same framework with equivalent running environment. | 2 | [['Model', 'TFN'], ['Model', 'LMF']] | 1 | [['Training Speed (IPS)'], ['Testing Speed (IPS)']] | [['340.74', '1177.17'], ['1134.82', '2249.90']] | column | ['Training Speed (IPS)', 'Testing Speed (IPS)'] | ['LMF'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Training Speed (IPS)</th> <th>Testing Speed (IPS)</th> </tr> </thead> <tbody> <tr> <td>Model || TFN</td> <td>340.74</td> <td>1177.17</td> </tr> <tr> <td>Model || LMF</td> ... | Table 3 | table_3 | P18-1209 | 8 | acl2018 | Table 3 illustrates the impact of Low-rank Multimodal Fusion on the training and testing speeds compared with TFN model. Here we set rank to be 4 since it can generally achieve fairly competent performance. Based on these results, performing a low-rank multimodal fusion with modality-specific low-rank factors significa... | [1, 2, 2, 1] | ['Table 3 illustrates the impact of Low-rank Multimodal Fusion on the training and testing speeds compared with TFN model.', 'Here we set rank to be 4 since it can generally achieve fairly competent performance.', 'Based on these results, performing a low-rank multimodal fusion with modality-specific low-rank factors s... | [['LMF', 'TFN', 'Training Speed (IPS)', 'Testing Speed (IPS)'], None, None, ['LMF', 'Training Speed (IPS)', 'TFN']] | 1 |
P18-1211table_1 | Performance of our approach on storycloze task from Mostafazadeh et al. (2016) compared with other unsupervised approaches (accuracy numbers as reported in Mostafazadeh et al. (2016)). | 2 | [['Our Method variants', 'Sequential CG + Unigram Mixture'], ['Our Method variants', 'Sequential CG + Brown clustering'], ['Our Method variants', 'Sequential CG + Sentiment'], ['Our Method variants', 'Sequential CG'], ['Our Method variants', 'Sequential CG (unnormalized)'], ['DSSM', '-'], ['GenSim', '-'], ['Skip-though... | 1 | [['Accuracy']] | [['0.602'], ['0.593'], ['0.581'], ['0.589'], ['0.531'], ['0.585'], ['0.539'], ['0.552'], ['0.494'], ['0.494']] | column | ['Accuracy'] | ['Sequential CG + Unigram Mixture'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Our Method variants || Sequential CG + Unigram Mixture</td> <td>0.602</td> </tr> <tr> <td>Our Method variants || Sequential CG + Brown clust... | Table 1 | table_1 | P18-1211 | 7 | acl2018 | Table 1 shows the performance of variants of our approach for the task. Our baselines include previous approaches for the same task: DSSM is a deep-learning based approach, which maps the context and ending to the same space, and is the best-performing method in Mostafazadeh et al. (2016). GenSim and N-gram return the e... | [1, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1] | ['Table 1 shows the performance of variants of our approach for the task.', 'Our baselines include previous approaches for the same task: DSSM is a deep-learning based approach, which maps the context and ending to the same space, and is the best-performing method in Mostafazadeh et al. (2016).', 'GenSim and N-gram retu... 
| [None, ['DSSM'], ['GenSim', 'N-grams'], ['Narrative-Chain(Stories)'], ['Our Method variants'], ['Sequential CG + Unigram Mixture', 'Sequential CG + Brown clustering', 'Sequential CG + Sentiment', 'Sequential CG'], ['Narrative-Chain(Stories)', 'GenSim'], ['Sequential CG (unnormalized)'], ['Sequential CG + Sentiment'], [... | 1 |
P18-1220table_4 | Results of correcting lines in the RDD newspapers and TCP books with multiple witnesses when decoding with different strategies using the same supervised model. Attention combination strategies that statistically significantly outperform single-input decoding are highlighted with * (p < 0.05, paired-permutation test). ... | 2 | [['Decode', 'None'], ['Decode', 'Single'], ['Decode', 'Flat'], ['Decode', 'Weighted'], ['Decode', 'Average']] | 2 | [['RDD Newspapers', 'CER'], ['RDD Newspapers', 'LCER'], ['RDD Newspapers', 'WER'], ['RDD Newspapers', 'LWER'], ['TCP Books', 'CER'], ['TCP Books', 'LCER'], ['TCP Books', 'WER'], ['TCP Books', 'LWER']] | [['0.15149', '0.04717', '0.37111', '0.13799', '0.10590', '0.07666', '0.30549', '0.23495'], ['0.07199', '0.03300', '0.14906', '0.06948', '0.04508', '0.01407', '0.11283', '0.03392'], ['0.07238', '0.02904*', '0.15818', '0.06241*', '0.05554', '0.01727', '0.13487', '0.04079'], ['0.06882*', '0.02145*', '0.15221', '0.05375', ... | column | ['CER', 'LCER', 'WER', 'LWER', 'CER', 'LCER', 'WER', 'LWER'] | ['Average'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RDD Newspapers || CER</th> <th>RDD Newspapers || LCER</th> <th>RDD Newspapers || WER</th> <th>RDD Newspapers || LWER</th> <th>TCP Books || CER</th> <th>TCP Books || LCER</th> <th>TCP Bo... | Table 4 | table_4 | P18-1220 | 7 | acl2018 | The results from Table 4 reveal that average attention combination performs best among all the decoding strategies on RDD newspapers and TCP books datasets. It reduces the CER of single input decoding by 41.5% for OCR’d lines in RDD newspapers and 9.76% for TCP books. The comparison between two hierarchical attention 
| [1, 1, 1, 1] | ['The results from Table 4 reveal that average attention combination performs best among all the decoding strategies on RDD newspapers and TCP books datasets.', 'It reduces the CER of single input decoding by 41.5% for OCR’d lines in RDD newspapers and 9.76% for TCP books.', 'The comparison between two hierarchical... | [['Average', 'RDD Newspapers', 'TCP Books'], ['CER', 'Single', 'RDD Newspapers', 'TCP Books'], ['Average', 'Weighted'], ['Flat', 'CER', 'WER']] | 1 |
P18-1220table_5 | Results from model trained under different settings on single-input decoding and multiple-input decoding for both the RDD newspapers and TCP books. All training is unsupervised except for supervised results in italics. Unsupervised training settings with multi-input decoding that are significantly better than other uns... | 4 | [['Decode', '-', 'Model', 'None'], ['Decode', 'Single', 'Model', 'Seq2Seq-Super'], ['Decode', 'Single', 'Model', 'Seq2Seq-Noisy'], ['Decode', 'Single', 'Model', 'Seq2Seq-Syn'], ['Decode', 'Single', 'Model', 'Seq2Seq-Boots'], ['Decode', 'Multi', 'Model', 'LMR'], ['Decode', 'Multi', 'Model', 'Majority Vote'], ['Decode', ... | 2 | [['RDD Newspapers', 'CER'], ['RDD Newspapers', 'LCER'], ['RDD Newspapers', 'WER'], ['RDD Newspapers', 'LWER'], ['TCP Books', 'CER'], ['TCP Books', 'LCER'], ['TCP Books', 'WER'], ['TCP Books', 'LWER']] | [['0.18133', '0.13552', '0.41780', '0.31544', '0.10670', '0.08800', '0.31734', '0.27227'], ['0.09044', '0.04469', '0.17812', '0.09063', '0.04944', '0.01498', '0.12186', '0.03500'], ['0.10524', '0.05565', '0.20600', '0.11416', '0.08704', '0.05889', '0.25994', '0.15725'], ['0.16136', '0.11986', '0.35802', '0.26547', '0.0... | column | ['CER', 'LCER', 'WER', 'LWER', 'CER', 'LCER', 'WER', 'LWER'] | ['Seq2Seq-Noisy', 'Seq2Seq-Boots'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RDD Newspapers || CER</th> <th>RDD Newspapers || LCER</th> <th>RDD Newspapers || WER</th> <th>RDD Newspapers || LWER</th> <th>TCP Books || CER</th> <th>TCP Books || LCER</th> <th>TCP Bo... | Table 5 | table_5 | P18-1220 | 8 | acl2018 | Table 5 presents the results for our model trained in different training settings as well as the baseline language model reranking (LMR) and majority vote methods. Multiple input decoding performs better than single input decoding for every training setting, and the model trained in supervised mode with multi-input dec... 
| [1, 1, 1, 1, 1, 2, 2, 1, 1] | ['Table 5 presents the results for our model trained in different training settings as well as the baseline language model reranking (LMR) and majority vote methods.', 'Multiple input decoding performs better than single input decoding for every training setting, and the model trained in supervised mode with multi-inpu... | [['Seq2Seq-Super', 'Seq2Seq-Noisy', 'Seq2Seq-Syn', 'Seq2Seq-Boots', 'LMR', 'Majority Vote'], ['Multi', 'Single', 'Seq2Seq-Super'], ['Majority Vote', 'RDD Newspapers', 'TCP Books'], ['Seq2Seq-Noisy', 'Seq2Seq-Boots', 'RDD Newspapers', 'Seq2Seq-Super'], ['Seq2Seq-Noisy', 'RDD Newspapers', 'TCP Books'], None, ['Seq2Seq-No... | 1 |
P18-1221table_1 | Comparing the performance of recipe generation task. All the results are on the test set of the corresponding corpus. AWD LSTM (type model) is our type model implemented with the baseline language model AWD LSTM (Merity et al., 2017). Our second baseline is the same language model (AWD LSTM) with the type information a... | 4 | [['Model', 'AWD LSTM', 'Dataset (Recipe Corpus)', 'original'], ['Model', 'AWD LSTM type model', 'Dataset (Recipe Corpus)', 'modified type'], ['Model', 'AWD LSTM with type feature', 'Dataset (Recipe Corpus)', 'original'], ['Model', 'our model', 'Dataset (Recipe Corpus)', 'original']] | 1 | [['Vocabulary Size'], ['Perplexity']] | [['52,472', '20.23'], ['51,675', '17.62'], ['52,472', '18.23'], ['52,472', '9.67']] | column | ['Vocabulary Size', 'Perplexity'] | ['our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocabulary Size</th> <th>Perplexity</th> </tr> </thead> <tbody> <tr> <td>Model || AWD LSTM || Dataset (Recipe Corpus) || original</td> <td>52,472</td> <td>20.23</td> </tr> <tr> ... | Table 1 | table_1 | P18-1221 | 7 | acl2018 | We compare our model with the baselines using perplexity metric - lower perplexity means the better prediction. Table 1 summarizes the result. The 3rd row shows that adding type as a simple feature does not guarantee a significant performance improvement while our proposed method significantly outperforms both baselines ... | [1, 1, 1] | ['We compare our model with the baselines using perplexity metric - lower perplexity means the better prediction.', 'Table 1 summarizes the result.', 'The 3rd row shows that adding type as a simple feature does not guarantee a significant performance improvement while our proposed method significantly outperforms both ba... | [['Perplexity'], None, ['Perplexity', 'AWD LSTM with type feature', 'our model', 'AWD LSTM', 'AWD LSTM type model']] | 1 |
P18-1221table_2 | Comparing the performance of code generation task. All the results are on the test set of the corresponding corpus. fLSTM, bLSTM denotes forward and backward LSTM respectively. SLP-Core refers to (Hellendoorn and Devanbu, 2017). | 4 | [['Model', 'SLP-Core', 'Dataset (Code Corpus)', 'original'], ['Model', 'fLSTM', 'Dataset (Code Corpus)', 'original'], ['Model', 'fLSTM [type model]', 'Dataset (Code Corpus)', 'modified type'], ['Model', 'fLSTM with type feature', 'Dataset (Code Corpus)', 'original'], ['Model', 'our model (fLSTM)', 'Dataset (Code Corpus... | 1 | [['Vocabulary Size'], ['Perplexity']] | [['38,297', '3.40'], ['38,297', '21.97'], ['14,177', '7.94'], ['38,297', '20.05'], ['38,297', '12.52'], ['38,297', '7.19'], ['14,177', '2.58'], ['38,297', '6.11'], ['38,297', '2.65']] | column | ['Vocabulary Size', 'Perplexity'] | ['our model (fLSTM)', 'our model (bLSTM)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocabulary Size</th> <th>Perplexity</th> </tr> </thead> <tbody> <tr> <td>Model || SLP-Core || Dataset (Code Corpus) || original</td> <td>38,297</td> <td>3.40</td> </tr> <tr> <t... | Table 2 | table_2 | P18-1221 | 8 | acl2018 | Table 2 shows that adding type as simple features does not guarantee a significant performance improvement while our proposed method significantly outperforms both forward and backward LSTM baselines. Our approach with backward LSTM has 40.3% better perplexity than original backward LSTM and forward has 63.14% lower (i... | [1, 1, 1] | ['Table 2 shows that adding type as simple features does not guarantee a significant performance improvement while our proposed method significantly outperforms both forward and backward LSTM baselines.', 'Our approach with backward LSTM has 40.3% better perplexity than original backward LSTM and forward has 63.14% low... 
| [['fLSTM', 'bLSTM', 'our model (fLSTM)', 'our model (bLSTM)'], ['our model (bLSTM)', 'Perplexity', 'bLSTM', 'our model (fLSTM)', 'fLSTM'], ['SLP-Core', 'Perplexity', 'our model (bLSTM)']] | 1 |
P18-1222table_7 | DBLP results evaluated on 63,342 citation contexts with newcomer ground-truth. | 4 | [['Model', 'w2v (I4O)', 'Newcomer Friendly', 'no'], ['Model', 'NPM', 'Newcomer Friendly', 'no'], ['Model', 'd2v-nc', 'Newcomer Friendly', 'yes'], ['Model', 'd2v-cac', 'Newcomer Friendly', 'yes'], ['Model', 'h-d2v', 'Newcomer Friendly', 'yes']] | 1 | [['Rec'], ['MAP'], ['MRR'], ['nDCG']] | [['3.64', '3.23', '3.41', '2.73'], ['1.37', '1.13', '1.15', '0.92'], ['6.48', '3.52', '3.54', '3.96'], ['8.16', '5.13', '5.24', '5.21'], ['6.41', '4.95', '5.21', '4.49']] | column | ['Rec', 'MAP', 'MRR', 'nDCG'] | ['d2v-cac'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rec</th> <th>MAP</th> <th>MRR</th> <th>nDCG</th> </tr> </thead> <tbody> <tr> <td>Model || w2v (I4O) || Newcomer Friendly || no</td> <td>3.64</td> <td>3.23</td> <td>3.41</td... | Table 7 | table_7 | P18-1222 | 8 | acl2018 | Table 7 analyzes the impact of newcomer friendliness. Opposite from what is done in Section 5.2.2,we only evaluate on testing examples where at least a ground-truth paper is a newcomer. Please note that newcomer unfriendly approaches do not necessarily get zero scores. The table shows that newcomer friendly approaches... | [1, 2, 1, 2] | ['Table 7 analyzes the impact of newcomer friendliness. Opposite from what is done in Section 5.2.2,we only evaluate on testing examples where at least a ground-truth paper is a newcomer.', ' Please note that newcomer unfriendly approaches do not necessarily get zero scores.', 'The table shows that newcomer friendly ap... | [['Newcomer Friendly'], None, ['d2v-nc', 'd2v-cac', 'h-d2v', 'w2v (I4O)', 'NPM'], None] | 1 |
P18-1228table_2 | Evaluation results. Our method performs best on both Standard English and Twitter. | 3 | [['Standard English', 'Method', 'SEMAXIS'], ['Standard English', 'Method', 'DENSIFIER'], ['Standard English', 'Method', 'SENTPROP'], ['Standard English', 'Method', 'WordNet'], ['Twitter', 'Method', 'SEMAXIS'], ['Twitter', 'Method', 'DENSIFIER'], ['Twitter', 'Method', 'SENTPROP'], ['Twitter', 'Method', 'Sentiment140']] | 1 | [['AUC'], ['Ternary F1'], ['Tau']] | [['92.2', '61', '0.48'], ['91', '58.2', '0.46'], ['88.4', '56.1', '0.41'], ['89.5', '58.7', '0.34'], ['90', '59.2', '0.57'], ['88.5', '58.8', '0.55'], ['85', '58.2', '0.5'], ['86.2', '57.7', '0.51']] | column | ['AUC', 'Ternary F1', 'Tau'] | ['SEMAXIS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AUC</th> <th>Ternary F1</th> <th>Tau</th> </tr> </thead> <tbody> <tr> <td>Standard English || Method || SEMAXIS</td> <td>92.2</td> <td>61</td> <td>0.48</td> </tr> <tr> ... | Table 2 | table_2 | P18-1228 | 5 | acl2018 | Table 2 summarizes the performance. Surprisingly, SEMAXIS - the simplest approach - outperforms others on both Standard English and Twitter datasets across all measures. | [1, 1] | ['Table 2 summarizes the performance.', 'Surprisingly, SEMAXIS - the simplest approach - outperforms others on both Standard English and Twitter datasets across all measures.'] | [None, ['SEMAXIS', 'Standard English', 'Twitter', 'AUC', 'Ternary F1', 'Tau']] | 1 |
P18-1229table_1 | Results of the end-to-end taxonomy induction experiment. Our approach significantly outperforms two-phase methods (Panchenko et al., 2016; Shwartz et al., 2016; Bansal et al., 2014). Bansal et al. (2014) and TaxoRL (NR) + FG are listed separately because they use extra resources. | 2 | [['Model', 'TAXI'], ['Model', 'HypeNET'], ['Model', 'HypeNET+MST'], ['Model', 'TaxoRL (RE)'], ['Model', 'TaxoRL (NR)'], ['Model', 'Bansal et al. (2014)'], ['Model', 'TaxoRL (NR) + FG']] | 1 | [['P a'], ['R a'], ['F1 a'], ['P e'], ['R e'], ['F1 e']] | [['66.1', '13.9', '23.0', '54.8', '18.0', '27.1'], ['32.8', '26.7', '29.4', '26.1', '17.2', '20.7'], ['33.7', '41.1', '37.0', '29.2', '29.2', '29.2'], ['35.8', '47.4', '40.8', '35.4', '35.4', '35.4'], ['41.3', '49.2', '44.9', '35.6', '35.6', '35.6'], ['48.0', '55.2', '51.4', '-', '-', '-'], ['52.9', '58.6', '55.6', '43... | column | ['P a', 'R a', 'F1 a', 'P e', 'R e', 'F1 e'] | ['TaxoRL (NR) + FG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P a</th> <th>R a</th> <th>F1 a</th> <th>P e</th> <th>R e</th> <th>F1 e</th> </tr> </thead> <tbody> <tr> <td>Model || TAXI</td> <td>66.1</td> <td>13.9</td> <td>23.... | Table 1 | table_1 | P18-1229 | 7 | acl2018 | Table 1 shows the results of the first experiment. HypeNET (Shwartz et al., 2016) uses additional surface features described in Section 2.2. HypeNET+MST extends HypeNET by first constructing a hypernym graph using HypeNET output as weights of edges and then finding the MST (Chu, 1965) of this graph. TaxoRL (RE) denotes... | [1, 2, 2, 2, 1, 1, 1, 1, 1, 1] | ['Table 1 shows the results of the first experiment.', 'HypeNET (Shwartz et al., 2016) uses additional surface features described in Section 2.2.', 'HypeNET+MST extends HypeNET by first constructing a hypernym graph using HypeNET output as weights of edges and then finding the MST (Chu, 1965) of this graph.', 'TaxoRL (... 
| [None, ['HypeNET'], ['HypeNET+MST'], ['TaxoRL (RE)', 'TaxoRL (NR)'], ['HypeNET', 'F1 e', 'TAXI', 'F1 a'], ['HypeNET', 'F1 e', 'TAXI', 'F1 a'], ['HypeNET+MST', 'HypeNET', 'F1 a', 'F1 e'], ['TaxoRL (RE)', 'HypeNET+MST'], ['TaxoRL (RE)', 'TaxoRL (NR)'], ['TaxoRL (NR) + FG']] | 1 |
P18-1232table_2 | Results of lexicon term sentiment classification. | 1 | [['EmbeddingP'], ['EmbeddingQ'], ['EmbeddingCat'], ['EmbeddingAll'], ['Yang'], ['SSWE'], ['DSE']] | 2 | [['B & D', 'HL'], ['B & D', 'MPQA'], ['B & E', 'HL'], ['B & E', 'MPQA'], ['B & K', 'HL'], ['B & K', 'MPQA'], ['D & E', 'HL'], ['D & E', 'MPQA'], ['D & K', 'HL'], ['D & K', 'MPQA'], ['E & K', 'HL'], ['E & K', 'MPQA']] | [['0.740', '0.733', '0.742', '0.734', '0.747', '0.735', '0.744', '0.701', '0.745', '0.709', '0.628', '0.574'], ['0.743', '0.701', '0.627', '0.573', '0.464', '0.453', '0.621', '0.577', '0.462', '0.450', '0.465', '0.453'], ['0.780', '0.772', '0.773', '0.756', '0.772', '0.751', '0.744', '0.728', '0.755', '0.702', '0.683',... | column | ['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1'] | ['DSE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B & D || HL</th> <th>B & D || MPQA</th> <th>B & E || HL</th> <th>B & E || MPQA</th> <th>B & K || HL</th> <th>B & K || MPQA</th> <th>D & E || HL</th> <th... | Table 2 | table_2 | P18-1232 | 8 | acl2018 | Table 2 shows the experimental results of lexicon term sentiment classification. Our DSE method can achieve competitive performance among all the methods. Compared with SSWE, our DSE is still competitive because both of them consider the sentiment information in the embeddings. Our DSE model outperforms other methods w... | [1, 1, 1, 1, 2] | ['Table 2 shows the experimental results of lexicon term sentiment classification.', 'Our DSE method can achieve competitive performance among all the methods.', 'Compared with SSWE, our DSE is still competitive because both of them consider the sentiment information in the embeddings.', 'Our DSE model outperforms othe... | [None, ['DSE'], ['SSWE', 'DSE'], ['DSE', 'Yang', 'EmbeddingCat', 'EmbeddingAll'], None] | 1 |
P18-1239table_2 | Our results are consistently better than those reported by Kiela et al. (2015), averaged over Dutch, French, German, Italian, and Spanish on a similar set of 500 concrete nouns. The rightmost column shows the added challenge with our larger, more realistic dataset. | 1 | [['MRR'], ['Top 1'], ['Top 5'], ['Top 20']] | 3 | [['dataset', 'BERGSMA500 Kiela et al. (2015)', '# words 500'], ['dataset', 'BERGSMA500 (ours)', '# words 500'], ['-', 'all (ours)', '# words 8500']] | [['0.658', '0.704', '0.277'], ['0.567', '0.679', '0.229'], ['0.692', '0.763', '0.326'], ['0.774', '0.811', '0.385']] | row | ['MRR', 'Top 1', 'Top 5', 'Top 20'] | ['BERGSMA500 (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>dataset || BERGSMA500 Kiela et al. (2015) || # words 500</th> <th>dataset || BERGSMA500 (ours) || # words 500</th> <th>- || all (ours) || # words 8500</th> </tr> </thead> <tbody> <tr> <td>MRR<... | Table 2 | table_2 | P18-1239 | 5 | acl2018 | Table 2 shows the results reported by Kiela et al. (2015) on the BERGSMA500 dataset, along with results using our image crawl method (Section 3.2) on BERGSMA500's vocabulary. On all five languages, our dataset performs better than that of Kiela et al. (2015). We attribute this to improvements in image search since the... | [1, 1, 2, 2, 2, 1, 1] | ['Table 2 shows the results reported by Kiela et al. (2015) on the BERGSMA500 dataset, along with results using our image crawl method (Section 3.2) on BERGSMA500's vocabulary.', 'On all five languages, our dataset performs better than that of Kiela et al. (2015).', 'We attribute this to improvements in image searc... | [['BERGSMA500 Kiela et al. (2015)', 'BERGSMA500 (ours)'], ['BERGSMA500 (ours)'], None, ['BERGSMA500 (ours)'], ['BERGSMA500 (ours)'], ['all (ours)'], ['all (ours)', 'Top 1', 'BERGSMA500 (ours)']] | 1 |
P18-1246table_2 | Results for XPOS tags. The first column shows the language acronym, the column named DQM shows the results of Dozat et al. (2017). Our system outperforms Dozat et al. (2017) on 32 out of 54 treebanks and Dozat et al. outperforms our model on 10 of 54 treebanks, with 13 ties. RRIE is the relative reduction in error. We ... | 2 | [['lang.', 'cs_cac'], ['lang.', 'cs'], ['lang.', 'fi'], ['lang.', 'sl'], ['lang.', 'la_ittb'], ['lang.', 'grc'], ['lang.', 'bg'], ['lang.', 'ca'], ['lang.', 'grc_proiel'], ['lang.', 'pt'], ['lang.', 'cu'], ['lang.', 'it'], ['lang.', 'fa'], ['lang.', 'ru'], ['lang.', 'sv'], ['lang.', 'ko'], ['lang.', 'sk'], ['lang.', 'n... | 1 | [['CONLL Winner'], ['DQM'], ['ours']] | [['95.16', '95.16', '96.91'], ['95.86', '95.86', '97.28'], ['97.37', '97.37', '97.81'], ['94.74', '94.74', '95.54'], ['94.79', '94.79', '95.56'], ['84.47', '84.47', '86.51'], ['96.71', '96.71', '97.05'], ['98.58', '98.58', '98.72'], ['97.51', '97.51', '97.72'], ['83.04', '83.04', '84.39'], ['96.20', '96.20', '96.49'], ... | column | ['accuracy', 'accuracy', 'accuracy'] | ['ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CONLL Winner</th> <th>DQM</th> <th>ours</th> <th>RRIE</th> </tr> </thead> <tbody> <tr> <td>lang. || cs_cac</td> <td>95.16</td> <td>95.16</td> <td>96.91</td> <td>36.2</... | Table 2 | table_2 | P18-1246 | 6 | acl2018 | Table 2 contains the results of this task for the large treebanks. Because Dozat et al. (2017) won the challenge for the majority of the languages, we first compare our results with the performance of their system. Our model outperforms Dozat et al. (2017) in 32 of the 54 treebanks with 13 ties. These ties correspond m... | [1, 1, 1, 2, 1] | ['Table 2 contains the results of this task for the large treebanks.', 'Because Dozat et al. 
(2017) won the challenge for the majority of the languages, we first compare our results with the performance of their system.', 'Our model outperforms Dozat et al. (2017) in 32 of the 54 treebanks with 13 ties.', 'These ties c... | [None, ['DQM'], ['DQM', 'ours'], None, ['ours', 'sv', 'DQM', 'en', 'gl_treegal', 'pt_br', 'et']] | 1 |
P18-1246table_3 | Results on WSJ test set. | 2 | [['System', 'Sogaard (2011)'], ['System', 'Huang et al. (2015)'], ['System', 'Choi (2016)'], ['System', 'Andor et al. (2016)'], ['System', 'Dozat et al. (2017)'], ['System', 'ours']] | 1 | [['Accuracy']] | [['97.50'], ['97.55'], ['97.64'], ['97.44'], ['97.41'], ['97.96']] | column | ['Accuracy'] | ['ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Sogaard (2011)</td> <td>97.50</td> </tr> <tr> <td>System || Huang et al. (2015)</td> <td>97.55</td> </tr> <td... | Table 3 | table_3 | P18-1246 | 7 | acl2018 | Table 3 shows the results of our model in comparison to the results reported in state-of-the-art literature. Our model significantly outperforms these systems, with an absolute difference of 0.32% in accuracy, which corresponds to a RRIE of 12%. | [1, 1] | ['Table 3 shows the results of our model in comparison to the results reported in state-of-the-art literature.', 'Our model significantly outperforms these systems, with an absolute difference of 0.32% in accuracy, which corresponds to a RRIE of 12%.'] | [None, ['ours', 'Accuracy', 'Sogaard (2011)', 'Huang et al. (2015)', 'Choi (2016)', 'Andor et al. (2016)', 'Dozat et al. (2017)']] | 1 |
P18-1246table_5 | Comparison of optimization methods: Separate optimization of the word, character and meta model is more accurate on average than full back-propagation using a single loss function. The results are statistically significant with two-tailed paired t-test for xpos with p < 0.001 and for morphology with p < 0.0001. | 2 | [['Optimization', 'separate'], ['Optimization', 'jointly']] | 1 | [['Avg. F1 Score morphology'], ['Avg. F1 Score xpos']] | [['94.57', '94.85'], ['94.15', '94.48']] | column | ['Avg. F1 Score morphology', 'Avg. F1 Score xpos'] | ['separate'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Avg. F1 Score morphology</th> <th>Avg. F1 Score xpos</th> </tr> </thead> <tbody> <tr> <td>Optimization || separate</td> <td>94.57</td> <td>94.85</td> </tr> <tr> <td>Optimizatio... | Table 5 | table_5 | P18-1246 | 8 | acl2018 | Table 5 shows that separately optimized models are significantly more accurate on average than jointly optimized models. | [1] | ['Table 5 shows that separately optimized models are significantly more accurate on average than jointly optimized models.'] | [['separate', 'jointly']] | 1 |
P18-1246table_8 | F1 score of char models and their performance on the dev. set for selected languages with different gather strategies, concatenate to gi (Equation 1). DQM shows results for our reimplementation of Dozat et al. (2017) (cf. §3.2), where we feed in only the characters. The final column shows the number of xpos tags in the... | 2 | [['dev. set lang.', 'el'], ['dev. set lang.', 'grc'], ['dev. set lang.', 'la_ittb'], ['dev. set lang.', 'ru'], ['dev. set lang.', 'tr']] | 1 | [['Flast B1st'], ['F1st Blast'], ['Flast Blast'], ['F1st B1st'], ['DQM']] | [['96.6', '96.6', '96.2', '96.1', '95.9'], ['87.3', '87.1', '87.1', '86.8', '86.7'], ['91.1', '91.5', '91.9', '91.3', '91.0'], ['95.6', '95.4', '95.6', '95.3', '95.8'], ['93.5', '93.3', '93.2', '92.5', '93.9']] | column | ['F1', 'F1', 'F1', 'F1', 'F1'] | ['DQM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Flast B1st</th> <th>F1st Blast</th> <th>Flast Blast</th> <th>F1st B1st</th> <th>DQM</th> <th>|xpos|</th> </tr> </thead> <tbody> <tr> <td>dev. set lang. || el</td> <td>96.6<... | Table 8 | table_8 | P18-1246 | 9 | acl2018 | Table 8 reports, for a few morphologically rich languages, the part-of-speech tagging performance of different strategies to gather the characters when creating initial word encodings. The strategies were defined in 3.1. The Table also contains a column with results for our reimplementation of Dozat et al. (2017). We remo... | [1, 2, 1, 2, 1, 1] | ['Table 8 reports, for a few morphologically rich languages, the part-of-speech tagging performance of different strategies to gather the characters when creating initial word encodings.', 'The strategies were defined in 3.1.', 'The Table also contains a column with results for our reimplementation of Dozat et al. (2017).... | [['el', 'grc', 'la_ittb', 'ru', 'tr'], None, ['DQM'], None, ['el', 'grc', 'la_ittb', 'ru', 'tr'], ['la_ittb', 'Flast Blast']] | 1 |
P18-1248table_4 | Experiment results (UAS, %) on the UD 2.0 development set. Bold: best result per language. | 2 | [['Lan.', 'eu'], ['Lan.', 'ur'], ['Lan.', 'got'], ['Lan.', 'hu'], ['Lan.', 'cu'], ['Lan.', 'da'], ['Lan.', 'el'], ['Lan.', 'hi'], ['Lan.', 'de'], ['Lan.', 'ro'], ['-', 'Avg.']] | 2 | [['Global Models', 'MH 3'], ['Global Models', 'MST'], ['Global Models', 'MH 4-two'], ['Global Models', 'MH 4-hybrid'], ['Global Models', '1EC'], ['Greedy Models', 'MH 3'], ['Greedy Models', 'MH 4']] | [['82.07 ± 0.17', '83.61 ± 0.16', '82.94 ± 0.24', '84.13 ± 0.13', '84.09 ± 0.19', '81.27 ± 0.20', '81.71 ± 0.33'], ['86.89 ± 0.18', '86.78 ± 0.13', '86.84 ± 0.26', '87.06 ± 0.24', '87.11 ± 0.11', '86.40 ± 0.16', '86.05 ± 0.18'], ['83.72 ± 0.19', '84.74 ± 0.28', '83.85 ± 0.19', '84.59 ± 0.38', '84.77 ± 0.27', '82.28 ± 0... | column | ['UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS'] | ['MH 4-hybrid'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Global Models || MH 3</th> <th>Global Models || MST</th> <th>Global Models || MH 4-two</th> <th>Global Models || MH 4-hybrid</th> <th>Global Models || 1EC</th> <th>Greedy Models || MH 3</th>... | Table 4 | table_4 | P18-1248 | 7 | acl2018 | Table 4 shows the development-set performance of our models as compared with baseline systems. MST considers non-projective structures, and thus enjoys a theoretical advantage over projective MH 3, especially for the most non-projective languages. However, it has a vastly larger output space, making the selection of cor... | [1, 1, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 2] | ['Table 4 shows the development-set performance of our models as compared with baseline systems.', 'MST considers non-projective structures, and thus enjoys a theoretical advantage over projective MH 3, especially for the most non-projective languages.', 'However, it has a vastly larger output space, making the selectio... 
| [['MH 4-two', 'MH 4-hybrid', 'MH 3', 'MST', '1EC'], ['MH 3', 'MST'], ['MST'], ['MST'], ['MH 3', 'MST'], ['MH 4-two', 'MH 4-hybrid', '1EC'], ['MH 4-two', 'MH 4-hybrid', '1EC'], ['MST'], ['MH 4-two', 'MH 4-hybrid', '1EC'], ['MH 4-two', 'MH 4-hybrid'], ['Greedy Models', 'MH 3', 'MH 4'], ['Global Models', 'MH 3', 'MH 4-two... | 1 |
P18-1250table_3 | The overall performance of the two sequential models on development data. | 1 | [['Interspace'], ['Pre2'], ['Pre3'], ['Prepost']] | 3 | [['Linear CRF', 'Without POS', 'P'], ['Linear CRF', 'Without POS', 'R'], ['Linear CRF', 'Without POS', 'F1'], ['Linear CRF', 'With POS', 'P'], ['Linear CRF', 'With POS', 'R'], ['Linear CRF', 'With POS', 'F1'], ['LSTM-CRF', 'Without POS', 'P'], ['LSTM-CRF', 'Without POS', 'R'], ['LSTM-CRF', 'Without POS', 'F1'], ['LSTM-... | [['74.6', '20.6', '32.2', '71.2', '30.3', '42.5', '67.9', '59.8', '63.6', '73.0', '61.6', '66.8'], ['72.4', '30.1', '42.5', '72.8', '32.4', '44.8', '71.1', '58.3', '64.1', '74.8', '57.4', '65.0'], ['73.1', '30.2', '42.8', '73.0', '32.5', '44.9', '71.1', '58.5', '64.2', '73.8', '57.0', '64.3'], ['70.9', '32.9', '45.0', ... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['LSTM-CRF'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Linear CRF || Without POS || P</th> <th>Linear CRF || Without POS || R</th> <th>Linear CRF || Without POS || F1</th> <th>Linear CRF || With POS || P</th> <th>Linear CRF || With POS || R</th> ... | Table 3 | table_3 | P18-1250 | 7 | acl2018 | Table 3 shows overall performances of the two sequential models on development data. From the results, we can clearly see that the introduction of neural structure pushes up the scores exceptionally. The reason is that our LSTM-CRF model not only benefits from the linear weighted combination of local characteristics li... | [1, 1, 1, 1, 1, 1, 1, 2, 1] | ['Table 3 shows overall performances of the two sequential models on development data.', 'From the results, we can clearly see that the introduction of neural structure pushes up the scores exceptionally.', 'The reason is that our LSTM-CRF model not only benefits from the linear weighted combination of local characteri... 
| [['Linear CRF', 'LSTM-CRF'], ['LSTM-CRF'], ['LSTM-CRF'], ['LSTM-CRF'], ['Interspace', 'Pre2', 'Pre3', 'Prepost'], ['LSTM-CRF', 'Interspace'], ['Pre3'], None, ['With POS', 'Without POS']] | 1 |
P18-1250table_6 | The performances of the first- and second-order in-parsing models on test data. | 2 | [['Type', 'pro'], ['Type', 'PRO'], ['Type', 'OP'], ['Type', 'T'], ['Type', 'RNR'], ['Type', '*'], ['Type', 'Overall']] | 3 | [['-', 'First-order', 'P'], ['-', 'First-order', 'R'], ['-', 'First-order', 'F1'], ['-', 'Second-order', 'P'], ['-', 'Second-order', 'R'], ['-', 'Second-order', 'F1'], ['Evaluation with Head', 'First-order', 'P'], ['Evaluation with Head', 'First-order', 'R'], ['Evaluation with Head', 'First-order', 'F1'], ['Evaluation ... | [['52.5', '16.8', '25.5', '54.4', '19.7', '28.9', '50.5', '16.2', '24.5', '52.6', '19.1', '28'], ['59.7', '47.3', '52.8', '60.6', '58', '59.3', '58.4', '46.3', '51.7', '57.8', '55.3', '56.6'], ['74.5', '55.8', '63.8', '79.6', '67.8', '73.2', '72.2', '54.1', '61.8', '78.6', '67', '72.3'], ['70.6', '51.7', '59.7', '77.3'... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['Second-order'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>- || First-order || P</th> <th>- || First-order || R</th> <th>- || First-order || F1</th> <th>- || Second-order || P</th> <th>- || Second-order || R</th> <th>- || Second-order || F1</th> ... | Table 6 | table_6 | P18-1250 | 7 | acl2018 | Table 6 presents detailed results of the in-parsing models on test data. Compared with the state-of-the-art, the first-order model performs a little worse while the second-order model achieves a remarkable score. The first-order parsing model only constrains the dependencies of both the covert and overt tokens to make u... | [1, 1, 2, 2, 1, 2, 2, 1] | ['Table 6 presents detailed results of the in-parsing models on test data.', 'Compared with the state-of-the-art, the first-order model performs a little worse while the second-order model achieves a remarkable score.', 'The first-order parsing model only constrains the dependencies of both the covert and overt tokens t... 
| [None, ['First-order', 'Second-order'], ['First-order'], ['First-order'], ['Overall'], ['pro', 'OP', 'T', 'RNR', '*'], None, ['OP', 'T', 'F1']] | 1 |
P18-1252table_5 | Parsing accuracy on test data. LAS difference between any two systems is statistically significant (p < 0.005) according to Dan Bikel’s randomized parsing evaluation comparer for significance test Noreen (1989). | 3 | [['Single', 'Training data', 'train'], ['Single (hetero)', 'Training data', 'train-HIT'], ['Multi-task', 'Training data', 'train & train-HIT'], ['Single (large)', 'Training data', 'converted train-HIT']] | 1 | [['UAS'], ['LAS']] | [['75.99', '70.95'], ['76.20', '68.43'], ['79.29', '74.51'], ['80.45', '75.83']] | column | ['UAS', 'LAS'] | ['converted train-HIT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> </tr> </thead> <tbody> <tr> <td>Single || Training data || train</td> <td>75.99</td> <td>70.95</td> </tr> <tr> <td>Single (hetero) || Training data || tra... | Table 5 | table_5 | P18-1252 | 9 | acl2018 | Table 5 shows the empirical results. Please kindly note that the parsing accuracy looks very low, because the test data is partially annotated and only about 30% most uncertain (difficult) words are manually labeled with their heads according to our guideline, as discussed in Section 2.1. The first-row, ‘single’ is t... | [1, 2, 1, 1, 1, 2, 1, 1, 2, 1, 2, 1, 2] | ['Table 5 shows the empirical results.', 'Please kindly note that the parsing accuracy looks very low, because the test data is partially annotated and only about 30% most uncertain (difficult) words are manually labeled with their heads according to our guideline, as discussed in Section 2.1.', 'The first-row, ‘si... | [None, None, ['Single', 'train'], ['Single (hetero)', 'train-HIT'], ['UAS', 'Single', 'Single (hetero)'], ['LAS'], ['Multi-task', 'train & train-HIT'], ['Multi-task', 'LAS', 'Single'], ['Multi-task', 'train & train-HIT'], ['Single (large)', 'converted train-HIT'], ['Single (large)', 'converted train-HIT'], ['Single (la... | 1 |
P18-1255table_2 | Model performances on 500 samples when evaluated against the union of the “best” annotations (B1 ∪ B2), intersection of the “valid” annotations (V 1 ∩ V 2) and the original question paired with the post in the dataset. The difference between the bold and the non-bold numbers is statistically significant with p < 0.05 a... | 2 | [['Model', 'Random'], ['Model', 'Bag-of-ngrams'], ['Model', 'Community QA'], ['Model', 'Neural (p q)'], ['Model', 'Neural (p a)'], ['Model', 'Neural (p q a)'], ['Model', 'EVPI']] | 2 | [['B1 ∪ B2', 'p@1'], ['B1 ∪ B2', 'p@3'], ['B1 ∪ B2', 'p@5'], ['B1 ∪ B2', 'MAP'], ['V1 ∩ V2', 'p@1'], ['V1 ∩ V2', 'p@3'], ['V1 ∩ V2', 'p@5'], ['V1 ∩ V2', 'MAP'], ['Original', 'p@1']] | [['17.5', '17.5', '17.5', '35.2', '26.4', '26.4', '26.4', '42.1', '10.0'], ['19.4', '19.4', '18.7', '34.4', '25.6', '27.6', '27.5', '42.7', '10.7'], ['23.1', '21.2', '20.0', '40.2', '33.6', '30.8', '29.1', '47.0', '18.5'], ['21.9', '20.9', '19.5', '39.2', '31.6', '30.0', '28.9', '45.5', '15.4'], ['24.1', '23.5', '20.6'... | column | ['p@1', 'p@3', 'p@5', 'MAP', 'p@1', 'p@3', 'p@5', 'MAP', 'p@1'] | ['EVPI'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B1 ∪ B2 || p@1</th> <th>B1 ∪ B2 || p@3</th> <th>B1 ∪ B2 || p@5</th> <th>B1 ∪ B2 || MAP</th> <th>V1 ∩ V2 || p@1</th> <th>V1 ∩ V2 || p@3</th> <th>V1 ∩ V2 || p@5</th> <th>V1 ∩ V2 || M... | Table 2 | table_2 | P18-1255 | 7 | acl2018 | We first describe the results of the different models when evaluated against the expert annotations we collect on 500 samples (4). Since the annotators had a low agreement on a single best, we evaluate against the union of the best annotations (B1 ∪ B2 in Table 2) and against the intersection of the valid annotations (... 
| [1, 1, 1, 1, 1, 1, 2, 2, 2, 0, 1, 1, 1, 1] | ['We first describe the results of the different models when evaluated against the expert annotations we collect on 500 samples (4).', 'Since the annotators had a low agreement on a single best, we evaluate against the union of the best annotations (B1 ∪ B2 in Table 2) and against the intersection of the valid annotati... | [None, ['B1 ∪ B2', 'V1 ∩ V2'], ['Random', 'Bag-of-ngrams', 'Community QA'], ['Community QA', 'Neural (p q)'], ['Neural (p a)', 'Neural (p q a)', 'Neural (p q)'], ['EVPI', 'Neural (p q a)'], ['EVPI', 'Neural (p q a)'], ['EVPI'], None, None, ['p@1'], ['Bag-of-ngrams', 'Random'], ['Community QA', 'Neural (p q)', 'Neural (... | 1 |
P18-2002table_1 | Comparison of validation and test set perplexity for r-RNTNs with f mapping (K = 100 for PTB, K = 376 for text8) versus s-RNNs and m-RNN. r-RNTNs with the same H as corresponding s-RNNs significantly increase model capacity and performance with no computational cost. The RNTN was not run on text8 due to the number of p... | 4 | [['Method', 's-RNN', 'H', '100'], ['Method', 'r-RNTN f', 'H', '100'], ['Method', 'RNTN', 'H', '100'], ['Method', 'm-RNN', 'H', '100'], ['Method', 's-RNN', 'H', '150'], ['Method', 'r-RNTN f', 'H', '150'], ['Method', 'GRU', 'H', '244'], ['Method', 'GRU', 'H', '650'], ['Method', 'r-GRU f', 'H', '244'], ['Method', 'LSTM', ... | 2 | [['PTB', '# Params'], ['PTB', 'Test PPL'], ['text8', '# Params'], ['text8', 'Test PPL']] | [['2M', '146.7', '7.6M', '236.4'], ['3M', '131.2', '11.4M', '190.1'], ['103M', '128.8', '388M', '-'], ['3M', '164.2', '11.4M', '895'], ['3M', '133.7', '11.4M', '207.9'], ['5.3M', '126.4', '19.8M', '171.7'], ['9.6M', '92.2', '-', '-'], ['15.5M', '90.3', '-', '-'], ['15.5M', '87.5', '-', '-'], ['10M', '88.8', '-', '-'], ... | column | ['# Params', 'Test PPL', '# Params', 'Test PPL'] | ['r-RNTN f'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PTB || # Params</th> <th>PTB || Test PPL</th> <th>text8 || # Params</th> <th>text8 || Test PPL</th> </tr> </thead> <tbody> <tr> <td>Method || s-RNN || H || 100</td> <td>2M</td> ... | Table 1 | table_1 | P18-2002 | 4 | acl2018 | As shown in table 1, with an equal number of parameters, the r-RNTN with f mapping outperforms the s-RNN with a bigger hidden layer. It appears that heuristically allocating increased model capacity as done by the f based r-RNTN is a better way to increase performance than simply increasing hidden layer size, which als... 
| [1, 2, 1, 1, 1] | ['As shown in table 1, with an equal number of parameters, the r-RNTN with f mapping outperforms the s-RNN with a bigger hidden layer.', 'It appears that heuristically allocating increased model capacity as done by the f based r-RNTN is a better way to increase performance than simply increasing hidden layer size, whic... | [['# Params', 'r-RNTN f', 's-RNN'], ['r-RNTN f'], ['m-RNN'], None, ['r-RNTN f', 's-RNN', 'GRU', 'LSTM']] | 1 |
P18-2005table_1 | POS prediction accuracy [%] using the Trustpilot test set, stratified by SEX and AGE (higher is better), and the absolute difference (∆) within each bias group (smaller is better). The best result is indicated in bold. | 1 | [['BASELINE'], ['ADV']] | 2 | [['SEX', 'F'], ['SEX', 'M'], ['SEX', 'delta'], ['AGE', 'O45'], ['AGE', 'U35'], ['AGE', 'delta']] | [['90.9', '91.1', '0.2', '91.4', '89.9', '1.5'], ['92.2', '92.1', '0.1', '92.3', '92.0', '0.3']] | column | ['F', 'M', 'delta', 'O45', 'U35', 'delta'] | ['ADV'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SEX || F</th> <th>SEX || M</th> <th>SEX || delta</th> <th>AGE || O45</th> <th>AGE || U35</th> <th>AGE || delta</th> </tr> </thead> <tbody> <tr> <td>BASELINE</td> <td>90.9</... | Table 1 | table_1 | P18-2005 | 5 | acl2018 | Table 1 shows the results for the TrustPilot dataset. Observe that the disparity for the BASELINE tagger accuracy (the delta column), for AGE is larger than for SEX, consistent with the results of Hovy and Sogaard (2015). Our ADV method leads to a sizeable reduction in the difference in accuracy across both SEX and AGE... | [1, 1, 1, 1, 2] | ['Table 1 shows the results for the TrustPilot dataset.', 'Observe that the disparity for the BASELINE tagger accuracy (the delta column), for AGE is larger than for SEX, consistent with the results of Hovy and Sogaard (2015).', 'Our ADV method leads to a sizeable reduction in the difference in accuracy across both SEX... | [None, ['BASELINE', 'delta', 'AGE', 'SEX'], ['ADV', 'SEX', 'AGE', 'delta'], ['ADV', 'F', 'M', 'O45', 'U35', 'SEX', 'AGE'], None] | 1 |
P18-2005table_2 | POS predictive accuracy [%] over the AAVE dataset, stratified over the three domains, alongside the macro-average accuracy. The best result is indicated in bold. | 1 | [['BASELINE'], ['ADV']] | 1 | [['LYRICS'], ['SUBTITLES'], ['TWEETS'], ['Average']] | [['73.7', '81.4', '59.9', '71.7'], ['80.5', '85.8', '65.4', '77.0']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['ADV'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LYRICS</th> <th>SUBTITLES</th> <th>TWEETS</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>BASELINE</td> <td>73.7</td> <td>81.4</td> <td>59.9</td> <td>71.7</td> ... | Table 2 | table_2 | P18-2005 | 5 | acl2018 | Table 2 shows the results for the AAVE heldout domain. Note that we do not have annotations for SEX or AGE, and thus we only report the overall accuracy on this dataset. Note that ADV also significantly outperforms the BASELINE across the three heldout domains. | [1, 2, 1] | ['Table 2 shows the results for the AAVE heldout domain.', 'Note that we do not have annotations for SEX or AGE, and thus we only report the overall accuracy on this dataset.', 'Note that ADV also significantly outperforms the BASELINE across the three heldout domains.'] | [None, None, ['ADV', 'LYRICS', 'SUBTITLES', 'TWEETS', 'BASELINE']] | 1 |
P18-2010table_4 | Evaluation results on the dataset of polysemous verb classes by Korhonen et al. (2003). | 2 | [['Method', 'LDA-Frames'], ['Method', 'Triframes WATSET'], ['Method', 'NOAC'], ['Method', 'HOSG'], ['Method', 'Triadic Spectral'], ['Method', 'Triadic k-Means'], ['Method', 'Triframes CW'], ['Method', 'Whole'], ['Method', 'Singletons']] | 1 | [['nmPU'], ['niPU'], ['F1']] | [['52.60', '45.84', '48.98'], ['40.05', '62.09', '48.69'], ['37.19', '64.09', '47.07'], ['38.22', '43.76', '40.80'], ['35.76', '38.96', '36.86'], ['52.22', '27.43', '35.96'], ['18.05', '12.72', '14.92'], ['24.14', '79.09', '36.99'], ['0.00', '27.21', '0.00']] | column | ['nmPU', 'niPU', 'F1'] | ['LDA-Frames'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>nmPU</th> <th>niPU</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || LDA-Frames</td> <td>52.60</td> <td>45.84</td> <td>48.98</td> </tr> <tr> <td>Method || Tr... | Table 4 | table_4 | P18-2010 | 5 | acl2018 | Table 4 presents results on the second dataset for the best models identified on the first dataset. The LDA-Frames yielded the best results with our approach performing comparably in terms of the F1-score. We attribute the low performance of the Triframes method based on CW clustering to its hard partitioning output, w... | [1, 1, 1, 2] | ['Table 4 presents results on the second dataset for the best models identified on the first dataset.', 'The LDA-Frames yielded the best results with our approach performing comparably in terms of the F1-score.', 'We attribute the low performance of the Triframes method based on CW clustering to its hard partitioning o... | [None, ['LDA-Frames', 'F1'], ['Triframes CW'], None] | 1 |
P18-2012table_2 | Performance as a function of the number of RNN units with a fixed unit size of 64; averaged across 5 runs apart from the 16 unit (average across 10 runs). | 2 | [['# RNN units', '1'], ['# RNN units', '2'], ['# RNN units', '4'], ['# RNN units', '8'], ['# RNN units', '16'], ['# RNN units', '32']] | 1 | [['F1']] | [['90.53 ±0.31'], ['90.79 ±0.18'], ['90.64 ±0.24'], ['91.09 ±0.28'], ['91.48 ±0.22'], ['90.68 ±0.18']] | column | ['F1'] | ['# RNN units'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td># RNN units || 1</td> <td>90.53 ±0.31</td> </tr> <tr> <td># RNN units || 2</td> <td>90.79 ±0.18</td> </tr> <tr> <td># RNN units ||... | Table 2 | table_2 | P18-2012 | 4 | acl2018 | Table 2 shows performance as a function of the number of RNN units with a fixed unit size. The number of units is clearly a hyperparameter which
must be optimized for. We find good performance across the board (there is no catastrophic collapse in results) however when using 16 units we do outperform other models subst... | [1, 2, 1] | ['Table 2 shows performance as a function of the number of RNN units with a fixed unit size.', 'The number of units is clearly a hyperparameter which\nmust be optimized for.', 'We find good performance across the board (there is no catastrophic collapse in results) however when using 16 units we do outperform other mod... | [['# RNN units'], None, ['16', 'F1']] | 1 |
P18-2013table_3 | KBC performance for base, typed, and related formulations. Typed models outperform their base models across all datasets. | 2 | [['Model', 'E'], ['Model', 'DM+E'], ['Model', 'DM'], ['Model', 'TypeDM'], ['Model', 'Complex'], ['Model', 'TypeComplex']] | 2 | [['FB15K', 'MRR'], ['FB15K', 'HITS@1'], ['FB15K', 'HITS@10'], ['FB15K237', 'MRR'], ['FB15K237', 'HITS@1'], ['FB15K237', 'HITS@10'], ['YAGO3-10', 'MRR'], ['YAGO3-10', 'HITS@1'], ['YAGO3-10', 'HITS@10']] | [['23.40', '17.39', '35.29', '21.30', '14.51', '36.38', '7.87', '6.22', '10.00'], ['60.84', '49.53', '79.70', '38.15', '28.06', '58.02', '52.48', '38.72', '77.40'], ['67.47', '56.52', '84.86', '37.21', '27.43', '56.12', '55.31', '46.80', '70.76'], ['75.01', '66.07', '87.92', '38.70', '29.30', '57.36', '58.16', '51.36',... | column | ['MRR', 'HITS@1', 'HITS@10', 'MRR', 'HITS@1', 'HITS@10', 'MRR', 'HITS@1', 'HITS@10'] | ['TypeComplex'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FB15K || MRR</th> <th>FB15K || HITS@1</th> <th>FB15K || HITS@10</th> <th>FB15K237 || MRR</th> <th>FB15K237 || HITS@1</th> <th>FB15K237 || HITS@10</th> <th>YAGO3-10 || MRR</th> <th>... | Table 3 | table_3 | P18-2013 | 4 | acl2018 | Table 3 shows that TypeDM and TypeComplex dominate across all data sets. E by itself is understandably weak, and DM+E does not lift it much. Each typed model improves upon the corresponding base model on all measures, underscoring the value of type compatibility scores. To the best of our knowledge, the results of o... | [1, 1, 1, 2] | ['Table 3 shows that TypeDM and TypeComplex dominate across all data sets.', 'E by itself is understandably weak, and DM+E does not lift it much.', 'Each typed model improves upon the corresponding base model on all measures, underscoring the value of type compatibility scores.', 'To the best of our knowledge, the r... 
| [['TypeDM', 'TypeComplex'], ['E', 'DM+E'], ['TypeDM', 'TypeComplex', 'DM', 'Complex'], ['TypeDM', 'TypeComplex']] | 1 |
P18-2014table_1 | Relation extraction performance on ACE 2005 test dataset. * denotes significance at p < 0.05 compared to SPTree, ◇ denotes significance at p < 0.05 compared to the Baseline. | 2 | [['Model', 'SPTree'], ['Model', 'Baseline'], ['Model', 'No walks l = 1'], ['Model', '+ Walks l = 2'], ['Model', '+ Walks l = 4'], ['Model', '+ Walks l = 8']] | 1 | [['P'], ['R'], ['F1 (%)']] | [['70.1', '61.2', '65.3'], ['72.5', '53.3', '61.4*'], ['71.9', '55.6', '62.7'], ['69.9', '58.4', '63.6◇'], ['69.7', '59.5', '64.2◇'], ['71.5', '55.3', '62.4']] | column | ['P', 'R', 'F1 (%)'] | ['No walks l = 1', '+ Walks l = 2', '+ Walks l = 4'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Model || SPTree</td> <td>70.1</td> <td>61.2</td> <td>65.3</td> </tr> <tr> <td>Model || Baseline</td>... | Table 1 | table_1 | P18-2014 | 4 | acl2018 | Table 1 illustrates the performance of our proposed model in comparison with the SPTree system of Miwa and Bansal (2016) on ACE 2005. We use the same data split as SPTree to compare with their model. We retrained their model with gold entities in order to compare the performances on the relation extraction task. The Baselin... | [1, 1, 2, 1, 1, 1, 1, 1, 1] | ['Table 1 illustrates the performance of our proposed model in comparison with the SPTree system of Miwa and Bansal (2016) on ACE 2005.', 'We use the same data split as SPTree to compare with their model.', 'We retrained their model with gold entities in order to compare the performances on the relation extraction task.', '... | [['SPTree', 'No walks l = 1', '+ Walks l = 2', '+ Walks l = 4', '+ Walks l = 8'], ['SPTree', 'No walks l = 1', '+ Walks l = 2', '+ Walks l = 4', '+ Walks l = 8'], None, ['Baseline'], ['Baseline', 'F1 (%)'], ['No walks l = 1', 'F1 (%)'], ['+ Walks l = 2', 'F1 (%)'], ['+ Walks l = 4'], ['+ Walks l = 8']] | 1
P18-2014table_2 | Relation extraction performance (F1 %) on ACE 2005 development set for different number of entities. * denotes significance at p < 0.05 compared to l = 1. | 2 | [['# Entities', '2'], ['# Entities', '3'], ['# Entities', '[4, 6)'], ['# Entities', '[6, 12)'], ['# Entities', '[12, 23)']] | 1 | [['l = 1'], ['l = 2'], ['l = 4'], ['l = 8']] | [['71.2', '69.8', '72.9', '71.0'], ['70.1', '67.5', '67.8', '63.5*'], ['56.5', '59.7', '59.3', '59.9'], ['59.2', '64.2*', '62.2', '60.4'], ['54.7', '59.3', '62.3*', '55.0']] | column | ['F1', 'F1', 'F1', 'F1'] | ['# Entities'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>l = 1</th> <th>l = 2</th> <th>l = 4</th> <th>l = 8</th> </tr> </thead> <tbody> <tr> <td># Entities || 2</td> <td>71.2</td> <td>69.8</td> <td>72.9</td> <td>71.0</td> ... | Table 2 | table_2 | P18-2014 | 5 | acl2018 | Finally, we show the performance of the proposed model as a function of the number of entities in a sentence. Results in Table 2 reveal that for multi-pair sentences the model performs significantly better compared to the no-walks models, proving the effectiveness of the method. Additionally, it is observed that for mo... | [1, 1, 1, 1] | ['Finally, we show the performance of the proposed model as a function of the number of entities in a sentence.', 'Results in Table 2 reveal that for multi-pair sentences the model performs significantly better compared to the no-walks models, proving the effectiveness of the method.', 'Additionally, it is observed tha... | [['# Entities'], ['2', '3'], ['l = 1', 'l = 2', 'l = 4'], ['l = 8']] | 1 |
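The metric columns in these rows are Fβ scores derived from the P and R columns: F1 for the ACE 2005 relation-extraction tables, F0.5 (precision-weighted) for the CoNLL-2014 GEC table. A minimal sketch recomputing them — the `f_beta` helper name is mine; the values are taken from the rows above:

```python
def f_beta(p, r, beta=1.0):
    """F-beta score from precision/recall given in percent (beta < 1 favours precision)."""
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# SPTree row of the ACE 2005 table (P18-2014, Table 1): P=70.1, R=61.2
print(round(f_beta(70.1, 61.2), 1))         # 65.3, matching the F1 (%) column

# "normal seq2seq" row of the GEC table (P18-1097, Table 2): P=61.06, R=18.49
print(round(f_beta(61.06, 18.49, 0.5), 2))  # 41.81, matching the F0.5 column
```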