| table_id_paper | caption | row_header_level | row_headers | column_header_level | column_headers | contents | metrics_loc | metrics_type | target_entity | table_html_clean | table_name | table_id | paper_id | page_no | dir | description | class_sentence | sentences | header_mention | valid |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
D18-1052table_1 | Performance on SQuAD dev set with the PIQA constraint (top), and without the constraint (bottom). See Section 4 for the description of the terms. | 4 | [['Constraint', 'PI', 'Model', 'TF-IDF'], ['Constraint', 'PI', 'Model', 'LSTM'], ['Constraint', 'PI', 'Model', 'LSTM+SA'], ['Constraint', 'PI', 'Model', 'LSTM+ELMo'], ['Constraint', 'PI', 'Model', 'LSTM+SA+ELMo'], ['Constraint', 'None', 'Model', 'Rajpurkar et al. (2016)'], ['Constraint', 'None', 'Model', 'Yu et al. (20... | 1 | [['F1 (%)'], ['EM (%)']] | [['15.0', '3.9'], ['57.2', '46.8'], ['59.8', '49.0'], ['60.9', '50.9'], ['62.7', '52.7'], ['51.0', '40.0'], ['89.3', '82.5']] | column | ['F1 (%)', 'EM (%)'] | ['LSTM+SA+ELMo'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 (%)</th> <th>EM (%)</th> </tr> </thead> <tbody> <tr> <td>Constraint || PI || Model || TF-IDF</td> <td>15.0</td> <td>3.9</td> </tr> <tr> <td>Constraint || PI || Model || LSTM... | Table 1 | table_1 | D18-1052 | 4 | emnlp2018 | Results. Table 1 shows the results for the PIQA baselines (top) and the unconstrained state of the art (bottom). First, the TF-IDF model performs poorly, which signifies the limitations of traditional document retrieval models for the task. Second, we note that the addition of self-attention makes a significant impact ... | [2, 1, 1, 1, 1, 1, 2] | ['Results.', 'Table 1 shows the results for the PIQA baselines (top) and the unconstrained state of the art (bottom).', 'First, the TF-IDF model performs poorly, which signifies the limitations of traditional document retrieval models for the task.', 'Second, we note that the addition of self-attention makes a signific... | [None, ['PI', 'None'], ['TF-IDF'], ['LSTM', 'LSTM+SA', 'F1 (%)'], ['LSTM', 'LSTM+SA', 'LSTM+ELMo', 'LSTM+SA+ELMo', 'F1 (%)'], ['LSTM+SA+ELMo', 'Rajpurkar et al. (2016)', 'Yu et al. (2018)', 'F1 (%)'], None] | 1 |
D18-1057table_1 | Word similarity and analogy results (ρ× 100 and analogy accuracy). We denote context overlap enhanced method with “+ CO”. 300-dimensional embeddings are used. The datasets used include WS353 (Finkelstein et al., 2001), SL999 (Hill et al., 2016), SCWS (Huang et al., 2012), RW (Luong et al., 2013), MEN (Bruni et al., 201... | 2 | [['Method', 'GloVe'], ['Method', 'GloVe + CO'], ['Method', 'SGNS'], ['Method', 'Swivel'], ['Method', 'Swivel + CO']] | 2 | [['WS353', '-'], ['SL999', '-'], ['SCWS', '-'], ['RW', '-'], ['MEN', '-'], ['MT771', '-'], ['Analogy', 'Sem'], ['Analogy', 'Syn']] | [['66.8', '35.0', '59.3', '44.1', '74.7', '69.9', '76.0', '75.3'], ['69.7', '38.0', '63.8', '45.1', '77.6', '71.3', '78.6', '75.0'], ['71.1', '40.7', '67.1', '52.8', '78.1', '70.4', '67.2', '77.3'], ['73.1', '39.9', '66.4', '53.4', '79.1', '71.7', '78.6', '78.0'], ['74.0', '41.2', '66.3', '53.6', '79.8', '72.5', '79.4'... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['GloVe + CO', 'Swivel + CO'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WS353 || -</th> <th>SL999 || -</th> <th>SCWS || -</th> <th>RW || -</th> <th>MEN || -</th> <th>MT771 || -</th> <th>Analogy || Sem</th> <th>Analogy || Syn</th> </tr> </thead> <t... | Table 1 | table_1 | D18-1057 | 4 | emnlp2018 | 4.2 Intrinsic Evalution. Table 1 shows the evaluation results of word similarity tasks and word analogy tasks. Word similarity is measured as the Spearman’s rank correlation ρ between human-judged similarity and cosine distance of word vectors. In word analogy task, the questions are answered over the whole vocabulary ... | [2, 1, 2, 2, 1, 2, 1, 2] | ['4.2 Intrinsic Evalution.', 'Table 1 shows the evaluation results of word similarity tasks and word analogy tasks.', 'Word similarity is measured as the Spearman’s rank correlation ρ between human-judged similarity and cosine distance of word vectors.', 'In word analogy task, the questions are answered over the whole ... | [None, None, None, None, ['GloVe', 'Swivel', 'SGNS'], ['SGNS'], ['Swivel + CO', 'WS353', 'SL999', 'SCWS', 'RW', 'MEN', 'MT771', 'Analogy'], None] | 1 |
D18-1060table_5 | The breakdown of performance on the VUA sequence labeling test set by POS tags. We show data statistics (count, % metaphor) on the training set. We only show POS tags whose % metaphor > 10. | 2 | [['POS', 'VERB'], ['POS', 'NOUN'], ['POS', 'ADP'], ['POS', 'ADJ'], ['POS', 'PART']] | 1 | [['#'], ['% metaphor'], ['P'], ['R'], ['F1.']] | [['20K', '18.1', '68.1', '71.9', '69.9'], ['20K', '13.6', '59.9', '60.8', '60.4'], ['13K', '28.0', '86.8', '89.0', '87.9'], ['9K', '11.5', '56.1', '60.6', '58.3'], ['3K', '10.1', '57.1', '59.1', '58.1']] | column | ['#', '% metaphor', 'P', 'R', 'F1.'] | ['POS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#</th> <th>% metaphor</th> <th>P</th> <th>R</th> <th>F1.</th> </tr> </thead> <tbody> <tr> <td>POS || VERB</td> <td>20K</td> <td>18.1</td> <td>68.1</td> <td>71.9</... | Table 5 | table_5 | D18-1060 | 3 | emnlp2018 | Table 5 reports the breakdown of performance by POS tags. Not surprisingly, tags with more data are easier to classify. Adposition is the easiest to identify as metaphorical and is also the most frequently metaphorical class (28%). On the other hand, particles are challenging to identify, since they are often associate... | [1, 2, 1, 2] | ['Table 5 reports the breakdown of performance by POS tags.', 'Not surprisingly, tags with more data are easier to classify.', 'Adposition is the easiest to identify as metaphorical and is also the most frequently metaphorical class (28%).', 'On the other hand, particles are challenging to identify, since they are ofte... | [['POS'], None, ['ADP'], None] | 1 |
D18-1060table_6 | Model performances for the verb classification task. Our models achieve strong performance on all datasets. The CLS model performs better than the SEQ model when only one word per sentence is annotated by human (TroFi and MOH-X). When all words in the sentence are accurately annotated (VUA), the SEQ model outperforms t... | 2 | [['Model', 'Lexical Baseline'], ['Model', 'Klebanov (2016)'], ['Model', 'Rei (2017)'], ['Model', 'Koper (2017)'], ['Model', 'Wu (2018) ensemble'], ['Model', 'CLS'], ['Model', 'SEQ']] | 2 | [['MOH-X (10 fold)', 'P'], ['MOH-X (10 fold)', 'R'], ['MOH-X (10 fold)', 'F1'], ['MOH-X (10 fold)', 'Acc.'], ['TroFi (10 fold)', 'P'], ['TroFi (10 fold)', 'R'], ['TroFi (10 fold)', 'F1'], ['TroFi (10 fold)', 'Acc.'], ['VUA - Test', 'P'], ['VUA - Test', 'R'], ['VUA - Test', 'F1'], ['VUA - Test', 'Acc.'], ['VUA - Test', ... | [['39.1', '26.7', '31.3', '43.6', '72.4', '55.7', '62.9', '71.4', '67.9', '40.7', '50.9', '76.4', '48.9'], ['-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '-', '60.0'], ['73.6', '76.1', '74.2', '74.8', '-', '-', '-', '-', '-', '-', '-', '-', '-'], ['-', '-', '-', '-', '-', '', '75.0', '-', '-', '-', '62.0', '-'... | column | ['P', 'R', 'F1', 'Acc.', 'P', 'R', 'F1', 'Acc.', 'P', 'R', 'F1', 'Acc.', 'MaF1'] | ['CLS', 'SEQ'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MOH-X (10 fold) || P</th> <th>MOH-X (10 fold) || R</th> <th>MOH-X (10 fold) || F1</th> <th>MOH-X (10 fold) || Acc.</th> <th>TroFi (10 fold) || P</th> <th>TroFi (10 fold) || R</th> <th>T... | Table 6 | table_6 | D18-1060 | 4 | emnlp2018 | Verb Classification Results. Table 6 shows performance on the verb classification task for three datasets (MOH-X , TroFi and VUA). Our models achieve strong performance on all datasets, outperforming existing models on the MOH-X and VUA datasets. On the MOH-X dataset, the CLS model outperforms the SEQ model, likely due... | [2, 1, 1, 1, 1, 2, 1, 2, 2] | ['Verb Classification Results.', 'Table 6 shows performance on the verb classification task for three datasets (MOH-X , TroFi and VUA).', 'Our models achieve strong performance on all datasets, outperforming existing models on the MOH-X and VUA datasets.', 'On the MOH-X dataset, the CLS model outperforms the SEQ model,... | [None, ['MOH-X (10 fold)', 'TroFi (10 fold)', 'VUA - Test'], ['CLS', 'SEQ', 'Lexical Baseline', 'Klebanov (2016)', 'Rei (2017)', 'Koper (2017)', 'Wu (2018) ensemble', 'MOH-X (10 fold)', 'TroFi (10 fold)', 'VUA - Test'], ['MOH-X (10 fold)', 'CLS', 'SEQ'], ['VUA - Test', 'CLS', 'SEQ'], None, ['Koper (2017)', 'TroFi (10 f... | 1 |
D18-1062table_2 | Experimental results on Chinese-English dataset. The results of baseline models are cited from Zhang et al. (2017). | 4 | [['Model', 'MonoGiza w/o emb.', '#seeds', '0'], ['Model', 'MonoGiza w/ emb.', '#seeds', '0'], ['Model', 'TM', '#seeds', '50'], ['Model', 'IA', '#seeds', '100'], ['Model', 'Zhang et al. (2017)', '#seeds', '0'], ['Model', 'Ours', '#seeds', '0']] | 1 | [['Accuracy (%)']] | [['0.05'], ['0.09'], ['0.29'], ['21.79'], ['43.31'], ['51.37']] | column | ['Accuracy (%)'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || MonoGiza w/o emb. || #seeds || 0</td> <td>0.05</td> </tr> <tr> <td>Model || MonoGiza w/ emb. || #seeds || 0</td> <td>0.09<... | Table 2 | table_2 | D18-1062 | 4 | emnlp2018 | Table 2 summarizes the performance of baseline models and our approach. The results of baseline models are cited from Zhang et al. (2017). As we can see from the table, our model could achieve superior performance compared with other baseline models. | [1, 2, 1] | ['Table 2 summarizes the performance of baseline models and our approach.', 'The results of baseline models are cited from Zhang et al. (2017).', 'As we can see from the table, our model could achieve superior performance compared with other baseline models.'] | [['MonoGiza w/ emb.', 'TM', 'IA', 'Zhang et al. (2017)', 'Ours'], ['MonoGiza w/ emb.', 'TM', 'IA', 'Zhang et al. (2017)'], ['Ours', 'MonoGiza w/ emb.', 'TM', 'IA', 'Zhang et al. (2017)']] | 1 |
D18-1067table_3 | Performance of sentiment classifiers on OPT. | 8 | [['Model', 'LSTM', 'Train', 'SST', 'Dev', 'SST', 'Test', 'OPT'], ['Model', 'BiLSTM', 'Train', 'SST', 'Dev', 'SST', 'Test', 'OPT'], ['Model', 'CNN', 'Train', 'SST', 'Dev', 'SST', 'Test', 'OPT'], ['Model', 'CNN', 'Train', 'TSA', 'Dev', 'TSA', 'Test', 'OPT'], ['Model', 'RNN(char)', 'Train', 'TSA', 'Dev', 'OPT', 'Test', 'O... | 1 | [['Acc%']] | [['63.20'], ['63.60'], ['59.60'], ['67.60'], ['55.20'], ['80.19']] | column | ['Acc%'] | ['SST', 'TSA', 'OPT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc%</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM || Train || SST || Dev || SST || Test || OPT</td> <td>63.20</td> </tr> <tr> <td>Model || BiLSTM || Train || SST || Dev || SST || T... | Table 3 | table_3 | D18-1067 | 3 | emnlp2018 | Table 3 shows the performance of several deep learning models trained on either SST or TSA datasets and evaluated on the OPT dataset. Note that the Dev set was used for model selection. As can be seen from the table, the models trained on the sentiment datasets perform poorly on the optimism/pessimism dataset. For exam... | [1, 2, 1, 1, 2, 2, 2] | ['Table 3 shows the performance of several deep learning models trained on either SST or TSA datasets and evaluated on the OPT dataset.', 'Note that the Dev set was used for model selection.', 'As can be seen from the table, the models trained on the sentiment datasets perform poorly on the optimism/pessimism dataset.'... | [['LSTM', 'BiLSTM', 'CNN', 'RNN(char)', 'SST', 'TSA'], ['Dev'], ['LSTM', 'BiLSTM', 'CNN', 'RNN(char)'], ['GRUStack', 'CNN', 'Acc%', 'TSA', 'SST'], ['SST', 'TSA'], None, None] | 1 |
D18-1071table_1 | The results of human annotations (C = Consistency, L = Logic, E = Emotion). | 2 | [['Method', 'S2S'], ['Method', 'S2S-AW'], ['Method', 'E-SCBA']] | 2 | [['Overall', 'C'], ['Overall', 'L'], ['Overall', 'E'], ['Happy', 'C'], ['Happy', 'L'], ['Happy', 'E'], ['Like', 'C'], ['Like', 'L'], ['Like', 'E'], ['Surprise', 'C'], ['Surprise', 'L'], ['Surprise', 'E'], ['Sad', 'C'], ['Sad', 'L'], ['Sad', 'E'], ['Fear', 'C'], ['Fear', 'L'], ['Fear', 'E'], ['Angry', 'C'], ['Angry', 'L... | [['1.301', '0.776', '0.197', '1.368', '0.924', '0.285', '1.341', '0.757', '0.217', '1.186', '0.723', '0.076', '1.393', '0.928', '0.237', '1.245', '0.782', '0.215', '1.205', '0.535', '0.113', '1.368', '0.680', '0.236'], ['1.348', '1.063', '0.231', '1.437', '1.097', '0.237', '1.418', '1.125', '0.276', '1.213', '0.916', '... | column | ['C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E', 'C', 'L', 'E'] | ['E-SCBA'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Overall || C</th> <th>Overall || L</th> <th>Overall || E</th> <th>Happy || C</th> <th>Happy || L</th> <th>Happy || E</th> <th>Like || C</th> <th>Like || L</th> <th>Like || E</... | Table 1 | table_1 | D18-1071 | 4 | emnlp2018 | Table 1 depicts the human annotations (t-test: p < 0.05 for C and L, p < 0.01 for E). Overall, E-SCBA outperforms S2S-AW on all three metrics, where the compound information plays a positive role in the comprehensive promotion. However, in Surprise and Angry, the grades of Consistency and Logic are not satisfactory, si... | [1, 1, 1, 1, 2, 2] | ['Table 1 depicts the human annotations (t-test: p < 0.05 for C and L, p < 0.01 for E).', 'Overall, E-SCBA outperforms S2S-AW on all three metrics, where the compound information plays a positive role in the comprehensive promotion.', 'However, in Surprise and Angry, the grades of Consistency and Logic are not satisfac... | [None, ['Overall', 'E-SCBA', 'S2S-AW', 'C', 'L', 'E'], ['Surprise', 'Angry', 'E-SCBA', 'S2S', 'S2S-AW', 'C', 'L'], ['Surprise', 'E', 'E-SCBA', 'S2S', 'S2S-AW'], ['Surprise'], ['E']] | 1 |
D18-1074table_3 | Results of classification evaluations. | 5 | [['Perez2017 lin', 'Features', 'Ngram+', 'Method', 'Co'], ['Perez2017 lin', 'Features', 'Ngram+', 'Method', 'Co+Ct'], ['Perez2017 lin', 'Features', 'Ngram+', 'Method', 'All'], ['Perez2017 vec', 'Features', 'Vec-con', 'Method', 'Co'], ['Perez2017 vec', 'Features', 'Vec-con', 'Method', 'Co+Ct'], ['Perez2017 vec', 'Featur... | 1 | [['Precision'], ['Recall'], ['F1']] | [['0.62', '0.62', '0.62'], ['0.60', '0.61', '0.61'], ['0.61', '0.61', '0.62'], ['0.60', '0.58', '0.59'], ['0.61', '0.59', '0.60'], ['0.61', '0.57', '0.58'], ['0.65', '0.63', '0.64'], ['0.68', '0.64', '0.65'], ['0.67', '0.67', '0.67'], ['0.65', '0.64', '0.65'], ['0.70', '0.66', '0.68'], ['0.74', '0.67', '0.70']] | column | ['Precision', 'Recall', 'F1'] | ['Proposed model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Perez2017 lin || Features || Ngram+ || Method || Co</td> <td>0.62</td> <td>0.62</td> <td>0.62</td> ... | Table 3 | table_3 | D18-1074 | 4 | emnlp2018 | The results of our experiments are summarized in the Table 3. Findings indicate that our proposed approach leads to a small performance boost after using the topic embeddings. Thus, our simple feature augmentation approach has the potential to make classifiers more robust. In addition, the contextual information (“Ct”)... | [1, 1, 2, 2, 2, 2, 2, 0, 1] | ['The results of our experiments are summarized in the Table 3.', 'Findings indicate that our proposed approach leads to a small performance boost after using the topic embeddings.', 'Thus, our simple feature augmentation approach has the potential to make classifiers more robust.', 'In addition, the contextual informa... | [None, ['Proposed model'], ['Proposed model'], ['Co+Ct', 'Co+Ct+T'], None, ['Proposed model'], ['Proposed model'], None, ['Proposed model']] | 1 |
D18-1075table_3 | Human evaluation results of the AEM model and the Seq2Seq model. | 2 | [['Models', 'Seq2Seq'], ['Models', 'AEM'], ['Models', 'Seq2Seq+Attention'], ['Models', 'AEM+Attention']] | 1 | [['Fluency'], ['Coherence'], ['G-Score']] | [['6.97', '3.51', '4.95'], ['8.11', '4.18', '5.82'], ['5.11', '3.30', '4.10'], ['7.92', '4.97', '6.27']] | column | ['Fluency', 'Coherence', 'G-Score'] | ['AEM', 'AEM+Attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Coherence</th> <th>G-Score</th> </tr> </thead> <tbody> <tr> <td>Models || Seq2Seq</td> <td>6.97</td> <td>3.51</td> <td>4.95</td> </tr> <tr> <td>Model... | Table 3 | table_3 | D18-1075 | 4 | emnlp2018 | Table 3 shows the results of human evaluation. The inter-annotator agreement is satisfactory considering the difficulty of human evaluation. The Pearson’s correlation coefficient is 0.69 on coherence and 0.57 on fluency, with p < 0.0001. First, it is clear that the AEM model outperforms the Seq2Seq model with a large... | [1, 2, 2, 1, 1, 2, 2] | ['Table 3 shows the results of human evaluation.', 'The inter-annotator agreement is satisfactory considering the difficulty of human evaluation.', 'The Pearson’s correlation coefficient is 0.69 on coherence and 0.57 on fluency, with p < 0.0001.', 'First, it is clear that the AEM model outperforms the Seq2Seq model w... | [None, None, None, ['AEM', 'Seq2Seq'], ['Seq2Seq+Attention', 'AEM+Attention', 'Coherence'], None, ['AEM+Attention', 'G-Score']] | 1 |
D18-1078table_2 | Accuracy results over the test set ASR transcripts, for w2v and skip-thought (ST). | 2 | [['Method', 'all-yes'], ['Method', 'w2v title-speech'], ['Method', 'w2v arg-speech'], ['Method', 'w2v title-sentence'], ['Method', 'w2v arg-sentence'], ['Method', 'ST arg-sentence']] | 1 | [['Accuracy (%)']] | [['39.8'], ['49.8'], ['57.6'], ['55.8'], ['64.6'], ['60.2']] | column | ['Accuracy (%)'] | ['w2v title-speech', 'w2v arg-speech', 'w2v title-sentence', 'w2v arg-sentence', 'ST arg-sentence'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Method || all-yes</td> <td>39.8</td> </tr> <tr> <td>Method || w2v title-speech</td> <td>49.8</td> </tr> <tr> <td>Method ... | Table 2 | table_2 | D18-1078 | 5 | emnlp2018 | 3.2 Results. Table 2 shows the accuracy of all w2v configurations. Representing an argument using its more verbose several-sentences-long content outperforms using its short single-sentence title. On the speech side, considering each sentence separately is preferable to using the entire speech. We compared the results ... | [2, 1, 1, 2, 1, 1, 1, 2, 1] | ['3.2 Results.', 'Table 2 shows the accuracy of all w2v configurations.', 'Representing an argument using its more verbose several-sentences-long content outperforms using its short single-sentence title.', 'On the speech side, considering each sentence separately is preferable to using the entire speech.', 'We compare... | [None, ['w2v title-speech', 'w2v arg-speech', 'w2v title-sentence', 'w2v arg-sentence'], ['w2v arg-sentence', 'w2v title-sentence', 'Accuracy (%)'], None, ['w2v arg-sentence', 'ST arg-sentence'], ['ST arg-sentence', 'Accuracy (%)'], ['w2v arg-sentence', 'Accuracy (%)'], ['w2v arg-sentence'], ['w2v arg-sentence', 'all-y... | 1 |
D18-1084table_1 | Results of our model compared with prior published results. Note that Liang et al. (2017) also trains a model on additional data, but here we only compare models trained on Visual Genome. Also note that our models employ greedy search, whereas other models employ beam search. | 1 | [['Krause et al. (Template)'], ['Krause et al. (Flat w/o object detector)'], ['Krause et al. (Flat)'], ['Krause et al. (Hierarchical)'], ['Liang et al. (w/o discriminator)'], ['Liang et al.'], ['Ours (XE training w/o rep. penalty)'], ['Ours (XE training w/ rep. penalty)'], ['Ours (SCST training w/o rep. penalty)'], ['O... | 1 | [['METEOR'], ['CIDEr'], ['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4']] | [['14.31', '12.15', '37.47', '21.02', '12.30', '7.38'], ['12.82', '11.06', '34.04', '19.95', '12.20', '7.71'], ['13.54', '11.14', '37.30', '21.70', '13.07', '8.07'], ['15.95', '13.52', '41.90', '24.11', '14.23', '8.69'], ['16.57', '15.07', '41.86', '24.33', '14.56', '8.99'], ['17.12', '16.87', '41.99', '24.86', '14.89'... | column | ['METEOR', 'CIDEr', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4'] | ['Ours (XE training w/o rep. penalty)', 'Ours (XE training w/ rep. penalty)', 'Ours (SCST training w/o rep. penalty)', 'Ours (SCST training w/ rep. penalty)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>METEOR</th> <th>CIDEr</th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>Krause et al. (Template)</td> <td>14.31</td> ... | Table 1 | table_1 | D18-1084 | 4 | emnlp2018 | Results. Table 1 shows the main experimental results. Our baseline cross-entropy captioning model gets similar scores to the original flat model. When the repetition penalty is applied to a model trained with cross-entropy, we see a large improvement on CIDEr and a minor improvement on other metrics. When combining the... | [2, 1, 1, 1, 1, 2, 2] | ['Results.', 'Table 1 shows the main experimental results.', 'Our baseline cross-entropy captioning model gets similar scores to the original flat model.', 'When the repetition penalty is applied to a model trained with cross-entropy, we see a large improvement on CIDEr and a minor improvement on other metrics.', 'When... | [None, None, ['Ours (XE training w/o rep. penalty)', 'Krause et al. (Flat)'], ['Ours (XE training w/ rep. penalty)', 'Ours (SCST training w/ rep. penalty)', 'CIDEr', 'METEOR', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4'], ['Ours (XE training w/ rep. penalty)', 'CIDEr', 'METEOR', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4'], ['Ou... | 1 |
D18-1085table_1 | Correlation results with the manual metrics of Pyramid, Responsiveness, and Readability using the correlation metrics of Pearson r, Spearman ρ, and Kendall τ. The best correlations are specified in bold, and the underlined scores show the top correlations in the TAC AESOP 2011. | 2 | [['Metric', 'C S IIITH3'], ['Metric', 'DemokritosGR1'], ['Metric', 'Catolicasc1'], ['Metric', 'ROUGE-1'], ['Metric', 'ROUGE-2'], ['Metric', 'ROUGE-SU4'], ['Metric', 'ROUGE-WE-1'], ['Metric', 'ROUGE-WE-2'], ['Metric', 'ROUGE-WE-SU4'], ['Metric', 'ROUGE-G-1'], ['Metric', 'ROUGE-G-2'], ['Metric', 'ROUGE-G-SU4']] | 2 | [['Pyramid', 'Pearson'], ['Pyramid', 'Spearman'], ['Pyramid', 'Kendall'], ['Responsiveness', 'Pearson'], ['Responsiveness', 'Spearman'], ['Responsiveness', 'Kendall'], ['Readability', 'Pearson'], ['Readability', 'Spearman'], ['Readability', 'Kendall']] | [['0.965', '0.903', '0.758', '0.933', '0.781', '0.596', '0.731', '0.358', '0.242'], ['0.974', '0.897', '0.747', '0.947', '0.845', '0.675', '0.794', '0.497', '0.359'], ['0.967', '0.902', '0.735', '0.950', '0.837', '0.666', '0.819', '0.494', '0.366'], ['0.966', '0.909', '0.747', '0.935', '0.818', '0.633', '0.790', '0.391... | column | ['Pearson', 'Spearman', 'Kendall', 'Pearson', 'Spearman', 'Kendall', 'Pearson', 'Spearman', 'Kendall'] | ['ROUGE-G-1', 'ROUGE-G-2', 'ROUGE-G-SU4', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4', 'ROUGE-WE-1', 'ROUGE-WE-2', 'ROUGE-WE-SU4'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pyramid || Pearson</th> <th>Pyramid || Spearman</th> <th>Pyramid || Kendall</th> <th>Responsiveness || Pearson</th> <th>Responsiveness || Spearman</th> <th>Responsiveness || Kendall</th> ... | Table 1 | table_1 | D18-1085 | 5 | emnlp2018 | We evaluate ROUGE-G, against the top metrics (C S IIITH3, DemokritosGR1, Catolicasc1) among the 23 metrics participated in TAC AESOP 2011, ROUGE, and the most recent related work (ROUGE-WE) (Table 1). Overall results support our proposal to consider semantics besides surface with ROUGE. We analyze the correlation resul... | [1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 2, 1, 2, 1, 1] | ['We evaluate ROUGE-G, against the top metrics (C S IIITH3, DemokritosGR1, Catolicasc1) among the 23 metrics participated in TAC AESOP 2011, ROUGE, and the most recent related work (ROUGE-WE) (Table 1).', 'Overall results support our proposal to consider semantics besides surface with ROUGE.', 'We analyze the correlati... | [['ROUGE-G-1', 'ROUGE-G-2', 'ROUGE-G-SU4', 'C S IIITH3', 'DemokritosGR1', 'Catolicasc1', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4', 'ROUGE-WE-1', 'ROUGE-WE-2', 'ROUGE-WE-SU4'], ['ROUGE-G-1', 'ROUGE-G-2', 'ROUGE-G-SU4', 'ROUGE-1', 'ROUGE-2', 'ROUGE-SU4', 'ROUGE-WE-1', 'ROUGE-WE-2', 'ROUGE-WE-SU4'], None, ['ROUGE-G-2', 'Pyramid'... | 1 |
D18-1086table_1 | Results for AMR-to-text | 2 | [['Model', 'Our model (unguided NLG)'], ['Model', 'NeuralAMR (Konstas et al. 2017)'], ['Model', 'TSP (Song et al. 2016)'], ['Model', 'TreeToStr (Flanigan et al. 2016)']] | 1 | [['BLEU']] | [['21.1'], ['22.0'], ['22.4'], ['23.0']] | column | ['BLEU'] | ['Our model (unguided NLG)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || Our model (unguided NLG)</td> <td>21.1</td> </tr> <tr> <td>Model || NeuralAMR (Konstas et al. 2017)</td> <td>22.0</td> </tr> ... | Table 1 | table_1 | D18-1086 | 3 | emnlp2018 | AMR-to-Text baseline comparison. We compare our baseline model (described in §3.2) against previous works in AMR-to-text using the data from the recent SemEval-2016 Task 8 (May, 2016, LDC2015E86). Table 1 reports BLEU scores comparing our model against previous works. Here, we see that our model achieves a BLEU score ... | [2, 2, 1, 1] | ['AMR-to-Text baseline comparison.', 'We compare our baseline model (described in §3.2) against previous works in AMR-to-text using the data from the recent SemEval-2016 Task 8 (May, 2016, LDC2015E86).', 'Table 1 reports BLEU scores comparing our model against previous works.', 'Here, we see that our model achieves a ... | [None, ['NeuralAMR (Konstas et al. 2017)', 'TSP (Song et al. 2016)', 'TreeToStr (Flanigan et al. 2016)'], ['Our model (unguided NLG)', 'NeuralAMR (Konstas et al. 2017)', 'TSP (Song et al. 2016)', 'TreeToStr (Flanigan et al. 2016)', 'BLEU'], ['Our model (unguided NLG)', 'NeuralAMR (Konstas et al. 2017)', 'TSP (Song et a... | 1 |
D18-1086table_2 | BLEU and ROUGE results for guided and unguided models using test dataset. | 2 | [['Model', 'Guided NLG (Oracle)'], ['Model', 'Guided NLG'], ['Model', 'Unguided NLG']] | 2 | [['-', 'BLEU'], ['F1 ROUGE', 'R-1'], ['F1 ROUGE', 'R-2'], ['F1 ROUGE', 'R-L']] | [['61.3', '79.4', '63.7', '76.4'], ['45.8', '70.7', '49.5', '64.9'], ['29.6', '68.6', '39.6', '61.3']] | column | ['BLEU', 'R-1', 'R-2', 'R-L'] | ['Guided NLG (Oracle)', 'Guided NLG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU || -</th> <th>F1 ROUGE || R-1</th> <th>F1 ROUGE || R-2</th> <th>F1 ROUGE || R-L</th> </tr> </thead> <tbody> <tr> <td>Model || Guided NLG (Oracle)</td> <td>61.3</td> <td>79.... | Table 2 | table_2 | D18-1086 | 4 | emnlp2018 | Guided NLG for AMR-to-Text. In this experiment we apply our guided NLG mechanism described in §3.3 to our baseline seq2seq model. To isolate the effects of guidance we skip the actual summarization process and proceed to directly generating the summary text from the gold standard summary AMR graphs from the Proxy Repor... | [2, 2, 2, 0, 2, 2, 2, 2, 2, 1, 1] | ['Guided NLG for AMR-to-Text.', 'In this experiment we apply our guided NLG mechanism described in §3.3 to our baseline seq2seq model.', 'To isolate the effects of guidance we skip the actual summarization process and proceed to directly generating the summary text from the gold standard summary AMR graphs from the Pro... | [None, ['Guided NLG'], None, None, ['Guided NLG (Oracle)', 'Guided NLG'], ['Guided NLG (Oracle)'], ['Guided NLG (Oracle)'], ['Guided NLG'], ['Guided NLG'], ['Guided NLG (Oracle)', 'Guided NLG', 'Unguided NLG'], ['Guided NLG', 'Unguided NLG', 'BLEU', 'R-2', 'Guided NLG (Oracle)']] | 1 |
D18-1110table_3 | Evaluation results on ATIS where Accori and Accpara denote the accuracy on the original and paraphrased development set of ATIS, respectively. | 2 | [['Feature', 'Word Order'], ['Feature', 'Dep'], ['Feature', 'Cons'], ['Feature', 'Dep + Cons'], ['Feature', 'Word Order + Dep'], ['Feature', 'Word Order + Cons'], ['Feature', 'Word Order + Dep + Cons']] | 1 | [['Accori'], ['Accpara'], ['Diff.']] | [['84.8', '78.7', '-6.1'], ['83.5', '80.1', '-3.4'], ['82.9', '77.3', '-5.6'], ['84.0', '80.7', '-3.3'], ['85.2', '82.3', '-2.9'], ['84.9', '79.9', '-5.0'], ['86.0', '83.5', '-2.5']] | column | ['Accori', 'Accpara', 'Diff.'] | ['Word Order', 'Dep', 'Cons', 'Dep + Cons', 'Word Order + Dep', 'Word Order + Cons', 'Word Order + Dep + Cons'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accori</th> <th>Accpara</th> <th>Diff.</th> </tr> </thead> <tbody> <tr> <td>Feature || Word Order</td> <td>84.8</td> <td>78.7</td> <td>-6.1</td> </tr> <tr> <td>Featur... | Table 3 | table_3 | D18-1110 | 5 | emnlp2018 | Table 3 shows the results of our model on the second type of adversarial examples, i.e., the paraphrased ATIS development set. We also report the result of our model on the original ATIS development set. We can see that (1) no matter which feature our model uses, the performance degrades at least 2.5% on the paraphrase... | [1, 1, 1, 1, 1, 2] | ['Table 3 shows the results of our model on the second type of adversarial examples, i.e., the paraphrased ATIS development set.', 'We also report the result of our model on the original ATIS development set.', 'We can see that (1) no matter which feature our model uses, the performance degrades at least 2.5% on the pa... | [['Accpara'], ['Accori'], ['Word Order + Dep + Cons', 'Accpara', 'Diff.'], ['Word Order'], ['Word Order + Dep + Cons'], None] | 1 |
D18-1111table_2 | Results compared to baselines. YN17 result is taken from Yin and Neubig (2017). ASN result is taken from Rabinovich et al. (2017) | 1 | [['SEQ2SEQ'], ['YN17'], ['ASN'], ['ASN + SUPATT'], ['RECODE']] | 2 | [['HS', 'Acc'], ['HS', 'BLEU'], ['Django', 'Acc'], ['Django', 'BLEU']] | [['0.0', '55.0', '13.9', '67.3'], ['16.2', '75.8', '71.6', '84.5'], ['18.2', '77.6', '-', '-'], ['22.7', '79.2', '-', '-'], ['19.6', '78.4', '72.8', '84.7']] | column | ['Acc', 'BLEU', 'Acc', 'BLEU'] | ['RECODE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HS || Acc</th> <th>HS || BLEU</th> <th>Django || Acc</th> <th>Django || BLEU</th> </tr> </thead> <tbody> <tr> <td>SEQ2SEQ</td> <td>0.0</td> <td>55.0</td> <td>13.9</td> ... | Table 2 | table_2 | D18-1111 | 4 | emnlp2018 | 5.1 Results. Table 2 shows that RECODE outperforms the baselines in both BLEU and accuracy, providing evidence for the effectiveness of incorporating retrieval methods into tree-based approaches. We ran statistical significance tests for RECODE and YN17, using bootstrap resampling with N = 10,000. For the BLEU scores o... | [2, 1, 2, 2, 2, 2, 1, 1, 2] | ['5.1 Results.', 'Table 2 shows that RECODE outperforms the baselines in both BLEU and accuracy, providing evidence for the effectiveness of incorporating retrieval methods into tree-based approaches.', 'We ran statistical significance tests for RECODE and YN17, using bootstrap resampling with N = 10,000.', 'For the BL... | [None, ['RECODE', 'SEQ2SEQ', 'YN17', 'ASN', 'ASN + SUPATT', 'Acc', 'BLEU'], ['RECODE', 'YN17'], ['BLEU', 'HS', 'Django'], ['Django', 'HS', 'YN17'], ['HS'], ['RECODE', 'ASN + SUPATT', 'HS'], ['RECODE', 'ASN', 'ASN + SUPATT'], ['ASN + SUPATT', 'RECODE']] | 1 |
D18-1112table_1 | Results on the WikiSQL (above) and Stackoverflow (below). | 1 | [['Template'], ['Seq2Seq'], ['Seq2Seq + Copy'], ['Tree2Seq'], ['Graph2Seq-PGE'], ['Graph2Seq-NGE'], ['(Iyer et al. 2016)'], ['Graph2Seq-PGE'], ['Graph2Seq-NGE']] | 1 | [['BLEU-4'], ['Grammar.'], ['Correct.']] | [['15.71', '1.50', '-'], ['20.91', '2.54', '62.1%'], ['24.12', '2.65', '64.5%'], ['26.67', '2.70', '66.8%'], ['38.97', '3.81', '79.2%'], ['34.28', '3.26', '75.3%'], ['18.4', '3.16', '64.2%'], ['23.3', '3.23', '70.2%'], ['21.9', '2.97', '65.1%']] | column | ['BLEU-4', 'Grammar.', 'Correct.'] | ['Graph2Seq-PGE', 'Graph2Seq-NGE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-4</th> <th>Grammar.</th> <th>Correct.</th> </tr> </thead> <tbody> <tr> <td>Template</td> <td>15.71</td> <td>1.50</td> <td>-</td> </tr> <tr> <td>Seq2Seq</td> ... | Table 1 | table_1 | D18-1112 | 4 | emnlp2018 | Results and Discussion. Table 1 summarizes the results of our models and baselines. Although the template-based method achieves decent BLEU scores, its grammaticality score is substantially worse than other baselines. We can see that on both two datasets, our Graph2Seq models perform significantly better than the Seq2S... | [2, 1, 1, 1, 2, 2, 2, 1, 1] | ['Results and Discussion.', 'Table 1 summarizes the results of our models and baselines.', 'Although the template-based method achieves decent BLEU scores, its grammaticality score is substantially worse than other baselines.', 'We can see that on both two datasets, our Graph2Seq models perform significantly better tha... | [None, ['Template', 'Seq2Seq', 'Seq2Seq + Copy', 'Tree2Seq', 'Graph2Seq-PGE', 'Graph2Seq-NGE', '(Iyer et al. 2016)'], ['Template', 'BLEU-4', 'Grammar.', 'Seq2Seq', 'Seq2Seq + Copy', 'Tree2Seq'], ['Graph2Seq-PGE', 'Graph2Seq-NGE', 'Seq2Seq', 'Tree2Seq'], ['Graph2Seq-PGE', 'Graph2Seq-NGE'], None, ['Graph2Seq-NGE'], ['Gra... | 1 |
D18-1114table_6 | Breakdown by property of binary classification F1 on SPR1. All new results outperforming prior work (CRF) in bold. | 1 | [['micro f1'], ['macro f1']] | 1 | [['CRF'], ['SPR1'], ['MT:SPR1'], ['SPR1+2']] | [['81.7', '82.2', '83.3', '83.3'], ['65.9', '69.3', '71.1', '70.4']] | row | ['micro f1', 'macro f1'] | ['SPR1', 'MT:SPR1', 'SPR1+2'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CRF</th> <th>SPR1</th> <th>MT:SPR1</th> <th>SPR1+2</th> </tr> </thead> <tbody> <tr> <td>micro f1</td> <td>81.7</td> <td>82.2</td> <td>83.3</td> <td>83.3</td> </tr> ... | Table 6 | table_6 | D18-1114 | 10 | emnlp2018 | Table 6 shows binary classification F1 on SPR1. All new results outperforming prior work (CRF) in bold. | [1, 1] | ['Table 6 shows binary classification F1 on SPR1.', 'All new results outperforming prior work (CRF) in bold.'] | [None, ['SPR1', 'MT:SPR1', 'SPR1+2', 'CRF']] | 1 |
D18-1124table_1 | Main results in terms of F1 score (%). w/s: # of words decoded per second, number with † is retrieved from the original paper. | 2 | [['Models', 'Finkel and Manning (2009)'], ['Models', 'Lu and Roth (2015)'], ['Models', 'Muis and Lu (2017)'], ['Models', 'Katiyar and Cardie (2018)'], ['Models', 'Ju et al. (2018)'], ['Models', 'Ours'], ['Models', '- char-level LSTM'], ['Models', '- pre-trained embeddings'], ['Models', '- dropout layer']] | 1 | [['ACE04'], ['ACE05'], ['GENIA'], ['w/s']] | [['-', '-', '70.3', '38'], ['62.8', '62.5', '70.3', '454'], ['64.5', '63.1', '70.8', '263'], ['72.7', '70.5', '73.6', '-'], ['-', '72.2', '74.7', '-'], ['73.3', '73.0', '73.9', '1445'], ['72.3', '71.9', '72.1', '1546'], ['71.3', '71.5', '72.0', '1452'], ['71.7', '72.0', '72.7', '1440']] | column | ['F1', 'F1', 'F1', 'F1'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ACE04</th> <th>ACE05</th> <th>GENIA</th> <th>w/s</th> </tr> </thead> <tbody> <tr> <td>Models || Finkel and Manning (2009)</td> <td>-</td> <td>-</td> <td>70.3</td> <td>... | Table 1 | table_1 | D18-1124 | 4 | emnlp2018 | The main results are reported in Table 1. Our neural transition-based model achieves the best results in ACE datasets and comparable results in GENIA dataset in terms of F1 measure. We hypothesize that the performance gain of our model compared with other methods is largely due to improved performance on the portions o... | [1, 1, 2, 0, 2, 0, 0, 0, 1, 0, 2, 2, 2, 1, 2, 2, 1, 1] | ['The main results are reported in Table 1.', 'Our neural transition-based model achieves the best results in ACE datasets and comparable results in GENIA dataset in terms of F1 measure.', 'We hypothesize that the performance gain of our model compared with other methods is largely due to improved performance on the po... 
| [None, ['Ours', 'ACE04', 'ACE05', 'GENIA'], ['Ours'], None, None, None, None, None, ['Ours', 'ACE04', 'ACE05', 'GENIA'], None, None, ['Lu and Roth (2015)', 'Muis and Lu (2017)'], ['Lu and Roth (2015)', 'Muis and Lu (2017)'], ['Ours', 'Lu and Roth (2015)', 'Muis and Lu (2017)', 'w/s'], None, ['Ours', '- char-level LSTM'... | 1 |
D18-1126table_1 | Results on the WikilinksNED dev and test sets. Our model including features achieves state-ofthe-art performance on the test set, compared to both the reported numbers from Eshel et al. (2017) as well as their released software. Incorporating character CNNs surprisingly leads to lower performance compared to these simp... | 2 | [['Model', 'Eshel et al. (2017)'], ['Model', 'Eshel system release'], ['Model', 'GRU+ATTN'], ['Model', 'GRU+ATTN+FEATS'], ['Model', 'GRU'], ['Model', 'GRU+ATTN'], ['Model', 'GRU+ATTN+FEATS'], ['Model', 'GRU+ATTN+CNN']] | 1 | [['Accuracy on Test (%)'], ['Accuracy on Dev (%)']] | [['73.0', ''], ['72.2', ''], ['74.5', ''], ['75.8', ''], ['', '73.4'], ['', '74.4'], ['', '74.9'], ['', '73.8']] | column | ['Accuracy on Test (%)', 'Accuracy on Dev (%)'] | ['GRU+ATTN', 'GRU+ATTN+FEATS', 'GRU+ATTN+CNN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy on Test (%)</th> <th>Accuracy on Dev (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Eshel et al. (2017)</td> <td>73.0</td> <td></td> </tr> <tr> <td>Model || Eshel s... | Table 1 | table_1 | D18-1126 | 3 | emnlp2018 | Results. The model set forth in this section is the basis for the remaining models in this paper;. we call it the GRU model as that is the only context encoding mechanism it uses. As shown in Table 1, this GRU model gets a score of 73.4 on the WikilinksNED development set. Results. In Table 1, we see that our model wit... | [2, 2, 2, 1, 2, 1, 1, 2, 2, 1, 1, 2, 1, 1, 2, 1] | ['Results.', 'The model set forth in this section is the basis for the remaining models in this paper;.', 'we call it the GRU model as that is the only context encoding mechanism it uses.', 'As shown in Table 1, this GRU model gets a score of 73.4 on the WikilinksNED development set.', 'Results.', 'In Table 1, we see t... 
| [None, ['GRU'], ['GRU'], ['GRU', 'Accuracy on Dev (%)'], None, ['GRU', 'GRU+ATTN', 'Accuracy on Dev (%)'], ['GRU+ATTN', 'Eshel et al. (2017)', 'Accuracy on Test (%)'], None, ['GRU+ATTN'], ['GRU+ATTN+CNN'], ['GRU+ATTN+CNN', 'Accuracy on Dev (%)'], ['GRU+ATTN+CNN'], ['GRU+ATTN+FEATS'], ['GRU+ATTN+FEATS', 'Accuracy on Dev... | 1 |
D18-1130table_2 | Validation & Test results on all datasets. AttSum* are our models, including variants with features and multi-task loss. Others indicate previous best published results. All improvements over AttSum are statistically significant (α = 0.05) according to the McNemar test with continuity correction (Dietterich, 1998). | 2 | [['LAMBADA', 'GA Reader (Chu et al. 2017)'], ['LAMBADA', 'MAGE (48) (Dhingra et al. 2017)'], ['LAMBADA', 'MAGE (64) (Dhingra et al. 2017)'], ['LAMBADA', 'GA + C-GRU (Dhingra et al. 2018)'], ['LAMBADA', 'AttSum'], ['LAMBADA', 'AttSum + L1'], ['LAMBADA', 'AttSum + L2'], ['LAMBADA', 'AttSum-Feat'], ['LAMBADA', 'AttSum-Fea... | 1 | [['Val'], ['Test']] | [['-', '49.00'], ['51.10', '51.60'], ['52.10', '51.10'], ['-', '55.69'], ['56.03', '55.60'], ['58.35', '56.86'], ['58.08', '57.29'], ['59.62', '59.05'], ['60.22', '59.23'], ['60.13', '58.47'], ['78.50', '74.90'], ['75.30', '69.70'], ['77.10', '72.20'], ['77.80', '72.0'], ['79.60', '74.0'], ['74.35', '69.96'], ['76.20',... | column | ['accuracy', 'accuracy'] | ['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat + L2'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Val</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>LAMBADA || GA Reader (Chu et al. 2017)</td> <td>-</td> <td>49.00</td> </tr> <tr> <td>LAMBADA || MAGE (48) (Dhingra et al.... | Table 2 | table_2 | D18-1130 | 4 | emnlp2018 | Results and Discussion. Table 2 shows the full results of our best models on the LAMBADA and CBT-NE datasets, and compares them to recent, best-performing results in the literature. For both tasks the inclusion of either entity features or multi-task objectives leads to large statistically significant increases in vali... 
| [2, 1, 2, 1, 1, 1, 1, 1, 1, 1] | ['Results and Discussion.', 'Table 2 shows the full results of our best models on the LAMBADA and CBT-NE datasets, and compares them to recent, best-performing results in the literature.', 'For both tasks the inclusion of either entity features or multi-task objectives leads to large statistically significant increases... | [None, ['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat + L2', 'LAMBADA', 'CBT-NE', 'GA Reader (Chu et al. 2017)', 'MAGE (48) (Dhingra et al. 2017)', 'MAGE (64) (Dhingra et al. 2017)', 'GA + C-GRU (Dhingra et al. 2018)', 'EpiReader (Trischler et al. 2016)', 'DIM Reader (Liu et al... | 1 |
D18-1130table_3 | Ablation results on validation sets, see text for definitions of the numeric columns and models. | 2 | [['LAMBADA', 'AttSum'], ['LAMBADA', 'AttSum + L1'], ['LAMBADA', 'AttSum + L2'], ['LAMBADA', 'AttSum-Feat'], ['LAMBADA', 'AttSum-Feat + L1'], ['LAMBADA', 'AttSum-Feat + L2'], ['CBT-NE', 'AttSum'], ['CBT-NE', 'AttSum + L1'], ['CBT-NE', 'AttSum + L2'], ['CBT-NE', 'AttSum-Feat'], ['CBT-NE', 'AttSum-Feat + L1'], ['CBT-NE', ... | 1 | [['All'], ['Entity'], ['Speaker'], ['Quote']] | [['56.03', '75.17', '74.81', '73.31'], ['58.35', '78.51', '78.38', '79.42'], ['58.08', '78.17', '77.96', '76.76'], ['59.62', '79.40', '80.34', '79.68'], ['60.22', '82.00', '82.98', '81.67'], ['60.14', '82.06', '83.06', '82.60'], ['74.35', '76.28', '75.08', '74.96'], ['76.20', '78.03', '76.98', '77.33'], ['76.80', '77.4... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat + L2'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>All</th> <th>Entity</th> <th>Speaker</th> <th>Quote</th> </tr> </thead> <tbody> <tr> <td>LAMBADA || AttSum</td> <td>56.03</td> <td>75.17</td> <td>74.81</td> <td>73.31<... | Table 3 | table_3 | D18-1130 | 5 | emnlp2018 | Table 3 considers the performance of the different models based on a segmentation of the data. Here we consider examples where:. (1) Entity if the answer is a named entity;. (2) Speaker if the answer is a named entity and the speaker of quote;. (3) Quote if the answer is found within a quoted speech. Note that Speaker ... | [1, 2, 2, 2, 2, 2, 1, 1, 1] | ['Table 3 considers the performance of the different models based on a segmentation of the data.', 'Here we consider examples where:.', '(1) Entity if the answer is a named entity;.', '(2) Speaker if the answer is a named entity and the speaker of quote;.', '(3) Quote if the answer is found within a quoted speech.', 'N... 
| [['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat + L2', 'All', 'Entity', 'Speaker', 'Quote', 'LAMBADA', 'CBT-NE'], None, ['Entity'], ['Speaker'], ['Quote'], ['Entity', 'Speaker', 'Quote'], ['AttSum', 'AttSum + L1', 'AttSum + L2', 'AttSum-Feat', 'AttSum-Feat + L1', 'AttSum-Feat +... | 1 |
D18-1138table_2 | Results of human evaluation. | 2 | [['Model', 'CAE'], ['Model', 'MAE'], ['Model', 'SMAE']] | 1 | [['Sentiment'], ['Content'], ['Fluency']] | [['6.55', '4.46', '5.98'], ['6.64', '4.43', '5.36'], ['6.57', '5.98', '6.69']] | column | ['Sentiment', 'Content', 'Fluency'] | ['SMAE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentiment</th> <th>Content</th> <th>Fluency</th> </tr> </thead> <tbody> <tr> <td>Model || CAE</td> <td>6.55</td> <td>4.46</td> <td>5.98</td> </tr> <tr> <td>Model || M... | Table 2 | table_2 | D18-1138 | 4 | emnlp2018 | Table 2 shows the evaluation results. Our model has obvious advantage over the baseline systems in content preservation, and also performs well in other aspects. | [1, 1] | ['Table 2 shows the evaluation results.', 'Our model has obvious advantage over the baseline systems in content preservation, and also performs well in other aspects.'] | [None, ['SMAE', 'CAE', 'MAE', 'Content']] | 1 |
D18-1139table_1 | Results on the GermEval data, aspect + sentiment task. Micro-averaged F1-score for both aspect category and aspect polarity classification as computed by the GermEval evaluation script. In the bottom part of the table, we report results from (Wojatzki et al., 2017). | 1 | [['Pipeline LSTM + word2vec'], ['End-to-end LSTM + word2vec'], ['Pipeline CNN + word2vec'], ['End-to-end CNN + word2vec'], ['Pipeline LSTM + glove'], ['End-to-end LSTM + glove'], ['Pipeline CNN + glove'], ['End-to-end CNN + glove'], ['Pipeline LSTM + fasttext'], ['End-to-end LSTM + fasttext'], ['Pipeline CNN + fasttext... | 1 | [['development set'], ['synchronic test set'], ['diachronic test set']] | [['.350', '.297', '.342'], ['.378', '.315', '.383'], ['.350', '.298', '.343'], ['.400', '.319', '.388'], ['.350', '.297', '.342'], ['.378', '.315', '.384'], ['.350', '.298', '.342'], ['.415', '.315', '.390'], ['.350', '.297', '.342'], ['.378', '.315', '.384'], ['.342', '.295', '.342'], ['.511', '.423', '.465'], ['-', '... | column | ['f1-score', 'f1-score', 'f1-score'] | ['Pipeline LSTM + word2vec', 'End-to-end LSTM + word2vec', 'Pipeline CNN + word2vec', 'End-to-end CNN + word2vec', 'Pipeline LSTM + glove', 'End-to-end LSTM + glove', 'Pipeline CNN + glove', 'End-to-end CNN + glove', 'Pipeline LSTM + fasttext', 'End-to-end LSTM + fasttext', 'Pipeline CNN + fasttext', 'End-to-end CNN + ... | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>development set</th> <th>synchronic test set</th> <th>diachronic test set</th> </tr> </thead> <tbody> <tr> <td>Pipeline LSTM + word2vec</td> <td>.350</td> <td>.297</td> <td>.342... | Table 1 | table_1 | D18-1139 | 5 | emnlp2018 | Aspect polarity. Table 1 shows the results of our experiments, as well as the results of our strong baselines. Note that the majority class baseline already provides good results. This is due to highly unbalanced data;. 
the aspect category “Allgemein” (“general”), e.g., constitutes 61.5% of the cases. This imbalance ma... | [2, 1, 1, 2, 2, 2, 1, 2, 1, 1] | ['Aspect polarity.', 'Table 1 shows the results of our experiments, as well as the results of our strong baselines.', 'Note that the majority class baseline already provides good results.', 'This is due to highly unbalanced data;.', 'the aspect category “Allgemein” (“general”), e.g., constitutes 61.5% of the cases.', '... | [None, ['Pipeline LSTM + word2vec', 'End-to-end LSTM + word2vec', 'Pipeline CNN + word2vec', 'End-to-end CNN + word2vec', 'Pipeline LSTM + glove', 'End-to-end LSTM + glove', 'Pipeline CNN + glove', 'End-to-end CNN + glove', 'Pipeline LSTM + fasttext', 'End-to-end LSTM + fasttext', 'Pipeline CNN + fasttext', 'End-to-end... | 1 |
D18-1139table_2 | Micro-averaged F1-score for the prediction of aspect categories only (i.e. without taking polarity into account at all) as computed by the GermEval evaluation script. The results in the bottom part of the table are taken from (Wojatzki et al., 2017). | 1 | [['End-to-end LSTM + word2vec'], ['End-to-end CNN + word2vec'], ['End-to-end LSTM + glove'], ['End-to-end CNN + glove'], ['End-to-end LSTM + fasttext'], ['End-to-end CNN + fasttext'], ['majority class baseline'], ['GermEval baseline'], ['GermEval best submission']] | 1 | [['development set'], ['synchronic test set'], ['diachronic test set']] | [['.517', '.442', '.455'], ['.521', '.436', '.470'], ['.517', '.442', '.456'], ['.537', '.457', '.480'], ['.517', '.442', '.456'], ['.623', '.523', '.557'], ['-', '.442', '.456'], ['-', '.481', '.495'], ['-', '.482', '.460']] | column | ['f1-score', 'f1-score', 'f1-score'] | ['End-to-end LSTM + word2vec', 'End-to-end CNN + word2vec', 'End-to-end LSTM + glove', 'End-to-end CNN + glove', 'End-to-end LSTM + fasttext', 'End-to-end CNN + fasttext'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>development set</th> <th>synchronic test set</th> <th>diachronic test set</th> </tr> </thead> <tbody> <tr> <td>End-to-end LSTM + word2vec</td> <td>.517</td> <td>.442</td> <td>.4... | Table 2 | table_2 | D18-1139 | 5 | emnlp2018 | Aspect category only. Even though our architectures are designed for the task of joint prediction of aspect category and polarity, we can also evaluate them on the detection of aspect categories only. Table 2 shows the results for this task. First of all, we can see that the SVM-based GermEval baseline model has very d... 
| [2, 2, 1, 1, 2, 1, 1] | ['Aspect category only.', 'Even though our architectures are designed for the task of joint prediction of aspect category and polarity, we can also evaluate them on the detection of aspect categories only.', 'Table 2 shows the results for this task.', 'First of all, we can see that the SVM-based GermEval baseline model... | [None, None, None, ['GermEval baseline', 'GermEval best submission', 'synchronic test set', 'diachronic test set'], ['GermEval baseline'], ['End-to-end LSTM + fasttext', 'End-to-end CNN + fasttext'], ['End-to-end CNN + fasttext', 'majority class baseline', 'GermEval baseline', 'GermEval best submission']] | 1 |
D18-1146table_2 | Evaluation results of HSLDAs in comparison with sommeliers’ performance. | 1 | [['HSLDA1 Monovarietal'], ['HSLDA2 Blend'], ['HSLDA3 Balanced'], ['Sommeliers']] | 2 | [['F1 Scores', 'Training Set'], ['F1 Scores', 'Testing Set']] | [['71.1', '68.4'], ['62.5', '59.1'], ['59.8', '56.4'], ['NA', '62.1']] | column | ['F1 Scores', 'F1 Scores'] | ['HSLDA1 Monovarietal', 'HSLDA2 Blend', 'HSLDA3 Balanced'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 Scores || Training Set</th> <th>F1 Scores || Testing Set</th> </tr> </thead> <tbody> <tr> <td>HSLDA1 Monovarietal</td> <td>71.1</td> <td>68.4</td> </tr> <tr> <td>HSLDA2 Blen... | Table 2 | table_2 | D18-1146 | 5 | emnlp2018 | Table 2 shows the average F1 scores of different models versus sommeliers’ performance. Likewise, the sommeliers’ performance measures represent a conservative(ly higher) estimate since scores lower than 60% in section 1 were removed. We find the HSLDA model, especially of monovarietals, outperforms sommeliers by 6.3%,... | [1, 1, 1] | ['Table 2 shows the average F1 scores of different models versus sommeliers’ performance.', 'Likewise, the sommeliers’ performance measures represent a conservative(ly higher) estimate since scores lower than 60% in section 1 were removed.', 'We find the HSLDA model, especially of monovarietals, outperforms sommeliers ... | [['HSLDA1 Monovarietal', 'HSLDA2 Blend', 'HSLDA3 Balanced', 'Sommeliers', 'F1 Scores'], ['Sommeliers'], ['HSLDA1 Monovarietal', 'Sommeliers', 'Testing Set']] | 1 |
D18-1147table_3 | Emotion detection results using 10-fold cross validation. The numbers are percentages. | 3 | [['Method', 'Joy', 'ConvLexLSTM'], ['Method', 'Joy', 'ConvLSTM'], ['Method', 'Joy', 'CNN'], ['Method', 'Joy', 'LSTM'], ['Method', 'Joy', 'Seven-Lexicon'], ['Method', '-', 'C-ConvLSTM'], ['Method', '-', 'SWAT'], ['Method', '-', 'EmoSVM'], ['Method', 'Sad', 'ConvLexLSTM'], ['Method', 'Sad', 'ConvLSTM'], ['Method', 'Sad',... | 2 | [['B-DS', 'Pr'], ['B-DS', 'Re'], ['B-DS', 'F1'], ['L-DS', 'Pr'], ['L-DS', 'Re'], ['L-DS', 'F1']] | [['92.3', '94.3', '93.2', '90.4', '89.3', '89.8'], ['86.6', '88.4', '87.4', '87.0', '83.0', '85.0'], ['85.0', '84.0', '84.5', '82.2', '82.8', '82.5'], ['86.0', '86.6', '86.3', '85.0', '83.0', '84.0'], ['63.4', '87.3', '73.45', '60.0', '85.1', '70.37'], ['86.2', '87.0', '86.6', '85.0', '82.0', '83.47'], ['66.0', '68.0',... | column | ['Pr', 'Re', 'F1', 'Pr', 'Re', 'F1'] | ['ConvLexLSTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B-DS || Pr</th> <th>B-DS || Re</th> <th>B-DS || F1</th> <th>L-DS || Pr</th> <th>L-DS || Re</th> <th>L-DS || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Joy || ConvLexLSTM<... | Table 3 | table_3 | D18-1147 | 4 | emnlp2018 | Table 3 shows the results of this comparison. As can be seen from the table, ConvLexLSTM achieves the best results consistently throughout all experiments in terms of all compared measures. This ablation experiment confirms our intuition that all components are contributing to the final emotion detection. For example, ... | [1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 2, 1, 2, 2] | ['Table 3 shows the results of this comparison.', 'As can be seen from the table, ConvLexLSTM achieves the best results consistently throughout all experiments in terms of all compared measures.', 'This ablation experiment confirms our intuition that all components are contributing to the final emotion detection.', 'Fo... 
| [None, ['ConvLexLSTM', 'B-DS', 'L-DS', 'Pr', 'Re', 'F1'], None, ['ConvLexLSTM', 'ConvLSTM', 'B-DS', 'F1', 'Joy', 'Sad'], ['ConvLSTM', 'F1'], ['ConvLexLSTM'], ['Seven-Lexicon'], ['ConvLexLSTM', 'C-ConvLSTM', 'SWAT', 'EmoSVM'], None, ['C-ConvLSTM', 'SWAT', 'EmoSVM', 'B-DS', 'L-DS'], ['Seven-Lexicon'], ['ConvLexLSTM', 'Co... | 1 |
D18-1148table_4 | Prediction results (Pearson r, using unigrams + topics) using full 10% data vs. users with 30+ tweets. The number of tweets used in each task is listed to highlight the fact that the “User to County” tasks use less tweets than the “all” tasks. | 1 | [['User to County'], ['Nuser−tweets'], ['Tweet to County (all)'], ['County (all)'], ['Nall−tweets']] | 1 | [['Income'], ['Educat.'], ['Life Satis.'], ['Heart Disease']] | [['.82', '.88', '.47', '.75'], ['1.350B', '1.350B', '1.356B', '1.360B'], ['.72', '.81', '.36', '.71'], ['.73', '.82', '.31', '.72'], ['1.621B', '1.621B', '1.628B', '1.634B']] | column | ['r', 'r', 'r', 'r'] | ['User to County', 'Tweet to County (all)', 'County (all)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Income</th> <th>Educat.</th> <th>Life Satis.</th> <th>Heart Disease</th> </tr> </thead> <tbody> <tr> <td>User to County</td> <td>.82</td> <td>.88</td> <td>.47</td> <td... | Table 4 | table_4 | D18-1148 | 4 | emnlp2018 | In Table 4 we remove the 30+ tweet requirement from the “Tweet to County” and “County” methods and compare against the “User to County” method (with the 30+ tweet requirement). Again we see the “User to County” method outperforms all others in spite of the fact that the “User to County” approach uses less data than bot... | [1, 1] | ['In Table 4 we remove the 30+ tweet requirement from the “Tweet to County” and “County” methods and compare against the “User to County” method (with the 30+ tweet requirement).', 'Again we see the “User to County” method outperforms all others in spite of the fact that the “User to County” approach uses less data tha... | [['User to County', 'Tweet to County (all)', 'County (all)'], ['User to County', 'Tweet to County (all)', 'County (all)']] | 1 |
D18-1148table_5 | 1% sample prediction results (Pearson r) using topics + unigrams. ∗ same counties as the 10% prediction task. | 1 | [['Tweet to County'], ['County'], ['User to County'], ['Nuser−tweets'], ['County (all)'], ['Nall−tweets'], ['Ncounties']] | 1 | [['Income'], ['Income'], ['Educat.'], ['Educat.'], ['Life Satis.'], ['Life Satis.'], ['Heart Disease'], ['Heart Disease']] | [['.71', '.62', '.77', '.71', '.35', '.32', '.64', '.63'], ['.70', '.60', '.76', '.67', '.32', '.28', '.62', '.62'], ['.76', '.70', '.79', '.74', '.39', '.28', '.66', '.66'], ['127M', '130M', '127M', '130M', '127M', '130M', '127M', '131M'], ['.75', '.67', '.83', '.77', '.37', '.34', '.68', '.66'], ['191M', '195M', '191... | column | ['r', 'r', 'r', 'r', 'r', 'r', 'r', 'r'] | ['Nuser−tweets'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Income</th> <th>Income</th> <th>Educat.</th> <th>Educat.</th> <th>Life Satis.</th> <th>Life Satis.</th> <th>Heart Disease</th> <th>Heart Disease</th> </tr> </thead> <tbody> ... | Table 5 | table_5 | D18-1148 | 5 | emnlp2018 | 1% data. In Table 5 we repeat the above experiment on a 1% Twitter sample. Here we see that the “User to County” method outperforms both the “Tweet to County” and “County” methods (with all three tasks using the same number of tweets). When we compare the “User to County” and “County (all)” methods we see the “User to ... | [2, 1, 1, 1, 1, 1] | ['1% data.', 'In Table 5 we repeat the above experiment on a 1% Twitter sample.', 'Here we see that the “User to County” method outperforms both the “Tweet to County” and “County” methods (with all three tasks using the same number of tweets).', 'When we compare the “User to County” and “County (all)” methods we see th... | [None, None, ['User to County', 'Tweet to County', 'County'], ['User to County', 'County (all)', 'Income', 'Life Satis.'], ['User to County', 'County (all)'], ['User to County', 'County (all)']] | 1 |
D18-1152table_3 | Language modeling perplexity on PTB test set (lower is better). LSTM numbers are taken from Lei et al. (2017b). ` denotes the number of layers. Bold font indicates best performance. | 6 | [['Model', 'LSTM', 'l', '2', '# Params.', '24M'], ['Model', 'LSTM', 'l', '3', '# Params.', '24M'], ['Model', 'RRNN(B)', 'l', '2', '# Params.', '10M'], ['Model', 'RRNN(B)m+', 'l', '2', '# Params.', '10M'], ['Model', 'RRNN(C)', 'l', '2', '# Params.', '10M'], ['Model', 'RRNN(F)', 'l', '2', '# Params.', '10M'], ['Model', '... | 1 | [['Dev.'], ['Test']] | [['73.3', '71.4'], ['78.8', '76.2'], ['73.1', '69.2'], ['75.1', '71.7'], ['72.5', '69.5'], ['69.5', '66.3'], ['68.7', '65.2'], ['70.8', '66.9'], ['70.0', '67.0'], ['66.0', '63.1']] | column | ['perplexity', 'perplexity'] | ['RRNN(B)', 'RRNN(B)m+', 'RRNN(C)', 'RRNN(F)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev.</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM || l || 2 || # Params. || 24M</td> <td>73.3</td> <td>71.4</td> </tr> <tr> <td>Model || LSTM || l || 3 || #... | Table 3 | table_3 | D18-1152 | 8 | emnlp2018 | Results. Following Collins et al. (2017) and Melis et al. (2018), we compare models controlling for parameter budget. Table 3 summarizes language modeling perplexities on PTB test set. The middle block compares all models with two layers and 10M trainable parameters. RRNN(B) and RRNN(C) achieve roughly the same perform... | [2, 2, 1, 1, 1, 1, 1, 1, 2, 1] | ['Results.', 'Following Collins et al. (2017) and Melis et al. (2018), we compare models controlling for parameter budget.', 'Table 3 summarizes language modeling perplexities on PTB test set.', 'The middle block compares all models with two layers and 10M trainable parameters.', 'RRNN(B) and RRNN(C) achieve roughly th... 
| [None, ['# Params.'], None, ['RRNN(B)', 'RRNN(B)m+', 'RRNN(C)', 'RRNN(F)', 'l', '2', '# Params.', '10M'], ['RRNN(B)', 'RRNN(C)', 'RRNN(F)', 'RRNN(B)m+', 'Test', 'l', '2', '# Params.', '10M'], ['RRNN(B)', 'RRNN(B)m+', 'RRNN(C)', 'RRNN(F)', 'l', '3', '# Params.', '24M'], ['RRNN(F)', 'RRNN(B)', 'RRNN(B)m+', 'RRNN(C)', 'l'... | 1 |
D18-1156table_1 | Overall performance comparing to the state-of-the-art methods with golden-standard entities. | 2 | [['Method', 'Cross-Event'], ['Method', 'JointBeam'], ['Method', 'DMCNN'], ['Method', 'PSL'], ['Method', 'JRNN'], ['Method', 'dbRNN'], ['Method', 'JMEE']] | 2 | [['Trigger Identification (%)', 'P'], ['Trigger Identification (%)', 'R'], ['Trigger Identification (%)', 'F1'], ['Trigger Classification (%)', 'P'], ['Trigger Classification (%)', 'R'], ['Trigger Classification (%)', 'F1'], ['Argument Identification (%)', 'P'], ['Argument Identification (%)', 'R'], ['Argument Identifi... | [['N/A', 'N/A', 'N/A', '68.7', '68.9', '68.8', '50.9', '49.7', '50.3', '45.1', '44.1', '44.6'], ['76.9', '65.0', '70.4', '73.7', '62.3', '67.5', '69.8', '47.9', '56.8', '64.7', '44.4', '52.7'], ['80.4', '67.7', '73.5', '75.6', '63.6', '69.1', '68.8', '51.9', '59.1', '62.2', '46.9', '53.5'], ['N/A', 'N/A', 'N/A', '75.3'... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['JMEE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Trigger Identification (%) || P</th> <th>Trigger Identification (%) || R</th> <th>Trigger Identification (%) || F1</th> <th>Trigger Classification (%) || P</th> <th>Trigger Classification (%) || ... | Table 1 | table_1 | D18-1156 | 7 | emnlp2018 | Table 1 shows the overall performance comparing to the above state-of-the-art methods with golden-standard entities. From the table, we can see that our JMEE framework achieves the best F1 scores for both trigger classification and argumentrelated subtasks among all the compared methods. There is a significant gain wit... 
| [1, 1, 1, 2] | ['Table 1 shows the overall performance comparing to the above state-of-the-art methods with golden-standard entities.', 'From the table, we can see that our JMEE framework achieves the best F1 scores for both trigger classification and argumentrelated subtasks among all the compared methods.', 'There is a significant ... | [['JMEE', 'Cross-Event', 'JointBeam', 'DMCNN', 'PSL', 'JRNN', 'dbRNN'], ['JMEE', 'F1', 'Trigger Classification (%)', 'Argument Identification (%)', 'Argument Role (%)', 'Cross-Event', 'JointBeam', 'DMCNN', 'PSL', 'JRNN', 'dbRNN'], ['JMEE', 'Trigger Classification (%)', 'Argument Role (%)', 'dbRNN'], ['JMEE']] | 1 |
D18-1158table_2 | Performance of different ED systems. 1/1 means one sentence that only has one event and 1/N means that one sentence has multiple events. | 2 | [['Method', 'LSTM+Softmax'], ['Method', 'LSTM+CRF'], ['Method', 'LSTM+TLSTM'], ['Method', 'LSTM+HTLSTM'], ['Method', 'LSTM+HTLSTM+Bias']] | 1 | [['1/1'], ['1/N'], ['all']] | [['74.7', '44.6', '66.8'], ['75.1', '49.5', '68.5'], ['76.8', '51.2', '70.2'], ['77.9', '57.3', '72.4'], ['78.4', '59.5', '73.3']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['LSTM+HTLSTM+Bias'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>1/1</th> <th>1/N</th> <th>all</th> </tr> </thead> <tbody> <tr> <td>Method || LSTM+Softmax</td> <td>74.7</td> <td>44.6</td> <td>66.8</td> </tr> <tr> <td>Method || LSTM... | Table 2 | table_2 | D18-1158 | 7 | emnlp2018 | Table 2 shows the results. And we have the following observations:. 1) Compared with LSTM+Softmax, LSTM-based collective ED methods (LSTM+CRF, LSTM+TLSTM, LSTM+HTLSTM, LSTM+HTLSTM+Bias) achieves a better performance. Surprisingly, the LSTM+HTLSTM+Bias yields a 14.9% improvement on the sentence contains multiple eve... | [1, 2, 1, 1, 2, 1, 1, 2, 1, 2] | ['Table 2 shows the results.', 'And we have the following observations:.', '1) Compared with LSTM+Softmax, LSTM-based collective ED methods (LSTM+CRF, LSTM+TLSTM, LSTM+HTLSTM, LSTM+HTLSTM+Bias) achieves a better performance.', 'Surprisingly, the LSTM+HTLSTM+Bias yields a 14.9% improvement on the sentence contains m... | [None, None, ['LSTM+Softmax', 'LSTM+CRF', 'LSTM+TLSTM', 'LSTM+HTLSTM', 'LSTM+HTLSTM+Bias'], ['LSTM+HTLSTM+Bias', 'LSTM+Softmax', '1/N'], ['LSTM+HTLSTM+Bias'], ['LSTM+TLSTM', 'LSTM+CRF'], ['LSTM+HTLSTM', 'LSTM+TLSTM'], ['LSTM+HTLSTM', 'LSTM+TLSTM'], ['LSTM+HTLSTM', 'LSTM+HTLSTM+Bias', 'all'], ['LSTM+HTLSTM']] | 1 |
D18-1159table_5 | Experimental results involving analyzing PPs as valency patterns. | 1 | [['Baseline'], ['PP MTL'], ['PP MTL + Joint Decoding'], ['Core + PP MTL'], ['Core + PP MTL + Joint Decoding'], ['Core + Func. + PP MTL'], ['Core + Func. + PP MTL + Joint Decoding']] | 1 | [['UAS'], ['LAS'], ['Core P'], ['Core R'], ['Core F'], ['Func. P'], ['Func. R'], ['Func. F'], ['PP P'], ['PP R'], ['PP F']] | [['87.59', '83.64', '80.87', '81.31', '81.08', '91.99', '92.43', '92.20', '77.29', '77.99', '77.62'], ['87.67', '83.70', '80.61', '81.23', '80.91', '92.03', '92.50', '92.26', '78.30', '78.38', '78.32'], ['87.68', '83.69', '79.93', '81.50', '80.69', '91.92', '92.51', '92.21', '80.59', '77.68', '79.04'], ['87.70', '83.77... | column | ['UAS', 'LAS', 'Core P', 'Core R', 'Core F', 'Func. P', 'Func. R', 'Func. F', 'PP P', 'PP R', 'PP F'] | ['PP MTL + Joint Decoding', 'Core + PP MTL + Joint Decoding', 'Core + Func. + PP MTL + Joint Decoding'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> <th>Core P</th> <th>Core R</th> <th>Core F</th> <th>Func. P</th> <th>Func. R</th> <th>Func. F</th> <th>PP P</th> <th>PP R</th> <th>PP F</th> ... | Table 5 | table_5 | D18-1159 | 8 | emnlp2018 | Table 5 presents the results for different combinations of valency relation subsets. We find that PP-attachment decisions are generally harder to make, compared with core and functional relations. Including them during training distracts other parsing objectives (compare Core + PP with only analyzing Core in §6)). How... | [1, 1, 2, 1, 2] | ['Table 5 presents the results for different combinations of valency relation subsets.', 'We find that PP-attachment decisions are generally harder to make, compared with core and functional relations.', 'Including them during training distracts other parsing objectives (compare Core + PP with only analyzing Core in §... | [['UAS', 'LAS', 'Core P', 'Core R', 'Core F', 'Func. R', 'Func. F', 'PP P', 'PP R', 'PP F'], ['PP P', 'PP R', 'PP F', 'Core P', 'Core R', 'Core F', 'Func. P', 'Func. R', 'Func. F'], None, ['PP P', 'Baseline', 'PP MTL + Joint Decoding', 'Core + PP MTL + Joint Decoding', 'Core + Func. + PP MTL + Joint Decoding... | 1 |
D18-1167table_5 | Human accuracy on test set based on different sources. As expected, humans get the best performance when given both videos and subtitles. | 2 | [['VQA source', 'Question'], ['VQA source', 'Video and Question'], ['VQA source', 'Subtitle and Question'], ['VQA source', 'Video Subtitle and Question']] | 1 | [['Human accuracy on test.']] | [['31.84'], ['61.73'], ['72.88'], ['89.41']] | column | ['Human accuracy on test.'] | ['Video and Question', 'Subtitle and Question', 'Video Subtitle and Question'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Human accuracy on test.</th> </tr> </thead> <tbody> <tr> <td>VQA source || Question</td> <td>31.84</td> </tr> <tr> <td>VQA source || Video and Question</td> <td>61.73</td> </tr> ... | Table 5 | table_5 | D18-1167 | 5 | emnlp2018 | Human Evaluation on Usefulness of Video and Subtitle in Dataset:. To gain a better understanding of the roles of videos and subtitles in our dataset, we perform a human study, asking different groups of workers to complete the QA task in settings while observing different sources (subsets) of information:. • Questio... | [2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 2, 1, 2, 2] | ['Human Evaluation on Usefulness of Video and Subtitle in Dataset:.', 'To gain a better understanding of the roles of videos and subtitles in our dataset, we perform a human study, asking different groups of workers to complete the QA task in settings while observing different sources (subsets) of information:.', '•... | [None, None, ['Question'], ['Video and Question'], ['Subtitle and Question'], ['Video Subtitle and Question'], None, None, ['Question', 'Video and Question', 'Subtitle and Question', 'Human accuracy on test.'], ['Video Subtitle and Question', 'Human accuracy on test.'], ['Video Subtitle and Question'], ['Question', 'Hu... | 1 |
D18-1168table_6 | Comparison of different model performance on TEMPO HL on the test set. “MLLC Global” indicates our model with global context and “MLLC B/A” indicates MLLC with before/after context. | 1 | [['Frequeny Prior'], ['MCN'], ['TALL + TEF'], ['MLLC - Global'], ['MLLC - B/A'], ['MLLC (Ours)'], ['MLLC (Ours) Context Sup. Test']] | 3 | [['TEMPO - Human Language (HL)', 'DiDeMo', 'R@1'], ['TEMPO - Human Language (HL)', 'DiDeMo', 'mIoU'], ['TEMPO - Human Language (HL)', 'Before', 'R@1'], ['TEMPO - Human Language (HL)', 'Before', 'mIoU'], ['TEMPO - Human Language (HL)', 'After', 'R@1'], ['TEMPO - Human Language (HL)', 'After', 'mIoU'], ['TEMPO - Human La... | [['19.43', '25.44', '29.31', '51.92', '0.00', '0.00', '0.00', '7.84', '4.74', '12.27', '10.69', '37.56', '19.50'], ['26.07', '39.92', '26.79', '51.40', '14.93', '34.28', '18.55', '47.92', '10.70', '35.47', '19.4', '70.88', '41.80'], ['21.79', '33.55', '25.91', '49.26', '14.43', '32.62', '2.52', '31.13', '8.1', '28.14',... | column | ['R@1', 'mIoU', 'R@1', 'mIoU', 'R@1', 'mIoU', 'R@1', 'mIoU', 'R@1', 'mIoU', 'R@1', ' R@5', 'mIoU'] | ['MLLC (Ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TEMPO - Human Language (HL) || DiDeMo || R@1</th> <th>TEMPO - Human Language (HL) || DiDeMo || mIoU</th> <th>TEMPO - Human Language (HL) || Before || R@1</th> <th>TEMPO - Human Language (HL) || Before... | Table 6 | table_6 | D18-1168 | 9 | emnlp2018 | Results: TEMPO - HL. Table 6 compares performance on TEMPO - HL. We compare our best-performing model from training on the TEMPO-TL (strongly supervised MLLC and conTEF) to prior work (MCN and TALL) and to MLLC with global and before/after context. Performance on TEMPO-HL is considerably lower than TEMPO-TL suggesting th... | [2, 1, 2, 1, 1, 1, 1, 1, 0, 1, 1] | ['Results: TEMPO - HL.', 'Table 6 compares performance on TEMPO - HL.', 'We compare our best-performing model from training on the TEMPO-TL (strongly supervised MLLC and conTEF) to prior work (MCN and TALL) and to MLLC with global and before/after context.', 'Performance on TEMPO-HL is considerably lower than TEMPO-TL su... | [None, None, ['MLLC (Ours)', 'MCN', 'TALL + TEF', 'MLLC - Global', 'MLLC - B/A'], ['TEMPO - Human Language (HL)'], ['TEMPO - Human Language (HL)'], ['MLLC - Global', 'MLLC - B/A', 'MLLC (Ours)'], ['MLLC (Ours)', 'mIoU'], ['MLLC (Ours)', 'DiDeMo'], None, ['MLLC (Ours) Context Sup. Test'], ['MLLC (Ours) Context Sup. Test... | 1 |
D18-1173table_2 | Results of domain specific Named Entity Recognition. P, R, F1 respectively denotes precision, recall and F1 score | 2 | [['Model', 'Word2Vec'], ['Model', 'GloVe'], ['Model', 'N2V'], ['Model', 'SUM'], ['Model', 'DAREP'], ['Model', 'CRE'], ['Model', 'Mem2Vec']] | 3 | [['Task', 'AnatEM', 'P'], ['Task', 'AnatEM', 'R'], ['Task', 'AnatEM', 'F1'], ['Task', 'BioNLP', 'P'], ['Task', 'BioNLP', 'R'], ['Task', 'BioNLP', 'F1'], ['Task', 'NCBI', 'P'], ['Task', 'NCBI', 'R'], ['Task', 'NCBI', 'F1']] | [['76.12', '69.80', '72.82', '73.13', '54.79', '62.64', '75.22', '75.37', '74.39'], ['75.83', '67.04', '71.14', '72.58', '53.35', '61.50', '75.76', '72.33', '74.01'], ['76.81', '66.8', '71.46', '73.91', '54.21', '62.54', '72.45', '74.37', '73.30'], ['77.06', '69.01', '72.81', '74.36', '58.58', '62.25', '74.89', '74.02'... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['Mem2Vec'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Task || AnatEM || P</th> <th>Task || AnatEM || R</th> <th>Task || AnatEM || F1</th> <th>Task || BioNLP || P</th> <th>Task || BioNLP || R</th> <th>Task || BioNLP || F1</th> <th>Task || N... | Table 2 | table_2 | D18-1173 | 8 | emnlp2018 | Named Entity Recognition. Table 2 shows the results of domain specific named entity recognition. Used for pre-training embeddings, Mem2Vec achieves higher F1-score than all the baselines. It first surpasses CRE and DAREP that only bring slight improvements over Word2Vec. CRE and DAREP are both methods which relies on w... | [2, 1, 1, 1, 2, 2] | ['Named Entity Recognition.', 'Table 2 shows the results of domain specific named entity recognition.', 'Used for pre-training embeddings, Mem2Vec achieves higher F1-score than all the baselines.', 'It first surpasses CRE and DAREP that only bring slight improvements over Word2Vec.', 'CRE and DAREP are both methods whi... 
| [None, None, ['Mem2Vec', 'F1', 'Word2Vec', 'GloVe', 'N2V', 'SUM', 'DAREP', 'CRE'], ['DAREP', 'CRE', 'Word2Vec'], ['DAREP', 'CRE'], ['Mem2Vec']] | 1 |
D18-1176table_2 | Sentiment classification accuracy results on the binary SST task. For DCG we compare against their best single sentence model (Looks et al., 2017). *=multiple different embedding sets (see Section 4). Number of parameters included in parenthesis. Results averaged over ten runs with different random seeds. | 2 | [['Model', 'Const. Tree LSTM (Tai et al. 2015)'], ['Model', 'DMN (Kumar et al. 2016)'], ['Model', 'DCG (Looks et al. 2017)'], ['Model', 'NSE (Munkhdalai and Yu 2017)'], ['Model', 'GloVe BiLSTM-Max (4.1M)'], ['Model', 'FastText BiLSTM-Max (4.1M)'], ['Model', 'Naive baseline (5.4M)'], ['Model', 'Unweighted DME (4.1M)'], ... | 1 | [['SST']] | [['88.0'], ['88.6'], ['89.4'], ['89.7'], ['88.0±.1'], ['86.7±.3'], ['88.5±.4'], ['89.0±.2'], ['88.7±.6'], ['89.2±.4'], ['89.3±.5'], ['89.8±.4']] | column | ['accuracy'] | ['DME (4.1M)', 'CDME (4.1M)', 'CDME*-Softmax (4.6M)', 'CDME*-Sigmoid (4.6M)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST</th> </tr> </thead> <tbody> <tr> <td>Model || Const. Tree LSTM (Tai et al. 2015)</td> <td>88.0</td> </tr> <tr> <td>Model || DMN (Kumar et al. 2016)</td> <td>88.6</td> </tr> ... | Table 2 | table_2 | D18-1176 | 5 | emnlp2018 | 5.2 Results. Table 2 shows a similar pattern as we observed the naive baseline outperforms the with NLI:. the naive baseline outperforms the single-embedding encoders;. the DME methods outperform the naive baseline, with the contextualized version appearing to work best. Finally, we experiment with replacing φ in Eq. ... | [2, 1, 1, 1, 1, 2] | ['5.2 Results.', 'Table 2 shows a similar pattern as we observed the naive baseline outperforms the with NLI:.', 'the naive baseline outperforms the single-embedding encoders;.', 'the DME methods outperform the naive baseline, with the contextualized version appearing to work best.', 'Finally, we experiment with replac... 
| [None, ['Naive baseline (5.4M)'], ['Naive baseline (5.4M)', 'GloVe BiLSTM-Max (4.1M)', 'FastText BiLSTM-Max (4.1M)'], ['DME (4.1M)', 'Naive baseline (5.4M)'], ['CDME*-Sigmoid (4.6M)', 'CDME*-Softmax (4.6M)', 'Const. Tree LSTM (Tai et al. 2015)', 'DMN (Kumar et al. 2016)', 'DCG (Looks et al. 2017)', 'NSE (Munkhdalai and... | 1 |
D18-1176table_3 | Image and caption retrieval results (R@1 and R@10) on Flickr30k dataset, compared to VSE++ baseline (Faghri et al., 2017). VSE++ numbers in the table are with ResNet features and random cropping, but no fine-tuning. Number of parameters included in parenthesis; averaged over five runs with std omitted for brevity. | 2 | [['Model | R@:', 'VSE++'], ['Model | R@:', 'FastText (15M)'], ['Model | R@:', 'ImageNet (29M)'], ['Model | R@:', 'Naive (32M)'], ['Model | R@:', 'Unweighted DME (15M)'], ['Model | R@:', 'DME (15M)'], ['Model | R@:', 'CDME (15M)']] | 2 | [['Image', '1'], ['Image', '10'], ['Caption', '1'], ['Caption', '10']] | [['32.3', '72.1', '43.7', '82.1'], ['35.6', '74.7', '47.1', '82.7'], ['25.6', '63.1', '36.6', '72.2'], ['34.4', '73.9', '46.4', '82.2'], ['35.9', '75.0', '48.9', '83.7'], ['36.5', '75.5', '49.7', '83.6'], ['36.5', '75.6', '49.0', '83.8']] | column | ['R@1', 'R@10', 'R@1', 'R@10'] | ['DME (15M)', 'CDME (15M)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Image || 1</th> <th>Image || 10</th> <th>Caption || 1</th> <th>Caption || 10</th> </tr> </thead> <tbody> <tr> <td>Model | R@: || VSE++</td> <td>32.3</td> <td>72.1</td> <td>... | Table 3 | table_3 | D18-1176 | 5 | emnlp2018 | 6.2 Results. Table 3 shows the results, comparing against VSE++. First, note that the ImageNet-only embeddings don’t work as well as the FastText ones, which is most likely due to poorer coverage. We observe that DME outperforms naive and FastText-only, and outperforms VSE++ by a large margin. These findings confirm th... | [2, 1, 1, 1, 2, 2] | ['6.2 Results.', 'Table 3 shows the results, comparing against VSE++.', 'First, note that the ImageNet-only embeddings don’t work as well as the FastText ones, which is most likely due to poorer coverage.', 'We observe that DME outperforms naive and FastText-only, and outperforms VSE++ by a large margin.', 'These findi... 
| [None, ['VSE++'], ['FastText (15M)', 'ImageNet (29M)'], ['DME (15M)', 'FastText (15M)', 'Naive (32M)'], None, ['DME (15M)']] | 1 |
D18-1177table_2 | Results on word similarity task. Reported are the Spearman’s rank order correlation between model prediction and human judgment (higher is better and bolds highlight the best methods). See text for details. | 2 | [['Models', 'CNN'], ['Models', 'VAE'], ['Models', 'SGNS'], ['Models', 'CNN⊕SGNS'], ['Models', 'VAE⊕SGNS'], ['Models', 'V-SGNS'], ['Models', 'IV-SGNS(LINEAR)'], ['Models', 'IV-SGNS(NONLINEAR)'], ['Models', 'PIXIE+'], ['Models', 'PIXIE⊕']] | 3 | [['Semantic/taxonomic similarity', 'SEMSIM', '100%'], ['Semantic/taxonomic similarity', 'SEMSIM', '98%'], ['Semantic/taxonomic similarity', 'SimLex', '100%'], ['Semantic/taxonomic similarity', 'SimLex', '39%'], ['Semantic/taxonomic similarity', 'SIM', '100%'], ['Semantic/taxonomic similarity', 'SIM', '44%'], ['Semantic... | [['-', '0.49', '-', '0.41', '-', '0.49', '-', '0.54', '-', '0.46', '-', '0.54', '-', '0.20', '-', '0.18', '-', '0.53', '-', '0.28'], ['-', '0.65', '-', '0.43', '-', '0.51', '-', '0.56', '-', '0.55', '-', '0.62', '-', '0.22', '-', '0.40', '-', '0.62', '-', '0.37'], ['0.50', '0.50', '0.33', '0.35', '0.66', '0.66', '0.60'... | column | ['Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic similarity', 'Semantic/taxonomic si... | ['PIXIE+', 'PIXIE⊕'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Semantic/taxonomic similarity || SEMSIM || 100%</th> <th>Semantic/taxonomic similarity || SEMSIM || 98%</th> <th>Semantic/taxonomic similarity || SimLex || 100%</th> <th>Semantic/taxonomic similarity ... | Table 2 | table_2 | D18-1177 | 6 | emnlp2018 | 4.2.1 Main results. The results across different datasets are shown in Table 2. 
We perform evaluations under two settings: by considering (i) word similarity between visual words only and (ii) between all words (column 100% in Table 2). For the models CNN, VAE and their concatenation with SGNS embeddings, the latter se... | [2, 1, 1, 2, 1, 2, 1, 2, 2, 2, 2, 1, 1, 2, 1, 2, 1, 1, 1, 2, 1, 1, 2, 2] | ['4.2.1 Main results.', 'The results across different datasets are shown in Table 2.', 'We perform evaluations under two settings: by considering (i) word similarity between visual words only and (ii) between all words (column 100% in Table 2).', 'For the models CNN, VAE and their concatenation with SGNS embeddings, th... | [None, ['SEMSIM', 'SimLex', 'SIM', 'EN-RG', 'EN-MC', 'MEN', 'REL', 'MTurk', 'VISSIM', 'WORDSIM'], ['98%', '39%', '44%', '72%', '73%', '54%', '53%', '26%', '100%'], ['CNN', 'VAE', 'CNN⊕SGNS', 'VAE⊕SGNS'], ['PIXIE+', 'PIXIE⊕'], ['PIXIE+', 'PIXIE⊕'], ['PIXIE⊕'], ['PIXIE⊕'], None, None, ['CNN⊕SGNS', 'VAE⊕SGNS', 'IV-SGNS(LI... | 1 |
D18-1177table_6 | Results for image (I) ↔ sentence (S) retrieval. | 2 | [['Models', 'SGNS'], ['Models', 'V-SGNS'], ['Models', 'IV-SGNS (LINEAR)'], ['Models', 'PIXIE+'], ['Models', 'PIXIE⊕']] | 2 | [['I → S', 'K=1'], ['I → S', 'K=5'], ['I → S', 'K=10'], ['S → I', 'K=1'], ['S → I', 'K=5'], ['S → I', 'K=10']] | [['23.1', '49.0', '61.6', '16.6', '41.0', '53.8'], ['21.9', '51.7', '64.2', '16.2', '42.0', '54.8'], ['22.7', '50.5', '61.7', '17.1', '42.6', '55.4'], ['24.2', '52.5', '65.4', '17.5', '43.8', '56.2'], ['25.7', '55.7', '67.7', '18.4', '44.9', '56.9']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['PIXIE+', 'PIXIE⊕'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>I → S || K=1</th> <th>I → S || K=5</th> <th>I → S || K=10</th> <th>S → I || K=1</th> <th>S → I || K=5</th> <th>S → I || K=10</th> </tr> </thead> <tbody> <tr> <td>Models || SGNS<... | Table 6 | table_6 | D18-1177 | 8 | emnlp2018 | Results. Table 6 summarizes the results. The evaluation metrics are accuracies at top-K (K=1, 5, or 10) retrieved sentences or images. Our model consistently outperforms SGNS and other competing multimodal methods, which provides additional support for the benefits of our approach. | [2, 1, 1, 1] | ['Results.', 'Table 6 summarizes the results.', 'The evaluation metrics are accuracies at top-K (K=1, 5, or 10) retrieved sentences or images.', 'Our model consistently outperforms SGNS and other competing multimodal methods, which provides additional support for the benefits of our approach.'] | [None, None, ['K=1', 'K=5', 'K=10'], ['PIXIE+', 'PIXIE⊕', 'SGNS', 'V-SGNS', 'IV-SGNS (LINEAR)']] | 1 |
D18-1182table_3 | Performance (%Correct, %Wrong, %Abstained) of the different odd-man-out solvers on the | 4 | [['Embedding Map', 'ELMo clusters (K = 5)', 'Training Tokens', '1B+2B'], ['Embedding Map', 'w2v.googlenews', 'Training Tokens', '100B'], ['Embedding Map', 'glove.commoncrawl2', 'Training Tokens', '840B'], ['Embedding Map', 'glove.commoncrawl1', 'Training Tokens', '42B'], ['Embedding Map', 'glove.wikipedia', 'Training T... | 2 | [['AnomiaCommon', 'C'], ['AnomiaCommon', 'W'], ['AnomiaCommon', 'A'], ['AnomiaProper', 'C'], ['AnomiaProper', 'W'], ['AnomiaProper', 'A'], ['Crowdsourced', 'C'], ['Crowdsourced', 'W'], ['Crowdsourced', 'A']] | [['76.7', '13.9', '9.4', '42.6', '17.8', '39.6', '55.5', '18.8', '25.6'], ['61.9', '25.2', '12.9', '40.1', '14.9', '45.0', '46.3', '28.8', '24.9'], ['60.9', '23.8', '15.4', '32.2', '14.4', '53.5', '47.1', '28.4', '24.6'], ['57.4', '29.2', '13.4', '30.7', '17.8', '51.5', '40.1', '36.3', '23.7'], ['54.5', '24.3', '21.3',... | column | ['C', 'W', 'A', 'C', 'W', 'A', 'C', 'W', 'A'] | ['ELMo clusters (K = 5)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AnomiaCommon || C</th> <th>AnomiaCommon || W</th> <th>AnomiaCommon || A</th> <th>AnomiaProper || C</th> <th>AnomiaProper || W</th> <th>AnomiaProper || A</th> <th>Crowdsourced || C</th> ... | Table 3 | table_3 | D18-1182 | 6 | emnlp2018 | 5.2 Word Embeddings Solvers. Table 3 shows the results of embedding-based solvers on the Anomia and crowdsourced datasets, using several different pre-trained embedding maps. We find the best performance on ANOMIACOMMON and ANOMIAPROPER using the word2vec vectors trained on 100 billion tokens from the Google News corpu... 
| [2, 1, 1, 1, 1, 2] | ['5.2 Word Embeddings Solvers.', 'Table 3 shows the results of embedding-based solvers on the Anomia and crowdsourced datasets, using several different pre-trained embedding maps.', 'We find the best performance on ANOMIACOMMON and ANOMIAPROPER using the word2vec vectors trained on 100 billion tokens from the Google Ne... | [None, ['w2v.googlenews', 'glove.commoncrawl2', 'glove.commoncrawl1', 'glove.wikipedia', 'Neelakantan', 'w2v.freebase', 'WordNet'], ['w2v.googlenews', '100B', 'AnomiaCommon', 'AnomiaProper'], ['ELMo clusters (K = 5)'], ['ELMo clusters (K = 5)', 'w2v.googlenews', 'glove.commoncrawl2', 'glove.commoncrawl1', 'glove.wikipe... | 1 |
D18-1183table_3 | Experimental results on simile sentence classification. SC: simile sentence classification; CE: component extraction; LM: language modeling. | 2 | [['Model', 'Baseline1'], ['Model', 'Baseline2'], ['Model', 'Singletask (SC)'], ['Model', 'Multitask (SC+CE)'], ['Model', 'Multitask (SC+LM)'], ['Model', 'Multitask (SC+CE+LM)']] | 2 | [['Simile Classification', 'P'], ['Simile Classification', 'R'], ['Simile Classification', 'F1']] | [['0.6523', '0.4752', '0.5498'], ['0.7661', '0.7832', '0.7745'], ['0.7751', '0.8895', '0.8284'], ['0.8056', '0.8886', '0.8450'], ['0.8021', '0.9105', '0.8525'], ['0.8084', '0.9220', '0.8615']] | column | ['P', 'R', 'F1'] | ['Singletask (SC)', 'Multitask (SC+CE)', 'Multitask (SC+LM)', 'Multitask (SC+CE+LM)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Simile Classification || P</th> <th>Simile Classification || R</th> <th>Simile Classification || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline1</td> <td>0.6523</td> <td>0... | Table 3 | table_3 | D18-1183 | 7 | emnlp2018 | 5.2.2 Results. Table 3 shows the performance of the systems. The results are reported with the precision (P), recall (R), and their harmonic mean F1 score (F1). The two feature based methods perform differently. Baseline1 performs poorly. The reason may be that the classification depends on the performance of component... | [2, 1, 1, 1, 1, 2, 2, 1, 2, 1, 1, 1, 2, 2, 1, 1] | ['5.2.2 Results.', 'Table 3 shows the performance of the systems.', 'The results are reported with the precision (P), recall (R), and their harmonic mean F1 score (F1).', 'The two feature based methods perform differently.', 'Baseline1 performs poorly.', 'The reason may be that the classification depends on the perform... 
| [None, None, ['P', 'R', 'F1'], ['Baseline1', 'Baseline2'], ['Baseline1'], ['Baseline1'], ['Baseline1'], ['Baseline2', 'Baseline1'], None, None, ['Baseline1', 'Baseline2', 'Singletask (SC)', 'Multitask (SC+CE)', 'Multitask (SC+LM)', 'Multitask (SC+CE+LM)'], ['Multitask (SC+CE)', 'Multitask (SC+LM)', 'Singletask (SC)', '... | 1 |
D18-1183table_4 | Experimental results on component extraction. Experiments on dataset of simile sentences assume that the sentence classifier is perfect. CE: component extraction; SC: simile sentence classification; LM: language modeling. | 2 | [['Model', 'Rule based'], ['Model', 'CRF'], ['Model', 'Singletask (CE)'], ['Model', 'RandomForest → CRF'], ['Model', 'SingleSC → SingleCE'], ['Model', 'Multitask (CE+SC)'], ['Model', 'Multitask (CE+LM)'], ['Model', 'Multitask (CE+SC+LM)'], ['Model', 'Optimized pipeline']] | 2 | [['Gold simile sentences', 'P'], ['Gold simile sentences', 'R'], ['Gold simile sentences', 'F1'], ['Whole test set', 'P'], ['Whole test set', 'R'], ['Whole test set', 'F1']] | [['0.4094', '0.1805', '0.2505', '-', '-', '-'], ['0.5619', '0.5907', '0.5760', '0.3157', '0.3698', '0.3406'], ['0.7297', '0.7854', '0.7564', '0.5580', '0.6489', '0.5998'], ['-', '-', '-', '0.4591', '0.4980', '0.4778'], ['-', '-', '-', '0.5720', '0.7074', '0.6325'], ['-', '-', '-', '0.5409', '0.6400', '0.5861'], ['0.753... | column | ['P', 'R', 'F1', 'P', 'R', 'F1'] | ['Singletask (CE)', 'RandomForest → CRF', 'SingleSC → SingleCE', 'Multitask (CE+SC)', 'Multitask (CE+LM)', 'Multitask (CE+SC+LM)', 'Optimized pipeline'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Gold simile sentences || P</th> <th>Gold simile sentences || R</th> <th>Gold simile sentences || F1</th> <th>Whole test set || P</th> <th>Whole test set || R</th> <th>Whole test set || F1</t... | Table 4 | table_4 | D18-1183 | 8 | emnlp2018 | Table 4 shows the results of various systems and settings on two test sets. The first dataset consists of all manually labeled simile sentences in the test set and the second dataset is the whole test set. We want to compare how component extraction systems work when they know whether a sentence contains a simile or no... 
| [1, 1, 2, 0, 2, 1, 2, 1, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 1, 2, 2, 1, 1, 2] | ['Table 4 shows the results of various systems and settings on two test sets.', 'The first dataset consists of all manually labeled simile sentences in the test set and the second dataset is the whole test set.', 'We want to compare how component extraction systems work when they know whether a sentence contains a simi... | [['Rule based', 'CRF', 'Singletask (CE)', 'RandomForest → CRF', 'SingleSC → SingleCE', 'Multitask (CE+SC)', 'Multitask (CE+LM)', 'Multitask (CE+SC+LM)', 'Optimized pipeline', 'Gold simile sentences', 'Whole test set'], ['Gold simile sentences', 'Whole test set'], ['Rule based', 'CRF', 'Singletask (CE)', 'RandomForest →... | 1 |
D18-1184table_1 | Results of our models (top) and previously proposed systems (bottom) on the TREC-QA test set. | 2 | [['Models', 'Word-level Attention'], ['Models', 'Simple Span Alignment'], ['Models', 'Simple Span Alignment + External Parser'], ['Models', 'Structured Alignment (Shared Parameters)'], ['Models', 'Structured Alignment (Separated Parameters)'], ['Models', 'QA-LSTM (Tan et al. 2016b)'], ['Models', 'Attentive Pooling Netw... | 1 | [['MAP'], ['MRR']] | [['0.764', '0.842'], ['0.772', '0.851'], ['0.780', '0.846'], ['0.780', '0.860'], ['0.786', '0.860'], ['0.730', '0.824'], ['0.753', '0.851'], ['0.777', '0.836'], ['0.771', '0.845'], ['0.801', '0.877'], ['0.802', '0.875']] | column | ['MAP', 'MRR'] | ['Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>Models || Word-level Attention</td> <td>0.764</td> <td>0.842</td> </tr> <tr> <td>Models || Simple Span Alignment</td> ... | Table 1 | table_1 | D18-1184 | 5 | emnlp2018 | Experimental results are listed in Table 1. We measure performance by the mean average precision (MAP) and mean reciprocal rank (MRR) using the standard TREC evaluation script. In the first block of Table 1, we compare our model and variants thereof against several baselines. The first baseline is the Word-level Decomp... | [1, 1, 1, 2, 2, 2, 1, 2, 2, 2, 1, 1, 1, 1, 1, 2, 2] | ['Experimental results are listed in Table 1.', 'We measure performance by the mean average precision (MAP) and mean reciprocal rank (MRR) using the standard TREC evaluation script.', 'In the first block of Table 1, we compare our model and variants thereof against several baselines.', 'The first baseline is the Word-l... 
| [None, ['MAP', 'MRR'], ['Word-level Attention', 'Simple Span Alignment', 'Simple Span Alignment + External Parser', 'Structured Alignment (Shared Parameters)', 'Structured Alignment (Separated Parameters)'], ['Word-level Attention'], ['Simple Span Alignment'], ['Simple Span Alignment + External Parser'], ['Structured A... | 1 |
D18-1185table_2 | Performance comparison (accuracy) on MultiNLI and SciTail. Models with †, # and (cid:91) are reported from (Weissenborn, 2017), (Khot et al., 2018) and (Williams et al., 2017) respectively. | 2 | [['Model', 'Majority'], ['Model', 'NGRAM#'], ['Model', 'CBOW♭'], ['Model', 'BiLSTM♭'], ['Model', 'ESIM#♭'], ['Model', 'DecompAtt# -'], ['Model', 'DGEM#'], ['Model', 'DGEM + Edge#'], ['Model', 'ESIM†'], ['Model', 'ESIM + Read†'], ['Model', 'CAFE'], ['Model', 'CAFE Ensemble']] | 2 | [['MultiNLI', 'Match'], ['MultiNLI', 'Mismatch'], ['SciTail', '-']] | [['36.5', '35.6', '60.3'], ['-', '-', '70.6'], ['65.2', '64.8', '-'], ['69.8', '69.4', '-'], ['72.4', '72.1', '70.6'], ['-', '-', '72.3'], ['-', '-', '70.8'], ['-', '-', '77.3'], ['76.3', '75.8', '-'], ['77.8', '77.0', '-'], ['78.7', '77.9', '83.3'], ['80.2', '79.0', '-']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['CAFE', 'CAFE Ensemble'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MultiNLI || Match</th> <th>MultiNLI || Mismatch</th> <th>SciTail || -</th> </tr> </thead> <tbody> <tr> <td>Model || Majority</td> <td>36.5</td> <td>35.6</td> <td>60.3</td> </... | Table 2 | table_2 | D18-1185 | 7 | emnlp2018 | Table 2 reports our results on the MultiNLI and SciTail datasets. On MultiNLI, CAFE significantly outperforms ESIM, a strong state-of-the-art model on both settings. We also outperform the ESIM + Read model (Weissenborn, 2017). An ensemble of CAFE models achieve competitive result on the MultiNLI dataset. On SciTail, o... | [1, 1, 1, 1, 1, 1, 1, 1] | ['Table 2 reports our results on the MultiNLI and SciTail datasets.', 'On MultiNLI, CAFE significantly outperforms ESIM, a strong state-of-the-art model on both settings.', 'We also outperform the ESIM + Read model (Weissenborn, 2017).', 'An ensemble of CAFE models achieve competitive result on the MultiNLI dataset.', ... 
| [['MultiNLI', 'SciTail'], ['MultiNLI', 'CAFE', 'ESIM†', 'ESIM#♭'], ['MultiNLI', 'CAFE', 'ESIM + Read†'], ['CAFE Ensemble', 'MultiNLI'], ['CAFE', 'SciTail'], ['CAFE', 'SciTail', 'DecompAtt# -', 'ESIM#♭'], ['SciTail', 'CAFE', 'DGEM + Edge#'], ['CAFE', 'SciTail']] | 1 |
D18-1186table_2 | Performance on SNLI dataset. | 2 | [['Models', 'Handcrafted features (Bowman et al. 2015)'], ['Models', 'LSTM with attention (Rocktaschel et al. 2015)'], ['Models', 'Match-LSTM (Wang and Jiang 2016)'], ['Models', 'Decomposable attention model (Parikh et al. 2016)'], ['Models', 'BiMPM (Zhiguo Wang 2017)'], ['Models', 'NTI-SLSTM-LSTM (Munkhdalai and Yu 20... | 1 | [['Train'], ['Test']] | [['99.7', '78.2'], ['85.3', '83.5'], ['92.0', '86.1'], ['90.5', '86.8'], ['90.9', '87.5'], ['88.5', '87.3'], ['90.7', '87.5'], ['91.2', '88.0'], ['92.6', '88.0'], ['93.2', '88.0'], ['93.5', '88.6'], ['93.2', '88.8'], ['92.3', '88.9'], ['94.3', '89.1']] | column | ['accuracy', 'accuracy'] | ['CIN', 'CIN (Ensemble)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Models || Handcrafted features (Bowman et al. 2015)</td> <td>99.7</td> <td>78.2</td> </tr> <tr> <td>Models || LSTM with... | Table 2 | table_2 | D18-1186 | 7 | emnlp2018 | SNLI. Table 2 shows the results of different models on the train set and test set of SNLI. The first row gives a baseline model with handcrafted features presented by Bowman et al. (2015). All the other models are attention-based neural networks. Wang and Jiang (2016) exploits the long short-term memory (LSTM) for NLI.... | [2, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 2, 1, 1, 1] | ['SNLI.', 'Table 2 shows the results of different models on the train set and test set of SNLI.', 'The first row gives a baseline model with handcrafted features presented by Bowman et al. (2015).', 'All the other models are attention-based neural networks.', 'Wang and Jiang (2016) exploits the long short-term memory (... | [None, ['Handcrafted features (Bowman et al. 2015)', 'LSTM with attention (Rocktaschel et al. 2015)', 'Match-LSTM (Wang and Jiang 2016)', 'Decomposable attention model (Parikh et al. 
2016)', 'BiMPM (Zhiguo Wang 2017)', 'NTI-SLSTM-LSTM (Munkhdalai and Yu 2017)', 'Re-read LSTM (Sha et al. 2016)', 'DIIN (Gong et al. 2017)... | 1 |
D18-1186table_3 | Performance on MultiNLI test set. | 2 | [['Models', 'BiLSTM (Williams et al. 2017)'], ['Models', 'InnerAtt (Balazs et al. 2017)'], ['Models', 'ESIM (Chen et al. 2017a)'], ['Models', 'Gated-Att BiLSTM (Chen et al. 2017b)'], ['Models', 'ESIM (Chen et al. 2017a)'], ['Models', 'CIN']] | 1 | [['Match'], ['Mismatch']] | [['67.0', '67.6'], ['72.1', '72.1'], ['72.3', '72.1'], ['73.2', '73.6'], ['76.3', '75.8'], ['77.0', '77.6']] | column | ['accuracy', 'accuracy'] | ['CIN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Match</th> <th>Mismatch</th> </tr> </thead> <tbody> <tr> <td>Models || BiLSTM (Williams et al. 2017)</td> <td>67.0</td> <td>67.6</td> </tr> <tr> <td>Models || InnerAtt (Balazs ... | Table 3 | table_3 | D18-1186 | 7 | emnlp2018 | MultiNLI. Table 3 shows the performance of different models on MultiNLI. The original aim of this dataset is to evaluate the quality of sentence representations. Recently this dataset is also used to evaluate the interaction model involving attention mechanism. The first line of Table 3 gives a baseline model without i... | [2, 1, 2, 2, 1, 1, 1, 2] | ['MultiNLI.', 'Table 3 shows the performance of different models on MultiNLI.', 'The original aim of this dataset is to evaluate the quality of sentence representations.', 'Recently this dataset is also used to evaluate the interaction model involving attention mechanism.', 'The first line of Table 3 gives a baseline m... | [None, ['BiLSTM (Williams et al. 2017)', 'InnerAtt (Balazs et al. 2017)', 'ESIM (Chen et al. 2017a)', 'Gated-Att BiLSTM (Chen et al. 2017b)', 'CIN'], None, None, ['BiLSTM (Williams et al. 2017)'], ['InnerAtt (Balazs et al. 2017)', 'ESIM (Chen et al. 2017a)', 'Gated-Att BiLSTM (Chen et al. 2017b)'], ['CIN', 'Match', 'Mi... | 1 |
D18-1186table_4 | Performance on Quora question pair dataset. | 2 | [['Models', 'Siamese-CNN'], ['Models', 'Multi-Perspective CNN'], ['Models', 'Siamese-LSTM'], ['Models', 'Multi-Perspective-LSTM'], ['Models', 'L.D.C'], ['Models', 'BiMPM (Zhiguo Wang 2017)'], ['Models', 'CIN']] | 1 | [['Test']] | [['79.60'], ['81.38'], ['82.58'], ['83.21'], ['85.55'], ['88.17'], ['88.62']] | column | ['accuracy'] | ['CIN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Models || Siamese-CNN</td> <td>79.60</td> </tr> <tr> <td>Models || Multi-Perspective CNN</td> <td>81.38</td> </tr> <tr> <td>Mode... | Table 4 | table_4 | D18-1186 | 7 | emnlp2018 | Quora. Table 4 shows the performance of different models on the Quora test set. The baselines on Table 4 are all implemented in Zhiguo Wang (2017). The Siamese-CNN model and Siamese-LSTM model encode sentences with CNN and LSTM respectively, and then predict the relationship between them based on the cosine similarity. ... | [2, 1, 1, 2, 2, 2, 1] | ['Quora.', 'Table 4 shows the performance of different models on the Quora test set.', 'The baselines on Table 4 are all implemented in Zhiguo Wang (2017).', 'The Siamese-CNN model and Siamese-LSTM model encode sentences with CNN and LSTM respectively, and then predict the relationship between them based on the cosine s... | [None, ['Siamese-CNN', 'Multi-Perspective CNN', 'Siamese-LSTM', 'Multi-Perspective-LSTM', 'L.D.C', 'BiMPM (Zhiguo Wang 2017)', 'CIN', 'Test'], ['Siamese-CNN', 'Multi-Perspective CNN', 'Siamese-LSTM', 'Multi-Perspective-LSTM', 'L.D.C', 'BiMPM (Zhiguo Wang 2017)'], ['Siamese-CNN', 'Siamese-LSTM'], ['Multi-Perspective CNN... | 1 |
D18-1194table_1 | Evaluation of results on the test set. | 1 | [['Pipeline'], ['Variant (a)'], ['Variant (b)'], ['Variant (c)'], ['Our model']] | 2 | [['S metric', 'Precision'], ['S metric', 'Recall'], ['S metric', 'F1'], ['BLEU INST', '-'], ['MAE SPR', '-'], ['MAE FACT', '-']] | [['35.08', '30.10', '32.39', '15.03', 'N/A', 'N/A'], ['39.31', '32.93', '35.84', '16.74', '0.75', '1.11'], ['42.76', '33.20', '37.38', '17.71', '0.74', '1.14'], ['41.74', '33.28', '37.03', '18.01', '0.80', '1.14'], ['45.33', '33.88', '38.78', '19.61', '0.71', '1.06']] | column | ['S metric', 'S metric', 'S metric', 'BLEU INST', 'MAE SPR', 'MAE FACT'] | ['Our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S metric || Precision</th> <th>S metric || Recall</th> <th>S metric || F1</th> <th>BLEU INST || -</th> <th>MAE SPR || -</th> <th>MAE FACT || -</th> </tr> </thead> <tbody> <tr> <... | Table 1 | table_1 | D18-1194 | 7 | emnlp2018 | 6.3 Results. Table 1 reports the experimental results on the test set. Results on the in-domain test set are similar and shown in Appendix D. In Table 1, S metric (defined in Section 4) measures the similarity between predicted and reference graph representations. Based on the optimal variable mapping provided by the S... | [2, 1, 2, 2, 2, 1, 2, 1, 1, 2, 2, 2, 1] | ['6.3 Results.', 'Table 1 reports the experimental results on the test set.', 'Results on the in-domain test set are similar and shown in Appendix D.', 'In Table 1, S metric (defined in Section 4) measures the similarity between predicted and reference graph representations.', 'Based on the optimal variable mapping pro... | [None, None, None, ['S metric'], ['BLEU INST', 'MAE SPR', 'MAE FACT'], ['Our model', 'Variant (a)', 'Variant (b)', 'Variant (c)', 'S metric', 'BLEU INST', 'MAE SPR', 'MAE FACT'], ['Variant (a)', 'Variant (b)'], ['Our model', 'Variant (a)', 'Variant (b)', 'S metric', 'Precision'], ['Variant (a)', 'Variant (b)', 'F1'], [... | 1 |
D18-1199table_4 | The performance of MinV+NN and models without soft label on all the idioms in the two corpora | 2 | [['Model', 'Gibbs'], ['Model', 'EM'], ['Model', 'MinV+NN']] | 1 | [['Avg. Ffig'], ['Avg.Acc']] | [['0.58 (0.31 – 0.78)', '0.57 (0.4 – 0.78)'], ['0.56 (0.31 – 0.71)', '0.6 (0.42 – 0.77)'], ['0.68 (0.41 – 0.83)', '0.67 (0.55 – 0.86)']] | column | ['Avg. Ffig', 'Avg.Acc'] | ['MinV+NN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Avg. Ffig</th> <th>Avg.Acc</th> </tr> </thead> <tbody> <tr> <td>Model || Gibbs</td> <td>0.58 (0.31 – 0.78)</td> <td>0.57 (0.4 – 0.78)</td> </tr> <tr> <td>Model || EM</td> ... | Table 4 | table_4 | D18-1199 | 8 | emnlp2018 | Table 4 shows the performances of the new models, which are all worse than our full models MinV +infGibbs and MinV +infEM. This highlights the advantage of integrating distributional semantic information and local features into one single learning procedure. Without the informed prior (encoded by the soft labels), the ... | [1, 2, 2, 1, 2, 2, 2] | ['Table 4 shows the performances of the new models, which are all worse than our full models MinV +infGibbs and MinV +infEM.', 'This highlights the advantage of integrating distributional semantic information and local features into one single learning procedure.', 'Without the informed prior (encoded by the soft label... | [['Gibbs', 'EM', 'MinV+NN'], None, ['Gibbs', 'EM'], ['MinV+NN'], ['MinV+NN'], ['MinV+NN'], ['MinV+NN']] | 1 |
D18-1201table_1 | Results on development set (all metrics except MR are x100). M3GM lines use TRANSE as their association model. In M3GMαr, the graph component is tuned post-hoc against the local component per relation. | 3 | [['System', '-', 'RULE'], ['System', '1', 'DISTMULT'], ['System', '2', 'BILIN'], ['System', '3', 'TRANSE'], ['System', '4', 'M3GM'], ['System', '5', 'M3GMαr']] | 1 | [['MR'], ['MRR'], ['H@10'], ['H@1']] | [['13396', '35.26', '35.27', '35.23'], ['1111', '43.29', '50.73', '39.67'], ['738', '45.36', '52.93', '41.37'], ['2231', '46.07', '55.65', '41.41'], ['2231', '47.94', '57.72', '43.26'], ['2231', '48.30', '57.59', '43.78']] | column | ['MR', 'MRR', 'H@10', 'H@1'] | ['M3GM', 'M3GMαr'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR</th> <th>MRR</th> <th>H@10</th> <th>H@1</th> </tr> </thead> <tbody> <tr> <td>System || - || RULE</td> <td>13396</td> <td>35.26</td> <td>35.27</td> <td>35.23</td> ... | Table 1 | table_1 | D18-1201 | 7 | emnlp2018 | 5 Results. Table 1 presents the results on the development set. Lines 1-3 depict the results for local models using averaged FastText embedding initialization, showing that the best performance in terms of MRR and top-rank hits is achieved by TRANSE. Mean Rank does not align with the other metrics;. this is an interpre... | [2, 1, 1, 1, 2, 2, 1, 1, 2] | ['5 Results.', 'Table 1 presents the results on the development set.', 'Lines 1-3 depict the results for local models using averaged FastText embedding initialization, showing that the best performance in terms of MRR and top-rank hits is achieved by TRANSE.', 'Mean Rank does not align with the other metrics;.', 'this ... | [None, None, ['DISTMULT', 'BILIN', 'TRANSE', 'MRR', 'H@10', 'H@1'], ['MR', 'MRR', 'H@10', 'H@1'], ['DISTMULT', 'BILIN'], None, ['M3GM'], ['M3GMαr'], ['DISTMULT', 'BILIN']] | 1 |
D18-1201table_2 | Main results on test set. † These models were not re-implemented, and are reported as in Nguyen et al. (2018) and in Dettmers et al. (2018). | 2 | [['System', 'RULE'], ['System', 'COMPLEX†'], ['System', 'CONVE†'], ['System', 'CONVKB†'], ['System', 'TRANSE'], ['System', 'M3GMαr']] | 1 | [['MR'], ['MRR'], ['H@10'], ['H@1']] | [['13396', '35.26', '35.26', '35.26'], ['5261', '44', '51', '41'], ['5277', '46', '48', '39'], ['2554', '24.8', '52.5', ''], ['2195', '46.59', '55.55', '42.26'], ['2193', '49.83', '59.02', '45.37']] | column | ['MR', 'MRR', 'H@10', 'H@1'] | ['M3GMαr'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR</th> <th>MRR</th> <th>H@10</th> <th>H@1</th> </tr> </thead> <tbody> <tr> <td>System || RULE</td> <td>13396</td> <td>35.26</td> <td>35.26</td> <td>35.26</td> </tr... | Table 2 | table_2 | D18-1201 | 7 | emnlp2018 | Table 2 shows that our main results transfer onto the test set, with even a slightly larger margin. This could be the result of the greater edge density of the combined training and dev graphs, which enhance the global coherence of the graph structure captured by M3GM features. To support this theory, we tested the M3G... | [1, 2, 2] | ['Table 2 shows that our main results transfer onto the test set, with even a slightly larger margin.', 'This could be the result of the greater edge density of the combined training and dev graphs, which enhance the global coherence of the graph structure captured by M3GM features.', 'To support this theory, we tested... | [None, ['M3GMαr'], ['M3GMαr']] | 1 |
D18-1205table_5 | Comparsion results of sentence selection. | 2 | [['Method', 'SummaRuNNer-abs'], ['Method', 'SummaRuNNer'], ['Method', 'OurExtractive'], ['Method', '– distS'], ['Method', '– distS&gateF']] | 1 | [['Rouge-1'], ['Rouge-2'], ['Rouge-L']] | [['37.5', '14.5', '33.4'], ['39.6', '16.2', '35.3'], ['40.41', '18.30', '36.30'], ['37.06', '16.55', '33.23'], ['36.25', '16.22', '32.59']] | column | ['Rouge-1', 'Rouge-2', 'Rouge-L'] | ['OurExtractive'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rouge-1</th> <th>Rouge-2</th> <th>Rouge-L</th> </tr> </thead> <tbody> <tr> <td>Method || SummaRuNNer-abs</td> <td>37.5</td> <td>14.5</td> <td>33.4</td> </tr> <tr> <td... | Table 5 | table_5 | D18-1205 | 7 | emnlp2018 | Results in Table 5 show that our simple extractive method OurExtractive significantly outperforms state-of-the-art neural extractive baselines, which demonstrates the effectiveness of the information selection component in our model. Moreover, OurExtractive significantly outperforms the two comparison systems which rem... | [1, 1, 2, 2] | ['Results in Table 5 show that our simple extractive method OurExtractive significantly outperforms state-of-the-art neural extractive baselines, which demonstrates the effectiveness of the information selection component in our model.', 'Moreover, OurExtractive significantly outperforms the two comparison systems whic... | [['OurExtractive', 'SummaRuNNer-abs', 'SummaRuNNer'], ['OurExtractive', '– distS', '– distS&gateF'], ['– distS', '– distS&gateF'], ['OurExtractive']] | 1 |
D18-1206table_1 | Comparison of summarization datasets with respect to overall corpus size, size of training, validation, and test set, average document (source) and summary (target) length (in terms of words and sentences), and vocabulary size both on source and target. For CNN and DailyMail, we used the original splits of Hermann e... | 2 | [['Datasets', 'CNN'], ['Datasets', 'DailyMail'], ['Datasets', 'NY Times'], ['Datasets', 'XSum']] | 2 | [['# docs', 'train'], ['# docs', 'val'], ['# docs', 'test'], ['avg. document length', 'words'], ['avg. document length', 'sentences'], ['avg. summary length', 'words'], ['avg. summary length', 'sentences'], ['vocabulary size', 'document'], ['vocabulary size', 'summary']] | [['90266', '1220', '1093', '760.50', '33.98', '45.70', '3.59', '343,516', '89,051'], ['196961', '12148', '10397', '653.33', '29.33', '54.65', '3.86', '563,663', '179,966'], ['589284', '32736', '32739', '800.04', '35.55', '45.54', '2.44', '1,399,358', '294,011'], ['204045', '11332', '11334', '431.07', '19.77', '23.26', ... | column | ['# docs', '# docs', '# docs', 'avg. document length', 'avg. document length', 'avg. summary length', 'avg. summary length', 'vocabulary size', 'vocabulary size'] | ['XSum'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th># docs || train</th> <th># docs || val</th> <th># docs || test</th> <th>avg. document length || words</th> <th>avg. document length || sentences</th> <th>avg. summary length || words</th> ... | Table 1 | table_1 | D18-1206 | 3 | emnlp2018 | Table 1 compares XSum with the CNN, DailyMail, and NY Times benchmarks. As can be seen, XSum contains a substantial number of training instances, similar to DailyMail; documents and summaries in XSum are shorter in relation to other datasets but the vocabulary size is sufficiently large, comparable to CNN. | [1, 1] | ['Table 1 compares XSum with the CNN, DailyMail, and NY Times benchmarks.', 'As can be seen, XSum contains a substantial number of training instances, similar to DailyMail; documents and summaries in XSum are shorter in relation to other datasets but the vocabulary size is sufficiently large, comparable to CNN.'] | [['CNN', 'DailyMail', 'NY Times', 'XSum'], ['XSum', '# docs', 'avg. summary length', 'vocabulary size', 'DailyMail', 'CNN']] | 1 |
D18-1206table_2 | Corpus bias towards extractive methods in the CNN, DailyMail, NY Times, and XSum datasets. We show the proportion of novel n-grams in gold summaries. We also report ROUGE scores for the LEAD baseline and the extractive oracle system EXT-ORACLE. Results are computed on the test set. | 2 | [['Datasets', 'CNN'], ['Datasets', 'DailyMail'], ['Datasets', 'NY Times'], ['Datasets', 'XSum']] | 2 | [['% of novel n-grams in gold summary', 'unigrams'], ['% of novel n-grams in gold summary', 'bigrams'], ['% of novel n-grams in gold summary', 'trigrams'], ['% of novel n-grams in gold summary', '4-grams'], ['LEAD', 'R1'], ['LEAD', 'R2'], ['LEAD', 'RL'], ['EXT-ORACLE', 'R1'], ['EXT-ORACLE', 'R2'], ['EXT-ORACLE', 'RL']] | [['16.75', '54.33', '72.42', '80.37', '29.15', '11.13', '25.95', '50.38', '28.55', '46.58'], ['17.03', '53.78', '72.14', '80.28', '40.68', '18.36', '37.25', '55.12', '30.55', '51.24'], ['22.64', '55.59', '71.93', '80.16', '31.85', '15.86', '23.75', '52.08', '31.59', '46.72'], ['35.76', '83.45', '95.50', '98.49', '16.30... | column | ['unigrams', 'bigrams', 'trigrams', '4-grams', 'R1', 'R2', 'RL', 'R1', 'R2', 'RL'] | ['XSum'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>% of novel n-grams in gold summary || unigrams</th> <th>% of novel n-grams in gold summary || bigrams</th> <th>% of novel n-grams in gold summary || trigrams</th> <th>% of novel n-grams in gold summar... | Table 2 | table_2 | D18-1206 | 3 | emnlp2018 | Table 2 provides empirical analysis supporting our claim that XSum is less biased toward extractive methods compared to other summarization datasets. We report the percentage of novel n-grams in the target gold summaries that do not appear in their source documents. There are 36% novel unigrams in the XSum reference su... | [1, 1, 1, 1, 1, 1, 2, 2, 0, 2, 2, 2, 2, 1, 1, 1, 2, 1] | ['Table 2 provides empirical analysis supporting our claim that XSum is less biased toward extractive methods compared to other summarization datasets.', 'We report the percentage of novel n-grams in the target gold summaries that do not appear in their source documents.', 'There are 36% novel unigrams in the XSum refe... | [['CNN', 'DailyMail', 'NY Times', 'XSum'], ['% of novel n-grams in gold summary'], ['CNN', 'DailyMail', 'NY Times', 'XSum', 'unigrams'], ['XSum'], ['XSum', 'bigrams', 'trigrams', '4-grams'], ['LEAD', 'EXT-ORACLE'], ['LEAD'], ['LEAD', 'CNN', 'DailyMail'], None, ['XSum', 'LEAD'], ['EXT-ORACLE'], ['EXT-ORACLE'], ['XSum', ... | 1 |
D18-1206table_4 | ROUGE results on XSum test set. We report ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) F1 scores. Extractive systems are in the upper block, RNN-based abstractive systems are in the middle block, and convolutional abstractive systems are in the bottom block. | 2 | [['Models', 'Random'], ['Models', 'LEAD'], ['Models', 'EXT-ORACLE'], ['Models', 'SEQ2SEQ'], ['Models', 'PTGEN'], ['Models', 'PTGEN-COVG'], ['Models', 'CONVS2S'], ['Models', 'T-CONVS2SS (enct)'], ['Models', 'T-CONVS2S (enct dectD)'], ['Models', 'T-CONVS2S (enc(t tD))'], ['Models', 'T-CONVS2S (enc(t tD) dectD)']] | 1 | [['R1'], ['R2'], ['RL']] | [['15.16', '1.78', '11.27'], ['16.30', '1.60', '11.95'], ['29.79', '8.81', '22.66'], ['28.42', '8.77', '22.48'], ['29.70', '9.21', '23.24'], ['28.10', '8.02', '21.72'], ['31.27', '11.07', '25.23'], ['31.71', '11.38', '25.56'], ['31.71', '11.34', '25.61'], ['31.61', '11.30', '25.51'], ['31.89', '11.54', '25.75']] | column | ['R1', 'R2', 'RL'] | ['T-CONVS2SS (enct)', 'T-CONVS2S (enct dectD)', 'T-CONVS2S (enc(t tD))', 'T-CONVS2S (enc(t tD) dectD)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>RL</th> </tr> </thead> <tbody> <tr> <td>Models || Random</td> <td>15.16</td> <td>1.78</td> <td>11.27</td> </tr> <tr> <td>Models || LEAD</td> ... | Table 4 | table_4 | D18-1206 | 7 | emnlp2018 | Automatic Evaluation. We report results using automatic metrics in Table 4. We evaluated summarization quality using F1 ROUGE (Lin and Hovy, 2003). Unigram and bigram overlap (ROUGE-1 and ROUGE-2) are a proxy for assessing informativeness and the longest common subsequence (ROUGE-L) represents fluency. On the XSum data... | [2, 1, 1, 2, 1, 1, 2, 2, 1, 2, 1, 2, 1, 1, 1, 1] | ['Automatic Evaluation.', 'We report results using automatic metrics in Table 4.', 'We evaluated summarization quality using F1 ROUGE (Lin and Hovy, 2003).', 'Unigram and bigram overlap (ROUGE-1 and ROUGE-2) are a proxy for assessing informativeness and the longest common subsequence (ROUGE-L) represents fluency.', 'On... | [None, None, ['R1', 'R2', 'RL'], ['R1', 'R2', 'RL'], ['SEQ2SEQ', 'Random', 'LEAD'], ['PTGEN', 'EXT-ORACLE', 'R2', 'RL'], ['PTGEN', 'EXT-ORACLE', 'LEAD'], None, ['PTGEN-COVG'], ['PTGEN-COVG'], ['CONVS2S', 'SEQ2SEQ', 'PTGEN', 'PTGEN-COVG'], ['CONVS2S'], ['T-CONVS2SS (enct)', 'T-CONVS2S (enc(t tD))'], ['T-CONVS2S (enct de... | 1 |
D18-1208table_3 | ROUGE-2 recall across sentence extractors when using fixed pretrained embeddings or when embeddings are updated during training. In both cases embeddings are initialized with pretrained GloVe embeddings. All extractors use the averaging sentence encoder. When both learned and fixed settings are bolded, there is no sign... | 4 | [['Ext.', 'Seq2Seq', 'Emb.', 'Fixed'], ['Ext.', 'Seq2Seq', 'Emb.', 'Learn'], ['Ext.', 'C&L', 'Emb.', 'Fixed'], ['Ext.', 'C&L', 'Emb.', 'Learn'], ['Ext.', 'Summa', 'Emb.', 'Fixed'], ['Ext.', 'Runner', 'Emb.', 'Learn']] | 1 | [['CNN/DM'], ['NYT'], ['DUC'], ['Reddit'], ['AMI'], [' PubMed']] | [['25.6', '35.7', '22.8', '13.6', '5.5', '17.7'], ['25.3 (0.3)', '35.7 (0.0)', '22.9 (-0.1)', '13.8 (-0.2)', '5.8 (-0.3)', '16.9 (0.8)'], ['25.3', '35.6', '23.1', '13.6', '6.1', '17.7'], ['24.9 (0.4)', '35.4 (0.2)', '23.0 (0.1)', '13.4 (0.2)', '6.2 (-0.1)', '16.4 (1.3)'], ['25.4', '35.4', '22.3', '13.4', '5.6', '17.2']... | column | ['ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2'] | ['Fixed', 'Learn'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CNN/DM</th> <th>NYT</th> <th>DUC</th> <th>Reddit</th> <th>AMI</th> <th>PubMed</th> </tr> </thead> <tbody> <tr> <td>Ext. || Seq2Seq || Emb. || Fixed</td> <td>25.6</td> ... | Table 3 | table_3 | D18-1208 | 6 | emnlp2018 | Word Embedding Learning. Given that learning a sentence encoder (averaging has no learned parameters) does not yield significant improvement, it is natural to consider whether learning word embeddings is also necessary. In Table 3 we compare the performance of different extractors using the averaging encoder, when the ... | [2, 2, 1, 2, 0, 1, 2, 2, 2] | ['Word Embedding Learning.', 'Given that learning a sentence encoder (averaging has no learned parameters) does not yield significant improvement, it is natural to consider whether learning word embeddings is also necessary.', 'In Table 3 we compare the performance of different extractors using the averaging encoder, w... | [None, None, ['Seq2Seq', 'C&L', 'Summa', 'Runner', 'Learn', 'Fixed'], ['Learn', 'Fixed'], None, ['Fixed'], None, None, [' PubMed']] | 1 |
D18-1208table_5 | ROUGE-2 recall using models trained on in-order and shuffled documents. Extractor uses the averaging sentence encoder. When both in-order and shuffled settings are bolded, there is no significant performance difference. Difference in scores shown in parentheses. | 4 | [['Ext.', 'Seq2Seq', 'Order', 'In-Order'], ['Ext.', 'Seq2Seq', 'Order', 'Shuffled']] | 1 | [['CNN/DM'], ['NYT'], ['DUC'], ['Reddit'], ['AMI'], ['PubMed']] | [['25.6', '35.7', '22.8', '13.6', '5.5', '17.7'], ['21.7 (3.9)', '25.6 (10.1)', '21.2 (1.6)', '13.5 (0.1)', '6.0 (-0.5)', '14.9 (2.8)']] | column | ['ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2', 'ROUGE-2'] | ['In-Order', 'Shuffled'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CNN/DM</th> <th>NYT</th> <th>DUC</th> <th>Reddit</th> <th>AMI</th> <th>PubMed</th> </tr> </thead> <tbody> <tr> <td>Ext. || Seq2Seq || Order || In-Order</td> <td>25.6</td> ... | Table 5 | table_5 | D18-1208 | 7 | emnlp2018 | Table 5 shows the results of the shuffling experiments. The news domains and PubMed suffer a significant drop in performance when the document order is shuffled. By comparison, there is no significant difference between the shuffled and in-order models on the Reddit domain, and shuffling actually improves performance on... | [1, 1, 1, 2] | ['Table 5 shows the results of the shuffling experiments.', 'The news domains and PubMed suffer a significant drop in performance when the document order is shuffled.', 'By comparison, there is no significant difference between the shuffled and in-order models on the Reddit domain, and shuffling actually improves perfor... | [None, ['Shuffled', 'CNN/DM', 'NYT', 'PubMed'], ['In-Order', 'Shuffled', 'Reddit', 'AMI'], None] | 1 |
D18-1215table_2 | Comparison of sample precision and absolute recall (all instances and unique entity tuples) in test extraction on PMC. DPL + EMB is our full system using PubMed-trained word embedding, whereas DPL uses the original Wikipedia-trained word embedding in Peng et al. (2017). Ablation: DS (distant supervision), DP (data prog... | 2 | [['System', 'Peng 2017'], ['System', 'DPL + EMB'], ['System', 'DPL'], ['System', 'DPL -DS'], ['System', 'DPL -DP'], ['System', 'DPL -DP (ENTITY)'], ['System', 'DPL -JI']] | 1 | [['Prec.'], ['Abs. Rec.'], ['Unique']] | [['0.64', '6768', '2738'], ['0.74', '8478', '4821'], ['0.73', '7666', '4144'], ['0.29', '7555', '4912'], ['0.67', '4826', '2629'], ['0.70', '7638', '4074'], ['0.72', '7418', '4011']] | column | ['Prec.', 'Abs. Rec.', 'Unique'] | ['DPL + EMB', 'DPL', 'DPL -DS', 'DPL -DP', 'DPL -DP (ENTITY)', 'DPL -JI'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Abs. Rec.</th> <th>Unique</th> </tr> </thead> <tbody> <tr> <td>System || Peng 2017</td> <td>0.64</td> <td>6768</td> <td>2738</td> </tr> <tr> <td>System... | Table 2 | table_2 | D18-1215 | 7 | emnlp2018 | old in all cases (an instance is classified as positive if the normalized probability score is at least 0.5). For each system, sample precision was estimated by sampling 100 positive extractions and manually determining the proportion of correct extractions by an author knowledgeable about this domain. Absolute recall ... | [2, 2, 2, 1, 1, 1, 1, 1] | ['old in all cases (an instance is classified as positive if the normalized probability score is at least 0.5).', 'For each system, sample precision was estimated by sampling 100 positive extractions and manually determining the proportion of correct extractions by an author knowledgeable about this domain.', 'Absolute... | [None, None, None, None, ['DPL + EMB', 'Peng 2017', 'Prec.', 'Abs. Rec.'], ['DPL', 'DPL -DS', 'DPL -DP', 'DPL -DP (ENTITY)', 'DPL -JI'], ['DPL -DS', 'DPL -DP', 'DPL -DP (ENTITY)', 'DPL -JI'], ['DPL + EMB']] | 1 |
D18-1215table_5 | Comparison of gene entity linking results on a balanced test set. The string-matching baseline has low precision. By combining indirect supervision strategies, DPL substantially improved precision while retaining reasonably high recall. | 2 | [['System', 'String Match'], ['System', 'DS'], ['System', 'DS + DP'], ['System', 'DS + DP + JI']] | 1 | [['Acc.'], ['F1'], ['Prec.'], ['Rec.']] | [['0.18', '0.31', '0.18', '1.00'], ['0.64', '0.71', '0.62', '0.83'], ['0.66', '0.71', '0.62', '0.83'], ['0.70', '0.76', '0.68', '0.86']] | column | ['Acc.', 'F1', 'Prec.', 'Rec.'] | ['DS', 'DS + DP', 'DS + DP + JI'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>F1</th> <th>Prec.</th> <th>Rec.</th> </tr> </thead> <tbody> <tr> <td>System || String Match</td> <td>0.18</td> <td>0.31</td> <td>0.18</td> <td>1.00</td> ... | Table 5 | table_5 | D18-1215 | 8 | emnlp2018 | Experiment results. For evaluation, we annotated a larger set of sample gene-mention candidates and then subsampled a balanced test set of 550 instances (half are true gene mentions, half not). These instances were excluded from training and development. Table 5 compares system performance on this test set. The string-... | [2, 2, 2, 1, 1, 1, 2] | ['Experiment results.', 'For evaluation, we annotated a larger set of sample gene-mention candidates and then subsampled a balanced test set of 550 instances (half are true gene mentions, half not).', 'These instances were excluded from training and development.', 'Table 5 compares system performance on this test set.'... | [None, None, None, ['String Match', 'DS', 'DS + DP', 'DS + DP + JI'], ['String Match', 'Prec.'], ['DS', 'DS + DP', 'DS + DP + JI', 'Rec.'], ['DS', 'DS + DP', 'DS + DP + JI']] | 1 |
D18-1218table_5 | The quality of the coreference chains on the CoNLL-2012 test set. Each simulated scenario is randomly generated 10 times (summary reported in terms of average result and standard deviation) | 6 | [['CoNLL 2012 Test Dataset', 'Simulation', 'None', 'Method', 'Stanford', '-'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Uniform', 'Method', 'MV', 'avg.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Uniform', 'Method', 'MV', 's.d.'], ['CoNLL 2012 Test Dataset', 'Simulation', 'Synthetic Uniform', 'M... | 2 | [['MUC', 'P'], ['MUC', 'R'], ['MUC', 'F1'], ['BCUB', 'P'], ['BCUB', 'R'], ['BCUB', 'F1'], ['CEAFE', 'P'], ['CEAFE', 'R'], ['CEAFE', 'F1'], ['Avg. F1', '-']] | [['89.78', '73.88', '81.06', '83.93', '59.22', '69.44', '73.87', '60.57', '66.56', '72.35'], ['88.27', '86.00', '87.12', '73.92', '70.81', '72.33', '70.62', '76.73', '73.55', '77.67'], ['0.38', '0.35', '0.36', '0.83', '0.52', '0.62', '0.49', '0.60', '0.50', '0.47'], ['90.92', '91.97', '91.44', '75.51', '80.14', '77.75'... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'Avg.F1'] | ['MPA'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MUC || P</th> <th>MUC || R</th> <th>MUC || F1</th> <th>BCUB || P</th> <th>BCUB || R</th> <th>BCUB || F1</th> <th>CEAFE || P</th> <th>CEAFE || R</th> <th>CEAFE || F1</th> ... | Table 5 | table_5 | D18-1218 | 7 | emnlp2018 | In Table 5 we present the results obtained on simulated data from the CoNLL-2012 test set. The results follow a similar trend to those observed using actual annotations: a much better quality of the chains produced using the mention pairs inferred by our MPA model, across all the simulated scenarios. Furthermore, the M... | [1, 1, 1] | ['In Table 5 we present the results obtained on simulated data from the CoNLL-2012 test set.', 'The results follow a similar trend to those observed using actual annotations: a much better quality of the chains produced using the mention pairs inferred by our MPA model, across all the simulated scenarios.', 'Furthermor... | [None, ['MPA', 'Synthetic Uniform', 'PD-inspired Uniform', 'Synthetic Sparse', 'PD-inspired Sparse'], ['MV', 'Stanford', 'Synthetic Uniform', 'PD-inspired Uniform', 'PD-inspired Sparse']] | 1 |
D18-1219table_6 | Results of using NP head plus modifications in different word representations for bridging anaphora resolution compared to the best results of two models from Hou et al. (2013b). Bold indicates statistically significant differences over the other models (two-sided paired approximate randomization test, p < 0.01). | 2 | [['models from Hou et al. (2013b)', 'pairwise model III'], ['models from Hou et al. (2013b)', 'MLN model II'], ['NP head + modifiers', 'GloVe GigaWiki14'], ['NP head + modifiers', 'GloVe Giga'], ['NP head + modifiers', 'embeddings PP'], ['NP head + modifiers', 'embeddings bridging']] | 1 | [['acc']] | [['36.35'], ['41.32'], ['20.52'], ['20.81'], ['31.67'], ['39.52']] | column | ['acc'] | ['embeddings bridging'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>acc</th> </tr> </thead> <tbody> <tr> <td>models from Hou et al. (2013b) || pairwise model III</td> <td>36.35</td> </tr> <tr> <td>models from Hou et al. (2013b) || MLN model II</td> ... | Table 6 | table_6 | D18-1219 | 7 | emnlp2018 | Table 6 lists the best results of the two models for bridging anaphora resolution from Hou et al. (2013b). pairwise model III is a pairwise mention-entity model based on various semantic, syntactic and lexical features. MLN model II is a joint inference framework based on Markov logic networks (Domingos and Lowd, 2009).... | [1, 2, 2, 1, 1, 1, 2, 2, 2, 1, 2] | ['Table 6 lists the best results of the two models for bridging anaphora resolution from Hou et al. (2013b).', 'pairwise model III is a pairwise mention-entity model based on various semantic, syntactic and lexical features.', 'MLN model II is a joint inference framework based on Markov logic networks (Domingos and Lowd... | [['models from Hou et al. (2013b)', 'pairwise model III', 'MLN model II'], ['pairwise model III'], ['MLN model II'], ['pairwise model III', 'MLN model II', 'acc'], ['GloVe GigaWiki14', 'GloVe Giga'], ['embeddings PP', 'acc'], ['embeddings PP', 'acc'], ['embeddings PP'], ['embeddings PP', 'embeddings bridging'], ['embed... | 1 |
D18-1219table_8 | Results of different systems for bridging anaphora resolution in ISNotes. Bold indicates statistically significant differences over the other models (two-sided paired approximate randomization test, p < 0.01). | 3 | [['Baselines', 'System', 'Schulte im Walde (1998)'], ['Baselines', 'System', 'Poesio et al. (2004)'], ['Models from Hou et al. (2013b)', 'System', 'pairwise model III'], ['Models from Hou et al. (2013b)', 'System', 'MLN model II'], ['Hou (2018)', 'System', 'MLN model II + embeddings PP (NP head + noun pre-modifiers)'],... | 1 | [['acc']] | [['13.68'], ['18.85'], ['36.35'], ['41.32'], ['45.85'], ['39.52'], ['46.46']] | column | ['acc'] | ['embeddings bridging (NP head + modifiers)', 'MLN model II + embeddings bridging (NP head + modifiers)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>acc</th> </tr> </thead> <tbody> <tr> <td>Baselines || System || Schulte im Walde (1998)</td> <td>13.68</td> </tr> <tr> <td>Baselines || System || Poesio et al. (2004)</td> <td>18.85... | Table 8 | table_8 | D18-1219 | 9 | emnlp2018 | 5.6 Combining NP Head + Modifiers with MLN II. For bridging anaphora resolution, Hou (2018) integrates a much simpler deterministic approach by combining an NP head with its noun modifiers (appearing before the head) based on embeddings PP into the MLN II system (Hou et al., 2013b). Similarly, we add a constraint on to... | [2, 2, 2, 1, 1, 1, 2, 2] | ['5.6 Combining NP Head + Modifiers with MLN II.', 'For bridging anaphora resolution, Hou (2018) integrates a much simpler deterministic approach by combining an NP head with its noun modifiers (appearing before the head) based on embeddings PP into the MLN II system (Hou et al., 2013b).', 'Similarly, we add a constrai... | [None, ['Models from Hou et al. (2013b)', 'MLN model II + embeddings PP (NP head + noun pre-modifiers)'], ['MLN model II + embeddings PP (NP head + noun pre-modifiers)', 'MLN model II + embeddings bridging (NP head + modifiers)'], ['Schulte im Walde (1998)', 'Poesio et al. (2004)', 'pairwise model III', 'MLN model II',... | 1 |
D18-1219table_9 | Results of resolving bridging anaphors in other corpora. Number of bridging anaphors is reported after filtering out a few problematic cases on each corpus. | 4 | [['Corpus', 'BASHI', 'Bridging Type', 'referential, including comparative anaphora'], ['Corpus', 'BASHI', 'Bridging Type', 'referential, excluding comparative anaphora'], ['Corpus', 'ARRAU (RST Train)', 'Bridging Type', 'mostly lexical, some referential'], ['Corpus', 'ARRAU (RST Test)', 'Bridging Type', 'mostly lexical... | 1 | [['# of Anaphors'], ['acc']] | [['452', '27.43'], ['344', '29.94'], ['2,325', '31.44'], ['639', '32.39']] | column | ['# of Anaphors', 'acc'] | ['BASHI', 'BASHI', 'ARRAU (RST Train)', 'ARRAU (RST Test)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th># of Anaphors</th> <th>acc</th> </tr> </thead> <tbody> <tr> <td>Corpus || BASHI || Bridging Type || referential, including comparative anaphora</td> <td>452</td> <td>27.43</td> </tr> ... | Table 9 | table_9 | D18-1219 | 9 | emnlp2018 | Table 9 lists the results of bridging anaphora resolution in the BASHI and ARRAU corpora, respectively. On the test set of the ARRAU (RST) corpus, Rosiger (2018b) proposed a modified rule-based system based on Hou et al. (2014)’s work and reported an accuracy of 39.8% for bridging anaphora resolution. And our algorit... | [1, 2, 1, 1] | ['Table 9 lists the results of bridging anaphora resolution in the BASHI and ARRAU corpora, respectively.', 'On the test set of the ARRAU (RST) corpus, Rosiger (2018b) proposed a modified rule-based system based on Hou et al. (2014)’s work and reported an accuracy of 39.8% for bridging anaphora resolution.', 'And our... | [['BASHI', 'ARRAU (RST Train)', 'ARRAU (RST Test)'], ['ARRAU (RST Train)', 'ARRAU (RST Test)'], ['ARRAU (RST Test)'], ['BASHI', 'ARRAU (RST Train)', 'ARRAU (RST Test)']] | 1 |
D18-1221table_2 | Results for the reverse dictionary task, compared with the highest numbers reported by Hill et al. (2016). TF vectors refers to textually enhanced vectors with λ = 1. For the MS-LSTM, k is set to 3. | 3 | [['Model', 'Seen (500 WordNet definitions)', 'OneLook (Hill et al. 2016)'], ['Model', 'Seen (500 WordNet definitions)', 'RNN cosine (Hill et al. 2016)'], ['Model', 'Seen (500 WordNet definitions)', 'Std LSTM (150 dim.) + TF vec.'], ['Model', 'Seen (500 WordNet definitions)', 'Std LSTM (k × 150 dim.) + TF vec.'], ['Mode... | 1 | [['Acc-10'], ['Acc-100']] | [['0.89', '0.91'], ['0.48', '0.73'], ['0.86', '0.96'], ['0.93', '0.98'], ['0.95', '0.99'], ['0.96', '0.99'], ['0.44', '0.69'], ['0.46', '0.71'], ['0.72', '0.88'], ['0.77', '0.90'], ['0.79', '0.90'], ['0.80', '0.91']] | column | ['Acc-10', 'Acc-100'] | ['MS-LSTM + TF vectors', 'MS-LSTM + TF vectors + anchors'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc-10</th> <th>Acc-100</th> </tr> </thead> <tbody> <tr> <td>Model || Seen (500 WordNet definitions) || OneLook (Hill et al. 2016)</td> <td>0.89</td> <td>0.91</td> </tr> <tr> <... | Table 2 | table_2 | D18-1221 | 7 | emnlp2018 | Table 2 shows the results, based on a MS-LSTM setup similar to that of §4.1. Note that the MS-LSTM achieves 0.95-0.96 top-10 accuracy for the seen evaluation, significantly higher not only than the best model of Hill et al. (2016), but also higher than OneLook, a commercial system with access to more than 1000 dictiona... | [1, 1, 1, 2] | ['Table 2 shows the results, based on a MS-LSTM setup similar to that of §4.1.', 'Note that the MS-LSTM achieves 0.95-0.96 top-10 accuracy for the seen evaluation, significantly higher not only than the best model of Hill et al. (2016), but also higher than OneLook, a commercial system with access to more than 1000 dic...
| [['MS-LSTM +TF vectors', 'MS-LSTM +TF vectors + anchors'], ['MS-LSTM +TF vectors', 'MS-LSTM +TF vectors + anchors', 'Acc-10', 'Seen (500 WordNet definitions)', 'OneLook (Hill et al. 2016)', 'RNN cosine (Hill et al. 2016)'], ['MS-LSTM +TF vectors', 'MS-LSTM +TF vectors + anchors', 'Unseen (500 WordNet definitions)'], No... | 1 |
D18-1221table_3 | Results for the Cora dataset. TF vectors refers to textually enhanced KB vectors (λ = 0.5). Difference between our best models and GAT/GCN/TADW are not s.s. | 3 | [['Model', 'Evaluation 1 (training ratio=0.50)', 'PLSA (Hofmann 1999)'], ['Model', 'Evaluation 1 (training ratio=0.50)', 'NetPLSA (Mei et al. 2008)'], ['Model', 'Evaluation 1 (training ratio=0.50)', 'TADW (Yang et al. 2015)'], ['Model', 'Evaluation 1 (training ratio=0.50)', 'Linear SVM + DeepWalk vectors'], ['Model', '... | 1 | [['Accuracy']] | [['0.68'], ['0.85'], ['0.87'], ['0.85'], ['0.88'], ['0.76'], ['0.81'], ['0.83'], ['0.72'], ['0.82']] | column | ['Accuracy'] | ['Linear SVM + DeepWalk vectors', 'Linear SVM + TF vectors'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || Evaluation 1 (training ratio=0.50) || PLSA (Hofmann 1999)</td> <td>0.68</td> </tr> <tr> <td>Model || Evaluation 1 (training ratio=0... | Table 3 | table_3 | D18-1221 | 8 | emnlp2018 | In Table 3 we report results for two evaluation settings. In Evaluation 1, we provide a comparison with the method of Yang et al. (2015) who include textual features in graph embeddings based on matrix factorisation, and two topic models used as baselines in their paper. Using the same classification algorithm (a linea... | [1, 1, 2, 2, 1, 1, 2] | ['In Table 3 we report results for two evaluation settings.', 'In Evaluation 1, we provide a comparison with the method of Yang et al. (2015) who include textual features in graph embeddings based on matrix factorisation, and two topic models used as baselines in their paper.', 'Using the same classification algorithm ... | [['Evaluation 1 (training ratio=0.50)', 'Evaluation 2 (training ratio=0.05)'], ['PLSA (Hofmann 1999)', 'NetPLSA (Mei et al. 2008)', 'TADW (Yang et al. 
2015)'], ['Linear SVM + DeepWalk vectors', 'Linear SVM + TF vectors'], ['Linear SVM + TF vectors'], ['Evaluation 2 (training ratio=0.05)', 'Planetoid (Yang et al. 2016)'... | 1 |
D18-1222table_3 | Experimental results on instanceOf triple classification(%). | 2 | [['Metric', 'TransE'], ['Metric', 'TransH'], ['Metric', 'TransR'], ['Metric', 'TransD'], ['Metric', 'HolE'], ['Metric', 'DistMult'], ['Metric', 'ComplEx'], ['Metric', 'TransC (unif)'], ['Metric', 'TransC (bern)']] | 3 | [['Datasets', 'YAGO39K', 'Accuracy'], ['Datasets', 'YAGO39K', 'Precision'], ['Datasets', 'YAGO39K', 'Recall'], ['Datasets', 'YAGO39K', 'F1-Score'], ['Datasets', 'M-YAGO39K', 'Accuracy'], ['Datasets', 'M-YAGO39K', 'Precision'], ['Datasets', 'M-YAGO39K', 'Recall'], ['Datasets', 'M-YAGO39K', 'F1-Score']] | [['82.6', '83.6', '81.0', '82.3', '71.0', '81.4', '54.4', '65.2'], ['82.9', '83.7', '81.7', '82.7', '70.1', '80.4', '53.2', '64.0'], ['80.6', '79.4', '82.5', '80.9', '70.9', '73.0', '66.3', '69.5'], ['83.2', '84.4', '81.5', '82.9', '72.5', '73.1', '71.4', '72.2'], ['82.3', '86.3', '76.7', '81.2', '74.2', '81.4', '62.7'... | column | ['Accuracy', 'Precision', 'Recall', 'F1-Score', 'Accuracy', 'Precision', 'Recall', 'F1-Score'] | ['TransC (unif)', 'TransC (bern)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Datasets || YAGO39K || Accuracy</th> <th>Datasets || YAGO39K || Precision</th> <th>Datasets || YAGO39K || Recall</th> <th>Datasets || YAGO39K || F1-Score</th> <th>Datasets || M-YAGO39K || Accurac... | Table 3 | table_3 | D18-1222 | 7 | emnlp2018 | Our datasets have three kinds of triples. Hence, we do experiments on them respectively. Experimental results for relational triples, instanceOf triples, and subClassOf triples are shown in Table 2, Table 3, and Table 4 respectively. In Table 3 and Table 4, a rising arrow means performance of this model have a promotio... 
| [2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2, 2, 1, 1, 2, 1, 1] | ['Our datasets have three kinds of triples.', 'Hence, we do experiments on them respectively.', 'Experimental results for relational triples, instanceOf triples, and subClassOf triples are shown in Table 2, Table 3, and Table 4 respectively.', 'In Table 3 and Table 4, a rising arrow means performance of this model have... | [None, None, None, ['YAGO39K', 'M-YAGO39K'], ['TransC (unif)', 'TransC (bern)'], ['YAGO39K', 'TransE', 'TransH', 'TransR', 'TransD', 'HolE', 'DistMult', 'ComplEx', 'TransC (unif)', 'TransC (bern)'], None, None, ['TransC (unif)', 'TransC (bern)'], ['YAGO39K', 'TransC (unif)', 'TransC (bern)'], None, None, ['M-YAGO39K', ... | 1 |
D18-1225table_4 | The predicted Mean Rank (lower the better) for temporal Scoping. The number of classes are 61 and 78 for YAGO11K and Wiki-data12k respectively. The results depict the effectiveness of TDNS. Please see Section 6.2 | 2 | [['Negative Sampling', 'TANS (Equation 1)'], ['Negative Sampling', 'TDNS (Equation 2)']] | 1 | [['YAGO11K'], ['Wikidata12k']] | [['14.0', '29.3'], ['9.88', '17.6']] | column | ['predicted Mean Rank', 'predicted Mean Rank'] | ['TANS (Equation 1)', 'TDNS (Equation 2)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>YAGO11K</th> <th>Wikidata12k</th> </tr> </thead> <tbody> <tr> <td>Negative Sampling || TANS (Equation 1)</td> <td>14.0</td> <td>29.3</td> </tr> <tr> <td>Negative Sampling || TD... | Table 4 | table_4 | D18-1225 | 7 | emnlp2018 | Temporal scoping of facts:. We report the rank of correct time instance of the triple. If the triple scope is an interval of time, we consider the lowest rank that corresponds to the time within that interval. The ranks are reported in table 4 for both the datasets. The results depict the effectiveness of TDNS. | [2, 2, 2, 1, 1] | ['Temporal scoping of facts.', 'We report the rank of correct time instance of the triple.', 'If the triple scope is an interval of time, we consider the lowest rank that corresponds to the time within that interval.', 'The ranks are reported in table 4 for both the datasets.', 'The results depict the effectiveness of ... | [None, None, None, ['YAGO11K', 'Wikidata12k'], ['TDNS (Equation 2)']] | 1 |
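Each row above stores a table flattened into three parallel fields: `row_headers` holds one header path per table row, `column_headers` one path per column, and `contents` the matching value matrix. A minimal sketch of rehydrating one row into a cell lookup, using the small D18-1225 table_4 row above; the helper name `to_lookup` is ours, not part of any dataset API, and cells are kept as strings exactly as stored.

```python
# Minimal sketch (to_lookup is our own helper, not a dataset API): turn one
# flattened table row into a {(row_path, col_path): value} dict. The header
# paths and values below are copied verbatim from the D18-1225 table_4 row.

def to_lookup(row_headers, column_headers, contents):
    """Map (row header path, column header path) pairs to cell values."""
    table = {}
    for row_path, row in zip(row_headers, contents):
        for col_path, value in zip(column_headers, row):
            table[(tuple(row_path), tuple(col_path))] = value
    return table

row_headers = [['Negative Sampling', 'TANS (Equation 1)'],
               ['Negative Sampling', 'TDNS (Equation 2)']]
column_headers = [['YAGO11K'], ['Wikidata12k']]
contents = [['14.0', '29.3'], ['9.88', '17.6']]

table = to_lookup(row_headers, column_headers, contents)
print(table[(('Negative Sampling', 'TDNS (Equation 2)'), ('YAGO11K',))])  # 9.88
```

The multi-level row header path (`row_header_level` is 2 here) becomes a tuple key, which generalizes to the deeper header hierarchies elsewhere in the dataset.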
D18-1227table_2 | Entity linking performance of different methods on Yelp-EL. | 2 | [['Method', 'DirectLink'], ['Method', 'ELT'], ['Method', 'SSRegu'], ['Method', 'LinkYelp']] | 1 | [['Accuracy (mean±std)']] | [['0.6684±0.008'], ['0.8451±0.012'], ['0.7970±0.013'], ['0.9034±0.014']] | column | ['Accuracy (mean±std)'] | ['LinkYelp'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (mean±std)</th> </tr> </thead> <tbody> <tr> <td>Method || DirectLink</td> <td>0.6684±0.008</td> </tr> <tr> <td>Method || ELT</td> <td>0.8451±0.012</td> </tr> <tr> ... | Table 2 | table_2 | D18-1227 | 6 | emnlp2018 | 5.3 Comparison Results. Table 2 shows the entity linking performance of different methods on Yelp-EL. Here, all three types of features described in Section 4 are fed into LinkYelp. Within the compared methods, LinkYelp performs substantially better. This shows that methods carefully designed for traditional entity lin... | [2, 1, 2, 1, 2, 1, 2] | ['5.3 Comparison Results.', 'Table 2 shows the entity linking performance of different methods on Yelp-EL.', 'Here, all three types of features described in Section 4 are fed into LinkYelp.', 'Within the compared methods, LinkYelp performs substantially better.', 'This shows that methods carefully designed for traditio... | [None, ['DirectLink', 'ELT', 'SSRegu', 'LinkYelp'], None, ['LinkYelp', 'Accuracy (mean±std)', 'DirectLink', 'ELT', 'SSRegu'], ['DirectLink', 'ELT', 'SSRegu'], ['DirectLink'], None] | 1 |
D18-1230table_2 | [Biomedical Domain] NER Performance Comparison. The supervised benchmarks on the BC5CDR and NCBI-Disease datasets are LM-LSTM-CRF and LSTM-CRF respectively (Wang et al., 2018). SwellShark has no annotated data, but for entity span extraction, it requires pre-trained POS taggers and extra human efforts of designing POS ... | 4 | [['Method', 'Supervised Benchmark', 'Human Effort other than Dictionary', 'Gold Annotations'], ['Method', 'SwellShark', 'Human Effort other than Dictionary', 'Regex Design + Special Case Tuning'], ['Method', 'SwellShark', 'Human Effort other than Dictionary', 'Regex Design'], ['Method', 'Dictionary Match', 'Human Effor... | 2 | [['BC5CDR', 'Pre'], ['BC5CDR', 'Rec'], ['BC5CDR', 'F1'], ['NCBI-Disease', 'Pre'], ['NCBI-Disease', 'Rec'], ['NCBI-Disease', 'F1']] | [['88.84', '85.16', '86.96', '86.11', '85.49', '85.80'], ['86.11', '82.39', '84.21', '81.6', '80.1', '80.8'], ['84.98', '83.49', '84.23', '64.7', '69.7', '67.1'], ['93.93', '58.35', '71.98', '90.59', '56.15', '69.32'], ['88.27', '76.75', '82.11', '79.85', '67.71', '73.28'], ['88.96', '81.00', '84.8', '79.42', '71.98', ... | column | ['Pre', 'Rec', 'F1', 'Pre', 'Rec', 'F1'] | ['AutoNER'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BC5CDR || Pre</th> <th>BC5CDR || Rec</th> <th>BC5CDR || F1</th> <th>NCBI-Disease || Pre</th> <th>NCBI-Disease || Rec</th> <th>NCBI-Disease || F1</th> </tr> </thead> <tbody> <tr> ... | Table 2 | table_2 | D18-1230 | 7 | emnlp2018 | 5.3 NER Performance Comparison. We present F1, precision, and recall scores on all datasets in Table 2 and Table 3. From both tables, one can find the AutoNER achieves the best performance when there is no extra human effort. Fuzzy-LSTM-CRF does have some improvements over the Dictionary Match, but it is always worse t... 
| [2, 1, 1, 1, 1, 2, 2, 1, 1] | ['5.3 NER Performance Comparison.', 'We present F1, precision, and recall scores on all datasets in Table 2 and Table 3.', 'From both tables, one can find the AutoNER achieves the best performance when there is no extra human effort.', 'Fuzzy-LSTM-CRF does have some improvements over the Dictionary Match, but it is alw... | [None, ['Pre', 'Rec', 'F1', 'BC5CDR', 'NCBI-Disease'], ['AutoNER', 'Human Effort other than Dictionary', 'None'], ['Fuzzy-LSTM-CRF', 'AutoNER', 'Dictionary Match'], ['AutoNER', 'SwellShark'], ['SwellShark', 'Regex Design + Special Case Tuning', 'NCBI-Disease'], ['AutoNER'], ['AutoNER', 'Supervised Benchmark'], ['AutoNE... | 1 |
D18-1231table_3 | Evaluation of coarse entity-typing (§4.2): we compare two supervised entity-typers with our system. For the supervised systems, cells with gray color indicate in-domain evaluation. For each column, the best, out-of-domain and overall results are bold-faced and underlined, respectively. Numbers are F1 in percentage. In... | 4 | [['System', 'COGCOMPNLP', 'Trained on', 'OntoNotes'], ['System', 'COGCOMPNLP', 'Trained on', 'CoNLL'], ['System', 'ZOE (ours)', 'Trained on', '×']] | 2 | [['OntoNotes', 'PER'], ['OntoNotes', 'LOC'], ['OntoNotes', 'ORG'], ['CoNLL', 'PER'], ['CoNLL', 'LOC'], ['CoNLL', 'ORG'], ['MUC', 'PER'], ['MUC', 'LOC'], ['MUC', 'ORG']] | [['98.4', '91.9', '97.7', '83.7', '70.1', '68.3', '82.5', '76.9', '86.7'], ['94.4', '59.1', '87.8', '95.6', '92.9', '90.5', '90.8', '90.8', '90.9'], ['88.4', '70.0', '85.6', '90.1', '80.1', '73.9', '87.8', '90.9', '91.2']] | column | ['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1'] | ['ZOE (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OntoNotes || PER</th> <th>OntoNotes || LOC</th> <th>OntoNotes || ORG</th> <th>CoNLL || PER</th> <th>CoNLL || LOC</th> <th>CoNLL || ORG</th> <th>MUC || PER</th> <th>MUC || LOC</th> ... | Table 3 | table_3 | D18-1231 | 7 | emnlp2018 | 4.2 Coarse Entity Typing. In Table 3 we study entity typing for the coarse types on three datasets. We focus on three types that are shared among the datasets: PER, LOC, ORG. In coarse-entity typing, the best available systems are heavily supervised. In this evaluation, we use gold mention spans; i.e., we force the dec... | [2, 1, 1, 2, 2, 1, 1, 1] | ['4.2 Coarse Entity Typing.', 'In Table 3 we study entity typing for the coarse types on three datasets.', 'We focus on three types that are shared among the datasets: PER, LOC, ORG.', 'In coarse-entity typing, the best available systems are heavily supervised.', 'In this evaluation, we use gold mention spans; i.e., we...
| [None, ['OntoNotes', 'CoNLL', 'MUC'], ['PER', 'LOC', 'ORG'], ['COGCOMPNLP'], ['COGCOMPNLP'], ['COGCOMPNLP', 'OntoNotes', 'CoNLL'], ['COGCOMPNLP', 'OntoNotes', 'CoNLL', 'MUC'], ['ZOE (ours)', 'Trained on', '×', 'COGCOMPNLP']] | 1 |
D18-1231table_6 | Ablation study of different ways in which concepts are generated in our system (§4.5). The first row shows performance of our system on each dataset, followed by the change in the performance upon dropping a component. While both signals are crucial, contextual information is playing more important role than the mentio... | 2 | [['Approach', 'ZOE (ours)'], ['Approach', 'no surface-based concepts'], ['Approach', 'no context-based concepts']] | 2 | [['FIGER', 'Acc.'], ['FIGER', 'F1ma'], ['FIGER', 'F1mi'], ['BBN', 'Acc.'], ['BBN', 'F1ma'], ['BBN', 'F1mi'], ['OntoNotesfine', 'Acc.'], ['OntoNotesfine', 'F1ma'], ['OntoNotesfine', 'F1mi']] | [['58.8', '74.8', '71.3', '61.8', '74.6', '74.9', '50.7', '66.9', '60.8'], ['-8.8', '-7.5', '-9.2', '-12.9', '-7.0', '-8.6', '-1.8', '-1.2', '-0.1'], ['-39.3', '-42.1', '-25.4', '-36.4', '-31.0', '-13.9', '-10.0', '-12.3', '-7.4']] | column | ['Acc.', 'F1ma', 'F1mi', 'Acc.', 'F1ma', 'F1mi', 'Acc.', 'F1ma', 'F1mi'] | ['ZOE (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FIGER || Acc.</th> <th>FIGER || F1ma</th> <th>FIGER || F1mi</th> <th>BBN || Acc.</th> <th>BBN || F1ma</th> <th>BBN || F1mi</th> <th>OntoNotesfine || Acc.</th> <th>OntoNotesfine || ... | Table 6 | table_6 | D18-1231 | 9 | emnlp2018 | 4.5 Ablation Study. We carry out ablation studies that quantify the contribution of surface information (§3.3) and context information (§3.2). As Table 6 shows, both factors are crucial and complementary for the system. However, the contextual information seems to have a bigger role overall. We complement our qualita... | [2, 2, 1, 1, 2, 2] | ['4.5 Ablation Study.', 'We carry out ablation studies that quantify the contribution of surface information (§3.3) and context information (§3.2).', 'As Table 6 shows, both factors are crucial and complementary for the system.', 'However, the contextual information seems to have a bigger role overall.', 'We compleme...
| [None, None, ['ZOE (ours)', 'no surface-based concepts', 'no context-based concepts'], ['no context-based concepts'], None, ['ZOE (ours)', 'no surface-based concepts', 'no context-based concepts', 'FIGER', 'BBN', 'OntoNotesfine']] | 1 |
D18-1233table_3 | Selected Results of the baseline models on follow-up question generation. | 2 | [['Model', 'First Sent.'], ['Model', 'NMT-Copy'], ['Model', 'BiDAF'], ['Model', 'Rule-based']] | 1 | [['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4']] | [['0.221', '0.144', '0.119', '0.106'], ['0.339', '0.206', '0.139', '0.102'], ['0.450', '0.375', '0.338', '0.312'], ['0.533', '0.437', '0.379', '0.344']] | column | ['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4'] | ['Rule-based'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>Model || First Sent.</td> <td>0.221</td> <td>0.144</td> <td>0.119</td> <td>... | Table 3 | table_3 | D18-1233 | 9 | emnlp2018 | Results. Our results, shown in Table 3 indicate that systems that return contiguous spans from the rule text perform better according to our BLEU metric. We speculate that the logical forms in the data are challenging for existing models to extract and manipulate, which may suggest why the explicit rule-based system pe... | [2, 1, 1, 2] | ['Results.', 'Our results, shown in Table 3 indicate that systems that return contiguous spans from the rule text perform better according to our BLEU metric.', 'We speculate that the logical forms in the data are challenging for existing models to extract and manipulate, which may suggest why the explicit rule-based s... | [None, ['BiDAF', 'Rule-based', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4'], ['Rule-based'], ['NMT-Copy', 'Rule-based']] | 1 |
D18-1233table_4 | Results of entailment models on ShARC. | 2 | [['Model', 'Random'], ['Model', 'Surface LR'], ['Model', 'DAM (SNLI)'], ['Model', 'DAM (ShARC)']] | 1 | [['Micro Acc.'], ['Macro Acc.']] | [['0.330', '0.326'], ['0.682', '0.333'], ['0.479', '0.362'], ['0.492', '0.322']] | column | ['Micro Acc.', 'Macro Acc.'] | ['Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Micro Acc.</th> <th>Macro Acc.</th> </tr> </thead> <tbody> <tr> <td>Model || Random</td> <td>0.330</td> <td>0.326</td> </tr> <tr> <td>Model || Surface LR</td> <td>0.682</t... | Table 4 | table_4 | D18-1233 | 9 | emnlp2018 | Results. Table 4 shows the result of our baseline models on the entailment corpus of ShARC test set. Results show poor performance especially for the macro accuracy metric of both simple baselines and neural state-of-the-art entailment models. This performance highlights the challenges that the scenario interpretation ... | [2, 1, 1, 2] | ['Results.', 'Table 4 shows the result of our baseline models on the entailment corpus of ShARC test set.', 'Results show poor performance especially for the macro accuracy metric of both simple baselines and neural state-of-the-art entailment models.', 'This performance highlights the challenges that the scenario inte... | [None, ['Random', 'Surface LR', 'DAM (SNLI)', 'DAM (ShARC)'], ['Random', 'Surface LR', 'DAM (SNLI)', 'DAM (ShARC)', 'Macro Acc.'], ['Random', 'Surface LR', 'DAM (SNLI)', 'DAM (ShARC)', 'Macro Acc.']] | 1 |
D18-1235table_2 | Comparison among different choices for the loss function with multiple answers on the development set | 2 | [['Loss', 'single answer'], ['Loss', 'Lmin'], ['Loss', 'Lavg'], ['Loss', 'Lwavg']] | 1 | [['ROUGE-L'], ['Δ']] | [['48.93', '-'], ['49.05', '+0.12'], ['49.67', '+0.74'], ['49.77', '+0.84']] | column | ['ROUGE-L', 'delta'] | ['Lmin', 'Lavg', 'Lwavg'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-L</th> <th>Δ</th> </tr> </thead> <tbody> <tr> <td>Loss || single answer</td> <td>48.93</td> <td>-</td> </tr> <tr> <td>Loss || Lmin</td> <td>49.05</td> <td>+0.12... | Table 2 | table_2 | D18-1235 | 7 | emnlp2018 | 4.3.2 Different loss functions with multi-answer. Table 2 shows the experimental results with three different multi-answer loss functions introduced in Section 3.5.1. All of them offer improvement over the single-answer baseline, which shows the effectiveness of utilizing multiple answers. The average loss performs bet... | [2, 1, 1, 1, 1, 2] | ['4.3.2 Different loss functions with multi-answer.', 'Table 2 shows the experimental results with three different multi-answer loss functions introduced in Section 3.5.1.', 'All of them offer improvement over the single-answer baseline, which shows the effectiveness of utilizing multiple answers.', 'The average loss p... | [None, ['Lmin', 'Lavg', 'Lwavg'], ['ROUGE-L', 'single answer', 'Lmin', 'Lavg', 'Lwavg'], ['ROUGE-L', 'Lmin', 'Lavg'], ['ROUGE-L', 'Lmin', 'Lavg', 'Lwavg'], ['Lwavg']] | 1 |
D18-1235table_4 | Performance of our model and competing models on the DuReader test set | 2 | [['Model', 'BiDAF (He et al. 2017)'], ['Model', 'Match-LSTM (He et al. 2017)'], ['Model', 'PR+BiDAF (Wang et al. 2018b)'], ['Model', 'PE+BiDAF (ours)'], ['Model', 'V-Net (Wang et al. 2018b)'], ['Model', 'Our complete model'], ['Model', 'Human']] | 1 | [['ROUGE-L'], ['BLEU-4']] | [['39.0', '31.8'], ['39.2', '31.9'], ['41.81', '37.55'], ['45.93', '38.86'], ['44.18', '40.97'], ['51.09', '43.76'], ['57.4', '56.1']] | column | ['ROUGE-L', 'BLEU-4'] | ['PE+BiDAF (ours)', 'Our complete model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-L</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>Model || BiDAF (He et al. 2017)</td> <td>39.0</td> <td>31.8</td> </tr> <tr> <td>Model || Match-LSTM (He et al. 2017... | Table 4 | table_4 | D18-1235 | 8 | emnlp2018 | 4.3.4 Comparison with State-of-the-art. Table 4 shows the performance of our model and other state-of-the-art models on the DuReader test set. First, we compare our passage extraction method with the paragraph ranking model from Wang et al. (2018b). Based on the same BiDAF model described in Section 3.4, our method (PE... | [2, 1, 2, 1, 1] | ['4.3.4 Comparison with State-of-the-art.', 'Table 4 shows the performance of our model and other state-of-the-art models on the DuReader test set.', 'First, we compare our passage extraction method with the paragraph ranking model from Wang et al. (2018b).', 'Based on the same BiDAF model described in Section 3.4, our... | [None, ['BiDAF (He et al. 2017)', 'Match-LSTM (He et al. 2017)', 'PR+BiDAF (Wang et al. 2018b)', 'PE+BiDAF (ours)', 'V-Net (Wang et al. 2018b)', 'Our complete model'], None, ['PE+BiDAF (ours)', 'PR+BiDAF (Wang et al. 2018b)'], ['Our complete model', 'Human', 'ROUGE-L', 'BLEU-4']] | 1 |
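The `target_entity` field above singles out the systems a row's analysis focuses on; given the flattened headers and `contents` matrix, the top-scoring system on a metric can be recovered directly. A minimal sketch using the D18-1235 table_4 values above; `best_system` is our own name, not part of any dataset API, and the Human row is excluded as a reference ceiling rather than a competing model.

```python
# Sketch only (best_system is our helper, not a dataset API): find the
# highest-scoring system on one metric column. Model names, metric names, and
# scores are copied verbatim from the D18-1235 table_4 row above.

def best_system(models, metrics, contents, metric, exclude=()):
    """Return the model with the highest score on `metric`."""
    col = metrics.index(metric)
    scored = [(float(row[col]), model)
              for model, row in zip(models, contents) if model not in exclude]
    return max(scored)[1]

models = ['BiDAF (He et al. 2017)', 'Match-LSTM (He et al. 2017)',
          'PR+BiDAF (Wang et al. 2018b)', 'PE+BiDAF (ours)',
          'V-Net (Wang et al. 2018b)', 'Our complete model', 'Human']
metrics = ['ROUGE-L', 'BLEU-4']
contents = [['39.0', '31.8'], ['39.2', '31.9'], ['41.81', '37.55'],
            ['45.93', '38.86'], ['44.18', '40.97'], ['51.09', '43.76'],
            ['57.4', '56.1']]

print(best_system(models, metrics, contents, 'ROUGE-L', exclude={'Human'}))
# prints: Our complete model
```

Scores are stored as strings in `contents`, so the float conversion happens only at comparison time, leaving the original cell text untouched.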
D18-1239table_1 | Results for Short Questions (CLEVRGEN): Performance of our model compared to baseline models on the Short Questions test set. The LSTM (NO KG) has accuracy close to chance, showing that the questions lack trivial biases. Our model almost perfectly solves all questions showing its ability to learn challenging semantic o... | 2 | [['Model', 'LSTM (NO KG)'], ['Model', 'LSTM'], ['Model', 'BI-LSTM'], ['Model', 'TREE-LSTM'], ['Model', 'TREE-LSTM (UNSUP.)'], ['Model', 'RELATION NETWORK'], ['Model', 'Our Model (Pre-parsed)'], ['Model', 'Our Model']] | 1 | [['Boolean Questions'], ['Entity Set Questions'], ['Relation Questions'], ['Overall']] | [['50.7', '14.4', '17.5', '27.2'], ['88.5', '99.9', '15.7', '84.9'], ['85.3', '99.6', '14.9', '83.6'], ['82.2', '97.0', '15.7', '81.2'], ['85.4', '99.4', '16.1', '83.6'], ['85.6', '89.7', '97.6', '89.4'], ['94.8', '93.4', '70.5', '90.8'], ['99.9', '100', '100', '99.9']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Our Model (Pre-parsed)', 'Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Boolean Questions</th> <th>Entity Set Questions</th> <th>Relation Questions</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM (NO KG)</td> <td>50.7</td> <td>14... | Table 1 | table_1 | D18-1239 | 7 | emnlp2018 | Short Questions Performance:. Table 1 shows that our model perfectly answers all test questions, demonstrating that it can learn challenging semantic operators and induce parse trees from end task supervision. Performance drops when using external parser, showing that our model learns an effective syntactic model for t... 
| [2, 1, 1, 1, 1] | ['Short Questions Performance:.', 'Table 1 shows that our model perfectly answers all test questions, demonstrating that it can learn challenging semantic operators and induce parse trees from end task supervision.', 'Performance drops when using external parser, showing that our model learns an effective syntactic mod... | [None, ['Our Model (Pre-parsed)', 'Our Model', 'Boolean Questions', 'Entity Set Questions', 'Relation Questions'], ['Our Model (Pre-parsed)', 'Our Model'], ['RELATION NETWORK', 'Relation Questions'], ['LSTM', 'Relation Questions']] | 1 |
D18-1239table_3 | Results for Human Queries (GENX) Our model outperforms LSTM and semantic parsing models on complex human-generated queries, showing it is robust to work on natural language. Better performance than TREE-LSTM (UNSUP.) shows the efficacy in representing sub-phrases using explicit denotations. Our model also performs bett... | 2 | [['Model', 'LSTM (NO KG)'], ['Model', 'LSTM'], ['Model', 'BI-LSTM'], ['Model', 'TREE-LSTM'], ['Model', 'TREE-LSTM (UNSUP.)'], ['Model', 'SEMPRE'], ['Model', 'Our Model (Pre-parsed)'], ['Model', 'Our Model']] | 1 | [['Accuracy']] | [['0.0'], ['64.9'], ['64.6'], ['43.5'], ['67.7'], ['48.1'], ['67.1'], ['73.7']] | column | ['Accuracy'] | ['Our Model (Pre-parsed)', 'Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM (NO KG)</td> <td>0.0</td> </tr> <tr> <td>Model || LSTM</td> <td>64.9</td> </tr> <tr> <td>Model || BI-LSTM</td>... | Table 3 | table_3 | D18-1239 | 9 | emnlp2018 | Performance on Human-generated Language:. Table 3 shows the performance of our model on complex human-generated queries in GENX. Our approach outperforms strong LSTM and semantic parsing baselines, despite the semantic parser’s use of hard-coded operators. These results suggest that our method represents an attractiv... | [2, 1, 1, 2, 2, 2, 2] | ['Performance on Human-generated Language:.', 'Table 3 shows the performance of our model on complex human-generated queries in GENX.', 'Our approach outperforms strong LSTM and semantic parsing baselines, despite the semantic parser’s use of hard-coded operators.', 'These results suggest that our method represents a... | [None, ['Our Model'], ['Our Model', 'LSTM', 'SEMPRE'], ['Our Model'], ['Our Model'], ['Our Model'], ['Our Model']] | 1 |
D18-1240table_7 | Comparison to the SoA on SemEval-2016 | 1 | [['Kelp [#1] (Filice et al. 2016)'], ['Conv-KN [#2] (Barron-Cedeno et al. 2016)'], ['CTKC +VQF (Tymoshenko et al. 2016b)'], ['HyperQA (Tay et al. 2018)'], ['AI-CNN (Zhang et al. 2017)'], ['Our model (V+Bcr+Ecr+E+SST)']] | 1 | [['MRR'], ['MAP']] | [['86.42', '79.19'], ['84.93', '77.6'], ['86.26', '78.78'], ['n/a', '79.5'], ['n/a', '80.14'], ['86.52', '79.79']] | column | ['MRR', 'MAP'] | ['Our model (V+Bcr+Ecr+E+SST)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MRR</th> <th>MAP</th> </tr> </thead> <tbody> <tr> <td>Kelp [#1] (Filice et al. 2016)</td> <td>86.42</td> <td>79.19</td> </tr> <tr> <td>Conv-KN [#2] (Barron-Cedeno et al. 2016)<... | Table 7 | table_7 | D18-1240 | 9 | emnlp2018 | Semeval. Table 7 compares performance of Bcr + Ecr + V + E + SST system on Semeval to that of KeLP and ConvKN, the two top systems in the SemEval 2016 competition, and also to the performance of the recent DNN-based HyperQA and AI-CNN systems. In the Semeval 2016 competition, our model would have been the first, with #... | [2, 1, 1, 1] | ['Semeval.', 'Table 7 compares performance of Bcr + Ecr + V + E + SST system on Semeval to that of KeLP and ConvKN, the two top systems in the SemEval 2016 competition, and also to the performance of the recent DNN-based HyperQA and AI-CNN systems.', 'In the Semeval 2016 competition, our model would have been the first... | [None, ['Kelp [#1] (Filice et al. 2016)', 'Conv-KN [#2] (Barron-Cedeno et al. 2016)', 'CTKC +VQF (Tymoshenko et al. 2016b)', 'HyperQA (Tay et al. 2018)', 'AI-CNN (Zhang et al. 2017)', 'Our model (V+Bcr+Ecr+E+SST)'], ['Our model (V+Bcr+Ecr+E+SST)', 'Kelp [#1] (Filice et al. 2016)', 'MAP'], ['Our model (V+Bcr+Ecr+E+SST)'... | 1 |
D18-1244table_1 | Results on TACRED. Underscore marks highest number among single models; bold marks highest among all. † marks results reported in (Zhang et al., 2017); ‡ marks results produced with our implementation. ∗ marks statistically significant improvements over PA-LSTM with p < .01 under a bootstrap test. | 2 | [['System', 'LR (Zhang+2017)'], ['System', 'SDP-LSTM (Xu+2015b)'], ['System', 'Tree-LSTM (Tai+2015)'], ['System', 'PA-LSTM (Zhang+2017)'], ['System', 'GCN'], ['System', 'C-GCN'], ['System', 'GCN + PA-LSTM'], ['System', 'C-GCN + PA-LSTM']] | 1 | [['P'], ['R'], ['F1']] | [['73.5', '49.9', '59.4'], ['66.3', '52.7', '58.7'], ['66', '59.2', '62.4'], ['65.7', '64.5', '65.1'], ['69.8', '59', '64'], ['69.9', '63.3', '66.4'], ['71.7', '63', '67.1'], ['71.3', '65.4', '68.2']] | column | ['P', 'R', 'F1'] | ['GCN', 'C-GCN', 'GCN + PA-LSTM', 'C-GCN + PA-LSTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System || LR (Zhang+2017)</td> <td>73.5</td> <td>49.9</td> <td>59.4</td> </tr> <tr> <td>System || SDP-LS... | Table 1 | table_1 | D18-1244 | 6 | emnlp2018 | 5.3 Results on the TACRED Dataset. We present our main results on the TACRED test set in Table 1. We observe that our GCN model outperforms all dependency-based models by at least 1.6 F1. By using contextualized word representations, the C-GCN model further outperforms the strong PA-LSTM model by 1.3 F1, and achieves a... | [2, 1, 1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1] | ['5.3 Results on the TACRED Dataset.', 'We present our main results on the TACRED test set in Table 1.', 'We observe that our GCN model outperforms all dependency-based models by at least 1.6 F1.', 'By using contextualized word representations, the C-GCN model further outperforms the strong PA-LSTM model by 1.3 F1, and...
| [None, ['GCN', 'C-GCN', 'GCN + PA-LSTM', 'C-GCN + PA-LSTM'], ['GCN', 'LR (Zhang+2017)', 'SDP-LSTM (Xu+2015b)', 'Tree-LSTM (Tai+2015)', 'F1'], ['C-GCN', 'PA-LSTM (Zhang+2017)', 'F1'], ['C-GCN', 'LR (Zhang+2017)', 'SDP-LSTM (Xu+2015b)', 'Tree-LSTM (Tai+2015)', 'P', 'R'], ['GCN', 'C-GCN', 'R'], ['C-GCN'], ['GCN', 'PA-LSTM... | 1 |
D18-1259table_4 | Main results: the performance of question answering and supporting fact prediction in the two benchmark settings. We encourage researchers to report these metrics when evaluating their methods. | 4 | [['Setting', 'distractor', 'Split', 'dev'], ['Setting', 'distractor', 'Split', 'test'], ['Setting', 'full wiki', 'Split', 'dev'], ['Setting', 'full wiki', 'Split', 'test']] | 2 | [['Answer', 'EM'], ['Answer', 'F1'], ['Sup Fact', 'EM'], ['Sup Fact', 'F1'], ['Joint', 'EM'], ['Joint', 'F1']] | [['44.44', '58.28', '21.95', '66.66', '11.56', '40.86'], ['45.46', '58.99', '22.24', '66.62', '12.04', '41.37'], ['24.68', '34.36', '5.28', '40.98', '2.54', '17.73'], ['25.23', '34.40', '5.07', '40.69', '2.63', '17.85']] | column | ['EM', 'F1', 'EM', 'F1', 'EM', 'F1'] | ['distractor', 'full wiki'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Answer || EM</th> <th>Answer || F1</th> <th>Sup Fact || EM</th> <th>Sup Fact || F1</th> <th>Joint || EM</th> <th>Joint || F1</th> </tr> </thead> <tbody> <tr> <td>Setting || dist... | Table 4 | table_4 | D18-1259 | 8 | emnlp2018 | The performance of our model on the benchmark settings is reported in Table 4, where all numbers are obtained with strong supervision over supporting facts. From the distractor setting to the full wiki setting, expanding the scope of the context increases the difficulty of question answering. The performance in the ful... | [1, 1, 1, 2, 1, 1] | ['The performance of our model on the benchmark settings is reported in Table 4, where all numbers are obtained with strong supervision over supporting facts.', 'From the distractor setting to the full wiki setting, expanding the scope of the context increases the difficulty of question answering.', 'The performance in... 
| [None, ['distractor', 'full wiki', 'Answer'], ['full wiki', 'Answer', 'Sup Fact', 'Joint', 'EM', 'F1'], ['distractor', 'full wiki'], ['Sup Fact'], ['Sup Fact', 'F1', 'Joint', 'distractor', 'full wiki']] | 1 |
D18-1259table_9 | Retrieval performance comparison on full wiki setting for train-medium, dev and test with 1,000 random samples each. MAP and are in %. Mean Rank averages over retrieval ranks of two gold paragraphs. CorAns Rank refers to the rank of the gold paragraph containing the answer. | 2 | [['Set', 'train-medium'], ['Set', 'dev'], ['Set', 'test']] | 1 | [['MAP'], ['Mean Rank'], ['CorAns Rank']] | [['41.89', '288.19', '82.76'], ['42.79', '304.30', '97.93'], ['45.92', '286.20', '74.85']] | column | ['MAP', 'Mean Rank', 'CorAns Rank'] | ['train-medium'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>Mean Rank</th> <th>CorAns Rank</th> </tr> </thead> <tbody> <tr> <td>Set || train-medium</td> <td>41.89</td> <td>288.19</td> <td>82.76</td> </tr> <tr> <td... | Table 9 | table_9 | D18-1259 | 12 | emnlp2018 | Table 9 shows the comparison between train-medium split and hard examples like dev and test under retrieval metrics in full wiki setting. As we can see, the performance gap between train-medium split and its dev/test is close, which implies that train-medium split has a similar level of difficulty as hard examples under... | [1, 1] | ['Table 9 shows the comparison between train-medium split and hard examples like dev and test under retrieval metrics in full wiki setting.', 'As we can see, the performance gap between train-medium split and its dev/test is close, which implies that train-medium split has a similar level of difficulty as hard examples ... | [['train-medium', 'dev', 'test', 'MAP', 'Mean Rank', 'CorAns Rank'], ['train-medium', 'dev', 'test']] | 1 |
D18-1262table_3 | Results on the English out-of-domain test set. | 3 | [['System', 'Local model', 'Lei et al. (2015)'], ['System', 'Local model', 'FitzGerald et al. (2015)'], ['System', 'Local model', 'Roth and Lapata (2016)'], ['System', 'Local model', 'Marcheggiani et al. (2017)'], ['System', 'Local model', 'Marcheggiani and Titov (2017)'], ['System', 'Local model', 'He et al. (2018)'],... | 1 | [['P'], ['R'], ['F1']] | [['-', '-', '75.6'], ['-', '-', '75.2'], ['76.9', '73.8', '75.3'], ['79.4', '76.2', '77.7'], ['78.5', '75.9', '77.2'], ['81.9', '76.9', '79.3'], ['79.8', '78.3', '79.0'], ['80.6', '79.0', '79.8'], ['81.0', '78.2', '79.6'], ['80.4', '78.7', '79.5'], ['77.9', '73.6', '75.7'], ['-', '-', '75.2'], ['78.6', '73.8', '76.1'],... | column | ['P', 'R', 'F1'] | ['Ours (Syn-GCN)', 'Ours (SA-LSTM)', 'Ours (Tree-LSTM)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System || Local model || Lei et al. (2015)</td> <td>-</td> <td>-</td> <td>75.6</td> </tr> <tr> <td>Syste... | Table 3 | table_3 | D18-1262 | 6 | emnlp2018 | Table 3 presents the results on English out-of-domain test set. Our models outperform the highest records achieved by He et al. (2018), with absolute improvements of 0.2-0.5% in F1 scores. These favorable results on both in-domain and out-of-domain data demonstrate the effectiveness and robustness of our proposed unifie... | [1, 1, 1] | ['Table 3 presents the results on English out-of-domain test set.', 'Our models outperform the highest records achieved by He et al. (2018), with absolute improvements of 0.2-0.5% in F1 scores.', 'These favorable results on both in-domain and out-of-domain data demonstrate the effectiveness and robustness of our propose... | [None, ['Ours (Syn-GCN)', 'Ours (SA-LSTM)', 'Ours (Tree-LSTM)', 'F1', 'He et al. (2018)'], ['Ours (Syn-GCN)', 'Ours (SA-LSTM)', 'Ours (Tree-LSTM)']] | 1 |
D18-1262table_6 | Comparison of models with deep encoder and M&T encoder (Marcheggiani and Titov, 2017) on the English test set. | 2 | [['Our system', 'Baseline (syntax-agnostic)'], ['Our system', 'Syn-GCN'], ['Our system', 'SA-LSTM'], ['Our system', 'Tree-LSTM'], ['Our system', 'Syn-GCN (M&T encoder)'], ['Our system', 'SA-LSTM (M&T encoder)'], ['Our system', 'Tree-LSTM (M&T encoder)']] | 1 | [['P'], ['R'], ['F1']] | [['89.5', '87.9', '88.7'], ['90.3', '89.3', '89.8'], ['90.8', '88.6', '89.7'], ['90.0', '88.8', '89.4'], ['89.2', '88.0', '88.6'], ['89.8', '88.8', '89.3'], ['90.0', '87.8', '88.9']] | column | ['P', 'R', 'F1'] | ['Syn-GCN', 'SA-LSTM', 'Tree-LSTM', 'Syn-GCN (M&T encoder)', 'SA-LSTM (M&T encoder)', 'Tree-LSTM (M&T encoder)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Our system || Baseline (syntax-agnostic)</td> <td>89.5</td> <td>87.9</td> <td>88.7</td> </tr> <tr> <td>O... | Table 6 | table_6 | D18-1262 | 7 | emnlp2018 | To further investigate the impact of deep encoder, we perform our Syn-GCN, SA-LSTM and Tree-LSTM models with another alternative configuration, using the same encoder as (Marcheggiani and Titov, 2017) (M&T encoder for short), which removes the residual connections from our framework. The corresponding results of our mo... | [2, 1, 1, 1, 1, 2] | ['To further investigate the impact of deep encoder, we perform our Syn-GCN, SA-LSTM and Tree-LSTM models with another alternative configuration, using the same encoder as (Marcheggiani and Titov, 2017) (M&T encoder for short), which removes the residual connections from our framework.', 'The corresponding results of o... | [['Syn-GCN', 'SA-LSTM', 'Tree-LSTM'], ['Syn-GCN', 'SA-LSTM', 'Tree-LSTM'], ['Baseline (syntax-agnostic)'], ['Syn-GCN (M&T encoder)', 'Syn-GCN', 'F1'], ['SA-LSTM', 'Tree-LSTM', 'SA-LSTM (M&T encoder)', 'Tree-LSTM (M&T encoder)', 'F1'], None] | 1 |
D18-1262table_7 | Results on English test set, in terms of labeled attachment score for syntactic dependencies (LAS), semantic precision (P), semantic recall (R), semantic labeled F1 score (Sem-F1), the ratio Sem-F1/LAS. All numbers are in percent. A superscript * indicates LAS results from our personal communication with the authors. | 2 | [['System', 'Zhao et al. (2009c) [SRL-only]'], ['System', 'Zhao et al. (2009a) [Joint]'], ['System', 'Bjorkelund et al.(2010)'], ['System', 'Lei et al. (2015)'], ['System', 'Roth and Lapata (2016)'], ['System', 'Marcheggiani and Titov (2017)'], ['System', 'He et al. (2018) [CoNLL-2009 predicted]'], ['System', 'He et al... | 1 | [['LAS'], ['P'], ['R'], ['Sem-F1'], ['Sem-F1/LAS']] | [['86.0', '-', '-', '85.4', '99.3'], ['89.2', '-', '-', '86.2', '96.6'], ['89.8', '87.1', '84.5', '85.8', '95.6'], ['90.4', '-', '-', '86.6', '95.8'], ['89.8', '88.1', '85.3', '86.7', '96.5'], ['90.34*', '89.1', '86.8', '88.0', '97.41'], ['86.0', '89.7', '89.3', '89.5', '104.0'], ['100', '91.0', '89.7', '90.3', '90.3']... | column | ['LAS', 'P', 'R', 'Sem-F1', 'Sem-F1/LAS'] | ['Our Syn-GCN (CoNLL-2009 predicted)', 'Our Syn-GCN (Biaffine Parser)', 'Our Syn-GCN (BIST Parser)', 'Our Syn-GCN (Gold syntax)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LAS</th> <th>P</th> <th>R</th> <th>Sem-F1</th> <th>Sem-F1/LAS</th> </tr> </thead> <tbody> <tr> <td>System || Zhao et al. (2009c) [SRL-only]</td> <td>86.0</td> <td>-</td> ... | Table 7 | table_7 | D18-1262 | 8 | emnlp2018 | Comparison and Discussion. Table 7 presents the comprehensive results of our Syn-GCN model on the four syntactic inputs aforementioned of different quality together with previous SRL models. A number of observations can be made from these results. First, our model gives quite stable SRL performance no matter the syntac... 
| [2, 1, 2, 1, 1, 2, 1, 1, 2] | ['Comparison and Discussion.', 'Table 7 presents the comprehensive results of our Syn-GCN model on the four syntactic inputs aforementioned of different quality together with previous SRL models.', 'A number of observations can be made from these results.', 'First, our model gives quite stable SRL performance no matter... | [None, ['Our Syn-GCN (CoNLL-2009 predicted)', 'Our Syn-GCN (Biaffine Parser)', 'Our Syn-GCN (BIST Parser)', 'Our Syn-GCN (Gold syntax)'], None, ['Our Syn-GCN (CoNLL-2009 predicted)', 'Our Syn-GCN (Biaffine Parser)', 'Our Syn-GCN (BIST Parser)', 'Our Syn-GCN (Gold syntax)', 'P', 'R', 'Sem-F1'], ['Our Syn-GCN (CoNLL-2009... | 1 |
D18-1263table_3 | Evaluation of different DFS orderings, in labeled F1 score, across the different tasks. | 1 | [['Random'], ['Sentence order'], ['Closest words'], ['Smaller-first']] | 1 | [['DM'], ['PAS'], ['PSD'], ['Avg.']] | [['86.1', '87.7', '78.4', '84.1'], ['87.2', '90.3', '79.9', '85.8'], ['87.5', '89.8', '79.7', '85.8'], ['87.9', '90.9', '80.3', '86.2']] | column | ['F1', 'F1', 'F1', 'F1'] | ['Random', 'Sentence order', 'Closest words', 'Smaller-first'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DM</th> <th>PAS</th> <th>PSD</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Random</td> <td>86.1</td> <td>87.7</td> <td>78.4</td> <td>84.1</td> </tr> <tr> ... | Table 3 | table_3 | D18-1263 | 8 | emnlp2018 | DFS order matters. Table 3 depicts our model's performance when linearizing the graphs according to the different traversal orders discussed and exemplified in Table 2. Overall, we find that the “smaller-first” approach performs best across all datasets, and that imposing one of our orders is always preferable over ran... | [2, 1, 1, 2] | ['DFS order matters.', "Table 3 depicts our model's performance when linearizing the graphs according to the different traversal orders discussed and exemplified in Table 2.", 'Overall, we find that the “smaller-first” approach performs best across all datasets, and that imposing one of our orders is always preferable ... | [None, ['Random', 'Sentence order', 'Closest words', 'Smaller-first'], ['Smaller-first', 'Random'], ['Smaller-first']] | 1 |
D18-1263table_4 | Evaluation of our model (labeled F1 score) versus the current state of the art. “Single” denotes training a different encoder-decoder for each task. “MTL PRIMARY” reports the performance of multi-task learning on only the PRIMARY tasks. “MTL PRIMARY+AUX” shows the performance of our full model, including MTL with the A... | 1 | [['Peng et al. (2017a)'], ['Single'], ['MTL PRIMARY'], ['MTL PRIMARY+AUX']] | 1 | [['DM'], ['PAS'], ['PSD'], ['Avg.']] | [['90.4', '92.7', '78.5', '87.2'], ['70.1', '73.6', '63.6', '69.1'], ['82.4', '87.2', '71.4', '80.3'], ['87.9', '90.9', '80.3', '86.2']] | column | ['F1', 'F1', 'F1', 'F1'] | ['Single', 'MTL PRIMARY', 'MTL PRIMARY+AUX'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DM</th> <th>PAS</th> <th>PSD</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Peng et al. (2017a)</td> <td>90.4</td> <td>92.7</td> <td>78.5</td> <td>87.2</td> </t... | Table 4 | table_4 | D18-1263 | 9 | emnlp2018 | From English to SDP. Table 4 presents the performance of our complete model (“MTL PRIMARY+AUX”) versus Peng et al. (2017a). On average, our model performs within 1% F1 point from the state-of-the art (outperforming it on the harder PSD task), despite using the more general sequence-to-sequence approach instead of a ded... | [2, 1, 1, 1] | ['From English to SDP.', 'Table 4 presents the performance of our complete model (“MTL PRIMARY+AUX”) versus Peng et al. (2017a).', 'On average, our model performs within 1% F1 point from the state-of-the art (outperforming it on the harder PSD task), despite using the more general sequence-to-sequence approach instead ... | [None, ['MTL PRIMARY+AUX', 'Peng et al. (2017a)'], ['Peng et al. (2017a)', 'MTL PRIMARY+AUX', 'Avg.', 'PSD'], ['Single', 'MTL PRIMARY', 'MTL PRIMARY+AUX']] | 1 |
D18-1263table_5 | Performance (labeled F1 score) of our model versus the state of the art, when reducing the amount of overlap in the training data to 10%. | 1 | [['Peng et al. (2017a)'], ['MTL PRIMARY+AUX']] | 1 | [['DM'], ['PAS'], ['PSD'], ['Avg.']] | [['86.8', '90.5', '77.3', '84.9'], ['87.1', '89.6', '79.1', '85.3']] | column | ['F1', 'F1', 'F1', 'F1'] | ['MTL PRIMARY+AUX'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DM</th> <th>PAS</th> <th>PSD</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Peng et al. (2017a)</td> <td>86.8</td> <td>90.5</td> <td>77.3</td> <td>84.9</td> </t... | Table 5 | table_5 | D18-1263 | 9 | emnlp2018 | Simulating disjoint annotations. In contrast with SDP's complete overlap of annotated sentences, multi-task learning often deals with disjoint training data. To simulate such scenario, we retrained the models on a randomly selected set of 33% of the train sentences for each representation (11, 886 sentences), such that... | [2, 2, 2, 1, 2] | ['Simulating disjoint annotations.', "In contrast with SDP's complete overlap of annotated sentences, multi-task learning often deals with disjoint training data.", 'To simulate such scenario, we retrained the models on a randomly selected set of 33% of the train sentences for each representation (11, 886 sentences), s... | [None, None, ['Peng et al. (2017a)', 'MTL PRIMARY+AUX'], ['MTL PRIMARY+AUX', 'Peng et al. (2017a)', 'DM', 'PSD', 'Avg.'], ['MTL PRIMARY+AUX']] | 1 |
D18-1264table_3 | The intrinsic evaluation results. | 2 | [['Aligner', 'JAMR'], ['Aligner', 'Our']] | 1 | [['Alignment F1 (on hand-align)'], ['Oracle’s Smatch (on dev. dataset)']] | [['90.6', '91.7'], ['95.2', '94.7']] | column | ['Alignment F1 (on hand-align)', 'Oracle’s Smatch (on dev. dataset)'] | ['Our'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Alignment F1 (on hand-align)</th> <th>Oracle’s Smatch (on dev. dataset)</th> </tr> </thead> <tbody> <tr> <td>Aligner || JAMR</td> <td>90.6</td> <td>91.7</td> </tr> <tr> <td>Ali... | Table 3 | table_3 | D18-1264 | 7 | emnlp2018 | Intrinsic Evaluation. Table 3 shows the intrinsic evaluation results, in which our alignment intrinsically outperforms JAMR aligner by achieving better alignment F1 score and leading to a higher scored oracle parser. | [2, 1] | ['Intrinsic Evaluation.', 'Table 3 shows the intrinsic evaluation results, in which our alignment intrinsically outperforms JAMR aligner by achieving better alignment F1 score and leading to a higher scored oracle parser.'] | [None, ['Our', 'JAMR', 'Alignment F1 (on hand-align)']] | 1 |
D18-1264table_4 | The parsing results. | 3 | [['model', 'JAMR parser: Word POS NER DEP', '+ JAMR aligner'], ['model', 'JAMR parser: Word POS NER DEP', '+ Our aligner'], ['model', 'CAMR parser: Word POS NER DEP', '+ JAMR aligner'], ['model', 'CAMR parser: Word POS NER DEP', '+ Our aligner']] | 1 | [['newswire'], ['all']] | [['71.3', '65.9'], ['73.1', '67.6'], ['68.4', '64.6'], ['68.8', '65.1']] | column | ['accuracy', 'accuracy'] | ['+ Our aligner'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>newswire</th> <th>all</th> </tr> </thead> <tbody> <tr> <td>model || JAMR parser: Word POS NER DEP || + JAMR aligner</td> <td>71.3</td> <td>65.9</td> </tr> <tr> <td>model || JAM... | Table 4 | table_4 | D18-1264 | 7 | emnlp2018 | Extrinsic Evaluation. Table 4 shows the results. From this table, we can see that our alignment consistently improves all the parsers by a margin ranging from 0.5 to 1.7. Both the intrinsic and the extrinsic evaluations show the effectiveness of our aligner. | [2, 1, 1, 2] | ['Extrinsic Evaluation.', 'Table 4 shows the results.', 'From this table, we can see that our alignment consistently improves all the parsers by a margin ranging from 0.5 to 1.7.', 'Both the intrinsic and the extrinsic evaluations show the effectiveness of our aligner.'] | [None, None, ['JAMR parser: Word POS NER DEP', 'CAMR parser: Word POS NER DEP', '+ JAMR aligner', '+ Our aligner', 'all'], ['+ Our aligner']] | 1 |
D18-1268table_3 | The accuracy@k scores of all methods in bilingual lexicon induction on LEX-C. The best score for each language pair is bold-faced for the supervised and unsupervised categories, respectively. Languages are paired among English(en), Bulgarian(bg), Catalan(ca), Swedish(sv) and Latvian(lv). ”-” means that during the train... | 3 | [['Methods', 'Supervised', 'Mikolov et al. (2013)'], ['Methods', 'Supervised', 'Zhang et al. (2016)'], ['Methods', 'Supervised', 'Xing et al. (2015)'], ['Methods', 'Supervised', 'Shigeto et al. (2015)'], ['Methods', 'Supervised', 'Artetxe et al. (2016)'], ['Methods', 'Supervised', 'Artetxe et al. (2017)'], ['Methods', ... | 1 | [['bg-en'], ['en-bg'], ['ca-en'], ['en-ca'], ['sv-en'], ['en-sv'], ['lv-en'], ['en-lv']] | [['44.80', '48.47', '57.73', '66.20', '43.73', '63.73', '26.53', '28.93'], ['50.60', '39.73', '63.40', '58.73', '50.87', '53.93', '34.53', '22.87'], ['50.33', '40.00', '63.40', '58.53', '51.13', '53.73', '34.27', '21.60'], ['61.00', '33.80', '69.33', '53.60', '61.27', '41.67', '42.20', '13.87'], ['53.27', '43.40', '65.... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>bg-en</th> <th>en-bg</th> <th>ca-en</th> <th>en-ca</th> <th>sv-en</th> <th>en-sv</th> <th>lv-en</th> <th>en-lv</th> </tr> </thead> <tbody> <tr> <td>Methods || Supervis... | Table 3 | table_3 | D18-1268 | 7 | emnlp2018 | Table 3 and Table 4 summarize the results of all the methods on the LEX-C dataset. Several points may be worth noticing. Firstly, the performance scores on LEX-C are not necessarily consistent with those on LEX-Z (Table 2) even if the methods and the language pairs are the same; this is not surprising as the two datase... 
| [1, 2, 1, 1, 2, 1, 1, 2] | ['Table 3 and Table 4 summarize the results of all the methods on the LEX-C dataset.', 'Several points may be worth noticing.', 'Firstly, the performance scores on LEX-C are not necessarily consistent with those on LEX-Z (Table 2) even if the methods and the language pairs are the same; this is not surprising as the tw... | [['Mikolov et al. (2013)', 'Zhang et al. (2016)', 'Xing et al. (2015)', 'Shigeto et al. (2015)', 'Artetxe et al. (2016)', 'Artetxe et al. (2017)', 'Conneau et al. (2017)', 'Zhang et al. (2017a)', 'Ours'], None, None, ['Supervised', 'Unsupervised'], None, ['bg-en', 'en-bg', 'ca-en', 'en-ca', 'sv-en', 'en-sv', 'lv-en', '... | 1 |
D18-1268table_4 | The accuracy@k scores of all methods in bilingual lexicon induction on LEX-C. The best score for each language pair is bold-faced for the supervised and unsupervised categories, respectively. Languages are paired among English (en), German (de), Spanish (es), French (fr) and Italian (it). ”-” means that during the trai... | 3 | [['Methods', 'Supervised', 'Mikolov et al. (2013)'], ['Methods', 'Supervised', 'Zhang et al. (2016)'], ['Methods', 'Supervised', 'Xing et al. (2015)'], ['Methods', 'Supervised', 'Shigeto et al. (2015)'], ['Methods', 'Supervised', 'Artetxe et al. (2016)'], ['Methods', 'Supervised', 'Artetxe et al. (2017)'], ['Methods', ... | 1 | [['de-en'], ['en-de'], ['es-en'], ['en-es'], ['fr-en'], ['en-fr'], ['it-en'], ['en-it']] | [['61.93', '73.07', '74.00', '80.73', '71.33', '82.20', '68.93', '77.60'], ['67.67', '69.87', '77.27', '78.53', '76.07', '78.20', '72.40', '73.40'], ['67.73', '69.53', '77.20', '78.60', '76.33', '78.67', '72.00', '73.33'], ['71.07', '63.73', '81.07', '74.53', '79.93', '73.13', '76.47', '68.13'], ['69.13', '72.13', '78.... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>de-en</th> <th>en-de</th> <th>es-en</th> <th>en-es</th> <th>fr-en</th> <th>en-fr</th> <th>it-en</th> <th>en-it</th> </tr> </thead> <tbody> <tr> <td>Methods || Supervis... | Table 4 | table_4 | D18-1268 | 8 | emnlp2018 | Table 3 and Table 4 summarize the results of all the methods on the LEX-C dataset. Several points may be worth noticing. Firstly, the performance scores on LEX-C are not necessarily consistent with those on LEX-Z (Table 2) even if the methods and the language pairs are the same; this is not surprising as the two datase... 
| [1, 2, 1, 1, 2, 1, 1, 2] | ['Table 3 and Table 4 summarize the results of all the methods on the LEX-C dataset.', 'Several points may be worth noticing.', 'Firstly, the performance scores on LEX-C are not necessarily consistent with those on LEX-Z (Table 2) even if the methods and the language pairs are the same; this is not surprising as the tw... | [['Mikolov et al. (2013)', 'Zhang et al. (2016)', 'Xing et al. (2015)', 'Shigeto et al. (2015)', 'Artetxe et al. (2016)', 'Artetxe et al. (2017)', 'Conneau et al. (2017)', 'Zhang et al. (2017a)', 'Ours'], None, None, ['Supervised', 'Unsupervised'], None, ['de-en', 'en-de', 'es-en', 'en-es', 'fr-en', 'en-fr', 'it-en', '... | 1 |
D18-1268table_5 | Performance (measured using Pearson correlation) of all the methods in cross-lingual semantic word similarity prediction on the benchmark data from Conneau et al. (2017). The best score in the supervised and unsupervised category is bold-faced, respectively. The languages include English (en), German (de), Spanish (es)... | 3 | [['Methods', 'Supervised', 'Mikolov et al. (2013)'], ['Methods', 'Supervised', 'Zhang et al. (2016)'], ['Methods', 'Supervised', 'Xing et al. (2015)'], ['Methods', 'Supervised', 'Shigeto et al. (2015)'], ['Methods', 'Supervised', 'Artetxe et al. (2016)'], ['Methods', 'Supervised', 'Artetxe et al. (2017)'], ['Methods', ... | 1 | [['de-en'], ['es-en'], ['fa-en'], ['it-en']] | [['0.71', '0.72', '0.68', '0.71'], ['0.71', '0.71', '0.69', '0.71'], ['0.72', '0.71', '0.69', '0.72'], ['0.72', '0.72', '0.69', '0.71'], ['0.73', '0.72', '0.70', '0.73'], ['0.70', '0.70', '0.67', '0.71'], ['0.71', '0.71', '0.68', '0.71'], ['-', '-', '-', '-'], ['0.71', '0.71', '0.67', '0.71']] | column | ['correlation', 'correlation', 'correlation', 'correlation'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>de-en</th> <th>es-en</th> <th>fa-en</th> <th>it-en</th> </tr> </thead> <tbody> <tr> <td>Methods || Supervised || Mikolov et al. (2013)</td> <td>0.71</td> <td>0.72</td> <td>... | Table 5 | table_5 | D18-1268 | 8 | emnlp2018 | Table 5 summarizes the performance of all the methods in cross-lingual word similarity prediction. We can see that the unsupervised methods, including ours, perform equally well as the supervised methods, which is highly encouraging. | [1, 1] | ['Table 5 summarizes the performance of all the methods in cross-lingual word similarity prediction.', 'We can see that the unsupervised methods, including ours, perform equally well as the supervised methods, which is highly encouraging.'] | [['Mikolov et al. (2013)', 'Zhang et al. (2016)', 'Xing et al. (2015)', 'Shigeto et al. 
(2015)', 'Artetxe et al. (2016)', 'Artetxe et al. (2017)', 'Conneau et al. (2017)', 'Zhang et al. (2017a)', 'Ours'], ['Unsupervised', 'Supervised', 'Ours']] | 1 |
D18-1270table_7 | Linking accuracy of the zero-shot (Z-S) approach on different datasets. Zero-shot (w/ prior) is close to SoTA for datasets like TAC15-Test, but performance drops in the more realistic setting of zero-shot (w/o prior) (§6.1) on all datasets, indicating most of the performance can be attributed to the presence of prior p... | 2 | [['Approach', 'XELMS (Z-S w/ prior)'], ['Approach', 'XELMS (Z-S w/o prior)'], ['Approach', 'SoTA']] | 3 | [['Dataset', 'TAC15-Test', '(es)'], ['Dataset', 'TAC15-Test', ' (zh)'], ['Dataset', 'TH-Test', ' (avg)'], ['Dataset', 'McN-Test', ' (avg)']] | [['80.3', '83.9', '43.5', '88.1'], ['53.5', '55.9', '41.1', '86.0'], ['83.9', '85.9', '54.7', '89.4']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['XELMS (Z-S w/ prior)', 'XELMS (Z-S w/o prior)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dataset || TAC15-Test || (es)</th> <th>Dataset || TAC15-Test || (zh)</th> <th>Dataset || TH-Test || (avg)</th> <th>Dataset || McN-Test || (avg)</th> </tr> </thead> <tbody> <tr> <td>App... | Table 7 | table_7 | D18-1270 | 8 | emnlp2018 | Is zero-shot XEL really effective?. To evaluate the effectiveness of the zero-shot XEL approach, we perform zero-shot XEL using XELMS on all datasets. Table 7 shows zero-shot XEL results on all datasets, both with and without using the prior during inference. Note that zero-shot XEL (with prior) is close to SoTA (Sil e... | [2, 2, 1, 1, 1, 2] | ['Is zero-shot XEL really effective?.', 'To evaluate the effectiveness of the zero-shot XEL approach, we perform zero-shot XEL using XELMS on all datasets.', 'Table 7 shows zero-shot XEL results on all datasets, both with and without using the prior during inference.', 'Note that zero-shot XEL (with prior) is close to ... 
| [None, ['XELMS (Z-S w/ prior)', 'XELMS (Z-S w/o prior)', 'TAC15-Test', 'TH-Test', 'McN-Test'], ['XELMS (Z-S w/ prior)', 'XELMS (Z-S w/o prior)', 'TAC15-Test', 'TH-Test', 'McN-Test'], ['XELMS (Z-S w/ prior)', 'SoTA', 'TAC15-Test'], ['XELMS (Z-S w/ prior)', 'XELMS (Z-S w/o prior)', 'TAC15-Test', 'TH-Test', 'McN-Test'], [... | 1 |
D18-1273table_5 | Correlation results of Trn13, Trn14, Trn15 and D with Tst13, Tst14, Tst15. | 2 | [['Train:Test', 'Trn13 : Tst13'], ['Train:Test', 'Trn14 : Tst14'], ['Train:Test', 'Trn15 : Tst15'], ['Train:Test', 'D : Tst13'], ['Train:Test', 'D : Tst14'], ['Train:Test', 'D : Tst15']] | 1 | [['C (%)']] | [['16.2'], ['53.9'], ['46.7'], ['74.1'], ['80.6'], ['84.2']] | column | ['C (%)'] | ['D : Tst13', 'D : Tst14', 'D : Tst15'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>C (%)</th> </tr> </thead> <tbody> <tr> <td>Train:Test || Trn13 : Tst13</td> <td>16.2</td> </tr> <tr> <td>Train:Test || Trn14 : Tst14</td> <td>53.9</td> </tr> <tr> <td>Tra... | Table 5 | table_5 | D18-1273 | 5 | emnlp2018 | Table 5 illustrates that, for three different testing datasets, the entire generated corpus D achieves 74.1%, 80.6% and 84.2% on Ctrain:test, respectively, which are much higher than that of Trn13, Trn14 and Trn15. This difference may denote the validity of the generated corpus, with adequate spelling errors. | [1, 2] | ['Table 5 illustrates that, for three different testing datasets, the entire generated corpus D achieves 74.1%, 80.6% and 84.2% on Ctrain:test, respectively, which are much higher than that of Trn13, Trn14 and Trn15.', 'This difference may denote the validity of the generated corpus, with adequate spelling errors.'] | [['D : Tst13', 'D : Tst14', 'D : Tst15', 'C (%)', 'Trn13 : Tst13', 'Trn14 : Tst14', 'Trn15 : Tst15'], None] | 1 |
D18-1273table_7 | The performance of Chinese spelling error detection with BiLSTM on Tst13,Tst14,Tst15 (%). Best results are in bold. Trn represents the training dataset provided in the corresponding shared task, e.g., Trn denotes Trn13 in Tst13. | 1 | [['Trn'], ['D-10k'], ['D-20k'], ['D-30k'], ['D-40k'], ['D-50k']] | 2 | [['Tst13', 'P'], ['Tst13', 'R'], ['Tst13', 'F1'], ['Tst14', 'P'], ['Tst14', 'R'], ['Tst14', 'F1'], ['Tst15', 'P'], ['Tst15', 'R'], ['Tst15', 'F1']] | [['24.4', '27.3', '25.8', '49.8', '51.5', '50.6', '40.1', '43.2', '41.6'], ['33.3', '39.6', '36.1', '31.1', '35.1', '32.9', '31.0', '37.0', '33.7'], ['41.1', '50.2', '45.2', '41.1', '50.2', '45.2', '43.0', '54.9', '48.2'], ['47.2', '59.1', '52.5', '40.9', '48.0', '44.2', '50.3', '62.3', '55.7'], ['53.4', '65.0', '58.6'... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Tst13 || P</th> <th>Tst13 || R</th> <th>Tst13 || F1</th> <th>Tst14 || P</th> <th>Tst14 || R</th> <th>Tst14 || F1</th> <th>Tst15 || P</th> <th>Tst15 || R</th> <th>Tst15 || F1</... | Table 7 | table_7 | D18-1273 | 7 | emnlp2018 | Table 7 shows the detection performance on three different testing datasets. We have the following observations. The size of training dataset is important for the model training. For Tst13, D-10k achieves a better F1 score than Trn13. A major reason may be the size of Trn13 (=350, see in Table 3), which is much smaller... | [1, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2, 1, 1, 2, 2, 1, 2] | ['Table 7 shows the detection performance on three different testing datasets.', 'We have the following observations.', 'The size of training dataset is important for the model training.', 'For Tst13, D-10k achieves a better F1 score than Trn13.', 'A major reason may be the size of Trn13 (=350, see in Table 3), which i... 
| [['Tst13', 'Tst14', 'Tst15'], None, None, ['D-10k', 'Tst13', 'Trn'], ['Trn'], ['Trn'], ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k'], None, None, ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k', 'P', 'R', 'F1'], ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k', 'R'], None, ['D-10k', 'D-20k', 'D-30k', 'D-40k', 'D-50k', 'P', 'R... | 1 |
D18-1277table_7 | Effect of using different tuning sets. As usual with early stopping, the best tuning set performance was used to evaluate the test set. Here, we evaluated the same experimental runs at two points: when the performance was best on the WSJ development set, and again when the performance was best on the Noun-Verb developm... | 3 | [['Model', 'WSJ Test Set', 'Bohnet et al. (2018)'], ['Model', 'WSJ Test Set', '+ELMo'], ['Model', 'WSJ Test Set', '+NV Data'], ['Model', 'WSJ Test Set', '+ELMo+NV Data'], ['Model', 'Noun-Verb Test Set', 'Bohnet et al. (2018)'], ['Model', 'Noun-Verb Test Set', '+ELMo'], ['Model', 'Noun-Verb Test Set', '+NV Data'], ['Mod... | 2 | [['Tuning Set', 'WSJ'], ['Tuning Set', 'NV']] | [['98.00±0.12', '97.98±0.13'], ['97.94±0.08', '97.85±0.16'], ['97.98±0.11', '97.94±0.14'], ['97.97±0.09', '97.94±0.13'], ['74.0±1.2', '76.9±0.6†'], ['82.1±0.9', '83.4±0.5†'], ['86.4±0.4', '86.8±0.4'], ['88.9±0.3', '89.3±0.2‡']] | column | ['accuracy', 'accuracy'] | ['Tuning Set'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Tuning Set || WSJ</th> <th>Tuning Set || NV</th> </tr> </thead> <tbody> <tr> <td>Model || WSJ Test Set || Bohnet et al. (2018)</td> <td>98.00±0.12</td> <td>97.98±0.13</td> </tr> <tr... | Table 7 | table_7 | D18-1277 | 7 | emnlp2018 | Impact of Tuning Set. Table 7 compares performance of the same experiments on the WSJ and Noun-Verb Challenge test sets, tuned either using the WSJ or the Noun-Verb development set. The only effect of the change in tuning set was for the Noun-Verb tuning to cause the early stopping to sometimes be a little earlier. Whe... 
| [2, 1, 2, 1, 2, 1, 2] | ['Impact of Tuning Set.', 'Table 7 compares performance of the same experiments on the WSJ and Noun-Verb Challenge test sets, tuned either using the WSJ or the Noun-Verb development set.', 'The only effect of the change in tuning set was for the Noun-Verb tuning to cause the early stopping to sometimes be a little earl... | [None, ['WSJ Test Set', 'Noun-Verb Test Set', 'WSJ', 'NV'], ['Tuning Set', 'WSJ', 'NV'], ['NV', 'WSJ Test Set', 'Noun-Verb Test Set', 'Bohnet et al. (2018)', '+ELMo', '+NV Data', '+ELMo+NV Data'], ['Tuning Set'], ['Noun-Verb Test Set', 'Bohnet et al. (2018)', 'WSJ', 'NV'], ['Noun-Verb Test Set', '+ELMo+NV Data', 'NV']] | 1 |
D18-1278table_6 | LAS results when case information is added. We use bold to highlight the best results for models without explicit access to gold annotations. | 4 | [['Language', 'Czech', 'Input', 'char'], ['Language', 'Czech', 'Input', 'char (multi-task)'], ['Language', 'Czech', 'Input', 'char + predicted case'], ['Language', 'Czech', 'Input', 'char + gold case'], ['Language', 'Czech', 'Input', 'oracle'], ['Language', 'German', 'Input', 'char'], ['Language', 'German', 'Input', 'c... | 1 | [['Dev'], ['Test']] | [['91.2', '90.6'], ['91.6', '91.0'], ['92.2', '91.8'], ['92.3', '91.9'], ['92.5', '92.0'], ['87.5', '84.5'], ['87.9', '84.4'], ['87.8', '86.4'], ['90.2', '86.9'], ['89.7', '86.5'], ['91.6', '92.4'], ['92.2', '92.6'], ['92.5', '93.3'], ['92.8', '93.5'], ['92.6', '93.3']] | column | ['LAS', 'LAS'] | ['char', 'char (multi-task)', 'char + predicted case', 'char + gold case', 'oracle'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Language || Czech || Input || char</td> <td>91.2</td> <td>90.6</td> </tr> <tr> <td>Language || Czech || Input || char (mu... | Table 6 | table_6 | D18-1278 | 7 | emnlp2018 | Table 6 summarizes the results on Czech, German, and Russian. We find augmenting the charlstm model with either oracle or predicted case improve its accuracy, although the effect is different across languages. The improvements from predicted case results are interesting, since in nonneural parsers, predicted case usual... | [1, 1, 2, 2, 1, 2, 2, 1, 2] | ['Table 6 summarizes the results on Czech, German, and Russian.', 'We find augmenting the charlstm model with either oracle or predicted case improve its accuracy, although the effect is different across languages.', 'The improvements from predicted case results are interesting, since in nonneural parsers, predicted ca... | [['Czech', 'German', 'Russian'], ['char + predicted case', 'oracle', 'Dev', 'Test', 'Czech', 'German', 'Russian'], ['char + predicted case'], None, ['char'], None, None, ['char + gold case', 'char + predicted case', 'Czech', 'German', 'Russian', 'oracle'], None] | 1 |
D18-1279table_2 | F1 score of our proposed models in comparison with state-of-the-art results. | 2 | [['Model', 'Conv-CRF+Lexicon (Collobert et al. 2011)'], ['Model', 'LSTM-CRF+Lexicon (Huang et al. 2015)'], ['Model', 'LSTM-CRF+Lexicon+char-CNN (Chiu and Nichols 2016)'], ['Model', 'LSTM-Softmax+char-LSTM (Ling et al. 2015)'], ['Model', 'LSTM-CRF+char-LSTM (Lample et al. 2016)'], ['Model', 'LSTM-CRF+char-CNN (Ma and Ho... | 1 | [['Spanish'], ['Dutch'], ['English'], ['German'], ['Chunking'], ['POS']] | [['-', '-', '89.59', '-', '94.32', '97.29'], ['-', '-', '90.10', '-', '94.46', '97.43'], ['-', '-', '90.77', '-', '-', '-'], ['-', '-', '-', '-', '-', '97.55'], ['85.75', '81.74', '90.94', '78.76', '-', '-'], ['-', '-', '91.21', '-', '-', '97.55'], ['84.69', '85.00', '91.20', '-', '94.66', '97.55'], ['80.33±0.37', '79.... | column | ['F1', 'F1', 'F1', 'F1', 'F1', 'F1'] | ['LSTM-CRF+char-IntNet-9', 'LSTM-CRF+char-IntNet-5'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Spanish</th> <th>Dutch</th> <th>English</th> <th>German</th> <th>Chunking</th> <th>POS</th> </tr> </thead> <tbody> <tr> <td>Model || Conv-CRF+Lexicon (Collobert et al. 2011)</td... | Table 2 | table_2 | D18-1279 | 7 | emnlp2018 | 5.2 State-of-the-art Results. Table 2 presents our proposed model in comparison with state-of-the-art results. LSTM-CRF is our baseline which uses fine-tuned pre-trained word embeddings. Its comparison with LSTMCRF using random initializations for word embeddings, as shown in Table 1, confirms that pre-trained word emb... | [2, 1, 2, 2, 2, 2, 1, 1, 1, 2, 2, 2] | ['5.2 State-of-the-art Results.', 'Table 2 presents our proposed model in comparison with state-of-the-art results.', 'LSTM-CRF is our baseline which uses fine-tuned pre-trained word embeddings.', 'Its comparison with LSTMCRF using random initializations for word embeddings, as shown in Table 1, confirms that pre-train... | [None, ['Conv-CRF+Lexicon (Collobert et al. 2011)', 'LSTM-CRF+Lexicon (Huang et al. 2015)', 'LSTM-CRF+Lexicon+char-CNN (Chiu and Nichols 2016)', 'LSTM-Softmax+char-LSTM (Ling et al. 2015)', 'LSTM-CRF+char-LSTM (Lample et al. 2016)', 'LSTM-CRF+char-CNN (Ma and Hovy 2016)', 'GRM-CRF+char-GRU (Yang et al. 2017)', 'LSTM-CR... | 1 |
D18-1280table_4 | Performance of ICON on the IEMOCAP dataset. † represents statistical significance over state-of-the-art scores under | 2 | [['Models', 'memnet'], ['Models', 'cLSTM'], ['Models', 'TFN'], ['Models', 'MFN'], ['Models', 'CMN'], ['Models', 'ICON']] | 3 | [['IEMOCAP: Emotion Categories', 'Happy', 'acc.'], ['IEMOCAP: Emotion Categories', 'Happy', 'F1'], ['IEMOCAP: Emotion Categories', 'Sad', 'acc.'], ['IEMOCAP: Emotion Categories', 'Sad', 'F1'], ['IEMOCAP: Emotion Categories', 'Neutral', 'acc.'], ['IEMOCAP: Emotion Categories', 'Neutral', 'F1'], ['IEMOCAP: Emotion Catego... | [['24.4', '33.0', '60.4', '69.3', '56.8', '55.0', '67.1', '66.1', '65.2', '62.3', '68.4', '63.0', '59.9', '59.5'], ['25.5', '35.6', '58.6', '69.2', '56.5', '53.5', '70.0', '66.3', '58.8', '61.1', '67.4', '62.4', '59.8', '59.0'], ['23.2', '33.7', '58.0', '68.6', '56.6', '55.1', '69.1', '64.2', '63.1', '62.4', '65.5', '6... | column | ['acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1', 'acc.', 'F1'] | ['ICON'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IEMOCAP: Emotion Categories || Happy || acc.</th> <th>IEMOCAP: Emotion Categories || Happy || F1</th> <th>IEMOCAP: Emotion Categories || Sad || acc.</th> <th>IEMOCAP: Emotion Categories || Sad || F1</... | Table 4 | table_4 | D18-1280 | 7 | emnlp2018 | 6 Results. Tables 4 and 5 present the results on the IEMOCAP and SEMAINE testing sets, respectively. In Table 4, we evaluate the mean classification performance using Weighted Accuracy (acc.) and F1-Score (F1) on the discrete emotion categories.', 'ICON performs better than the compared models with significant ... | [None, ['IEMOCAP: Emotion Categories'], ['acc.', 'F1'], ['ICON', 'acc.'], ['ICON', 'Sad', 'Neutral', 'Angry', 'Excited', 'Frustrated'], ['ICON', 'cLSTM', 'Happy'], ['ICON']] | 1 |