| table_id_paper (string) | caption (string) | row_header_level (int32) | row_headers (large_string) | column_header_level (int32) | column_headers (large_string) | contents (large_string) | metrics_loc (string) | metrics_type (large_string) | target_entity (large_string) | table_html_clean (large_string) | table_name (string) | table_id (string) | paper_id (string) | page_no (int32) | dir (string) | description (large_string) | class_sentence (string) | sentences (large_string) | header_mention (string) | valid (int32) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
P17-1052table_2 | Error rates (%) on larger datasets in comparison with previous models. The previous results are roughly sorted in the order of error rates (best to worst). The best results and the second best are shown in bold and italic, respectively. ‘tv’ stands for tv-embeddings. ‘w2v’ stands for word2vec. ‘(w2v)’ in row 7 indicate... | 2 | [['Models', 'DPCNN + unsupervised embed.'], ['Models', 'ShallowCNN + unsup. embed. [JZ16]'], ['Models', 'Hierarchical attention net [YYDHSH16]'], ['Models', '[CSBL16] char-level CNN: best'], ['Models', 'fastText bigrams (Joulin et al., 2016)'], ['Models', '[ZZL15] char-level CNN: best'], ['Models', '[ZZL15] word-level ... | 1 | [['Yelp.p'], ['Yelp.f'], ['Yahoo'], ['Ama.f'], ['Ama.p']] | [['2.64', '30.58', '23.9', '34.81', '3.32'], ['2.9', '32.39', '24.85', '36.24', '3.79'], ['-', '-', '24.2', '36.4', '-'], ['4.28', '35.28', '26.57', '37', '4.28'], ['4.3', '36.1', '27.7', '39.8', '5.4'], ['4.88', '37.95', '28.8', '40.43', '4.93'], ['4.6', '39.58', '28.84', '42.39', '5.51'], ['4.36', '40.14', '28.96', '... | column | ['Error rates', 'Error rates', 'Error rates', 'Error rates', 'Error rates'] | ['DPCNN + unsupervised embed.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Yelp.p</th> <th>Yelp.f</th> <th>Yahoo</th> <th>Ama.f</th> <th>Ama.p</th> </tr> </thead> <tbody> <tr> <td>Models || DPCNN + unsupervised embed.</td> <td>2.64</td> <td>30.58<... | Table 2 | table_2 | P17-1052 | 6 | acl2017 | We first report the error rates of our full model (DPCNN with 15 weight layers plus unsupervised embeddings) on the larger five datasets (Table 2). To put it into perspective, we also show the previous results in the literature. The previous results are roughly sorted in the order of error rates from best to worst. On ... | [1, 1, 2, 1] | ['We first report the error rates of our full model (DPCNN with 15 weight layers plus unsupervised embeddings) on the larger five datasets (Table 2).', 'To put it into perspective, we also show the previous results in the literature.', 'The previous results are roughly sorted in the order of error rates from best to wo... | [['DPCNN + unsupervised embed.'], None, None, ['DPCNN + unsupervised embed.']] | 1 |
P17-1053table_2 | Accuracy on the SimpleQuestions and WebQSP relation detection tasks (test sets). The top shows performance of baselines. On the bottom we give the results of our proposed model together with the ablation tests. | 4 | [['Model', 'AMPCNN (Yin et al. 2016)', 'Relation Input Views', 'words'], ['Model', 'BiCNN (Yih et al. 2015)', 'Relation Input Views', 'char-3-gram'], ['Model', 'BiLSTM w/ words', 'Relation Input Views', 'words'], ['Model', 'BiLSTM w/ relation names', 'Relation Input Views', 'rel_names'], ['Model', 'Hier-Res-BiLSTM (HR-... | 2 | [['Accuracy', 'SimpleQuestions'], ['Accuracy', 'WebQSP']] | [['91.3', '-'], ['90.0', '77.74'], ['91.2', '79.32'], ['88.9', '78.96'], ['93.3', '82.53']] | column | ['Accuracy', 'Accuracy'] | ['Hier-Res-BiLSTM (HR-BiLSTM)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || SimpleQuestions</th> <th>Accuracy || WebQSP</th> </tr> </thead> <tbody> <tr> <td>Model || AMPCNN (Yin et al. 2016) || Relation Input Views || words</td> <td>91.3</td> <td>-</t... | Table 2 | table_2 | P17-1053 | 8 | acl2017 | Table 2 shows the results on two relation detection tasks. The AMPCNN result is from (Yin et al., 2016), which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from (Yih et al., 2015), where both questions and relations are represented with the word has... | [1, 2, 2, 2, 1, 1, 1, 1] | ['Table 2 shows the results on two relation detection tasks.', 'The AMPCNN result is from (Yin et al., 2016), which yielded state-of-the-art scores by outperforming several attention-based methods.', 'We re-implemented the BiCNN model from (Yih et al., 2015), where both questions and relations are represented with the ... | [None, ['AMPCNN (Yin et al. 2016)'], None, ['BiLSTM w/ words', 'WebQSP'], ['Hier-Res-BiLSTM (HR-BiLSTM)', 'BiLSTM w/ words', 'SimpleQuestions', 'WebQSP'], ['BiLSTM w/ relation names'], ['BiLSTM w/ relation names', 'SimpleQuestions'], ['BiLSTM w/ relation names', 'WebQSP', 'SimpleQuestions']] | 1 |
P17-1085table_1 | Performance on ACE05 test dataset. The dashed (“–”) performance numbers were missing in the original paper (Miwa and Bansal, 2016). | 2 | [['Method', 'Li and Ji (2014)'], ['Method', 'SPTree'], ['Method', 'SPTree1'], ['Method', 'Our Model']] | 2 | [['Entity', 'P'], ['Entity', 'R'], ['Entity', 'F1'], ['Relation', 'P'], ['Relation', 'R'], ['Relation', 'F1'], ['Entity+Relation', 'P'], ['Entity+Relation', 'R'], ['Entity+Relation', 'F1']] | [['0.852', '0.769', '0.808', '0.689', '0.419', '0.521', '0.654', '0.398', '0.495'], ['0.829', '0.839', '0.834', '-', '-', '-', '0.572', '0.54', '0.556'], ['0.823', '0.839', '0.831', '0.605', '0.553', '0.578', '0.578', '0.529', '0.553'], ['0.84', '0.813', '0.826', '0.579', '0.54', '0.559', '0.555', '0.518', '0.536']] | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Entity || P</th> <th>Entity || R</th> <th>Entity || F1</th> <th>Relation || P</th> <th>Relation || R</th> <th>Relation || F1</th> <th>Entity+Relation || P</th> <th>Entity+Relation ... | Table 1 | table_1 | P17-1085 | 7 | acl2017 | Table 1 compares the performance of our system with respect to the baselines on ACE05 dataset. We find that our joint model significantly outperforms the joint structured perceptron model (Li and Ji, 2014) on both entities and relations, despite the unavailability of features such as dependency trees, POS tags, etc. Ho... | [1, 1, 1] | ['Table 1 compares the performance of our system with respect to the baselines on ACE05 dataset.', 'We find that our joint model significantly outperforms the joint structured perceptron model (Li and Ji, 2014) on both entities and relations, despite the unavailability of features such as dependency trees, POS tags, et... | [None, ['Our Model', 'Li and Ji (2014)'], ['Our Model', 'SPTree', 'SPTree1', 'R', 'Entity', 'Relation', 'Entity+Relation']] | 1 |
P17-1171table_5 | Feature ablation analysis of the paragraph representations of our Document Reader. Results are reported on the SQuAD development set. | 2 | [['Features', 'Full'], ['Features', 'No ftoken'], ['Features', 'No fexact match'], ['Features', 'No faligned'], ['Features', 'No faligned and fexact match']] | 1 | [['F1']] | [['78.8'], ['78.0'], ['77.3'], ['77.3'], ['59.4']] | column | ['F1'] | ['Full'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Features || Full</td> <td>78.8</td> </tr> <tr> <td>Features || No ftoken</td> <td>78.0</td> </tr> <tr> <td>Features || No fexact m... | Table 5 | table_5 | P17-1171 | 8 | acl2017 | We conducted an ablation analysis on the feature vector of paragraph tokens. As shown in Table 5 all the features contribute to the performance of our final system. Without the aligned question embedding feature (only word embedding and a few manual features), our system is still able to achieve F1 over 77%. More inter... | [2, 1, 1, 1] | ['We conducted an ablation analysis on the feature vector of paragraph tokens.', 'As shown in Table 5 all the features contribute to the performance of our final system.', 'Without the aligned question embedding feature (only word embedding and a few manual features), our system is still able to achieve F1 over 77%.', ... | [None, ['Full'], ['No faligned'], ['No faligned and fexact match']] | 1 |
P17-1171table_6 | Full Wikipedia results. Top-1 exact-match accuracy (in %, using SQuAD eval script). +Finetune (DS): Document Reader models trained on SQuAD and fine-tuned on each DS training set independently. +Multitask (DS): Document Reader single model trained on SQuAD and all the distant supervision (DS) training sets jointly. Yod... | 2 | [['Dataset', 'SQuAD (All Wikipedia)'], ['Dataset', 'CuratedTREC'], ['Dataset', 'WebQuestions'], ['Dataset', 'WikiMovies']] | 2 | [['YodaQA', '-'], ['DrQA', 'SQuAD'], ['DrQA', '+Fine-tune (DS)'], ['DrQA', '+Multitask (DS)']] | [['n/a', '27.1', '28.4', '29.8'], ['31.3', '19.7', '25.7', '25.4'], ['39.8', '11.8', '19.5', '20.7'], ['n/a', '24.5', '34.3', '36.5']] | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['+Multitask (DS)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>YodaQA || -</th> <th>DrQA || SQuAD</th> <th>DrQA || +Fine-tune (DS)</th> <th>DrQA || +Multitask (DS)</th> </tr> </thead> <tbody> <tr> <td>Dataset || SQuAD (All Wikipedia)</td> <td>n/... | Table 6 | table_6 | P17-1171 | 9 | acl2017 | Table 6 presents the results. Despite the difficulty of the task compared to machine comprehension (where you are given the right paragraph) and unconstrained QA (using redundant resources), DrQA still provides reasonable performance across all four datasets. We are interested in a single, full system that can answer a... | [1, 1, 2, 2, 1, 2, 1, 1, 1, 1] | ['Table 6 presents the results.', 'Despite the difficulty of the task compared to machine comprehension (where you are given the right paragraph) and unconstrained QA (using redundant resources), DrQA still provides reasonable performance across all four datasets.', 'We are interested in a single, full system that can ... | [None, ['DrQA', 'Dataset'], ['SQuAD'], ['SQuAD'], ['SQuAD'], ['+Multitask (DS)'], ['+Multitask (DS)'], ['YodaQA', 'CuratedTREC', 'WebQuestions'], ['YodaQA', 'CuratedTREC', '+Multitask (DS)'], ['WebQuestions', 'YodaQA']] | 1 |
P17-1176table_3 | Comparison with previous work on Spanish-French and German-French translation tasks from the Europarl corpus. English is treated as the pivot language. The likelihood method uses 100K parallel source-target sentences, which are not available for other methods. | 3 | [['Cheng et al. (2016a)', 'Method', 'pivot'], ['Cheng et al. (2016a)', 'Method', 'hard'], ['Cheng et al. (2016a)', 'Method', 'soft'], ['Cheng et al. (2016a)', 'Method', 'likelihood'], ['Ours', 'Method', 'Ours sent-beam'], ['Ours', 'Method', 'word-sampling']] | 1 | [['Es→ Fr'], ['De→ Fr']] | [['29.79', '23.70'], ['29.93', '23.88'], ['30.57', '23.79'], ['32.59', '25.93'], ['31.64', '24.39'], ['33.86', '27.03']] | column | ['BLEU', 'BLEU'] | ['Ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Es→ Fr</th> <th>De→ Fr</th> </tr> </thead> <tbody> <tr> <td>Cheng et al. (2016a) || Method || pivot</td> <td>29.79</td> <td>23.70</td> </tr> <tr> <td>Cheng et al. (2016a) || Me... | Table 3 | table_3 | P17-1176 | 6 | acl2017 | Table 3 gives BLEU scores on the Europarl corpus of our best performing sentence-level method (sent-beam) and word-level method (word-sampling) compared with pivot-based methods (Cheng et al., 2016a). We use the same data preprocessing as (Cheng et al., 2016a). We find that both the sent-beam and word-sampling methods ... | [1, 2, 1, 1, 1] | ['Table 3 gives BLEU scores on the Europarl corpus of our best performing sentence-level method (sent-beam) and word-level method (word-sampling) compared with pivot-based methods (Cheng et al., 2016a).', 'We use the same data preprocessing as (Cheng et al., 2016a).', 'We find that both the sent-beam and word-sampling ... | [['Ours sent-beam', 'word-sampling', 'Cheng et al. (2016a)'], ['Cheng et al. (2016a)'], ['Ours sent-beam', 'word-sampling', 'pivot'], ['word-sampling', 'soft', 'Es→ Fr'], ['word-sampling', 'likelihood']] | 1 |
P17-1185table_2 | Comparison of the methods in terms of the semantic similarity task. Each entry represents the Spearman’s correlation between predicted similarities and the manually assessed ones. | 4 | [['Dim. d', 'd = 100', 'Algorithm', 'SGD-SGNS'], ['Dim. d', 'd = 100', 'Algorithm', 'SVD-SPPMI'], ['Dim. d', 'd = 100', 'Algorithm', 'RO-SGNS'], ['Dim. d', 'd = 200', 'Algorithm', 'SGD-SGNS'], ['Dim. d', 'd = 200', 'Algorithm', 'SVD-SPPMI'], ['Dim. d', 'd = 200', 'Algorithm', 'RO-SGNS'], ['Dim. d', 'd = 500', 'Algorith... | 1 | [['ws-sim'], ['ws-rel'], ['ws-full'], ['simlex'], ['men']] | [['0.719', '0.57', '0.662', '0.288', '0.645'], ['0.722', '0.585', '0.669', '0.317', '0.686'], ['0.729', '0.597', '0.677', '0.322', '0.683'], ['0.733', '0.584', '0.677', '0.317', '0.664'], ['0.747', '0.625', '0.694', '0.347', '0.71'], ['0.757', '0.647', '0.708', '0.353', '0.701'], ['0.738', '0.6', '0.688', '0.35', '0.71... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['RO-SGNS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ws-sim</th> <th>ws-rel</th> <th>ws-full</th> <th>simlex</th> <th>men</th> </tr> </thead> <tbody> <tr> <td>Dim. d || d = 100 || Algorithm || SGD-SGNS</td> <td>0.719</td> <td... | Table 2 | table_2 | P17-1185 | 7 | acl2017 | However, the target performance measure of embedding models is the correlation between semantic similarity and human assessment (Section 4.2). Table 2 presents the comparison of the methods in terms of it. We see that our method outperforms the competitors on all datasets except for men dataset where it obtains slightl... | [2, 1, 1, 1] | ['However, the target performance measure of embedding models is the correlation between semantic similarity and human assessment (Section 4.2).', 'Table 2 presents the comparison of the methods in terms of it.', 'We see that our method outperforms the competitors on all datasets except for men dataset where it obtains... | [None, None, ['RO-SGNS', 'ws-sim', 'ws-rel', 'ws-full', 'simlex', 'men'], ['RO-SGNS', 'Dim. d']] | 1 |
P17-1187table_1 | Evaluation results of word similarity computation. | 2 | [['Model', 'CBOW'], ['Model', 'GloVe'], ['Model', 'Skip-gram'], ['Model', 'SSA'], ['Model', 'SAC'], ['Model', 'MST'], ['Model', 'SAT']] | 1 | [['Wordsim-240'], ['Wordsim-297']] | [['57.7', '61.1'], ['59.8', '58.7'], ['58.5', '63.3'], ['58.9', '64'], ['59', '63.1'], ['59.2', '62.8'], ['63.2', '65.6']] | column | ['correlation', 'correlation'] | ['SSA', 'SAC', 'SAT', 'MST'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Wordsim-240</th> <th>Wordsim-297</th> </tr> </thead> <tbody> <tr> <td>Model || CBOW</td> <td>57.7</td> <td>61.1</td> </tr> <tr> <td>Model || GloVe</td> <td>59.8</td> ... | Table 1 | table_1 | P17-1187 | 6 | acl2017 | Table 1 shows the results of these models for word similarity computation. From the results we can observe that: (1) Our SAT model outperforms other models, including all baselines, on both two test sets. This indicates that, by utilizing sememe annotation properly, our model can better capture the semantic relations o... | [1, 1, 2, 2, 1, 2, 2, 1, 1, 2, 1, 2, 2] | ['Table 1 shows the results of these models for word similarity computation.', 'From the results we can observe that: (1) Our SAT model outperforms other models, including all baselines, on both two test sets.', 'This indicates that, by utilizing sememe annotation properly, our model can better capture the semantic rel... | [None, ['SAT', 'Wordsim-240', 'Wordsim-297'], None, ['SSA'], ['SSA'], None, ['SSA'], ['SAT', 'SSA', 'SAC'], ['SAT'], ['SAC'], ['SAT', 'MST'], None, None] | 1 |
P17-1187table_2 | Evaluation results of word analogy inference. | 2 | [['Model', 'CBOW'], ['Model', 'GloVe'], ['Model', 'Skip-gram'], ['Model', 'SSA'], ['Model', 'SAC'], ['Model', 'MST'], ['Model', 'SAT']] | 2 | [['Accuracy', 'Capital'], ['Accuracy', 'City'], ['Accuracy', 'Relationship'], ['Accuracy', 'All'], ['Mean Rank', 'Capital'], ['Mean Rank', 'City'], ['Mean Rank', 'Relationship'], ['Mean Rank', 'All']] | [['49.8', '85.7', '86', '64.2', '36.98', '1.23', '62.64', '37.62'], ['57.3', '74.3', '81.6', '65.8', '19.09', '1.71', '3.58', '12.63'], ['66.8', '93.7', '76.8', '73.4', '137.19', '1.07', '2.95', '83.51'], ['62.3', '93.7', '81.6', '71.9', '45.74', '1.06', '3.33', '28.52'], ['61.6', '95.4', '77.9', '70.8', '19.08', '1.02... | column | ['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy', 'Mean Rank', 'Mean Rank', 'Mean Rank', 'Mean Rank'] | ['SSA', 'SAC', 'SAT', 'MST'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || Capital</th> <th>Accuracy || City</th> <th>Accuracy || Relationship</th> <th>Accuracy || All</th> <th>Mean Rank || Capital</th> <th>Mean Rank || City</th> <th>Mean Rank || R... | Table 2 | table_2 | P17-1187 | 7 | acl2017 | Table 2 shows the evaluation results of these models for word analogy inference. From the table, we can observe that: (1) The SAT model performs best among all models, and the superiority is more significant than that on word similarity computation. This indicates that SAT will enhance the modeling of implicit relation... | [1, 1, 2, 2, 2, 1, 2, 1, 1, 1, 2, 2, 2, 2] | ['Table 2 shows the evaluation results of these models for word analogy inference.', 'From the table, we can observe that: (1) The SAT model performs best among all models, and the superiority is more significant than that on word similarity computation.', 'This indicates that SAT will enhance the modeling of implicit ... | [None, ['SAT'], ['SAT'], None, None, ['SAT', 'Capital', 'City'], ['SAT'], ['CBOW', 'SAT', 'Relationship'], ['CBOW', 'Mean Rank'], ['SAT', 'CBOW'], ['SAT'], None, ['CBOW'], None] | 1 |
P17-1189table_2 | Results of Chinese SRL tested on CPB and CSB with automatic PoS tagging, using standard LSTM RNN model (Experiment 1). | 2 | [['Corpus', 'CSB'], ['Corpus', 'CPB']] | 1 | [['Pr. (%)'], ['Rec. (%)'], ['F1 (%)']] | [['75.8', '73.45', '74.61'], ['76.75', '73.03', '74.84']] | column | ['Pr. (%)', 'Rec. (%)', 'F1 (%)'] | ['CSB', 'CPB'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pr. (%)</th> <th>Rec. (%)</th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Corpus || CSB</td> <td>75.8</td> <td>73.45</td> <td>74.61</td> </tr> <tr> <td>Corpus ||... | Table 2 | table_2 | P17-1189 | 8 | acl2017 | 5.2 Results . Performance on Chinese SemBank. Table 2 gives the results of Experiment 1. We see that precision on CPB with automatic PoS tagging is about 0.9 percentage point higher than that on CSB, while recall is about 0.4 percentage point lower, and the gap between F1 scores on CPB and CSB is not significant, which... | [2, 2, 1, 1] | ['5.2 Results .', 'Performance on Chinese SemBank.', 'Table 2 gives the results of Experiment 1.', 'We see that precision on CPB with automatic PoS tagging is about 0.9 percentage point higher than that on CSB, while recall is about 0.4 percentage point lower, and the gap between F1 scores on CPB and CSB is not signifi... | [None, None, None, ['Pr. (%)', 'CPB', 'CSB', 'Rec. (%)', 'F1 (%)']] | 1 |
P17-1189table_3 | Result comparison on CPB dataset. Compared to learning with single corpus using bi-LSTM model (77.09%), learning with CSB can improve the performance by at list 0.59%. Also the best score (79.67%) was achieved by the PNN GRA model. | 3 | [['Method', '-', 'Xue (2008) ME'], ['Method', '-', 'Collobert and Weston (2008) MTL'], ['Method', '-', 'Ding and Chang (2009) CRF'], ['Method', '-', 'Yang et al. (2014) Multi-Predicate'], ['Method', '-', 'Wang et al. (2015) bi-LSTM'], ['Method', '-', 'Sha et al. (2016) bi-LSTM+QOM'], ['Method', 'With external language ... | 1 | [['F1 (%)']] | [['71.9'], ['74.05'], ['72.64'], ['75.31'], ['77.09 (+0.00)'], ['77.69'], ['77.21'], ['77.59'], ['75.46'], ['77.68 (+0.59)'], ['78.42 (+1.33)'], ['79.30 (+2.21)'], ['79.67 (+2.58)']] | column | ['F1 (%)'] | ['Two-column progressive (ours)', 'Two-column Progressive+GRA (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Method || - || Xue (2008) ME</td> <td>71.9</td> </tr> <tr> <td>Method || - || Collobert and Weston (2008) MTL</td> <td>74.05</td> </tr... | Table 3 | table_3 | P17-1189 | 8 | acl2017 | Table 3 summarizes the SRL performance of previous benchmark methods and our experiments described above. Collobert and Weston only conducted their experiments on English corpus, but we notice that their approach has been implemented and tested on CPB by Wang et al. (2015), so we also put their result here for comparis... | [1, 2, 2, 1, 2, 1, 2, 1] | ['Table 3 summarizes the SRL performance of previous benchmark methods and our experiments described above.', 'Collobert and Weston only conducted their experiments on English corpus, but we notice that their approach has been implemented and tested on CPB by Wang et al. (2015), so we also put their result here for com... | [None, ['Collobert and Weston (2008) MTL', 'Wang et al. (2015) bi-LSTM'], None, ['Two-column Progressive+GRA (ours)', 'Sha et al. (2016) bi-LSTM+QOM'], None, None, ['F1 (%)', 'Two-column Progressive+GRA (ours)'], ['Two-column progressive (ours)', 'F1 (%)']] | 1 |
P17-1190table_2 | Results on SemEval textual similarity datasets (Pearson’s r × 100) when experimenting with different regularization techniques. | 4 | [['Model', 'AVG', 'Regularization', 'none'], ['Model', 'AVG', 'Regularization', 'dropout'], ['Model', 'AVG', 'Regularization', 'word dropout'], ['Model', 'LSTM', 'Regularization', 'none'], ['Model', 'LSTM', 'Regularization', 'L2'], ['Model', 'LSTM', 'Regularization', 'dropout'], ['Model', 'LSTM', 'Regularization', 'wor... | 1 | [['Oracle'], ['2016 STS']] | [['68.5', '68.4'], ['68.4', '68.3'], ['68.3', '68.3'], ['60.6', '59.3'], ['60.3', '56.5'], ['58.1', '55.3'], ['66.2', '65.3'], ['66.3', '65.1'], ['68.4', '68.4'], ['67.7', '67.5'], ['69.2', '68.6'], ['69.4', '68.7']] | column | ['correlation', 'correlation'] | ['dropout + scrambling'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Oracle</th> <th>2016 STS</th> </tr> </thead> <tbody> <tr> <td>Model || AVG || Regularization || none</td> <td>68.5</td> <td>68.4</td> </tr> <tr> <td>Model || AVG || Regularizat... | Table 2 | table_2 | P17-1190 | 6 | acl2017 | The results are shown in Table 2. They show that dropping entire word embeddings and scrambling input sequences is very effective in improving the result of the LSTM, while neither type of dropout improves AVG. Moreover, averaging the hidden states of the LSTM is the most effective modification to the LSTM in improving... | [1, 1, 1, 1] | ['The results are shown in Table 2.', 'They show that dropping entire word embeddings and scrambling input sequences is very effective in improving the result of the LSTM, while neither type of dropout improves AVG.', 'Moreover, averaging the hidden states of the LSTM is the most effective modification to the LSTM in i... | [None, ['Model', 'dropout', 'word dropout', 'scrambling', 'AVG', 'none'], ['LSTMAVG'], ['LSTMAVG', 'dropout + scrambling']] | 1 |
P17-1190table_3 | Results on SemEval textual similarity datasets (Pearson’s r × 100) for the GRAN architectures. The first row, marked as (no reg.) is the GRAN without any regularization. The other rows show the result of the various GRAN models using dropout and scrambling. | 2 | [['Model', 'GRAN (no reg.)'], ['Model', 'GRAN'], ['Model', 'GRAN-2'], ['Model', 'GRAN-3'], ['Model', 'GRAN-4'], ['Model', 'GRAN-5'], ['Model', 'BiGRAN']] | 1 | [['Oracle'], ['STS 2016']] | [['68', '68'], ['69.5', '68.9'], ['68.8', '68.1'], ['69', '67.2'], ['68.6', '68.1'], ['66.1', '64.8'], ['69.7', '68.4']] | column | ['correlation', 'correlation'] | ['GRAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Oracle</th> <th>STS 2016</th> </tr> </thead> <tbody> <tr> <td>Model || GRAN (no reg.)</td> <td>68</td> <td>68</td> </tr> <tr> <td>Model || GRAN</td> <td>69.5</td> <td... | Table 3 | table_3 | P17-1190 | 6 | acl2017 | In Table 3, we compare the various GRAN architectures. We find that the GRAN provides a small improvement over the best LSTM configuration, possibly because of its similarity to AVG. It also outperforms the other GRAN models, despite being the simplest. | [1, 1, 1] | ['In Table 3, we compare the various GRAN architectures.', 'We find that the GRAN provides a small improvement over the best LSTM configuration, possibly because of its similarity to AVG.', 'It also outperforms the other GRAN models, despite being the simplest.'] | [None, ['GRAN'], ['GRAN', 'GRAN (no reg.)']] | 1 |
P17-1190table_6 | Results from supervised training on the STS and SICK datasets (Pearson’s r × 100) for the GRAN architectures. The last column is the average result on the two datasets. | 2 | [['Model', 'GRAN'], ['Model', 'GRAN-2'], ['Model', 'GRAN-3'], ['Model', 'GRAN-4'], ['Model', 'GRAN-5']] | 1 | [['STS'], ['SICK'], ['Avg.']] | [['81.6', '85.3', '83.5'], ['77.4', '85.1', '81.3'], ['81.3', '85.4', '83.4'], ['80.1', '85.5', '82.8'], ['70.9', '83', '77']] | column | ['r', 'r', 'r'] | ['GRAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>STS</th> <th>SICK</th> <th>Avg.</th> </tr> </thead> <tbody> <tr> <td>Model || GRAN</td> <td>81.6</td> <td>85.3</td> <td>83.5</td> </tr> <tr> <td>Model || GRAN-2</td> ... | Table 6 | table_6 | P17-1190 | 7 | acl2017 | In Table 6 we compare the various GRAN architectures under the same settings as the previous experiment. We find that the GRAN still has the best overall performance. | [1, 1] | ['In Table 6 we compare the various GRAN architectures under the same settings as the previous experiment.', 'We find that the GRAN still has the best overall performance.'] | [['GRAN'], ['GRAN']] | 1 |
P17-1191table_1 | Results on Belinkov et al. (2014)’s PPA test set. HPCD (full) is from the original paper, and it uses syntactic SkipGram. GloVe-retro is GloVe vectors retrofitted (Faruqui et al., 2015) to WordNet 3.1, and GloVe-extended refers to the synset embeddings obtained by running AutoExtend (Rothe and Sch¨utze, 2015) on GloVe. | 4 | [['System', 'HPCD (full)', 'Initialization', 'Syntactic-SG'], ['System', 'LSTM-PP', 'Initialization', 'GloVe'], ['System', 'LSTM-PP', 'Initialization', 'GloVe-retro'], ['System', 'OntoLSTM-PP', 'Initialization', 'GloVe-extended']] | 2 | [['Test', 'Acc.']] | [['88.7'], ['84.3'], ['84.8'], ['89.7']] | column | ['Acc.'] | ['OntoLSTM-PP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test || Acc.</th> </tr> </thead> <tbody> <tr> <td>System || HPCD (full) || Initialization || Syntactic-SG</td> <td>88.7</td> </tr> <tr> <td>System || LSTM-PP || Initialization || GloVe</... | Table 1 | table_1 | P17-1191 | 6 | acl2017 | Table 1 shows that our proposed token level embedding scheme OntoLSTM-PP outperforms the better variant of our baseline LSTM-PP (with GloVe-retro intialization) by an absolute accuracy difference of 4.9%, or a relative error reduction of 32%. OntoLSTM-PP also outperforms HPCD (full), the previous best result on this da... | [1, 1] | ['Table 1 shows that our proposed token level embedding scheme OntoLSTM-PP outperforms the better variant of our baseline LSTM-PP (with GloVe-retro intialization) by an absolute accuracy difference of 4.9%, or a relative error reduction of 32%.', 'OntoLSTM-PP also outperforms HPCD (full), the previous best result on th... | [['OntoLSTM-PP', 'Initialization', 'GloVe-retro', 'Acc.'], ['OntoLSTM-PP', 'HPCD (full)']] | 1 |
P17-1191table_2 | Results from RBG dependency parser with features coming from various PP attachment predictors and oracle attachments. | 2 | [['System', 'RBG'], ['System', 'RBG + HPCD (full)'], ['System', 'RBG + LSTM-PP'], ['System', 'RBG + OntoLSTM-PP'], ['System', 'RBG + Oracle PP']] | 1 | [['Full UAS'], ['PPA Acc.']] | [['94.17', '88.51'], ['94.19', '89.59'], ['94.14', '86.35'], ['94.3', '90.11'], ['94.6', '98.97']] | column | ['Full UAS', 'PPA Acc.'] | ['RBG + OntoLSTM-PP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Full UAS</th> <th>PPA Acc.</th> </tr> </thead> <tbody> <tr> <td>System || RBG</td> <td>94.17</td> <td>88.51</td> </tr> <tr> <td>System || RBG + HPCD (full)</td> <td>94.19<... | Table 2 | table_2 | P17-1191 | 6 | acl2017 | Table 2 shows the effect of using the PP attachment predictions as features within a dependency parser. We note there is a relatively small difference in unlabeled attachment accuracy for all dependencies (not only PP attachments), even when gold PP attachments are used as additional features to the parser. However, wh... | [1, 2, 1, 1, 2, 1, 2] | ['Table 2 shows the effect of using the PP attachment predictions as features within a dependency parser.', 'We note there is a relatively small difference in unlabeled attachment accuracy for all dependencies (not only PP attachments), even when gold PP attachments are used as additional features to the parser.', 'How... | [None, None, ['PPA Acc.', 'RBG', 'RBG + Oracle PP'], ['RBG + OntoLSTM-PP', 'RBG + HPCD (full)'], ['RBG + HPCD (full)'], ['Full UAS', 'RBG', 'RBG + HPCD (full)'], ['RBG']] | 1 |
P17-1194table_2 | Performance of alternative sequence labeling architectures on NER and chunking datasets, measured using CoNLL standard entity-level F1 score. | 1 | [['Baseline'], ['+ dropout'], ['+ LMcost']] | 2 | [['CoNLL-00', 'DEV'], ['CoNLL-01', 'TEST'], ['CoNLL-03', 'DEV'], ['CoNLL-04', 'TEST'], ['CHEMDNER', 'DEV'], ['CHEMDNER', 'TEST'], ['JNLPBA', 'DEV'], ['JNLPBA', 'TEST']] | [['92.92', '92.67', '90.85', '85.63', '83.63', '84.51', '77.13', '72.79'], ['93.4', '93.15', '91.14', '86', '84.78', '85.67', '77.61', '73.16'], ['94.22', '93.88', '91.48', '86.26', '85.45', '86.27', '78.51', '73.83']] | column | ['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1'] | ['+ dropout', '+ LMcost'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CoNLL-00 || DEV</th> <th>CoNLL-01 || TEST</th> <th>CoNLL-03 || DEV</th> <th>CoNLL-04 || TEST</th> <th>CHEMDNER || DEV</th> <th>CHEMDNER || TEST</th> <th>JNLPBA || DEV</th> <th>JNLP... | Table 2 | table_2 | P17-1194 | 6 | acl2017 | Table 2 contains results for evaluating the different architectures on NER and chunking. On these tasks, the application of dropout provides a consistent improvement -applying some variance onto the input embeddings results in more robust models for NER and chunking. The addition of the language modeling objective cons... | [1, 2, 2, 1, 1, 1, 1] | ['Table 2 contains results for evaluating the different architectures on NER and chunking.', 'On these tasks, the application of dropout provides a consistent improvement -applying some variance onto the input embeddings results in more robust models for NER and chunking.', 'The addition of the language modeling object... | [None, None, None, None, ['JNLPBA'], ['CoNLL-03'], ['+ LMcost', 'CoNLL-03']] | 1 |
P17-1195table_6 | Result of end-to-end problem solving | 2 | [['Dataset', 'DEV'], ['Dataset', 'TEST']] | 1 | [['Correct'], ['Timeout'], ['Wrong'], ['No RCF'], ['Parse Failure']] | [['27.60%', '10.90%', '12.10%', '12.10%', '37.40%'], ['11.40%', '1.80%', '11.40%', '6.80%', '68.60%']] | column | ['Correct', 'Timeout', 'Wrong', 'No RCF', 'Parse Failure'] | ['DEV', 'TEST'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Correct</th> <th>Timeout</th> <th>Wrong</th> <th>No RCF</th> <th>Parse Failure</th> </tr> </thead> <tbody> <tr> <td>Dataset || DEV</td> <td>27.60%</td> <td>10.90%</td> ... | Table 6 | table_6 | P17-1195 | 8 | acl2017 | Table 6 presents the result of end-to-end problem solving on the UNIV data. It shows the failure in the semantic parsing is a major bottleneck in the current system. Since a problem in UNIV includes more than three sentences on average, parsing a whole problem is quite a high bar for a semantic parser. It is however ne... | [1, 2, 2, 2, 1] | ['Table 6 presents the result of end-to-end problem solving on the UNIV data.', 'It shows the failure in the semantic parsing is a major bottleneck in the current system.', 'Since a problem in UNIV includes more than three sentences on average, parsing a whole problem is quite a high bar for a semantic parser.', 'It is... | [None, None, None, None, ['Correct', 'DEV', 'TEST']] | 1 |
P17-2001table_2 | The detailed comparison of E-E and E-T against relation types to Mirza and Tonelli (2016) (Micro-average Overall F1-score) on test data. | 2 | [['LINK type', 'AFTER'], ['LINK type', 'BEFORE'], ['LINK type', 'SIMULTA.'], ['LINK type', 'INCLUDES'], ['LINK type', 'IS INCLUD.'], ['LINK type', 'VAGUE'], ['LINK type', 'Overall']] | 2 | [['Our', 'E-D'], ['Mirza', 'E-D'], ['Our', 'E-E'], ['Mirza', 'E-E']] | [['0.582', '0.466', '0.44', '0.43'], ['0.634', '0.671', '0.46', '0.471'], ['-', '-', '-', '-'], ['0.056', '0.25', '0.025', '0.049'], ['0.595', '0.6', '0.17', '0.25'], ['0.526', '0.502', '0.624', '0.613'], ['0.546', '0.534', '0.529', '0.519']] | column | ['F1-score', 'F1-score', 'F1-score', 'F1-score'] | ['Our'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Our || E-D</th> <th>Mirza || E-D</th> <th>Our || E-E</th> <th>Mirza || E-E</th> </tr> </thead> <tbody> <tr> <td>LINK type || AFTER</td> <td>0.582</td> <td>0.466</td> <td>0.... | Table 2 | table_2 | P17-2001 | 4 | acl2017 | Table 2 shows the detailed comparison to their work. Our system achieves higher performance on ‘AFTER’, ‘VAGUE’, while lower on ‘BEFORE’, ‘INCLUDES’ (5% of all data) and ‘IS INCLUDED’ (4% of all data). It is likely that their rich traditional features help the classifiers to capture more minority-class links. On the wh... | [1, 1, 2, 1, 2] | ['Table 2 shows the detailed comparison to their work.', 'Our system achieves higher performance on ‘AFTER’, ‘VAGUE’, while lower on ‘BEFORE’, ‘INCLUDES’ (5% of all data) and ‘IS INCLUDED’ (4% of all data).', 'It is likely that their rich traditional features help the classifiers to capture more minority-class links.',... | [None, ['Our', 'AFTER', 'VAGUE', 'BEFORE', 'INCLUDES', 'IS INCLUD.'], None, ['Our', 'Overall', 'E-E', 'E-D'], None] | 1 |
P17-2007table_1 | Single-source parsing results in terms of average accuracy % over 3 runs. Best results are in bold. | 2 | [['GEO', 'en'], ['GEO', 'de'], ['GEO', 'el'], ['GEO', 'th'], ['GEO', 'avg.'], ['ATIS', 'en'], ['ATIS', 'id'], ['ATIS', 'zh'], ['ATIS', 'avg.']] | 2 | [['SINGLE', '-'], ['MULTI', 'separate'], ['MULTI', 'shared']] | [['84.4', '85', '85.48'], ['70.24', '71.19', '72.86'], ['74.4', '75.12', '75.6'], ['72.86', '72.26', '73.33'], ['75.48', '75.89', '76.82'], ['81.85', '81.4', '81.77'], ['74.85', '74.03', '75.45'], ['73.66', '75.89', '73.96'], ['76.79', '77.11', '77.06']] | row | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['MULTI'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SINGLE || -</th> <th>MULTI || separate</th> <th>MULTI || shared</th> </tr> </thead> <tbody> <tr> <td>GEO || en</td> <td>84.4</td> <td>85</td> <td>85.48</td> </tr> <tr> ... | Table 1 | table_1 | P17-2007 | 4 | acl2017 | Table 1 compares the performance of the monolingual sequence-to-tree model (Dong and Lapata, 2016), SINGLE, and our multilingual model, MULTI, with separate and shared output parameters under the single-source setting as described in Section 3.1. On average, both variants of the multilingual model outperform the monoli... | [1, 1, 1, 1, 1] | ['Table 1 compares the performance of the monolingual sequence-to-tree model (Dong and Lapata, 2016), SINGLE, and our multilingual model, MULTI, with separate and shared output parameters under the single-source setting as described in Section 3.1.', 'On average, both variants of the multilingual model outperform the m... | [['SINGLE', 'MULTI'], ['MULTI', 'SINGLE', 'GEO'], ['GEO'], ['ATIS', 'zh', 'id'], ['en']] | 1 |
P17-2007table_6 | Single-source parsing results showing the accuracy of the 3 runs. Best results are in bold. | 2 | [['GEO', 'en'], ['GEO', 'de'], ['GEO', 'el'], ['GEO', 'th'], ['ATIS', 'en'], ['ATIS', 'id'], ['ATIS', 'zh']] | 3 | [['SINGLE', '-', '1'], ['SINGLE', '-', '2'], ['SINGLE', '-', '3'], ['MULTI', 'separate', '1'], ['MULTI', 'separate', '2'], ['MULTI', 'separate', '3'], ['MULTI', 'shared', '1'], ['MULTI', 'shared', '2'], ['MULTI', 'shared', '3']] | [['87.14', '83.57', '82.50', '85.71', '83.93', '85.36', '85.36', '83.93', '87.14'], ['70.00', '70.36', '70.36', '71.79', '71.79', '70.00', '73.57', '73.93', '71.07'], ['76.43', '72.50', '74.29', '77.14', '72.14', '76.07', '76.43', '74.64', '75.71'], ['72.50', '73.57', '72.50', '72.14', '72.14', '72.50', '72.50', '71.07... | row | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['SINGLE', 'MULTI'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SINGLE || - || 1</th> <th>SINGLE || - || 2</th> <th>SINGLE || - || 3</th> <th>MULTI || separate || 1</th> <th>MULTI || separate || 2</th> <th>MULTI || separate || 3</th> <th>MULTI || sh... | Table 6 | table_6 | P17-2007 | 7 | acl2017 | In Table 6, we report the accuracy of the 3 runs for each model and dataset. In both settings, we observe that the best accuracy on both datasets is often achieved by MULTI. This is the same conclusion that we reached when averaging the results over all runs. | [1, 1, 1] | ['In Table 6, we report the accuracy of the 3 runs for each model and dataset.', 'In both settings, we observe that the best accuracy on both datasets is often achieved by MULTI.', 'This is the same conclusion that we reached when averaging the results over all runs.'] | [None, ['MULTI'], None] | 1 |
P17-2010table_1 | Results on RTE performance without (INIT) and with prior compound splitting. ?: significant difference of the performance in comparison to INIT | 2 | [['System', 'INIT'], ['System', 'manual splitting*'], ['System', 'ZvdP2016'], ['System', 'FF2010*'], ['System', 'WH2012']] | 2 | [['-', 'Acc'], ['Entailment', 'P'], ['Entailment', 'R'], ['Entailment', 'F1'], ['Non-entailment', 'P'], ['Non-entailment', 'R'], ['Non-entailment', 'F1']] | [['64.13', '62.50', '74.57', '68.00', '66.67', '53.20', '59.18'], ['67.88', '65.08', '80.20', '71.85', '72.64', '54.99', '62.59'], ['66.63', '64.55', '77.02', '70.23', '69.87', '55.75', '62.02'], ['67.38', '65.48', '76.53', '70.58', '70.19', '57.80', '63.39'], ['66.00', '63.73', '77.75', '70.04', '69.77', '53.71', '60.... | column | ['Acc', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['manual splitting*'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>- || Acc</th> <th>Entailment || P</th> <th>Entailment || R</th> <th>Entailment || F1</th> <th>Non-entailment || P</th> <th>Non-entailment || R</th> <th>Non-entailment || F1</th> </tr... | Table 1 | table_1 | P17-2010 | 3 | acl2017 | Table 1 shows accuracy, precision, recall and F1-score for the entailment and non-entailment class on the RTE-3 dataset. As reflected in the results, reducing the opacity of compounds via the application of a compound splitter improves the subsequent RTE performance. This holds for all compound splitters that we used i... | [1, 2, 2, 1, 1, 1, 2, 2] | ['Table 1 shows accuracy, precision, recall and F1-score for the entailment and non-entailment class on the RTE-3 dataset.', 'As reflected in the results, reducing the opacity of compounds via the application of a compound splitter improves the subsequent RTE performance.', 'This holds for all compound splitters that w... | [['Acc', 'P', 'R', 'F1', 'Entailment', 'Non-entailment'], None, None, ['FF2010*', 'INIT', 'Acc', 'F1'], ['manual splitting*'], ['FF2010*'], None, ['FF2010*']] | 1 |
P17-2021table_2 | BLEU results for the low-resource experiments (News Commentary v8) | 3 | [['DE-EN', 'system', 'bpe2bpe'], ['DE-EN', 'system', 'bpe2tree'], ['DE-EN', 'system', 'bpe2bpe ens.'], ['DE-EN', 'system', 'bpe2tree ens.'], ['RU-EN', 'system', 'bpe2bpe'], ['RU-EN', 'system', 'bpe2tree'], ['RU-EN', 'system', 'bpe2bpe ens.'], ['RU-EN', 'system', 'bpe2tree ens.'], ['CS-EN', 'system', 'bpe2bpe'], ['CS-EN... | 1 | [['newstest2015'], ['newstest2016']] | [['13.81', '14.16'], ['14.55', '16.13'], ['14.42', '15.07'], ['15.69', '17.21'], ['12.58', '11.37'], ['12.92', '11.94'], ['13.36', '11.91'], ['13.66', '12.89'], ['10.85', '11.23'], ['11.54', '11.65'], ['11.46', '11.77'], ['12.43', '12.68']] | column | ['BLEU', 'BLEU'] | ['bpe2tree'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>newstest2015</th> <th>newstest2016</th> </tr> </thead> <tbody> <tr> <td>DE-EN || system || bpe2bpe</td> <td>13.81</td> <td>14.16</td> </tr> <tr> <td>DE-EN || system || bpe2tree... | Table 2 | table_2 | P17-2021 | 3 | acl2017 | Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline. We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearl... | [1, 2, 2] | ['Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.', 'We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a ... | [['bpe2tree', 'bpe2bpe'], None, None] | 1 |
P17-2022table_4 | Test Set Results | 2 | [['Classifier', 'SVM'], ['Classifier', 'NRC'], ['Classifier', 'Stanford'], ['Classifier', 'AutoSlog (ASlog)'], ['Classifier', 'Retrained Stanford']] | 2 | [['Pos', 'F1'], ['Neg', 'F1'], ['-', 'Macro F']] | [['0.66', '0.60', '0.64'], ['0.58', '0.69', '0.64'], ['0.54', '0.73', '0.67'], ['0.11', '0.68', '0.53'], ['0.53', '0.73', '0.67']] | column | ['F1', 'F1', 'Macro F'] | ['Retrained Stanford'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pos || F1</th> <th>Neg || F1</th> <th>- || Macro F</th> </tr> </thead> <tbody> <tr> <td>Classifier || SVM</td> <td>0.66</td> <td>0.60</td> <td>0.64</td> </tr> <tr> <t... | Table 4 | table_4 | P17-2022 | 4 | acl2017 | We present our experimental results and analyze the results in terms of the lexico-functional linguistic patterns we learn. Rows 1-3 of Table 4 show the results for the three baselines, in terms of F-score for each class and the macro F. Stanford outperforms both NRC and SVM, but misses many cases of positive sentiment... | [1, 1, 1, 1, 1, 2, 1, 1, 2, 2] | ['We present our experimental results and analyze the results in terms of the lexico-functional linguistic patterns we learn.', 'Rows 1-3 of Table 4 show the results for the three baselines, in terms of F-score for each class and the macro F.', 'Stanford outperforms both NRC and SVM, but misses many cases of positive s... | [None, ['SVM', 'NRC', 'Stanford', 'F1', 'Macro F'], ['SVM', 'NRC', 'Stanford', 'Pos'], ['AutoSlog (ASlog)'], ['AutoSlog (ASlog)'], ['AutoSlog (ASlog)'], ['Retrained Stanford'], ['Retrained Stanford', 'Stanford'], ['Stanford'], ['Retrained Stanford']] | 1 |
P17-2034table_3 | Mean accuracy and standard deviation results. We report accuracy for the train, development, and both test sets. Three systems use the structured representation. Two systems (and Image Only) use the raw image. | 2 | [['-', 'Majority'], ['-', 'Text only'], ['-', 'Image Only'], ['Structured representation', 'MaxEnt'], ['Structured representation', 'MLP'], ['Structured representation', 'Image features+RNN'], ['Raw image', 'CNN+RNN'], ['Raw image', 'NMN']] | 1 | [['Train'], ['Dev'], ['Test-P'], ['Test-U']] | [['56.37', '55.31', '56.16', '55.43'], ['58.36±0.6', '56.61±0.5', '57.18±0.6', '56.21±0.4'], ['56.79±1.3', '55.35±0.1', '56.05±0.3', '55.33±0.3'], ['99.99', '68.04', '67.68', '67.82'], ['96.15±1.3', '67.50±0.5', '66.28±0.4', '65.32±0.4'], ['59.71±1.0', '57.72±1.4', '57.62±1.3', '56.29±0.9'], ['58.85±0.2', '56.59±0.3', ... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Structured representation', 'Raw image'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train</th> <th>Dev</th> <th>Test-P</th> <th>Test-U</th> </tr> </thead> <tbody> <tr> <td>- || Majority</td> <td>56.37</td> <td>55.31</td> <td>56.16</td> <td>55.43</td> ... | Table 3 | table_3 | P17-2034 | 5 | acl2017 | We run each experiment ten times and report mean accuracy as well as standard deviation for randomly initialized models. Table 3 shows our results. NMN is the best performing model using images. For models using the structured representation, the MaxEnt model provides the best performance. | [2, 1, 1, 1] | ['We run each experiment ten times and report mean accuracy as well as standard deviation for randomly initialized models.', 'Table 3 shows our results.', 'NMN is the best performing model using images.', 'For models using the structured representation, the MaxEnt model provides the best performance.'] | [None, None, ['Raw image', 'NMN'], ['Structured representation', 'MaxEnt']] | 1 |
P17-2043table_2 | Readability evaluation by human subjects | 3 | [['Method', 'Ext.', 'n=1'], ['Method', 'Ext.', 'n=2'], ['Method', 'Comp.', 'n=1'], ['Method', 'Comp.', 'n=2']] | 1 | [['Score']] | [['4.55'], ['4.58'], ['3.88'], ['4.07']] | column | ['Score'] | ['Ext.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Score</th> </tr> </thead> <tbody> <tr> <td>Method || Ext. || n=1</td> <td>4.55</td> </tr> <tr> <td>Method || Ext. || n=2</td> <td>4.58</td> </tr> <tr> <td>Method || Comp.... | Table 2 | table_2 | P17-2043 | 4 | acl2017 | We conducted human evaluation to compare readability of extractive oracle summaries to that of compressive oracle summaries. We presented the oracle summaries to five human subjects and asked them to rate the summaries using an integer scale from 1 (very poor) to 5 (very good). Table 2 shows the results. Extractive ora... | [2, 2, 1, 1, 1, 2] | ['We conducted human evaluation to compare readability of extractive oracle summaries to that of compressive oracle summaries.', 'We presented the oracle summaries to five human subjects and asked them to rate the summaries using an integer scale from 1 (very poor) to 5 (very good).', 'Table 2 shows the results.', 'Ext... | [['Score'], ['Score'], None, ['Ext.'], ['Comp.'], None] | 1 |
P17-2045table_3 | Results on the atomic and full datasets. | 2 | [['Dataset', 'Theano'], ['Dataset', 'keras'], ['Dataset', 'youtube-dl'], ['Dataset', 'node'], ['Dataset', 'angular'], ['Dataset', 'react'], ['Dataset', 'opencv'], ['Dataset', 'CNTK'], ['Dataset', 'bitcoin'], ['Dataset', 'CoreNLP'], ['Dataset', 'elasticsearch'], ['Dataset', 'guava']] | 3 | [['our model', 'atomic', 'Val. acc'], ['our model', 'atomic', 'BLEU'], ['Moses', 'atomic', 'BLEU']] | [['36.81%', '9.5', '7.1'], ['45.76%', '13.7', '7.8'], ['50.84%', '16.4'], ['52.46%', '7.8', '7.7'], ['44.39%', '13.9', '11.7'], ['49.44%', '11.4', '10.7'], ['50.77%', '11.2', '9.0'], ['48.88%', '17.9', '11.8'], ['50.04%', '17.9', '13.0'], ['63.20%', '28.5', '10.1'], ['36.53%', '11.8', '5.2'], ['65.52%', '29.8', '19.5']... | column | ['Val. acc', 'BLEU', 'BLEU'] | ['our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>our model || atomic || Val. acc</th> <th>our model || atomic || BLEU</th> <th>Moses || atomic || BLEU</th> </tr> </thead> <tbody> <tr> <td>Dataset || Theano</td> <td>36.81%</td> <td>... | Table 3 | table_3 | P17-2045 | 4 | acl2017 | As Table 3 shows, our model trained on atomic data outperforms the baseline in all but one project with an average gain of 5 BLEU points. In particular, we observe bigger gains for java projects such as CoreNLP and guava. We hypothesize this is because program differences in Java tend to be longer than the rest. While ... | [1, 1, 2, 2] | ['As Table 3 shows, our model trained on atomic data outperforms the baseline in all but one project with an average gain of 5 BLEU points.', 'In particular, we observe bigger gains for java projects such as CoreNLP and guava.', 'We hypothesize this is because program differences in Java tend to be longer than the rest... | [['our model', 'Moses', 'atomic', 'BLEU'], ['CoreNLP', 'guava'], None, None] | 1 |
P17-2046table_3 | Final results comparing translated language features (TRANS) to benchmark lexical generalisation features (LEX). BASE+LEX is our implementation of the core Hong et al. classifier. TAC KPB 2015 #1 corresponds to reported results for Hong et al. including semi-supervised learning. TAC KPB 2015 shared task has 38 runs sub... | 2 | [['System', 'BASE'], ['System', 'BASE+LEX'], ['System', 'BASE+TRANS'], ['System', 'BASE+LEX+TRANS']] | 1 | [['P'], ['R'], ['F']] | [['60.4', '24.1', '34.4'], ['66.8', '42.6', '52.0'], ['59.6', '45.8', '51.8'], ['67.9', '46.2', '55.0']] | column | ['P', 'R', 'F'] | ['BASE+LEX+TRANS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>System || BASE</td> <td>60.4</td> <td>24.1</td> <td>34.4</td> </tr> <tr> <td>System || BASE+LEX</td> ... | Table 3 | table_3 | P17-2046 | 5 | acl2017 | Table 3 contains final results on the held-out evaluation data. The final translated language feature set (TRANS) comprises word, character and Cangjie features from Traditional Chinese, Simplified Chinese, Japanese and Korean. TRANS features provide a large F1 improvement of 17.4 over the baseline (BASE), similar to t... | [1, 2, 1, 1, 1] | ['Table 3 contains final results on the held-out evaluation data.', 'The final translated language feature set (TRANS) comprises word, character and Cangjie features from Traditional Chinese, Simplified Chinese, Japanese and Korean.', 'TRANS features provide a large F1 improvement of 17.4 over the baseline (BASE), simi... | [None, None, ['BASE+TRANS', 'BASE+LEX', 'F'], ['P', 'R', 'BASE+TRANS', 'BASE+LEX+TRANS', 'F'], ['BASE', 'BASE+LEX+TRANS', 'F', 'P', 'BASE+LEX', 'R', 'BASE+TRANS']] | 1 |
P17-2047table_5 | Precision, Recall and F1 of different methods on Yahoo! Answers factoid QA dataset. The Oracle assumes candidate answers are ranked perfectly and its performance is limited by the initial retrieval step. | 2 | [['Method', 'Aqqu'], ['Method', 'Text2KB'], ['Method', 'AskMSR (entities)'], ['Method', 'MemN2N'], ['Method', 'KV MemN2N'], ['Method', 'EviNets (text)'], ['Method', 'EviNets (text+kb)'], ['Method', 'Oracle']] | 1 | [['P'], ['R'], ['F1']] | [['0.116', '0.117', '0.116'], ['0.170', '0.170', '0.170'], ['0.175', '0.319', '0.226'], ['0.072', '0.131', '0.092'], ['0.126', '0.228', '0.162'], ['0.210', '0.383', '0.271'], ['0.226', '0.409', '0.291'], ['0.622', '1.0', '0.767']] | column | ['P', 'R', 'F'] | ['EviNets (text)', 'EviNets (text+kb)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Aqqu</td> <td>0.116</td> <td>0.117</td> <td>0.116</td> </tr> <tr> <td>Method || Text2KB</td> ... | Table 5 | table_5 | P17-2047 | 5 | acl2017 | Table 5 summarizes the results of EviNets and some baseline methods on the created Yahoo! Answers dataset. As we can see, knowledge base data is not enough to answer most of these questions, and a state-of-the-art KBQA system Aqqu gets only 0.116 precision. Adding textual data helps significantly, and Text2KB improves ... | [1, 1, 1, 1, 1] | ['Table 5 summarizes the results of EviNets and some baseline methods on the created Yahoo! Answers dataset.', 'As we can see, knowledge base data is not enough to answer most of these questions, and a state-of-the-art KBQA system Aqqu gets only 0.116 precision.', 'Adding textual data helps significantly, and Text2KB i... | [None, ['Aqqu', 'P'], ['Text2KB', 'P'], ['EviNets (text)', 'EviNets (text+kb)', 'F1'], ['EviNets (text+kb)', 'AskMSR (entities)', 'MemN2N', 'F1']] | 1 |
P17-2052table_3 | Results on our corpus. All quantities are macro-averaged. | 4 | [['Level', 'Entity', 'Features', 'Unstructured'], ['Level', 'Entity', 'Features', '+ Pairs'], ['Level', 'Entity', 'Features', '+ Graph'], ['Level', 'Sentence', 'Features', 'Unstructured'], ['Level', 'Sentence', 'Features', '+ Pairs'], ['Level', 'Sentence', 'Features', '+ Graph']] | 1 | [['P'], ['R'], ['F1']] | [['50.0', '67.2', '52.9'], ['53.3', '64.1', '54.3'], ['53.9', '63.9', '54.5'], ['42.6', '58.9', '44.4'], ['46.5', '54.1', '45.6'], ['47.0', '53.6', '45.6']] | column | ['P', 'R', 'F'] | ['+ Pairs', '+ Graph'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Level || Entity || Features || Unstructured</td> <td>50.0</td> <td>67.2</td> <td>52.9</td> </tr> <tr> <t... | Table 3 | table_3 | P17-2052 | 4 | acl2017 | Table 3 summarizes our results. Starting with the baseline, we incrementally add the type pair, graph-based, and set size features discussed in 2.1. Adding type pair features results in an appreciable performance gain, while the graph features bring little benefit—potentially because pairwise correlations suffice to su... | [1, 1, 1] | ['Table 3 summarizes our results.', 'Starting with the baseline, we incrementally add the type pair, graph-based, and set size features discussed in 2.1.', 'Adding type pair features results in an appreciable performance gain, while the graph features bring little benefit—potentially because pairwise correlations suffi... | [None, ['Unstructured', '+ Pairs', '+ Graph'], ['+ Pairs', '+ Graph']] | 1 |
P17-2055table_2 | Number of Wikidata entities as subjects (#s) of each predicate (p), and evaluation results on manually annotated randomly selected subjects that have at least an object. | 2 | [['p', 'has part (creative work series)'], ['p', 'contains admin. terr. entity'], ['p', 'spouse'], ['p', 'child'], ['p', 'child (manual ground truth)']] | 2 | [['-', '#s'], ['baseline', 'P'], ['vanilla', 'P'], ['vanilla', 'R'], ['vanilla', 'F1'], ['only-nummod', 'P'], ['only-nummod', 'R'], ['only-nummod', 'F1']] | [['261', '0.050', '0.333', '0.316', '0.324', '0.353', '0.316', '0.333'], ['18000', '0.034', '0.390', '0.188', '0.254', '0.548', '0.200', '0.293'], ['45917', '0', '0.014', '0.011', '0.013', '0.028', '0.017', '0.021'], ['35057', '0.112', '0.151', '0.129', '0.139', '0.320', '0.219', '0.260'], ['6408', '-', '0.374', '0.309... | column | ['#s', 'P', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['only-nummod'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>- || #s</th> <th>baseline || P</th> <th>vanilla || P</th> <th>vanilla || R</th> <th>vanilla || F1</th> <th>only-nummod || P</th> <th>only-nummod || R</th> <th>only-nummod || F1</th... | Table 2 | table_2 | P17-2055 | 4 | acl2017 | Table 2 shows the performance of our CRF-based method in finding the correct relation cardinality, evaluated on manually annotated 20 (has part), 100 (admin. terr. entity) and 200 (child and spouse) randomly selected subjects that have at least one object. The random-number baseline achieves a precision of 5% (has part... | [1, 1, 1, 1, 1, 2, 1] | ['Table 2 shows the performance of our CRF-based method in finding the correct relation cardinality, evaluated on manually annotated 20 (has part), 100 (admin. terr. entity) and 200 (child and spouse) randomly selected subjects that have at least one object.', 'The random-number baseline achieves a precision of 5% (has... | [None, ['baseline', 'P', 'has part (creative work series)', 'contains admin. terr. entity', 'spouse', 'child'], ['only-nummod', 'P', 'F1'], ['spouse'], ['child (manual ground truth)'], ['child (manual ground truth)'], ['child (manual ground truth)']] | 1 |
P17-2059table_1 | Test-set accuracies obtained; results except the AGT are drawn from (Lei et al., 2015). | 1 | [['AGT'], ['high-order CNN'], ['tree-LSTM'], ['DRNN'], ['PVEC'], ['DCNN'], ['DAN'], ['CNN-MC'], ['CNN'], ['RNTN'], ['NBoW'], ['RNN'], ['SVM']] | 1 | [['Accuracy']] | [['50.5'], ['51.2'], ['51.0'], ['49.8'], ['48.7'], ['48.5'], ['48.2'], ['47.4'], ['47.2'], ['45.7'], ['44.5'], ['43.2'], ['38.3']] | column | ['Accuracy'] | ['AGT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>AGT</td> <td>50.5</td> </tr> <tr> <td>high-order CNN</td> <td>51.2</td> </tr> <tr> <td>tree-LSTM</td> <td>51.0</td> ... | Table 1 | table_1 | P17-2059 | 3 | acl2017 | Table 1 presents the test-set accuracies obtained by different strategies. Results in Table 1 indicate that the AGT method achieved very competitive accuracy (with 50.5%), when compared to the state-of-the-art results obtained by the tree-LSTM (51.0%) (Tai et al., 2015, Zhu et al., 2015) and high-order CNN approaches (... | [1, 1] | ['Table 1 presents the test-set accuracies obtained by different strategies.', 'Results in Table 1 indicate that the AGT method achieved very competitive accuracy (with 50.5%), when compared to the state-of-the-art results obtained by the tree-LSTM (51.0%) (Tai et al., 2015, Zhu et al., 2015) and high-order CNN approac... | [None, ['AGT', 'tree-LSTM', 'high-order CNN']] | 1 |
P17-2060table_1 | Translation results (BLEU score) for different machine translation and system combination methods. Jane is an open source machine translation system combination toolkit that uses confusion network decoding. Best and important results per category are highlighted. | 2 | [['System', 'PBMT'], ['System', 'HPMT'], ['System', 'NMT'], ['System', 'Jane (Freitag et al. 2014)'], ['System', 'Multi'], ['System', 'Multi+Source'], ['System', 'Multi+Ensemble'], ['System', 'Multi+Source+Ensemble']] | 1 | [['MT03'], ['MT04'], ['MT05'], ['MT06'], ['Average']] | [['37.47', '41.20', '36.41', '36.03', '37.78'], ['38.05', '41.47', '36.86', '36.04', '38.10'], ['37.91', '38.95', '36.02', '36.65', '37.38'], ['39.83', '42.75', '38.63', '39.10', '40.08'], ['40.64', '44.81', '38.80', '38.26', '40.63'], ['42.16', '45.51', '40.28', '39.03', '41.75'], ['41.67', '45.95', '40.37', '39.02', ... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['Multi', 'Multi+Source', 'Multi+Ensemble', 'Multi+Source+Ensemble'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT06</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>System || PBMT</td> <td>37.47</td> <td>41.20</td> <td>36.41</td> ... | Table 1 | table_1 | P17-2060 | 3 | acl2017 | We compare our neural combination system with the best individual engines, and the state-of-the-art traditional combination system Jane (Freitag et al., 2014). Table 1 shows the BLEU of different models on development data and test data. The BLEU score of the multi-source neural combination model is 2.53 higher than th... | [1, 1, 1, 1, 1, 1, 1] | ['We compare our neural combination system with the best individual engines, and the state-of-the-art traditional combination system Jane (Freitag et al., 2014).', 'Table 1 shows the BLEU of different models on development data and test data.', 'The BLEU score of the multi-source neural combination model is 2.53 higher... | [['System'], None, ['Multi', 'HPMT', 'Average'], ['Multi+Source'], ['Jane (Freitag et al. 2014)'], ['Jane (Freitag et al. 2014)', 'Multi+Ensemble'], ['Multi+Source+Ensemble']] | 1
P17-2066table_2 | Experimental results of Japanese caption generation. The numbers in boldface indicate the best score for each evaluation measure. | 1 | [['En-generator → MT'], ['Ja-generator']] | 1 | [['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['ROUGE_L'], ['CIDEr']] | [['0.565', '0.330', '0.204', '0.127', '0.449', '0.324'], ['0.763', '0.614', '0.492', '0.385', '0.553', '0.883']] | column | ['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'ROUGE_L', 'CIDEr'] | ['Ja-generator'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>ROUGE_L</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>En-generator → MT</td> <td>0.565</td> <td>0... | Table 2 | table_2 | P17-2066 | 5 | acl2017 | Table 2 summarizes the experimental results. The results show that Ja-generator, that is, the approach in which Japanese captions were used as training data, outperformed En-generator → MT, which was trained without Japanese captions. | [1, 1] | ['Table 2 summarizes the experimental results.', 'The results show that Ja-generator, that is, the approach in which Japanese captions were used as training data, outperformed En-generator → MT, which was trained without Japanese captions.'] | [None, ['Ja-generator', 'En-generator → MT']] | 1 |
P17-2069table_3 | Comparison of DI algorithms. ‡ denotes statistical significance at p < 0.01 in comparison to the method without DI, * denotes statistical significance at p < 0.01 in comparison to standard DI and † denotes statistical significance at p < 0.05 in comparison to standard DI. | 2 | [['APT configuration', 'None'], ['APT configuration', 'Standard DI'], ['APT configuration', 'Offset Inference']] | 2 | [['ML10', 'AN'], ['ML10', 'NN'], ['ML10', 'VO'], ['ML10', 'Avg'], ['ML08', 'VO']] | [['0.35', '0.50', '0.39', '0.41', '0.22'], ['0.48', '0.51', '0.43', '0.47', '0.29'], ['0.49', '0.52', '0.44', '0.48', '0.31']] | column | ['correlation', 'correlation', 'correlation', 'correlation', 'correlation'] | ['Offset Inference'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ML10 || AN</th> <th>ML10 || NN</th> <th>ML10 || VO</th> <th>ML10 || Avg</th> <th>ML08 || VO</th> </tr> </thead> <tbody> <tr> <td>APT configuration || None</td> <td>0.35</td> ... | Table 3 | table_3 | P17-2069 | 5 | acl2017 | Table 3 shows that both forms of distributional inference significantly outperform a baseline without DI. On average, offset inference outperforms the method of Kober et al. (2016) by a statistically significant margin on both datasets. | [1, 1] | ['Table 3 shows that both forms of distributional inference significantly outperform a baseline without DI.', 'On average, offset inference outperforms the method of Kober et al. (2016) by a statistically significant margin on both datasets.'] | [None, ['Offset Inference']] | 1 |
P17-2070table_2 | Spearman’s rank correlation performance for the Word Similarity task on SCWS. | 2 | [['Model', 'SGE + C (Mikolov et al. 2013a)'], ['Model', 'MSSG (Neelakantan et al. 2014)'], ['Model', 'HTLE'], ['Model', 'HTLE add'], ['Model', 'STLE']] | 2 | [['Dimension', '100'], ['Dimension', '300'], ['Dimension', '600']] | [['0.59', '0.59', '0.62'], ['0.60', '0.61', '0.64'], ['0.63', '0.56', '0.55'], ['0.61', '0.61', '0.58'], ['0.59', '0.58', '0.55']] | column | ['correlation', 'correlation', 'correlation'] | ['HTLE', 'HTLE add'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dimension || 100</th> <th>Dimension || 300</th> <th>Dimension || 600</th> </tr> </thead> <tbody> <tr> <td>Model || SGE + C (Mikolov et al. 2013a)</td> <td>0.59</td> <td>0.59</td> ... | Table 2 | table_2 | P17-2070 | 5 | acl2017 | Table 2 provides the Spearman’s correlation scores for different models against the human ranking. We see that with dimensions 100 and 300, two of our models obtain improvements over the baseline. The MSSG model of Neelakantan et al. (2014) performs only slightly better than our HTLE model by requiring considerably mor... | [1, 1, 1] | ['Table 2 provides the Spearman’s correlation scores for different models against the human ranking.', 'We see that with dimensions 100 and 300, two of our models obtain improvements over the baseline.', 'The MSSG model of Neelakantan et al. (2014) performs only slightly better than our HTLE model by requiring consider... | [None, ['100', '300', 'HTLE', 'HTLE add'], ['MSSG (Neelakantan et al. 2014)', 'HTLE', '600', '100']] | 1
P17-2080table_1 | Metric-based Evaluation. SCENE1-A is set to generate generic responses, so it makes no sense to measure it with embedding-based metrics | 2 | [['Model', 'LM'], ['Model', 'HRED'], ['Model', 'SPHRED'], ['Model', 'VHRED'], ['Model', 'SCENE1-A'], ['Model', 'SCENE1-B'], ['Model', 'SCENE2-A'], ['Model', 'SCENE2-B']] | 1 | [['Average'], ['Greedy'], ['Extrema'], ['Accuracy']] | [['0.360', '0.348', '0.310', '-'], ['0.429', '0.466', '0.383', '-'], ['0.468', '0.478', '0.434', '-'], ['0.403', '0.432', '0.374', '-'], ['-', '-', '-', '90.9%'], ['0.426', '0.432', '0.396', '86.9%'], ['0.465', '0.440', '0.428', '99.8%'], ['0.463', '0.437', '0.420', '99.2%']] | column | ['Average', 'Greedy', 'Extrema', 'Accuracy'] | ['SCENE1-A', 'SCENE1-B', 'SCENE2-A', 'SCENE2-B'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Average</th> <th>Greedy</th> <th>Extrema</th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Model || LM</td> <td>0.360</td> <td>0.348</td> <td>0.310</td> <td>-</td... | Table 1 | table_1 | P17-2080 | 4 | acl2017 | As can be seen from Table 1, SPHRED outperforms both HRED and LM over all the three embedding-based metrics. This implies separating the single-line context RNN into two independent parts can actually lead to a better context representation. It is worth mentioning the size of context RNN hidden states in SPHRED is only... | [1, 2, 2, 2, 1, 2, 2, 1, 1, 2, 1, 2] | ['As can be seen from Table 1, SPHRED outperforms both HRED and LM over all the three embedding-based metrics.', 'This implies separating the single-line context RNN into two independent parts can actually lead to a better context representation.', 'It is worth mentioning the size of context RNN hidden states in SPHRED... | [['SPHRED', 'HRED', 'LM'], None, ['SPHRED', 'HRED'], None, None, ['SCENE1-A', 'SCENE1-B'], ['SCENE2-A', 'SCENE2-B'], ['SCENE1-A', 'SCENE1-B'], ['SPHRED'], ['HRED'], ['SCENE1-A', 'SCENE1-B', 'SCENE2-A', 'SCENE2-B', 'SPHRED', 'HRED'], None] | 1 |
P17-2081table_4 | Results on SICK after finetuning. The first row is only trained on SICK. * indicates ensemble method. | 2 | [['Pretrained dataset / Previous work', '-'], ['Pretrained dataset / Previous work', 'SQuAD-T'], ['Pretrained dataset / Previous work', 'SQuAD'], ['Pretrained dataset / Previous work', 'SQuAD*'], ['Pretrained dataset / Previous work', 'SNLI'], ['Pretrained dataset / Previous work', 'SQuAD-T + SNLI'], ['Pretrained datas... | 1 | [['Accuracy']] | [['77.96'], ['81.49'], ['82.86'], ['84.38'], ['83.20'], ['85.00'], ['86.63'], ['88.22'], ['86.2'], ['84.57'], ['83.64'], ['83.05'], ['70.9'], ['77.6']] | column | ['Accuracy'] | ['SQuAD + SNLI*'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Pretrained dataset / Previous work || -</td> <td>77.96</td> </tr> <tr> <td>Pretrained dataset / Previous work || SQuAD-T</td> <td>81.49... | Table 4 | table_4 | P17-2081 | 5 | acl2017 | Table 4 shows the transfer learning results of BiDAF-T on SICK dataset (Marelli et al., 2014), with various pretraining routines. Note that SNLI (Bowman et al., 2015) is a similar task to SICK and is significantly larger (150K/10K/10K train/dev/test examples). Here we highlight three observations. (a) BiDAF-T pretraine... | [1, 2, 2, 1, 1, 2, 1, 2] | ['Table 4 shows the transfer learning results of BiDAF-T on SICK dataset (Marelli et al., 2014), with various pretraining routines.', 'Note that SNLI (Bowman et al., 2015) is a similar task to SICK and is significantly larger (150K/10K/10K train/dev/test examples).', 'Here we highlight three observations.', '(a) BiDAF-... | [None, None, None, ['SQuAD', 'SQuAD-T'], ['SQuAD + SNLI', 'SNLI'], ['SNLI', 'SQuAD'], ['SQuAD + SNLI*'], None] | 1 |
P17-2083table_1 | Comparison of our model variants on the MapTask corpus. | 1 | [['no attn.'], ['traditional'], ['gated attn.']] | 1 | [['without HMM'], ['gate bias HMM'], ['gate all HMM']] | [['60.97%', '64.60%', '63.55%'], ['61.72%', '64.73%', '65.19%'], ['62.21%', '65.94%', '65.94%']] | column | ['Accuracy', 'Accuracy', 'Accuracy'] | ['gated attn.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>without HMM</th> <th>gate bias HMM</th> <th>gate all HMM</th> </tr> </thead> <tbody> <tr> <td>no attn.</td> <td>60.97%</td> <td>64.60%</td> <td>63.55%</td> </tr> <tr> ... | Table 1 | table_1 | P17-2083 | 4 | acl2017 | Table 1 shows the classification accuracy of the nine variants of our model on the MapTask corpus. Table 1 shows that adding the attention mechanism is beneficial, as the traditional attention models always outperform their non-attention counterparts. The gated attention configurations, in turn, outperform those with t... | [1, 1, 1, 1, 1, 1, 1] | ['Table 1 shows the classification accuracy of the nine variants of our model on the MapTask corpus.', 'Table 1 shows that adding the attention mechanism is beneficial, as the traditional attention models always outperform their non-attention counterparts.', 'The gated attention configurations, in turn, outperform thos... | [None, ['gated attn.', 'traditional', 'no attn.'], ['gated attn.', 'traditional'], ['gate bias HMM', 'gate all HMM', 'traditional', 'gated attn.'], ['no attn.', 'gate bias HMM', 'gate all HMM'], ['traditional', 'gate bias HMM', 'gate all HMM'], ['gated attn.', 'gate bias HMM', 'gate all HMM']] | 1 |
P17-2085table_4 | Overall performance (%). R, P, and F represent recall, precision, and F1 score, respectively. | 2 | [['Category', 'President'], ['Category', 'Company'], ['Category', 'University'], ['Category', 'State'], ['Category', 'Character'], ['Category', 'Brand'], ['Category', 'Restaurant'], ['Category', 'Overall']] | 2 | [['Complete', 'R'], ['Complete', 'P'], ['Complete', 'F'], ['Balanced Subset', 'R'], ['Balanced Subset', 'P'], ['Balanced Subset', 'F']] | [['94.6', '89.9', '92.2', '87.2', '80.4', '83.7'], ['86.6', '95.8', '91.0', '90.8', '85.2', '87.9'], ['96.7', '96.4', '96.5', '96.9', '92.0', '94.4'], ['96.2', '92.1', '94.1', '95.0', '58.6', '72.5'], ['92.5', '61.3', '73.7', '92.8', '52.2', '66.8'], ['89.6', '90.2', '89.9', '86.7', '83.2', '84.9'], ['87.0', '81.4', '8... | column | ['R', 'P', 'F', 'R', 'P', 'F'] | ['Balanced Subset'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Complete || R</th> <th>Complete || P</th> <th>Complete || F</th> <th>Balanced Subset || R</th> <th>Balanced Subset || P</th> <th>Balanced Subset || F</th> </tr> </thead> <tbody> <tr>... | Table 4 | table_4 | P17-2085 | 4 | acl2017 | As Table 4 demonstrates, our method shows promising results (87.0 F1 score) on the balanced data set. Nevertheless, we notice the low linking precisions for entities in the Character and State lists, which are caused by different reasons. For the Character list, mentions do not suffice to select high-quality seeds, whe... | [1, 2, 2] | ['As Table 4 demonstrates, our method shows promising results (87.0 F1 score) on the balanced data set.', 'Nevertheless, we notice the low linking precisions for entities in the Character and State lists, which are caused by different reasons.', 'For the Character list, mentions do not suffice to select high-quality se... | [['Balanced Subset', 'Overall', 'F'], None, None] | 1 |
P17-2095table_1 | Results of comparing several segmentation strategies. | 2 | [['# SEG', 'UNSEG'], ['# SEG', 'MORPH'], ['# SEG', 'cCNN'], ['# SEG', 'CHAR'], ['# SEG', 'BPE']] | 2 | [['Arabic-to-English', 'tst11'], ['Arabic-to-English', 'tst12'], ['Arabic-to-English', 'tst13'], ['Arabic-to-English', 'tst14'], ['Arabic-to-English', 'AVG.'], ['English-to-Arabic', 'tst11'], ['English-to-Arabic', 'tst12'], ['English-to-Arabic', 'tst13'], ['English-to-Arabic', 'tst14'], ['English-to-Arabic', 'AVG.']] | [['25.7', '28.2', '27.3', '23.9', '26.3', '15.8', '17.1', '18.1', '15.5', '16.6'], ['29.2', '33', '32.9', '28.3', '30.9', '16.5', '18.8', '20.4', '17.2', '18.2'], ['29', '32', '32.5', '27.8', '30.2', '14.3', '12.8', '13.6', '12.6', '13.3'], ['28.8', '31.8', '32.5', '27.8', '30.2', '15.3', '17.1', '18', '15.3', '16.4'],... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['# SEG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Arabic-to-English || tst11</th> <th>Arabic-to-English || tst12</th> <th>Arabic-to-English || tst13</th> <th>Arabic-to-English || tst14</th> <th>Arabic-to-English || AVG.</th> <th>English-to-... | Table 1 | table_1 | P17-2095 | 3 | acl2017 | Table 1 presents MT results using various segmentation strategies. Compared to the UNSEG system, the MORPH system improved translation quality by 4.6 and 1.6 BLEU points in Arabic-to-English and English-to-Arabic systems, respectively. The results also improved by up to 3 BLEU points for cCNN and CHAR systems in the Ar... | [1, 1, 1, 1, 1, 2, 2, 1, 2, 2, 1, 1, 2] | ['Table 1 presents MT results using various segmentation strategies.', 'Compared to the UNSEG system, the MORPH system improved translation quality by 4.6 and 1.6 BLEU points in Arabic-to-English and English-to-Arabic systems, respectively.', 'The results also improved by up to 3 BLEU points for cCNN and CHAR systems i... | [None, ['UNSEG', 'MORPH', 'Arabic-to-English'], ['cCNN', 'CHAR', 'Arabic-to-English'], ['MORPH', 'Arabic-to-English'], ['English-to-Arabic', 'cCNN', 'CHAR'], ['CHAR'], ['BPE'], ['cCNN', 'UNSEG', 'English-to-Arabic'], ['cCNN'], ['cCNN'], ['Arabic-to-English', 'cCNN', 'English-to-Arabic', 'UNSEG'], ['BPE', 'Arabic-to-Eng... | 1 |
P17-2096table_4 | Comparison with previous models. Results with * are from (Cai and Zhao, 2016). | 2 | [['Models', '(Zhao and Kit 2008c)'], ['Models', '(Chen et al. 2015a)'], ['Models', '(Chen et al. 2015b)'], ['Models', '(Ma and Hinrichs 2015)'], ['Models', '(Zhang et al. 2016)'], ['Models', '(Liu et al. 2016)'], ['Models', '(Cai and Zhao 2016)'], ['Models', 'Our results']] | 2 | [['PKU', 'F1 + pre-train'], ['PKU', 'F1'], ['PKU', 'Training (hours)'], ['PKU', 'Test (sec.)'], ['MSR', 'F1 + pre-train'], ['MSR', 'F1'], ['MSR', 'Training (hours)'], ['MSR', 'Test (sec.)']] | [['-', '95.4', '-', '-', '-', '97.6', '-', '-'], ['94.5', '94.4', '50', '105', '95.4', '95.1', '100', '120'], ['94.8', '94.3', '58', '105', '95.6', '95.0', '117', '120'], ['-', '95.1', '1.5', '24', '-', '96.6', '3', '28'], ['95.1', '-', '6', '110', '97.0', '-', '13', '125'], ['93.91', '-', '-', '-', '95.21', '-', '-', ... | column | ['F1 + pre-train', 'F1', 'Training (hours)', 'Test (sec.)', 'F1 + pre-train', 'F1', 'Training (hours)', 'Test (sec.)'] | ['Our results'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PKU || F1 + pre-train</th> <th>PKU || F1</th> <th>PKU || Training (hours)</th> <th>PKU || Test (sec.)</th> <th>MSR || F1 + pre-train</th> <th>MSR || F1</th> <th>MSR || Training (hours)<... | Table 4 | table_4 | P17-2096 | 5 | acl2017 | Table 4 compares our final results (greedy search is adopted by setting k=1) to prior neural models. Pre-training character embeddings on large scale unlabeled corpus (not limited to the training corpus) has been shown helpful for extra performance improvement. The results with or without pre-trained character embeddin... | [1, 1, 2, 1, 1] | ['Table 4 compares our final results (greedy search is adopted by setting k=1) to prior neural models.', 'Pre-training character embeddings on large scale unlabeled corpus (not limited to the training corpus) has been shown helpful for extra performance improvement.', 'The results with or without pre-trained character ... | [None, ['F1 + pre-train'], ['F1 + pre-train'], ['(Zhao and Kit 2008c)'], ['Our results']] | 1
P17-2097table_3 | Final results. ∗ = estimate from 100; see Section 6.1. ‡ = from Mostafazadeh et al. (2016). | 1 | [['DSSM‡'], ['UW (Schwartz et al. 2017b)'], ['UW (ending only)'], ['trigram LM (estimated from stories)'], ['trigram LM (estimated from endings)'], ['Our model (HIER ENCPLOTEND ATT)'], ['Our model (ending only)'], ['Human‡ (story + ending)'], ['Human (ending only)']] | 1 | [['val'], ['test']] | [['60.4', '58.5'], ['-', '75.2'], ['-', '72.4'], ['52.4', '53.6'], ['53.8', '54.6'], ['-', '74.7'], ['-', '72.5'], ['100', '100'], ['78', '-']] | column | ['accuracy', 'accuracy'] | ['Our model (HIER ENCPLOTEND ATT)', 'Our model (ending only)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>val</th> <th>test</th> </tr> </thead> <tbody> <tr> <td>DSSM‡</td> <td>60.4</td> <td>58.5</td> </tr> <tr> <td>UW (Schwartz et al. 2017b)</td> <td>-</td> <td>75.2</td> ... | Table 3 | table_3 | P17-2097 | 4 | acl2017 | Table 3 shows final results. We report the best result from Mostafazadeh et al. (2016), the best result from the concurrently held LSDSem shared task (Schwartz et al., 2017b), and our final system configuration (with decisions tuned via cross validation as shown in Tables 1-2, then using the model with the best held-ou... | [1, 2, 1, 1, 2, 2, 1] | ['Table 3 shows final results.', 'We report the best result from Mostafazadeh et al. (2016), the best result from the concurrently held LSDSem shared task (Schwartz et al., 2017b), and our final system configuration (with decisions tuned via cross validation as shown in Tables 1-2, then using the model with the best he... | [None, ['UW (Schwartz et al. 2017b)', 'Our model (HIER ENCPLOTEND ATT)', 'Our model (ending only)'], ['Our model (HIER ENCPLOTEND ATT)', 'UW (Schwartz et al. 2017b)'], ['Our model (ending only)'], None, None, ['Our model (ending only)', 'UW (Schwartz et al. 2017b)']] | 1 |
P17-2100table_1 | Results of our model and baseline systems. Our models achieve substantial improvement of all ROUGE scores over baseline systems. (W: Word level; C: Character level). | 2 | [['Model', 'RNN (W) (Hu et al. 2015)'], ['Model', 'RNN (C) (Hu et al. 2015)'], ['Model', 'RNN context (W) (Hu et al. 2015)'], ['Model', 'RNN context (C) (Hu et al. 2015)'], ['Model', 'RNN context + SRB (C)'], ['Model', '+Attention (C)']] | 1 | [['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L']] | [['17.7', '8.5', '15.8'], ['21.5', '8.9', '18.6'], ['26.8', '16.1', '24.1'], ['29.9', '17.4', '27.2'], ['32.1', '18.9', '29.2'], ['33.3', '20.0', '30.1']] | column | ['ROUGE-1', 'ROUGE-2', 'ROUGE-L'] | ['RNN context + SRB (C)', '+Attention (C)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Model || RNN (W) (Hu et al. 2015)</td> <td>17.7</td> <td>8.5</td> <td>15.8</td> </tr> <tr> ... | Table 1 | table_1 | P17-2100 | 4 | acl2017 | We compare our model with above baseline systems, including RNN and RNN context. We refer to our proposed Semantic Relevance Based neural model as SRB. Besides, SRB with a gated attention encoder is denoted as +Attention. Table 1 shows the results of our models and baseline systems. We can see SRB outperforms both RNN ... | [2, 2, 2, 1, 1, 2, 1] | ['We compare our model with above baseline systems, including RNN and RNN context.', 'We refer to our proposed Semantic Relevance Based neural model as SRB.', 'Besides, SRB with a gated attention encoder is denoted as +Attention.', 'Table 1 shows the results of our models and baseline systems.', 'We can see SRB outperf... | [['Model'], ['RNN context + SRB (C)'], ['+Attention (C)'], None, ['RNN context + SRB (C)', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L'], ['RNN context + SRB (C)'], ['+Attention (C)', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L']] | 1 |
P17-2100table_2 | Results of our model and state-of-the-art systems. COPYNET incorporates a copying mechanism to solve the out-of-vocabulary problem, so it has higher ROUGE scores. Our model does not incorporate this mechanism currently. In future work, we will implement this technique to further improve the performance. (Word: Word level;... | 4 | [['Model', 'RNN context (Hu et al. 2015)', 'level', 'Word'], ['Model', 'RNN context (Hu et al. 2015)', 'level', 'Char'], ['Model', 'COPYNET (Gu et al. 2016)', 'level', 'Word'], ['Model', 'COPYNET (Gu et al. 2016)', 'level', 'Char'], ['Model', 'this work', 'level', 'Char']] | 1 | [['R-1'], ['R-2'], ['R-L']] | [['26.8', '16.1', '24.1'], ['29.9', '17.4', '27.2'], ['35.0', '22.3', '32.0'], ['34.4', '21.6', '31.3'], ['33.3', '20.0', '30.1']] | column | ['R-1', 'R-2', 'R-L'] | ['this work'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Model || RNN context (Hu et al. 2015) || level || Word</td> <td>26.8</td> <td>16.1</td> <td>24.1</td> </tr>... | Table 2 | table_2 | P17-2100 | 4 | acl2017 | Table 2 summarizes the results of our model and state-of-the-art systems. COPYNET has the highest scores, because it incorporates a copying mechanism to deal with the out-of-vocabulary word problem. In this paper, we do not implement this mechanism in our model. In future work, we will try to incorporate copying mechan... | [1, 1, 2, 2] | ['Table 2 summarizes the results of our model and state-of-the-art systems.', 'COPYNET has the highest scores, because it incorporates a copying mechanism to deal with the out-of-vocabulary word problem.', 'In this paper, we do not implement this mechanism in our model.', 'In future work, we will try to incorporate cop... | [None, ['COPYNET (Gu et al. 2016)'], ['this work'], ['this work']] | 1
P17-2102table_2 | Classification results: predicting suspicion and verified posts reported as A – accuracy, AP – average precision, ROC – the area under the receiver operator characteristics curve, and inferring types of suspicious news reported using F1 micro and F1 macro scores. | 3 | [['Features', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', 'Tweets'], ['Features', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', ' + network'], ['Features', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', ' + cues'], ['Features', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', 'ALL'], ['Features', 'BASELINE 2: LOGISTIC REGRES... | 2 | [['BINARY', 'A'], ['BINARY', 'ROC'], ['BINARY', 'AP'], ['MULTI-CLASS', 'F1'], ['MULTI-CLASS', 'F1 macro']] | [['0.65', '0.70', '0.68', '0.82', '0.40'], ['0.72', '0.80', '0.82', '0.88', '0.57'], ['0.69', '0.74', '0.73', '0.83', '0.46'], ['0.75', '0.84', '0.84', '0.88', '0.59'], ['0.72', '0.81', '0.81', '0.84', '0.48'], ['0.78', '0.87', '0.88', '0.88', '0.59'], ['0.75', '0.85', '0.85', '0.86', '0.49'], ['0.79', '0.88', '0.89', ... | column | ['A', 'ROC', 'AP', 'F1', 'F1 macro'] | ['RECURRENT NEURAL NETWORK', 'CONVOLUTIONAL NEURAL NETWORK'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BINARY || A</th> <th>BINARY || ROC</th> <th>BINARY || AP</th> <th>MULTI-CLASS || F1</th> <th>MULTI-CLASS || F1 macro</th> </tr> </thead> <tbody> <tr> <td>Features || BASELINE 1: LOGI... | Table 2 | table_2 | P17-2102 | 4 | acl2017 | Table 2 presents classification results for Task 1 (binary) suspicious vs. verified news posts and Task 2 (multi-class) four types of suspicious tweets e.g., propaganda, hoaxes, satire and clickbait. We report performance for different model and feature combinations. We find that our neural network models (both CNNs an... | [1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 2] | ['Table 2 presents classification results for Task 1 (binary) suspicious vs. verified news posts and Task 2 (multi-class) four types of suspicious tweets e.g., propaganda, hoaxes, satire and clickbait.', 'We report performance for different model and feature combinations.', 'We find that our neural network models (both... | [None, None, ['RECURRENT NEURAL NETWORK', 'CONVOLUTIONAL NEURAL NETWORK', 'BASELINE 1: LOGISTIC REGRESSION (DOC2VEC)', 'BASELINE 2: LOGISTIC REGRESSION (TFIDF)'], ['RECURRENT NEURAL NETWORK', 'CONVOLUTIONAL NEURAL NETWORK', 'BINARY', 'F1 macro', 'MULTI-CLASS'], None, ['Tweets'], [' + cues'], ['ALL', 'BINARY'], [' + net... | 1 |
P17-2103table_3 | Performance of Classifiers | 2 | [['Performance', 'Precision'], ['Performance', 'Recall'], ['Performance', 'F1']] | 1 | [['CF Parser'], ['Rules Only'], ['SVM']] | [['0.7131', '0.5864', '0.2381'], ['0.8365', '0.9134', '0.9135'], ['0.7699', '0.7143', '0.3777']] | row | ['Precision', 'Recall', 'F1'] | ['CF Parser'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CF Parser</th> <th>Rules Only</th> <th>SVM</th> </tr> </thead> <tbody> <tr> <td>Performance || Precision</td> <td>0.7131</td> <td>0.5864</td> <td>0.2381</td> </tr> <tr> ... | Table 3 | table_3 | P17-2103 | 4 | acl2017 | Thus, in order to make the classifier robust to the imbalanced dataset, we designed a rule based model with counter-factual forms, which resulted in significantly higher F1 than statistical model. Moreover the rule based model captures positive samples of all possible forms which might not exist in the training set. A ... | [2, 2, 1, 1] | ['Thus, in order to make the classifier robust to the imbalanced dataset, we designed a rule based model with counter-factual forms, which resulted in significantly higher F1 than statistical model.', 'Moreover the rule based model captures positive samples of all possible forms which might not exist in the training se... | [['Rules Only'], ['Rules Only'], ['CF Parser'], ['CF Parser']] | 1 |
P18-1001table_4 | Word similarity evaluation on foreign languages. | 2 | [['FR', 'WS353'], ['DE', 'GUR350'], ['DE', 'GUR65'], ['IT', 'WS353'], ['IT', 'SL-999']] | 1 | [['FASTTEXT'], ['w2g'], ['w2gm'], ['pft-g'], ['pft-gm']] | [['38.2', '16.73', '20.09', '41', '41.3'], ['70', '65.01', '69.26', '77.6', '78.2'], ['81', '74.94', '76.89', '81.8', '85.2'], ['57.1', '56.02', '61.09', '60.2', '62.5'], ['29.3', '29.44', '34.91', '29.3', '33.7']] | column | ['similarity', 'similarity', 'similarity', 'similarity', 'similarity'] | ['FASTTEXT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FASTTEXT</th> <th>w2g</th> <th>w2gm</th> <th>pft-g</th> <th>pft-gm</th> </tr> </thead> <tbody> <tr> <td>FR || WS353</td> <td>38.2</td> <td>16.73</td> <td>20.09</td> ... | Table 4 | table_4 | P18-1001 | 8 | acl2018 | Table 4 shows the Spearman’s correlation results of our models. We outperform FASTTEXT on many word similarity benchmarks. Our results are also significantly better than the dictionary-based models, W2G and W2GM. We hypothesize that W2G and W2GM can perform better than the current reported results given proper pre-pro... | [1, 1, 1, 1, 2, 2, 2] | ['Table 4 shows the Spearman’s correlation results of our models.', 'We outperform FASTTEXT on many word similarity benchmarks.', 'Our results are also significantly better than the dictionary-based models, W2G and W2GM.', 'We hypothesize that W2G and W2GM can perform better than the current reported results given ... | [None, ['FASTTEXT'], ['w2g', 'w2gm'], ['w2g', 'w2gm'], None, None, None] | 1
P18-1002table_1 | Comparison with baselines and nonce2vec (Herbelot and Baroni, 2017) on few-shot embedding tasks. Performance on the chimeras task is measured using the Spearman correlation with human ratings. Note that the additive baseline requires removing stop-words in order to improve with more data. | 2 | [['Method', 'word2vec'], ['Method', 'additive'], ['Method', 'additive, no stop words'], ['Method', 'nonce2vec'], ['Method', 'a la carte']] | 2 | [['Nonce (Herbelot and Baroni, 2017)', 'Mean Recip. Rank'], ['Nonce (Herbelot and Baroni, 2017)', 'Med. Rank'], ['Chimera (Lazaridou et al., 2017)', 'Spearman correlation 2 Sent.'], ['Chimera (Lazaridou et al., 2017)', 'Spearman correlation 4 Sent.'], ['Chimera (Lazaridou et al., 2017)', 'Spearman correlation 6 Sent.']... | [['0.00007', '111012', '0.1459', '0.2457', '0.2498'], ['0.00945', '3381', '0.3627', '0.3701', '0.3595'], ['0.03686', '861', '0.3376', '0.3624', '0.408'], ['0.04907', '623', '0.332', '0.3668', '0.389'], ['0.07058', '165.5', '0.3634', '0.3844', '0.3941']] | column | ['Mean Recip. Rank', 'Med.Rank', 'Spearman correlation 2 Sent.', 'Spearman correlation 4 Sent.', 'Spearman correlation 6 Sent.'] | ['nonce2vec'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Nonce (Herbelot and Baroni, 2017) || Mean Recip. Rank</th> <th>Nonce (Herbelot and Baroni, 2017) || Med. Rank</th> <th>Chimera (Lazaridou et al., 2017) || Spearman correlation 2 Sent.</th> <th>Chimera... | Table 1 | table_1 | P18-1002 | 6 | acl2018 | We use the same approach as in the nonce task, except that the chimera embedding is the result of summing over multiple sentences. From Table 1 we see that, while our method is consistently better than both the additive baseline and nonce2vec, removing stop-words from the additive baseline leads to stronger performance... | [2, 1, 1] | ['We use the same approach as in the nonce task, except that the chimera embedding is the result of summing over multiple sentences.', 'From Table 1 we see that, while our method is consistently better than both the additive baseline and nonce2vec, removing stop-words from the additive baseline leads to stronger perfor... | [None, ['nonce2vec'], ['a la carte']] | 1 |
P18-1002table_4 | Performance of document embeddings built using à la carte n-gram vectors and recent unsupervised word-level approaches on classification tasks, with the character LSTM of (Radford et al., 2017) shown for comparison. Top three results are bolded and the best word-level performance is underlined. | 2 | [['Representation', 'BonG'], ['Representation', 'BonG'], ['Representation', 'BonG'], ['Representation', 'a la carte'], ['Representation', 'a la carte'], ['Representation', 'a la carte'], ['Representation', 'Sent2Vec'], ['Representation', 'DisC'], ['Representation', 'skip-thoughts'], ['Representation', 'SDAE'], ['Repres... | 1 | [['MR'], ['CR'], ['SUBJ'], ['MPQA'], ['TREC'], ['SST (±1)'], ['SST'], ['IMDB']] | [['77.1', '77', '91', '85.1', '86.8', '80.7', '36.8', '88.3'], ['77.8', '78.1', '91.8', '85.8', '90', '80.9', '39', '90'], ['77.8', '78.3', '91.4', '85.6', '89.8', '80.1', '42.3', '89.8'], ['79.8', '81.3', '92.6', '87.4', '85.6', '84.1', '46.7', '89'], ['81.3', '83.7', '93.5', '87.6', '89', '85.8', '47.8', '90.3'], ['8... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['a la carte'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MR</th> <th>CR</th> <th>SUBJ</th> <th>MPQA</th> <th>TREC</th> <th>SST (±1)</th> <th>SST</th> <th>IMDB</th> </tr> </thead> <tbody> <tr> <td>Representation || BonG</td> ... | Table 4 | table_4 | P18-1002 | 9 | acl2018 | In Table 4 we display the result of running cross-validated, ℓ2-regularized logistic regression on documents from MR movie reviews (Pang and Lee, 2005), CR customer reviews (Hu and Liu, 2004), SUBJ subjectivity dataset (Pang and Lee, 2004), MPQA opinion polarity subtask (Wiebe et al., 2005), TREC question classi... | [1, 2, 1] | ['In Table 4 we display the result of running cross-validated, ℓ2-regularized logistic regression on documents from MR movie reviews (Pang and Lee, 2005), CR customer reviews (Hu and Liu, 2004), SUBJ subjectivity dataset (Pang and Lee, 2004), MPQA opinion polarity subtask (Wiebe et al., 2005), TREC question clas... | [['MR', 'CR', 'SUBJ', 'MPQA', 'TREC', 'SST', 'IMDB'], None, ['a la carte']] | 1
P18-1003table_1 | Results for the relation induction task. | 1 | [['Acc'], ['Pre'], ['Rec'], ['F1']] | 2 | [['Google Analogy', 'Diff'], ['Google Analogy', ' Conc'], ['Google Analogy', 'Avg'], ['Google Analogy', 'R1ik'], ['Google Analogy', 'R2ik'], ['Google Analogy', 'R3ik'], ['Google Analogy', 'R4ik']] | [['90', '89', '89.9', '90', '92.3', '90.9', '90.4'], ['81.6', '78.7', '80.8', '79.9', '87.1', '83.2', '81.1'], ['82.6', '83.9', '83.9', '86', '84.8', '84.8', '85.5'], ['82.1', '81.2', '82.3', '82.8', '85.9', '84', '83.3']] | column | ['Diff', 'Conc', 'Avg', 'R1ik', 'R2ik', 'R3ik', 'R4ik'] | ['R2ik'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Google Analogy || Diff</th> <th>Google Analogy || Conc</th> <th>Google Analogy || Avg</th> <th>Google Analogy || R1ik</th> <th>Google Analogy || R2ik</th> <th>Google Analogy || R3ik</th> ... | Table 1 | table_1 | P18-1003 | 6 | acl2018 | The results are summarized in Table 1 in terms of accuracy and (macro-averaged) precision, recall and F1 score. As can be observed, our model outperforms the baselines, with the R2 ik variant outperforming the others. | [1, 1] | ['The results are summarized in Table 1 in terms of accuracy and (macro-averaged) precision, recall and F1 score.', 'As can be observed, our model outperforms the baselines, with the R2 ik variant outperforming the others.'] | [None, ['R2ik']] | 1 |
P18-1004table_2 | Performance (ρ) on SL and SV for ER-CNT models trained with different constraints. | 2 | [['Constraints (ER-CNT model)', 'Synonyms only'], ['Constraints (ER-CNT model)', 'Antonyms only'], ['Constraints (ER-CNT model)', 'Synonyms + Antonyms']] | 1 | [['SL'], ['SV']] | [['0.465', '0.339'], ['0.451', '0.317'], ['0.582', '0.439']] | column | ['SL', 'SV'] | ['Constraints (ER-CNT model)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SL</th> <th>SV</th> </tr> </thead> <tbody> <tr> <td>Constraints (ER-CNT model) || Synonyms only</td> <td>0.465</td> <td>0.339</td> </tr> <tr> <td>Constraints (ER-CNT model) || ... | Table 2 | table_2 | P18-1004 | 7 | acl2018 | In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex999 (SL) and SimVerb-3500 (SV). We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only synonym and ... | [1, 1, 1, 2] | ['In Table 2 we show the specialization performance of the ER-CNT models (H = 5, λ = 0.3), using different types of constraints on SimLex999 (SL) and SimVerb-3500 (SV).', 'We compare the standard model, which exploits both synonym and antonym pairs for creating training instances, with the models employing only syno... | [['Constraints (ER-CNT model)', 'SL', 'SV'], ['Synonyms only', 'Antonyms only', 'Synonyms + Antonyms'], ['Synonyms + Antonyms'], None] | 1
P18-1005table_2 | The translation performance on English-German, English-French and Chinese-to-English test sets. The results of (Lample et al., 2017) are copied directly from their paper. We do not present the results of (Artetxe et al., 2017b) since we use different training sets. | 1 | [['Supervised'], ['Word-by-word'], ['Lample et al. (2017)'], ['The proposed approach']] | 1 | [['en-de'], ['de-en'], ['en-fr'], ['fr-en'], ['zh-en']] | [['24.07', '26.99', '30.5', '30.21', '40.02'], ['5.85', '9.34', '3.6', '6.8', '5.09'], ['9.64', '13.33', '15.05', '14.31', '-'], ['10.86', '14.62', '16.97', '15.58', '14.52']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['The proposed approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en-de</th> <th>de-en</th> <th>en-fr</th> <th>fr-en</th> <th>zh-en</th> </tr> </thead> <tbody> <tr> <td>Supervised</td> <td>24.07</td> <td>26.99</td> <td>30.5</td> ... | Table 2 | table_2 | P18-1005 | 7 | acl2018 | Table 2 shows the BLEU scores on English-German, English-French and English-to-Chinese test sets. As it can be seen, the proposed approach obtains significant improvements than the word-by-word baseline system, with at least +5.01 BLEU points in English-to-German translation and up to +13.37 BLEU points in English-to-F... | [1, 1, 1, 1, 1, 1, 1] | ['Table 2 shows the BLEU scores on English-German, English-French and English-to-Chinese test sets.', 'As it can be seen, the proposed approach obtains significant improvements than the word-by-word baseline system, with at least +5.01 BLEU points in English-to-German translation and up to +13.37 BLEU points in English... | [['en-de', 'en-fr', 'zh-en'], ['The proposed approach', 'en-de', 'en-fr'], ['The proposed approach'], ['The proposed approach', 'Lample et al. (2017)', 'en-fr'], ['The proposed approach'], ['The proposed approach', 'Supervised'], ['The proposed approach', 'Supervised']] | 1 |
P18-1007table_5 | Comparison of different segmentation algorithms (WMT14 en!de) | 2 | [['Model', 'Word'], ['Model', 'Character (512 nodes)'], ['Model', 'Mixed Word/Character'], ['Model', 'BPE'], ['Model', 'Unigram w/o SR (l = 1)'], ['Model', 'Unigram w/ SR (l = 64 alpha = 0.1)']] | 1 | [['BLEU']] | [['23.12'], ['22.62'], ['24.17'], ['24.53'], ['24.5'], ['25.04']] | column | ['BLEU'] | ['Unigram w/ SR (l = 64 alpha = 0.1)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Model || Word</td> <td>23.12</td> </tr> <tr> <td>Model || Character (512 nodes)</td> <td>22.62</td> </tr> <tr> <td>Model || Mixe... | Table 5 | table_5 | P18-1007 | 8 | acl2018 | Table 5 shows the comparison on different segmentation algorithms: word, character, mixed word/character (Wu et al., 2016), BPE (Sennrich et al., 2016) and our unigram model with or without subword regularization. The BLEU scores of word, character and mixed word/character models are cited from (Wu et al.,2016). As Ger... | [1, 1, 1, 1] | ['Table 5 shows the comparison on different segmentation algorithms: word, character, mixed word/character (Wu et al., 2016), BPE (Sennrich et al., 2016) and our unigram model with or without subword regularization.', 'The BLEU scores of word, character and mixed word/character models are cited from (Wu et al.,2016).',... | [['Word', 'Character (512 nodes)', 'Mixed Word/Character', 'BPE', 'Unigram w/o SR (l = 1)', 'Unigram w/ SR (l = 64 alpha = 0.1)'], ['Word', 'Character (512 nodes)', 'Mixed Word/Character'], ['Unigram w/o SR (l = 1)', 'Unigram w/ SR (l = 64 alpha = 0.1)'], ['Unigram w/ SR (l = 64 alpha = 0.1)']] | 1 |
P18-1008table_2 | Results on WMT14 En→De. Note that Transformer models are trained using 16 GPUs, while ConvS2S and RNMT+ are trained using 32 GPUs. | 2 | [['Model', 'GNMT'], ['Model', 'ConvS2S'], ['Model', 'Trans. Base'], ['Model', 'Trans. Big'], ['Model', 'RNMT+']] | 1 | [['Test BLEU'], ['Epochs'], ['Training\nTime']] | [['24.67', '-', '-'], ['25.01 ±0.17', '38', '20h'], ['27.26 ± 0.15', '38', '17h'], ['27.94 ± 0.18', '26.9', '48h'], ['28.49 ± 0.05', '24.6', '40h']] | column | ['Test BLEU', 'Epochs', 'Training Time'] | ['RNMT+'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test BLEU</th> <th>Epochs</th> <th>TrainingTime</th> </tr> </thead> <tbody> <tr> <td>Model || GNMT</td> <td>24.67</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || C... | Table 2 | table_2 | P18-1008 | 6 | acl2018 | Table 2 shows our results on the WMT’14 En→De task. The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model improves by over 3 BLEU points. RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value of 28.49. In th... | [1, 1, 1, 1] | ['Table 2 shows our results on the WMT’14 En→De task.', 'The Transformer Base model improves over GNMT and ConvS2S by more than 2 BLEU points while the Big model improves by over 3 BLEU points.', 'RNMT+ further outperforms the Transformer Big model and establishes a new state of the art with an averaged value o... | [None, ['Trans. Base', 'GNMT', 'ConvS2S', 'Trans. Big'], ['RNMT+', 'Trans. Big'], ['RNMT+', 'Trans. Big']] | 1
P18-1008table_4 | Ablation results of RNMT+ and the Transformer Big model on WMT’14 En → Fr. We report average BLEU scores on the test set. An asterisk ’*’ indicates an unstable training run (training halts due to non-finite elements). | 2 | [['Model', 'Baseline'], ['Model', '-Label Smoothing'], ['Model', '-Multi-head Attention'], ['Model', '-Layer Norm.'], ['Model', '-Sync. Training']] | 1 | [['RNMT+'], ['Trans. Big']] | [['41', '40.73'], ['40.33', '40.49'], ['40.44', '39.83'], ['*', '*'], ['39.68', '*']] | column | ['BLEU', 'BLEU'] | ['RNMT+', 'Trans. Big'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RNMT+</th> <th>Trans. Big</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>41</td> <td>40.73</td> </tr> <tr> <td>Model || -Label Smoothing</td> <td>40.33</t... | Table 4 | table_4 | P18-1008 | 7 | acl2018 | From Table 4 we draw the following conclusions about the four techniques: We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models. • Multi-head Attention: Multi-head attention contributes significantly to the quality of both mod... | [1, 1, 1, 2, 1, 2, 1, 2, 2] | ['From Table 4 we draw the following conclusions about the four techniques:', 'We observed that label smoothing improves both models, leading to an average increase of 0.7 BLEU for RNMT+ and 0.2 BLEU for Transformer Big models.', '• Multi-head Attention: Multi-head attention contributes significantly to the quality of ... | [['-Label Smoothing', '-Multi-head Attention', '-Layer Norm.', '-Sync. Training'], ['-Label Smoothing', 'RNMT+', 'Trans. Big'], ['-Multi-head Attention', 'RNMT+', 'Trans. Big'], ['-Layer Norm.'], ['-Layer Norm.', 'RNMT+', 'Trans. Big'], ['-Layer Norm.'], ['-Sync. Training', 'RNMT+', 'Trans. Big'], ['-Sync. Training'], ... | 1
P18-1009table_3 | Performance of our model and AttentiveNER (Shimaoka et al., 2017) on the new entity typing benchmark, using the same training data. We show results for both development and test sets. | 2 | [['Model', 'AttentiveNER'], ['Model', 'Our Model']] | 2 | [['Dev', 'MRR'], ['Dev', 'P'], ['Dev', 'R'], ['Dev', 'F1'], ['Test', 'MRR'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F1']] | [['0.221', '53.7', '15', '23.5', '0.223', '54.2', '15.2', '23.7'], ['0.229', '48.1', '23.2', '31.3', '0.234', '47.1', '24.2', '32']] | column | ['MRR', 'P', 'R', 'F1', 'MRR', 'P', 'R', 'F1'] | ['Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || MRR</th> <th>Dev || P</th> <th>Dev || R</th> <th>Dev || F1</th> <th>Test || MRR</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F1</th> </tr> </thead> <tbody> ... | Table 3 | table_3 | P18-1009 | 6 | acl2018 | Results: Table 3 shows the performance of our model and our reimplementation of AttentiveNER. Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision. The MRR score shows that our model is slightly better than the baseline... | [1, 1, 1] | ['Results: Table 3 shows the performance of our model and our reimplementation of AttentiveNER.', 'Our model, which uses a multitask objective to learn finer types without punishing more general types, shows recall gains at the cost of drop in precision.', 'The MRR score shows that our model is slightly better than the ... | [['Our Model', 'AttentiveNER'], ['Our Model'], ['MRR', 'Our Model', 'AttentiveNER']] | 1
P18-1009table_4 | Results on the development set for different type granularity and for different supervision data with our model. In each row, we remove a single source of supervision. Entity linking (EL) includes supervision from both KB and Wikipedia definitions. The numbers in the first row are example counts for each type granulari... | 2 | [['Train Data', 'All'], ['Train Data', '-Crowd'], ['Train Data', '-Head'], ['Train Data', '-EL']] | 2 | [['Total', 'MRR'], ['Total', 'P'], ['Total', 'R'], ['Total', 'F1'], ['General', 'P'], ['General', 'R'], ['General', 'F1'], ['Fine', 'P'], ['Fine', 'R'], ['Fine', 'F1'], ['Ultra-Fine', 'P'], ['Ultra-Fine', 'R'], ['Ultra-Fine', 'F1']] | [['0.229', '48.1', '23.2', '31.3', '60.3', '61.6', '61', '40.4', '38.4', '39.4', '42.8', '8.8', '14.6'], ['0.173', '40.1', '14.8', '21.6', '53.7', '45.6', '49.3', '20.8', '18.5', '19.6', '54.4', '4.6', '8.4'], ['0.22', '50.3', '19.6', '28.2', '58.8', '62.8', '60.7', '44.4', '29.8', '35.6', '46.2', '4.7', '8.5'], ['0.22... | column | ['MRR', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['Ultra-Fine'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Total || MRR</th> <th>Total || P</th> <th>Total || R</th> <th>Total || F1</th> <th>General || P</th> <th>General || R</th> <th>General || F1</th> <th>Fine || P</th> <th>Fine |... | Table 4 | table_4 | P18-1009 | 6 | acl2018 | Table 4 shows the performance breakdown for different type granularity and different supervision. Overall, as seen in previous work on fine-grained NER literature (Gillick et al., 2014; Ren et al., 2016a), finer labels were more challenging to predict than coarse-grained labels, and this issue is exacerbated when dealin... | [1, 1, 1, 1, 1] | ['Table 4 shows the performance breakdown for different type granularity and different supervision.', 'Overall, as seen in previous work on fine-grained NER literature (Gillick et al., 2014; Ren et al., 2016a), finer labels were more challenging to predict than coarse-grained labels, and this issue is exacerbated when d... | [None, ['General', 'Fine', 'Ultra-Fine'], ['All', '-Crowd'], ['-Head', 'Ultra-Fine', '-EL', 'Fine'], ['General']] | 1
P18-1009table_6 | Results on the OntoNotes fine-grained entity typing test set. The first two models (AttentiveNER++ and AFET) use only KB-based supervision. LNR uses a filtered version of the KB-based training set. Our model uses all our distant supervision sources. | 1 | [['AttentiveNER++'], ['AFET (Ren et al., 2016a)'], ['LNR (Ren et al., 2016b)'], ['Ours (ONTO+WIKI+HEAD)']] | 1 | [['Acc.'], ['Ma-F1'], ['Mi-F1']] | [['51.7', '70.9', '64.9'], ['55.1', '71.1', '64.7'], ['57.2', '71.5', '66.1'], ['59.5', '76.8', '71.8']] | column | ['Acc.', 'Ma-F1', 'Mi-F1'] | ['Ours (ONTO+WIKI+HEAD)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>Ma-F1</th> <th>Mi-F1</th> </tr> </thead> <tbody> <tr> <td>AttentiveNER++</td> <td>51.7</td> <td>70.9</td> <td>64.9</td> </tr> <tr> <td>AFET (Ren et al.,... | Table 6 | table_6 | P18-1009 | 8 | acl2018 | Results: Table 6 shows the overall performance on the test set. Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the-art result. | [1, 1] | ['Results: Table 6 shows the overall performance on the test set.', 'Our combination of model and training data shows a clear improvement from prior work, setting a new state-of-the-art result.'] | [None, ['Ours (ONTO+WIKI+HEAD)']] | 1
P18-1010table_6 | MAP of entity-level typing in Wikipedia data using TypeNet. The second column shows results using 5% of the total data. The last column shows results using the full set of 344,246 entities. | 2 | [['Model\t', 'CNN'], ['Model\t', 'CNN + hierarchy'], ['Model\t', 'CNN + transitive'], ['Model\t', 'CNN + hierarchy + transitive'], ['Model\t', 'CNN+Complex'], ['Model\t', 'CNN+Complex + hierarchy'], ['Model\t', 'CNN+Complex + transitive'], ['Model\t', 'CNN+Complex + hierarchy + transitive']] | 1 | [['Low Data'], ['Full Data']] | [['51.72', '68.15'], ['54.82', '75.56'], ['57.68', '77.21'], ['58.74', '78.59'], ['50.51', '69.83'], ['55.3', '72.86'], ['53.71', '72.18'], ['58.81', '77.21']] | column | ['MAP', 'MAP'] | ['CNN', 'CNN+Complex'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Low Data</th> <th>Full Data</th> </tr> </thead> <tbody> <tr> <td>Model\t || CNN</td> <td>51.72</td> <td>68.15</td> </tr> <tr> <td>Model\t || CNN + hierarchy</td> <td>54.82... | Table 6 | table_6 | P18-1010 | 7 | acl2018 | Table 6 shows the results for entity level typing on our Wikipedia TypeNet dataset. We see that both the basic CNN and the CNN+Complex models perform similarly with the CNN+Complex model doing slightly better on the full data regime. We also see that both models get an improvement when adding an explicit hierarchy loss... | [1, 1, 1, 1, 1] | ['Table 6 shows the results for entity level typing on our Wikipedia TypeNet dataset.', 'We see that both the basic CNN and the CNN+Complex models perform similarly with the CNN+Complex model doing slightly better on the full data regime.', 'We also see that both models get an improvement when adding an explicit hierar... | [None, ['CNN', 'CNN+Complex', 'Full Data'], ['CNN + hierarchy', 'CNN+Complex + hierarchy'], ['CNN + transitive', 'CNN+Complex + transitive'], ['CNN', 'CNN+Complex']] | 1 |
P18-1018table_3 | Interannotator agreement rates (pairwise averages) on Little Prince sample (216 tokens) with different levels of hierarchy coarsening according to figure 2 (“Exact” means no coarsening). “Labels” refers to the number of distinct labels that annotators could have provided at that level of coarsening. Excludes tokens whe... | 1 | [['Exact'], ['Depth-3'], ['Depth-2'], ['Depth-1']] | 1 | [['Role'], ['Function']] | [['74.40%', '81.30%'], ['75.00%', '81.80%'], ['79.90%', '87.40%'], ['92.60%', '93.90%']] | column | ['agreement', 'agreement'] | ['Role', 'Function'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Labels</th> <th>Role</th> <th>Function</th> </tr> </thead> <tbody> <tr> <td>Exact</td> <td>47</td> <td>74.40%</td> <td>81.30%</td> </tr> <tr> <td>Depth-3</td> <t... | Table 3 | table_3 | P18-1018 | 6 | acl2018 | Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators. Average agreement is 74.4% on the scene role and 81.3% on the function (row 1). Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter. This is expect... | [1, 1, 1, 2, 2, 1] | ['Table 3 shows the interannotator agreement rates, averaged across all pairs of annotators.', 'Average agreement is 74.4% on the scene role and 81.3% on the function (row 1).', 'Agreement is higher on the function slot than on the scene role slot, which implies that the former is an easier task than the latter.', 'Thi... | [None, ['Exact', 'Role', 'Function'], ['Role', 'Function'], None, ['Depth-3', 'Depth-2', 'Depth-1'], ['Depth-3', 'Depth-2', 'Depth-1']] | 1 |
P18-1022table_5 | Performance of predicting veracity. | 3 | [['Features', 'Generic classifier', 'Style'], ['Features', 'Generic classifier', 'Topic'], ['Features', 'Orientation-specific classifier', 'Style'], ['Features', 'Orientation-specific classifier', 'Topic'], ['Features', '-', 'All-fake'], ['Features', '-', 'All-real']] | 2 | [['Accuracy', 'all'], ['Precision', 'fake'], ['Precision', 'real'], ['Recall', 'fake'], ['Recall', 'real'], ['F1', 'fake'], ['F1', 'real']] | [['0.55', '0.42', '0.62', '0.41', '0.64', '0.41', '0.63'], ['0.52', '0.41', '0.62', '0.48', '0.55', '0.44', '0.58'], ['0.55', '0.43', '0.64', '0.49', '0.59', '0.46', '0.61'], ['0.58', '0.46', '0.65', '0.45', '0.66', '0.46', '0.66'], ['0.39', '0.39', '-', '1', '0', '0.56', '-'], ['0.61', '-', '0.61', '0', '1', '-', '0.7... | column | ['Accuracy', 'Precision', 'Precision', 'Recall', 'Recall', 'F1', 'F1'] | ['Orientation-specific classifier'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || all</th> <th>Precision || fake</th> <th>Precision || real</th> <th>Recall || fake</th> <th>Recall || real</th> <th>F1 || fake</th> <th>F1 || real</th> </tr> </thead> <t... | Table 5 | table_5 | P18-1022 | 8 | acl2018 | Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation. Although all classifiers outperform the naive baselines of classifying everything into one of the class... | [1, 1, 1, 2] | ['Table 5 shows the performance values for a generic classifier that predicts fake news across orientations, and orientation-specific classifiers that have been individually trained on articles from either orientation.', 'Although all classifiers outperform the naive baselines of classifying everything into one of the ... | [['Generic classifier', 'Orientation-specific classifier'], ['Generic classifier', 'Orientation-specific classifier'], ['Orientation-specific classifier'], ['Style', 'fake']] | 1 |
P18-1026table_1 | Results for AMR generation on the test set. All score differences between our models and the corresponding baselines are significantly different (p<0.05). “(-s)” means input without scope marking. KIYCZ17, PKH16, SPZWG17 and FDSC16 are respectively the results reported in Konstas et al. (2017), Pourdamghani et al. (201... | 2 | [['Single models', 's2s'], ['Single models', 's2s (-s)'], ['Single models', 'g2s'], ['Ensembles', 's2s'], ['Ensembles', 's2s (-s)'], ['Ensembles', 'g2s'], ['Previous work (early AMR treebank versions)', 'KIYCZ17'], ['Previous work (as above + unlabelled data)', 'KIYCZ17'], ['Previous work (as above + unlabelled data)',... | 1 | [['BLEU'], ['CHRF++'], ['#params']] | [['21.7', '49.1', '28.4M'], ['18.4', '46.3', ' 28.4M'], ['23.3', '50.4', '28.3M'], ['26.6', '52.5', '142M'], ['22', '48.9', '142M'], ['27.5', '53.5', '141M'], ['22', '-', '-'], ['33.8', '-', '-'], ['26.9', '-', '-'], ['25.6', '-', '-'], ['22', '-', '-']] | column | ['BLEU', 'BLEU', 'BLEU'] | ['g2s'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>CHRF++</th> <th>#params</th> </tr> </thead> <tbody> <tr> <td>Single models || s2s</td> <td>21.7</td> <td>49.1</td> <td>28.4M</td> </tr> <tr> <td>Single ... | Table 1 | table_1 | P18-1026 | 5 | acl2018 | Table 1 shows the results on the test set. For the s2s models, we also report results without the scope marking procedure of Konstas et al.(2017). Our approach significantly outperforms the s2s baselines both with individual models and ensembles, while using a comparable number of parameters. In particular, we obtain t... | [1, 2, 1, 1, 1, 2, 2, 1, 1, 2] | ['Table 1 shows the results on the test set.', 'For the s2s models, we also report results without the scope marking procedure of Konstas et al.(2017).', 'Our approach significantly outperforms the s2s baselines both with individual models and ensembles, while using a comparable number of parameters.', 'In particular, ... | [None, ['s2s'], ['g2s', 's2s'], ['g2s', 's2s'], ['BLEU'], ['BLEU'], ['g2s'], ['g2s', 's2s'], ['g2s', 's2s'], ['g2s']] | 1 |
P18-1028table_1 | Test classification accuracy (and the number of parameters used). The bottom part shows our ablation results: SoPa: our full model. SoPams1: running with max-sum semiring (rather than max-product), with the identity function as our encoder E (see Equation 3). sl: self-loops, ε: ε transitions. The final row is equivalen... | 2 | [['Mode', 'Hard'], ['Mode', 'DAN'], ['Mode', 'BiLSTM'], ['Mode', 'CNN'], ['Mode', 'SoPa']] | 1 | [['ROC'], ['SST'], ['Amazon']] | [['62.2 (4K)', '75.5 (6K)', '88.5 (67K)'], ['64.3 (91K)', '83.1 (91K)', '85.4 (91K)'], ['65.2 (844K)', '84.8 (1.5M)', '90.8 (844K)'], ['64.3 (155K)', '82.2 (62K)', '90.2 (305K)'], ['66.5 (255K)', '85.6 (255K)', '90.5 (256K)']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['SoPa'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROC</th> <th>SST</th> <th>Amazon</th> </tr> </thead> <tbody> <tr> <td>Mode || Hard</td> <td>62.2 (4K)</td> <td>75.5 (6K)</td> <td>88.5 (67K)</td> </tr> <tr> <td>Mode ... | Table 1 | table_1 | P18-1028 | 7 | acl2018 | Table 1 shows our main experimental results. In two of the cases (SST and ROC), SoPa outperforms all models. On Amazon, SoPa performs within 0.3 points of CNN and BiLSTM, and outperforms the other two baselines. The table also shows the number of parameters used by each model for each task. Given enough data, models wi... | [1, 1, 1, 1, 2, 1] | ['Table 1 shows our main experimental results.', 'In two of the cases (SST and ROC), SoPa outperforms all models.', 'On Amazon, SoPa performs within 0.3 points of CNN and BiLSTM, and outperforms the other two baselines.', 'The table also shows the number of parameters used by each model for each task.', 'Given enough d... | [None, ['SoPa', 'SST', 'ROC'], ['SoPa', 'CNN', 'BiLSTM', 'Amazon'], ['Hard', 'DAN', 'BiLSTM', 'CNN', 'SoPa'], None, ['SoPa', 'BiLSTM']] | 1 |
P18-1030table_2 | Movie review DEV results of S-LSTM | 2 | [['Model', '+0 dummy node'], ['Model', '+1 dummy node'], ['Model', '+2 dummy node'], ['Model', 'Hidden size 100'], ['Model', 'Hidden size 200'], ['Model', 'Hidden size 300'], ['Model', 'Hidden size 600'], ['Model', 'Hidden size 900'], ['Model', 'Without s /s'], ['Model', 'With s /s']] | 1 | [['Time (s)'], ['Acc'], ['# Param']] | [['56', '81.76', '7216K'], ['65', '82.64', '8768K'], ['76', '82.24', '10321K'], ['42', '81.75', '4891K'], ['54', '82.04', '6002K'], ['65', '82.64', '8768K'], ['175', '81.84', '17648K'], ['235', '81.66', '33942K'], ['63', '82.36', '8768K'], ['65', '82.64', '8768K']] | column | ['Time(s)', 'Acc', '#Param'] | ['Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Time (s)</th> <th>Acc</th> <th># Param</th> </tr> </thead> <tbody> <tr> <td>Model || +0 dummy node</td> <td>56</td> <td>81.76</td> <td>7216K</td> </tr> <tr> <td>Model... | Table 2 | table_2 | P18-1030 | 5 | acl2018 | Hyperparameters: Table 2 shows the development results of various S-LSTM settings, where Time refers to training time per epoch. Without the sentence-level node, the accuracy of S-LSTM drops to 81.76%, demonstrating the necessity of global information exchange. Adding one additional sentence-level node as described in ... | [1, 1, 1, 2, 1, 2, 1, 0, 2] | ['Hyperparameters: Table 2 shows the development results of various S-LSTM settings, where Time refers to training time per epoch.', 'Without the sentence-level node, the accuracy of S-LSTM drops to 81.76%, demonstrating the necessity of global information exchange.', 'Adding one additional sentence-level node as descr... | [['Time (s)'], ['+0 dummy node'], ['+1 dummy node', 'Time (s)', '# Param'], ['+1 dummy node'], ['Hidden size 100', 'Hidden size 200', 'Hidden size 300'], ['Hidden size 300'], ['Without s /s', 'With s /s'], None, None] | 1 |
P18-1030table_3 | Movie review development results | 2 | [['Model', 'LSTM'], ['Model', 'BiLSTM'], ['Model', '2 stacked BiLSTM'], ['Model', '3 stacked BiLSTM'], ['Model', '4 stacked BiLSTM'], ['Model', 'S-LSTM'], ['Model', 'CNN'], ['Model', '2 stacked CNN'], ['Model', '3 stacked CNN'], ['Model', '4 stacked CNN'], ['Model', 'Transformer (N=6)'], ['Model', 'Transformer (N=8)'],... | 1 | [['Time (s)'], ['Acc'], ['# Param']] | [['67', '80.72', '5,977K'], ['106', '81.73', '7,059K'], ['207', '81.97', '9,221K'], ['310', '81.53', '11,383K'], ['411', '81.37', '13,546K'], ['65', '82.64*', '8,768K'], ['34', '80.35', '5,637K'], ['40', '80.97', '5,717K'], ['47', '81.46', '5,808K'], ['51', '81.39', '5,855K'], ['138', '81.03', '7,234K'], ['174', '81.86... | column | ['Time(s)', 'Acc', '#Param'] | ['S-LSTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Time (s)</th> <th>Acc</th> <th># Param</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM</td> <td>67</td> <td>80.72</td> <td>5,977K</td> </tr> <tr> <td>Model || BiLS... | Table 3 | table_3 | P18-1030 | 6 | acl2018 | As shown in Table 3, BiLSTM gives significantly better accuracies compared to uni-directional LSTM, with the training time per epoch growing from 67 seconds to 106 seconds. Stacking 2 layers of BiLSTM gives further improvements to development results, with a larger time of 207 seconds. 3 layers of stacked BiLSTM does ... | [1, 1, 1, 1, 2, 1, 1, 1, 1] | ['As shown in Table 3, BiLSTM gives significantly better accuracies compared to uni-directional LSTM, with the training time per epoch growing from 67 seconds to 106 seconds.', 'Stacking 2 layers of BiLSTM gives further improvements to development results, with a larger time of 207 seconds.', '3 layers of stacked Bi... | [['BiLSTM', 'LSTM', 'Time (s)'], ['2 stacked BiLSTM', 'Time (s)'], ['3 stacked BiLSTM', 'Time (s)'], ['S-LSTM', '2 stacked BiLSTM', '# Param', 'Acc', 'Time (s)'], ['CNN', '2 stacked CNN', '3 stacked CNN', '4 stacked CNN', 'Transformer (N=6)', 'Transformer (N=8)', 'Transformer (N=10)'], ['CNN'], ['3 stacked CNN', 'Acc',... | 1 |
P18-1034table_1 | Performances of different approaches on the WikiSQL dataset. Two evaluation metrics are logical form accuracy (Acclf ) and execution accuracy (Accex). Our model is abbreviated as (STAMP). | 2 | [['Methods', 'Attentional Seq2Seq'], ['Methods', 'Aug.PntNet (Zhong et al. 2017)'], ['Methods', 'Aug.PntNet (re-implemented by us)'], ['Methods', 'Seq2SQL (no RL) (Zhong et al. 2017)'], ['Methods', 'Seq2SQL (Zhong et al. 2017)'], ['Methods', 'SQLNet (Xu et al. 2017)'], ['Methods', 'Guo and Gao (2018)'], ['Methods', 'ST... | 2 | [['Dev', 'Acclf'], ['Dev', 'Accex'], ['Test', 'Acclf'], ['Test', 'Accex']] | [['23.3%', '37.0%', '23.4%', '35.9%'], ['44.1%', '53.8%', '43.3%', '53.3%'], ['51.5%', '58.9%', '52.1%', '59.2%'], ['48.2%', '58.1%', '47.4%', '57.1%'], ['49.5%', '60.8%', '48.3%', '59.4%'], ['-', '69.8%', '-', '68.0%'], ['-', '71.1%', '-', '69.0%'], ['58.6%', '67.8%', '58.0%', '67.4%'], ['59.3%', '71.8%', '58.4%', '70... | column | ['Acclf', 'Accex', 'Acclf', 'Accex'] | ['STAMP (w/o cell)', 'STAMP (w/o column-cell relation)', 'STAMP', 'STAMP+RL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || Acclf</th> <th>Dev || Accex</th> <th>Test || Acclf</th> <th>Test || Accex</th> </tr> </thead> <tbody> <tr> <td>Methods || Attentional Seq2Seq</td> <td>23.3%</td> <td>37.0... | Table 1 | table_1 | P18-1034 | 6 | acl2018 | Our model is abbreviated as (STAMP), which is short for Syntax- and Table-Aware seMantic Parser. The STAMP model in Table 1 stands for the model we describe in §4.2 plus §4.3. STAMP+RL is the model that is fine-tuned with the reinforcement learning strategy as described in §4.4. We implement a simplified version of our... | [2, 2, 2, 2, 2, 2, 1, 1, 1, 1] | ['Our model is abbreviated as (STAMP), which is short for Syntax- and Table-Aware seMantic Parser.', 'The STAMP model in Table 1 stands for the model we describe in §4.2 plus §4.3.', 'STAMP+RL is the model that is fine-tuned with the reinforcement learning strategy as described in §4.4.', 'We implemen... | [['STAMP'], ['STAMP'], ['STAMP+RL'], ['STAMP (w/o cell)'], ['Aug.PntNet (Zhong et al. 2017)'], ['STAMP (w/o column-cell relation)'], ['STAMP (w/o cell)', 'STAMP (w/o column-cell relation)', 'STAMP', 'STAMP+RL'], ['STAMP+RL'], ['STAMP (w/o cell)', 'Aug.PntNet (Zhong et al. 2017)'], ['STAMP (w/o column-cell relation)', '... | 1 |
P18-1039table_6 | Performances on two datasets. “LF” means that the model generates latent intermediate forms instead of equation systems. “AttReg” means attention regularization. “Iter” means iterative labeling. “n/a” means that the model does not run on the dataset. | 2 | [['Models', 'Wang et al. (2017)'], ['Models', 'Seq2Seq Equ'], ['Models', 'Seq2Seq LF'], ['Models', 'Seq2Seq LF+AttReg'], ['Models', 'Seq2Seq LF+AttReg+Iter'], ['Models', 'Shi et al. (2015)'], ['Models', 'Huang et al. (2017)']] | 2 | [['NumWord', '(Linear)'], ['NumWord', '(ALL)'], ['Dolphin18K', '(Linear)']] | [['19.70%', '14.60%', '10.20%'], ['26.80%', '20.10%', '13.10%'], ['50.80%', '45.20%', '13.90%'], ['56.70%', '54.00%', '15.10%'], ['61.60%', '57.10%', '16.80%'], ['63.60%', '60.20%', 'n/a'], ['20.80%', 'n/a', '28.40%']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['Seq2Seq LF+AttReg+Iter'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NumWord || (Linear)</th> <th>NumWord || (ALL)</th> <th>Dolphin18K || (Linear)</th> </tr> </thead> <tbody> <tr> <td>Models || Wang et al. (2017)</td> <td>19.70%</td> <td>14.60%</td> ... | Table 6 | table_6 | P18-1039 | 7 | acl2018 | Overall results are shown in Table 6. From the table, we can see that our final model (Seq2Seq LF+AttReg+Iter) outperforms the neural-based baseline models (Wang et al. (2017) 4 and Seq2Seq Equ). On Number word problem dataset, our model already outperforms the state-of-the-art feature-based model (Huang et al., 2017) ... | [1, 1, 1, 0, 1, 1, 2] | ['Overall results are shown in Table 6.', 'From the table, we can see that our final model (Seq2Seq LF+AttReg+Iter) outperforms the neural-based baseline models (Wang et al. (2017) 4 and Seq2Seq Equ).', 'On Number word problem dataset, our model already outperforms the state-of-the-art feature-based model (Huang et al.... | [None, ['Seq2Seq LF+AttReg+Iter', 'Wang et al. (2017)', 'Seq2Seq Equ'], ['NumWord', 'Seq2Seq LF+AttReg+Iter', 'Huang et al. (2017)', 'Shi et al. (2015)'], None, ['Seq2Seq LF', 'Seq2Seq Equ'], ['NumWord', 'Dolphin18K'], ['Dolphin18K']] | 1 |
P18-1044table_7 | The comparisons of Gen+Adv with Gen and the data augmentation model (Gen+Aug). ‡ denotes that the improvement is statistically significant at p < 0.05, compared with Gen+Aug. | 1 | [['Gen'], ['Gen+Aug'], ['Gen+Adv']] | 1 | [['Case'], ['Zero']] | [['91.5', '56.2'], ['91.2', '57'], ['92.0‡', '58.4‡']] | column | ['F1', 'F1'] | ['Gen+Adv'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Case</th> <th>Zero</th> </tr> </thead> <tbody> <tr> <td>Gen</td> <td>91.5</td> <td>56.2</td> </tr> <tr> <td>Gen+Aug</td> <td>91.2</td> <td>57</td> </tr> <tr> ... | Table 7 | table_7 | P18-1044 | 8 | acl2018 | Table 7 shows the results of the data augmentation model and the GAN-based model. Our Gen+Adv model performs better than the data augmented model. Note that our data augmentation model does not use raw corpora directly. | [1, 1, 2] | ['Table 7 shows the results of the data augmentation model and the GAN-based model.', 'Our Gen+Adv model performs better than the data augmented model.', 'Note that our data augmentation model does not use raw corpora directly.'] | [['Gen', 'Gen+Aug', 'Gen+Adv'], ['Gen+Adv', 'Gen+Aug'], ['Gen+Aug']] | 1 |
P18-1047table_2 | Results of different models in NYT dataset and WebNLG dataset. | 2 | [['Model', 'NovelTagging'], ['Model', 'OneDecoder'], ['Model', 'MultiDecoder']] | 2 | [['NYT', 'Precision'], ['NYT', 'Recall'], ['NYT', 'F1'], ['WebNLG', 'Precision'], ['WebNLG', 'Recall'], ['WebNLG', 'F1']] | [['0.624', '0.317', '0.42', '0.525', '0.193', '0.283'], ['0.594', '0.531', '0.56', '0.322', '0.289', '0.305'], ['0.61', '0.566', '0.587', '0.377', '0.364', '0.371']] | column | ['Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1'] | ['MultiDecoder'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NYT || Precision</th> <th>NYT || Recall</th> <th>NYT || F1</th> <th>WebNLG || Precision</th> <th>WebNLG || Recall</th> <th>WebNLG || F1</th> </tr> </thead> <tbody> <tr> <td>Mode... | Table 2 | table_2 | P18-1047 | 8 | acl2018 | Table 2 shows the Precision, Recall and F1 value of NovelTagging model (Zheng et al., 2017) and our OneDecoder and MultiDecoder models. As we can see, in NYT dataset, our MultiDecoder model achieves the best F1 score, which is 0.587. There is 39.8% improvement compared with the NovelTagging model, which is 0.420. Besid... | [1, 1, 1, 1, 1, 1, 1, 1, 1] | ['Table 2 shows the Precision, Recall and F1 value of NovelTagging model (Zheng et al., 2017) and our OneDecoder and MultiDecoder models.', 'As we can see, in NYT dataset, our MultiDecoder model achieves the best F1 score, which is 0.587.', 'There is 39.8% improvement compared with the NovelTagging model, which is 0.42... | [['Precision', 'Recall', 'F1', 'NovelTagging', 'OneDecoder', 'MultiDecoder'], ['NYT', 'MultiDecoder', 'F1'], ['NYT', 'MultiDecoder', 'NovelTagging', 'F1'], ['NYT', 'OneDecoder', 'NovelTagging'], ['WebNLG', 'MultiDecoder', 'F1'], ['MultiDecoder', 'OneDecoder', 'NovelTagging'], ['MultiDecoder', 'OneDecoder'], ['NYT', 'We... | 1 |
P18-1048table_1 | Trigger identification performance | 2 | [['Method', 'Joint (Local+Global)'], ['Method', 'MSEP-EMD'], ['Method', 'DM-CNN'], ['Method', 'DM-CNN*'], ['Method', 'Bi-RNN'], ['Method', 'Hybrid: Bi-LSTM+CNN'], ['Method', 'SELF: Bi-LSTM+GAN']] | 1 | [['P (%)'], ['R (%)'], ['F (%)']] | [['76.9', '65', '70.4'], ['75.6', '69.8', '72.6'], ['80.4', '67.7', '73.5'], ['79.7', '69.6', '74.3'], ['68.5', '75.7', '71.9'], ['80.8', '71.5', '75.9'], ['75.3', '78.8', '77']] | column | ['P (%)', 'R (%)', 'F (%)'] | ['SELF: Bi-LSTM+GAN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P (%)</th> <th>R (%)</th> <th>F (%)</th> </tr> </thead> <tbody> <tr> <td>Method || Joint (Local+Global)</td> <td>76.9</td> <td>65</td> <td>70.4</td> </tr> <tr> <td>Me... | Table 1 | table_1 | P18-1048 | 6 | acl2018 | Table 1 shows the trigger identification performance. It can be observed that SELF outperforms other models, with a performance gain of no less than 1.1% F-score. Frankly, the performance mainly benefits from the higher recall (78.8%). But in fact the relatively comparable precision (75.3%) to the recall reinforces the... | [1, 1, 1, 1, 1, 1] | ['Table 1 shows the trigger identification performance.', 'It can be observed that SELF outperforms other models, with a performance gain of no less than 1.1% F-score.', 'Frankly, the performance mainly benefits from the higher recall (78.8%).', 'But in fact the relatively comparable precision (75.3%) to the recall rei... | [None, ['SELF: Bi-LSTM+GAN', 'F (%)'], ['SELF: Bi-LSTM+GAN', 'R (%)'], ['SELF: Bi-LSTM+GAN', 'R (%)', 'P (%)'], ['Joint (Local+Global)', 'MSEP-EMD', 'DM-CNN', 'Bi-RNN', 'Hybrid: Bi-LSTM+CNN', 'P (%)', 'R (%)'], ['Joint (Local+Global)', 'MSEP-EMD', 'DM-CNN', 'Bi-RNN', 'Hybrid: Bi-LSTM+CNN', 'P (%)', 'R (%)']] | 1 |
P18-1049table_1 | Results on the test set. The GCL models use the same hyperparameters, if possible. The two models on the top do not use neural networks. The results in the two lower blocks all use double-check. “Two more hidden layers” means adding two dense layers on top of the pre-trained model without using GCL. The last row corres... | 2 | [['Model', 'CAEVO (not NN model)'], ['Model', 'CATENA (not NN model)'], ['Model', 'Cheng et al. 2017'], ['Model', 'Meng et al. 2017'], ['Model', 'pairwise'], ['Model', 'Two more hidden layers'], ['Model', 'GCL w/ state-tracking controller'], ['Model', 'GCL w/ stateless controller'], ['Model', 'GCL w/ pre-trained output... | 1 | [['Micro-F1'], ['Macro-F1']] | [['0.507', '-'], ['0.511', '-'], ['0.5203', '-'], ['-', '0.519'], ['0.535', '0.528'], ['0.539', '0.532'], ['0.545', '0.538'], ['0.546', '0.538'], ['0.541', '0.536']] | column | ['Micro-F1', 'Macro-F1'] | ['GCL w/ state-tracking controller', 'GCL w/ stateless controller', 'GCL w/ pre-trained output layer'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Micro-F1</th> <th>Macro-F1</th> </tr> </thead> <tbody> <tr> <td>Model || CAEVO (not NN model)</td> <td>0.507</td> <td>-</td> </tr> <tr> <td>Model || CATENA (not NN model)</td> ... | Table 1 | table_1 | P18-1049 | 7 | acl2018 | The middle block of Table 1 shows the performance of the pairwise model after applying double-checking. Since all pairs are flipped, double-checking combines results from (ei , ej ) and (ej , ei), picking the label with the higher probability score, which typically boosts performance. The results without double-checkin... | [1, 2, 2, 1, 2, 1, 2, 1, 2] | ['The middle block of Table 1 shows the performance of the pairwise model after applying double-checking.', 'Since all pairs are flipped, double-checking combines results from (ei , ej ) and (ej , ei), picking the label with the higher probability score, which typically boosts performance.', 'The results without double... | [['pairwise'], ['pairwise'], ['pairwise'], ['GCL w/ state-tracking controller', 'GCL w/ pre-trained output layer', 'GCL w/ stateless controller'], ['Two more hidden layers', 'GCL w/ pre-trained output layer'], ['Model'], ['GCL w/ state-tracking controller', 'GCL w/ stateless controller', 'GCL w/ pre-trained output laye... | 1 |
P18-1050table_5 | Results on TimeBank corpus | 2 | [['Models', 'Choubey and Huang (2017)'], ['Models', 'Choubey and Huang (2017) + CP score']] | 1 | [['Acc.(%)']] | [['51.2'], ['52.3']] | column | ['Acc.(%)'] | ['Choubey and Huang (2017) + CP score'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.(%)</th> </tr> </thead> <tbody> <tr> <td>Models || Choubey and Huang (2017)</td> <td>51.2</td> </tr> <tr> <td>Models || Choubey and Huang (2017) + CP score</td> <td>52.3</td> ... | Table 5 | table_5 | P18-1050 | 8 | acl2018 | To facilitate direct comparisons, we used the same state-of-the-art temporal relation classification system as described in our previous work Choubey and Huang (2017) and considered all the 14 relations in classification. Choubey and Huang (2017) forms three sequences (i.e., word forms, POS tags, and dependency relatio... | [2, 2, 2, 2, 2, 1, 2] | ['To facilitate direct comparisons, we used the same state-of-the-art temporal relation classification system as described in our previous work Choubey and Huang (2017) and considered all the 14 relations in classification.', 'Choubey and Huang (2017) forms three sequences (i.e., word forms, POS tags, and dependency re... | [['Choubey and Huang (2017)', 'Choubey and Huang (2017) + CP score'], ['Choubey and Huang (2017)'], None, ['Choubey and Huang (2017) + CP score'], ['Choubey and Huang (2017)', 'Choubey and Huang (2017) + CP score'], ['Choubey and Huang (2017)', 'Choubey and Huang (2017) + CP score'], None] | 1 |
P18-1053table_4 | Performance of our model with different random seeds. | 1 | [['Our model']] | 1 | [['Min F'], ['Median F'], ['Max F'], ['σ']] | [['56.5', '57.1', '57.5', '0.00253']] | column | ['Min F', 'Median F', 'Max F', 'σ'] | ['Our model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Min F</th> <th>Median F</th> <th>Max F</th> <th>σ</th> </tr> </thead> <tbody> <tr> <td>Our model</td> <td>56.5</td> <td>57.1</td> <td>57.5</td> <td>0.00253</td> </t... | Table 4 | table_4 | P18-1053 | 7 | acl2018 | Table 4 shows the performance of our model with different random seeds on the test dataset. We report the minimum, the maximum, the median F-scores results and the standard deviation σ of F-scores. We run the model with 38 different random seeds. The maximum F-score is 57.5% and the minimum one is 56.5%. | [1, 1, 2, 1] | ['Table 4 shows the performance of our model with different random seeds on the test dataset.', 'We report the minimum, the maximum, the median F-scores results and the standard deviation σ of F-scores.', 'We run the model with 38 different random seeds.', 'The maximum F-score is 57.5% and the minimum one is 56.5%.'] | [['Our model'], ['Min F', 'Median F', 'Max F', 'σ'], None, ['Max F', 'Min F']] | 1 |
P18-1061table_2 | Full length ROUGE F1 evaluation (%) on CNN/Daily Mail test set. Results with ‡ mark are taken from the corresponding papers. Those marked with * were trained and evaluated on the anonymized dataset, and so are not strictly comparable to our results on the original text. All our ROUGE scores have a 95% confidence interv... | 2 | [['Models', 'LEAD3'], ['Models', 'TEXTRANK'], ['Models', 'CRSUM'], ['Models', 'NN-SE'], ['Models', 'PGN ‡'], ['Models', 'LEAD3 ‡ *'], ['Models', 'SUMMARUNNER ‡ *'], ['Models', 'NEUSUM']] | 1 | [['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L']] | [['40.24-', '17.70-', '36.45-'], ['40.20-', '17.56-', '36.44-'], ['40.52-', '18.08-', '36.81-'], ['41.13-', '18.59-', '37.40-'], ['39.53-', '17.28-', '36.38-'], ['39.2', '15.7', '35.5'], ['39.6', '16.2', '35.3'], ['41.59', '19.01', '37.98']] | column | ['ROUGE-1', 'ROUGE-2', 'ROUGE-L'] | ['NEUSUM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Models || LEAD3</td> <td>40.24-</td> <td>17.70-</td> <td>36.45-</td> </tr> <tr> <td>Mod... | Table 2 | table_2 | P18-1061 | 7 | acl2018 | We use the official ROUGE script (version 1.5.5) to evaluate the summarization output. Table 2 summarizes the results on CNN/Daily Mail data set using full length ROUGE-F1 evaluation. It includes two unsupervised baselines, LEAD3 and TEXTRANK. The table also includes three state-of-the-art neural network based extract... | [2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1] | ['We use the official ROUGE script (version 1.5.5) to evaluate the summarization output.', 'Table 2 summarizes the results on CNN/Daily Mail data set using full length ROUGE-F1 evaluation.', 'It includes two unsupervised baselines, LEAD3 and TEXTRANK.', 'The table also includes three state-of-the-art neural network ba... | [None, ['ROUGE-1', 'ROUGE-2', 'ROUGE-L'], ['LEAD3', 'TEXTRANK'], ['CRSUM', 'NN-SE', 'SUMMARUNNER ‡ *'], ['PGN ‡'], ['SUMMARUNNER ‡ *'], ['LEAD3 ‡ *'], ['NEUSUM', 'ROUGE-2'], ['NEUSUM', 'LEAD3', 'TEXTRANK', 'CRSUM', 'NN-SE', 'SUMMARUNNER ‡ *', 'PGN ‡'], ['NEUSUM', 'ROUGE-2', 'LEAD3'], ['NEUSUM', 'LEAD3', 'TEXTRANK', 'CR... | 1 |
P18-1063table_5 | Speed comparison with See et al. (2017). | 2 | [['Models', '(See et al., 2017)'], ['Models', 'rnn-ext + abs + RL'], ['Models', 'rnn-ext + abs + RL + rerank']] | 2 | [['Speed', 'total time (hr)'], ['Speed', 'words / sec']] | [['12.9', '14.8'], ['0.68', '361.3'], ['2.00 (1.46 +0.54)', '109.8']] | column | ['total time (hr)', 'words / sec'] | ['rnn-ext + abs + RL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Speed || total time (hr)</th> <th>Speed || words / sec</th> </tr> </thead> <tbody> <tr> <td>Models || (See et al., 2017)</td> <td>12.9</td> <td>14.8</td> </tr> <tr> <td>Models ... | Table 5 | table_5 | P18-1063 | 9 | acl2018 | In Table 5, we show the substantial test-time speed-up of our model compared to See et al. (2017). We calculate the total decoding time for producing all summaries for the test set. Due to the fact that the main test-time speed bottleneck of RNN language generation model is that the model is constrained to genera... | [1, 2, 2, 1, 1, 1] | ['In Table 5, we show the substantial test-time speed-up of our model compared to See et al. (2017).', 'We calculate the total decoding time for producing all summaries for the test set.', 'Due to the fact that the main test-time speed bottleneck of RNN language generation model is that the model is constrained t... | [['total time (hr)', 'words / sec'], None, None, ['rnn-ext + abs + RL'], ['rnn-ext + abs + RL', 'total time (hr)', 'words / sec'], ['rnn-ext + abs + RL + rerank', 'total time (hr)', 'words / sec']] | 1 |
P18-1064table_4 | Gigaword Human Evaluation: pairwise comparison between our 3-way multi-task (MTL) model w.r.t. our baseline. | 2 | [['Models', 'MTL wins'], ['Models', 'Baseline wins'], ['Models', 'Non-distinguish']] | 1 | [['Relevance'], ['Readability'], ['Total']] | [['33', '32', '65'], ['22', '22', '44'], ['45', '46', '91']] | column | ['Relevance', 'Readability', 'Total'] | ['MTL wins'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Relevance</th> <th>Readability</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>Models || MTL wins</td> <td>33</td> <td>32</td> <td>65</td> </tr> <tr> <td>Models |... | Table 4 | table_4 | P18-1064 | 7 | acl2018 | We also show human evaluation results on the Gigaword dataset in Table 4 (again based on pairwise comparisons for 100 samples), where we see that our MTL model is better than our state-of-theart baseline on both relevance and readability.7. | [1] | ['We also show human evaluation results on the Gigaword dataset in Table 4 (again based on pairwise comparisons for 100 samples), where we see that our MTL model is better than our state-of-theart baseline on both relevance and readability.7.'] | [['MTL wins', 'Baseline wins']] | 1 |
P18-1064table_6 | Performance of our pointer-based entailment generation (EG) models compared with previous SotA work. M, C, R, B are short for Meteor, CIDEr-D, ROUGE-L, and BLEU-4, resp. | 2 | [['Models', 'Pasunuru&Bansal (2017)'], ['Models', 'Our 1-layer pointer EG'], ['Models', 'Our 2-layer pointer EG']] | 1 | [['M'], ['C'], ['R'], ['B']] | [['29.6', '117.8', '62.4', '40.6'], ['32.4', '139.3', '65.1', '43.6'], ['32.3', '140.0', '64.4', '43.7']] | column | ['M', 'C', 'R', 'B'] | ['Our 1-layer pointer EG', 'Our 2-layer pointer EG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>M</th> <th>C</th> <th>R</th> <th>B</th> </tr> </thead> <tbody> <tr> <td>Models || Pasunuru&Bansal (2017)</td> <td>29.6</td> <td>117.8</td> <td>62.4</td> <td>40.6</... | Table 6 | table_6 | P18-1064 | 8 | acl2018 | Table 6 compares our model’s performance to Pasunuru and Bansal (2017). Our pointer mechanism gives a performance boost, since the entailment generation task involves copying from the given premise sentence, whereas the 2-layer model seems comparable to the 1-layer model. Also, the supplementary shows some output ex... | [1, 1, 2] | [' Table 6 compares our model’s performance to Pasunuru and Bansal (2017).', 'Our pointer mechanism gives a performance boost, since the entailment generation task involves copying from the given premise sentence, whereas the 2-layer model seems comparable to the 1-layer model.', 'Also, the supplementary shows some o... | [['Our 1-layer pointer EG', 'Our 2-layer pointer EG', 'Pasunuru&Bansal (2017)'], ['Our 2-layer pointer EG', 'Our 1-layer pointer EG'], ['Our 1-layer pointer EG', 'Our 2-layer pointer EG']] | 1 |
P18-1064table_9 | Entailment classification results of our baseline vs. EG-multi-task model (p < 0.001). | 2 | [['Models', 'Baseline'], ['Models', 'Multi-Task (EG)']] | 1 | [['Average Entailment Probability']] | [['0.907'], ['0.912']] | column | ['Average Entailment Probability'] | ['Multi-Task (EG)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Average Entailment Probability</th> </tr> </thead> <tbody> <tr> <td>Models || Baseline</td> <td>0.907</td> </tr> <tr> <td>Models || Multi-Task (EG)</td> <td>0.912</td> </tr> </t... | Table 9 | table_9 | P18-1064 | 8 | acl2018 | We employ a state-of-the-art entailment classifier (Chen et al., 2017), and calculate the average of the entailment probability of each of the output summary's sentences being entailed by the input source document. We do this for output summaries of our baseline and 2-way-EG multi-task model (with entailment generati... | [2, 2, 1] | ['We employ a state-of-the-art entailment classifier (Chen et al., 2017), and calculate the average of the entailment probability of each of the output summary's sentences being entailed by the input source document.', ' We do this for output summaries of our baseline and 2-way-EG multi-task model (with entailment ... | [['Average Entailment Probability'], ['Baseline', 'Multi-Task (EG)'], ['Multi-Task (EG)', 'Baseline']] | 1 |
P18-1067table_6 | Overview of Macro-weighted Average F1 Scores of SVM and PSL Models. The top portion of the table shows the results of the three baselines. The bottom portion shows a subset of the PSL models (parentheses indicate features added onto the previous models). | 2 | [['MODEL', 'SVM BOW'], ['MODEL', 'PSL BOW'], ['MODEL', 'MAJORITY VOTE'], ['MODEL', 'M1 (UNIGRAMS)'], ['MODEL', 'M3 (+ POLITICAL INFO)'], ['MODEL', 'M5 (+ FRAMES)'], ['MODEL', 'M9 (+ BIGRAMS)'], ['MODEL', 'M13 (ALL FEATURES)']] | 1 | [['MFD'], ['AR']] | [['18.7', '-'], ['21.88', '-'], ['12.5', '10.86'], ['7.17', '8.68'], ['22.01', '30.45'], ['28.94', '37.44'], ['67.93', '66.5'], ['72.49', '69.38']] | column | ['F1', 'F1'] | ['M1 (UNIGRAMS)', 'M3 (+ POLITICAL INFO)', 'M5 (+ FRAMES)', 'M9 (+ BIGRAMS)', 'M13 (ALL FEATURES)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MFD</th> <th>AR</th> </tr> </thead> <tbody> <tr> <td>MODEL || SVM BOW</td> <td>18.7</td> <td>-</td> </tr> <tr> <td>MODEL || PSL BOW</td> <td>21.88</td> <td>-</td> ... | Table 6 | table_6 | P18-1067 | 7 | acl2018 | Table 6 shows an overview of the average results of our supervised experiments for five of the PSL models. The first column lists the SVM or PSL model. The second column presents the results of a given model when using the MFD as the source of the unigrams for the initial model (M1). The final column shows the results... | [1, 1, 1, 1, 1, 1, 1, 1] | [' Table 6 shows an overview of the average results of our supervised experiments for five of the PSL models.', 'The first column lists the SVM or PSL model.', 'The second column presents the results of a given model when using the MFD as the source of the unigrams for the initial model (M1).', 'The final column shows ... | [['SVM BOW', 'PSL BOW', 'M1 (UNIGRAMS)', 'M3 (+ POLITICAL INFO)', 'M5 (+ FRAMES)', 'M9 (+ BIGRAMS)', 'M13 (ALL FEATURES)'], ['SVM BOW', 'PSL BOW', 'M1 (UNIGRAMS)', 'M3 (+ POLITICAL INFO)', 'M5 (+ FRAMES)', 'M9 (+ BIGRAMS)', 'M13 (ALL FEATURES)'], ['MFD'], ['AR'], ['SVM BOW', 'PSL BOW'], ['SVM BOW', 'PSL BOW'], ['MAJORI... | 1 |
P18-1067table_9 | Overview of Macro-weighted Average F1 Scores of Joint PSL Model M13. BASELINE is the MORAL prediction result. JOINT is the result of jointly predicting the MORAL and uninitialized FRAME predicates. SKYLINE shows the results when using all features with initialized frames. | 2 | [['PSL MODEL', 'BASELINE'], ['PSL MODEL', 'JOINT'], ['PSL MODEL', 'SKYLINE']] | 1 | [['MFD'], ['AR']] | [['55.49', '55.88'], ['51.22', '58.75'], ['72.49', '69.38']] | column | ['F1', 'F1'] | ['SKYLINE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MFD</th> <th>AR</th> </tr> </thead> <tbody> <tr> <td>PSL MODEL || BASELINE</td> <td>55.49</td> <td>55.88</td> </tr> <tr> <td>PSL MODEL || JOINT</td> <td>51.22</td> <t... | Table 9 | table_9 | P18-1067 | 9 | acl2018 | Table 9 shows the macro-weighted average F1 scores for three different models. The BASELINE model shows the results of predicting only the MORAL of the tweet using the non-joint model M13, which uses all features with frames initialized. The JOINT model is designed to predict both the moral foundation and frame of a tw... | [1, 2, 2, 2, 1, 2, 2, 1] | ['Table 9 shows the macro-weighted average F1 scores for three different models.', 'The BASELINE model shows the results of predicting only the MORAL of the tweet using the non-joint model M13, which uses all features with frames initialized.', 'The JOINT model is designed to predict both the moral foundation and frame... | [['BASELINE', 'JOINT', 'SKYLINE'], ['BASELINE'], ['JOINT'], ['SKYLINE'], ['BASELINE', 'AR', 'MFD'], ['MFD'], ['JOINT'], ['JOINT', 'SKYLINE']] | 1 |
P18-1068table_3 | DJANGO results. Accuracies in the first and second block are taken from Ling et al. (2016) and Yin and Neubig (2017). | 2 | [['Method', 'Retrieval System'], ['Method', 'Phrasal SMT'], ['Method', 'Hierarchical SMT'], ['Method', 'SEQ2SEQ+UNK replacement'], ['Method', 'SEQ2TREE+UNK replacement'], ['Method', 'LPN+COPY (Ling et al. 2016)'], ['Method', 'SNM+COPY (Yin and Neubig 2017)'], ['Method', 'ONESTAGE'], ['Method', 'COARSE2FINE'], ['Method'... | 1 | [['Accuracy']] | [['14.7'], ['31.5'], ['9.5'], ['45.1'], ['39.4'], ['62.3'], ['71.6'], ['69.5'], ['74.1'], ['72.1'], ['83']] | column | ['Accuracy'] | ['COARSE2FINE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Method || Retrieval System</td> <td>14.7</td> </tr> <tr> <td>Method || Phrasal SMT</td> <td>31.5</td> </tr> <tr> <td>Method ... | Table 3 | table_3 | P18-1068 | 8 | acl2018 | Table 3 reports results on DJANGO where we observe similar tendencies. COARSE2FINE outperforms ONESTAGE by a wide margin. It is also superior to the best reported result in the literature (SNM+COPY; see the second block in the table). Again we observe that the sketch encoder is beneficial and that there is an 8.9 point... | [1, 1, 1, 1] | ['Table 3 reports results on DJANGO where we observe similar tendencies.', 'COARSE2FINE outperforms ONESTAGE by a wide margin.', 'It is also superior to the best reported result in the literature (SNM+COPY; see the second block in the table).', 'Again we observe that the sketch encoder is beneficial and that there is a... | [None, ['COARSE2FINE', 'ONESTAGE'], ['COARSE2FINE', 'SNM+COPY (Yin and Neubig 2017)'], ['COARSE2FINE - sketch encoder', 'COARSE2FINE + oracle sketch', 'COARSE2FINE']] | 1 |
P18-1069table_5 | Importance scores of confidence metrics (normalized by maximum value on each dataset). Best results are shown in bold. Same shorthands apply as in Table 3. | 2 | [['Metric', 'IFTTT'], ['Metric', 'DJANGO']] | 1 | [['Dout'], ['Noise'], ['PR'], ['PPL'], ['LM'], ['#UNK'], ['Var'], ['Ent']] | [['0.39', '1', '0.89', '0.27', '0.26', '0.46', '0.43', '0.34'], ['1', '0.59', '0.22', '0.58', '0.49', '0.14', '0.24', '0.25']] | column | ['Dout', 'Noise', 'PR', 'PPL', 'LM', '#UNK', 'Var', 'Ent'] | ['Metric'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dout</th> <th>Noise</th> <th>PR</th> <th>PPL</th> <th>LM</th> <th>#UNK</th> <th>Var</th> <th>Ent</th> </tr> </thead> <tbody> <tr> <td>Metric || IFTTT</td> <td>0.3... | Table 5 | table_5 | P18-1069 | 8 | acl2018 | Table 5 shows the relative importance of individual metrics in the regression model. As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016). The results indicate that model uncertainty (Noise/... | [1, 2, 1, 1, 2] | ['Table 5 shows the relative importance of individual metrics in the regression model.', 'As importance score we use the average gain (i.e., loss reduction) brought by the confidence metric once added as feature to the branch of the decision tree (Chen and Guestrin, 2016).', 'The results indicate that model uncertainty... | [['Metric'], None, ['Noise', 'Dout', 'PR', 'PPL'], ['IFTTT', '#UNK', 'Var'], ['Dout', 'PR', 'PPL', 'LM', '#UNK', 'Var', 'Ent']] | 1 |
P18-1073table_3 | Accuracy (%) of the proposed method in comparison with previous work. *Results obtained with the official implementation from the authors. †Results obtained with the framework from Artetxe et al. (2018a). The remaining results were reported in the original papers. For methods that do not require supervision, we report ... | 4 | [['Supervision', '5k dict', 'Method', 'Mikolov et al. (2013)'], ['Supervision', '5k dict', 'Method', 'Faruqui and Dyer (2014)'], ['Supervision', '5k dict', 'Method', 'Shigeto et al. (2015)'], ['Supervision', '5k dict', 'Method', 'Dinu et al. (2015)'], ['Supervision', '5k dict', 'Method', 'Lazaridou et al. (2015)'], ['S... | 1 | [['EN-IT'], ['EN-DE'], ['EN-FI'], ['EN-ES']] | [['34.93†', '35.00†', '25.91†', '27.73†'], ['38.40*', '37.13*', '27.60*', '26.80*'], ['41.53†', '43.07†', '31.04†', '33.73†'], ['37.7', '38.93*', '29.14*', '30.40*'], ['40.2', '-', '-', '-'], ['36.87†', '41.27†', '28.23†', '31.20†'], ['36.73†', '40.80†', '28.16†', '31.07†'], ['39.27', '41.87* ', '30.62*', '31.40*'], ['... | column | ['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy'] | ['Proposed method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-IT</th> <th>EN-DE</th> <th>EN-FI</th> <th>EN-ES</th> </tr> </thead> <tbody> <tr> <td>Supervision || 5k dict || Method || Mikolov et al. (2013)</td> <td>34.93†</td> <td>35.00†... | Table 3 | table_3 | P18-1073 | 7 | acl2018 | Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision. We focus on the widely used English-Italian dataset of Dinu et al. (2015) and its extensions. Despite being fully unsupervised, our method achieves the best results in all language pa... | [1, 1, 1, 1, 1] | ['Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision.', 'We focus on the widely used English-Italian dataset of Dinu et al. (2015) and its extensions.', 'Despite being fully unsupervised, our method achieves the best results in all lan... | [['Proposed method'], ['Dinu et al. (2015)', 'EN-IT'], ['Proposed method'], ['Proposed method', 'EN-FI', 'Artetxe et al. (2018a)'], ['Proposed method', 'Artetxe et al. (2017)']] | 1 |
P18-1074table_6 | Performance comparison between models with different components (C: character embedding; L: shared LSTM; S: language-specific layer; H: highway networks; D: dropout). | 2 | [['Model', 'Basic'], ['Model', 'Basic + C'], ['Model', 'Basic + CL'], ['Model', 'Basic + CLS'], ['Model', 'Basic + CLSH'], ['Model', 'Basic + CLSHD']] | 1 | [['0'], ['10'], ['100'], ['200'], ['All']] | [['2.06', '20.03', '47.98', '51.52', '77.63'], ['1.69', '24.22', '48.53', '56.26', '83.38'], ['9.62', '25.97', '49.54', '56.29', '83.37'], ['3.21', '25.43', '50.67', '56.34', '84.02'], ['7.7', '30.48', '53.73', '58.09', '84.68'], ['12.12', '35.82', '57.33', '63.27', '86']] | column | ['F-score', 'F-score', 'F-score', 'F-score', 'F-score'] | ['0', '10', '100', '200', 'All'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>0</th> <th>10</th> <th>100</th> <th>200</th> <th>All</th> </tr> </thead> <tbody> <tr> <td>Model || Basic</td> <td>2.06</td> <td>20.03</td> <td>47.98</td> <td>51.5... | Table 6 | table_6 | P18-1074 | 8 | acl2018 | As Table 6 shows, adding each component usually enhances the performance (F-score, %), while the impact also depends on the size of the target task data. For example, the language-specific layer slightly impairs the performance with only 10 training sentences. However, this is unsurprising as it introduces additional p... | [1, 1, 2] | ['As Table 6 shows, adding each component usually enhances the performance (F-score, %), while the impact also depends on the size of the target task data.', 'For example, the language-specific layer slightly impairs the performance with only 10 training sentences.', 'However, this is unsurprising as it introduces addi... | [['Basic + C', 'Basic + CL', 'Basic + CLS', 'Basic + CLSH', 'Basic + CLSHD', '0', '10', '100', '200', 'All'], ['Basic + CLS', 'Basic + CLSH', 'Basic + CLSHD', '10'], None] | 1 |
P18-1075table_2 | We report F1 results for medical BLI with the cosine similarity and the classifier based systems. We present baseline and our proposed domain adaptation method using both general and medical lexicons. | 1 | [['Baseline'], ['Baseline BNC lexicon'], ['Adapted medical lexicon'], ['Adapted BNC lexicon']] | 2 | [['cosine similarity', 'F1 (top)'], ['cosine similarity', 'F1 (all)'], ['classifier', 'F1 (top)'], ['classifier', 'F1 (all)']] | [['13.43', '9.84', '37.73', '36.61'], ['-', '-', '20.73', '21.78'], ['14.18', '14.15', '40.71', '38.09'], ['16.29', '16.71', '22.1', '21.5']] | column | ['F1 (top)', 'F1 (all)', 'F1 (top)', 'F1 (all)'] | ['Adapted BNC lexicon'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>cosine similarity || F1 (top)</th> <th>cosine similarity || F1 (all)</th> <th>classifier || F1 (top)</th> <th>classifier || F1 (all)</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> ... | Table 2 | table_2 | P18-1075 | 6 | acl2018 | Table 2 compares its performance with our adapted BWEs, with both cosine similarity and classification based systems. "top" F1 scores are based on the most probable word as prediction only; "all" F1 scores use all words as prediction whose probability is above the threshold. It can be seen that the cosine similarity base... | [1, 1, 1, 2, 1, 1, 2, 1, 1, 1] | ['Table 2 compares its performance with our adapted BWEs, with both cosine similarity and classification based systems.', '"top" F1 scores are based on the most probable word as prediction only; "all" F1 scores use all words as prediction whose probability is above the threshold.', 'It can be seen that the cosine similar... | [['Adapted medical lexicon', 'Adapted BNC lexicon', 'cosine similarity', 'classifier'], ['F1 (top)', 'F1 (all)'], ['cosine similarity', 'F1 (top)', 'F1 (all)', 'Baseline', 'Adapted BNC lexicon'], None, ['classifier', 'cosine similarity'], ['classifier', 'Adapted BNC lexicon'], None, ['Adapted medical lexicon'], ['Adapt... | 1 |
P18-1075table_5 | Results with the semi-supervised system for BLI. Differences comparing to previous results are indicated in brackets. Baseline results are compared to rerun experiments of Heyman et al. (2017) using BWEs instead of MWEs. | 1 | [['Baseline+BNC'], ['Baseline+medical'], ['Adapted+BNC'], ['Adapted+medical']] | 1 | [['F1 (top)'], ['F1 (all)']] | [['35.04 (-0.66)', '34.98 (-1.40)'], ['36.20 (0.50)', '36.55 (0.16)'], ['41.01 (0.30)', '39.04 (0.95)'], ['41.44 (0.73)', '37.51 (-0.57)']] | column | ['F1 (top)', 'F1 (all)'] | ['Adapted+BNC', 'Adapted+medical'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 (top)</th> <th>F1 (all)</th> </tr> </thead> <tbody> <tr> <td>Baseline+BNC</td> <td>35.04 (-0.66)</td> <td>34.98 (-1.40)</td> </tr> <tr> <td>Baseline+medical</td> <td>36... | Table 5 | table_5 | P18-1075 | 9 | acl2018 | Results in Table 5 show that adding semisup to the classifier further increases performance for BLI as well. For the baseline system, when using only in-domain text for creating BWEs, only the medical unlabeled set was effective, general domain word pairs could not be exploited due to the lack of general semantic knowl... | [1, 1, 1, 1] | ['Results in Table 5 show that adding semisup to the classifier further increases performance for BLI as well.', 'For the baseline system, when using only in-domain text for creating BWEs, only the medical unlabeled set was effective, general domain word pairs could not be exploited due to the lack of general semantic ... | [None, ['Baseline+BNC', 'Baseline+medical'], ['Adapted+BNC', 'Adapted+medical'], ['F1 (top)', 'F1 (all)', 'Adapted+BNC', 'Adapted+medical']] | 1 |
P18-1076table_4 | Results for different combinations of interactions between document (D) and question (Q) context (ctx) and context + knowledge (ctx+kn) representations. (CN5Sel, 50 facts) | 2 | [['Drepr to Qrepr interaction', 'Dctx Qctx (w/o know)'], ['Drepr to Qrepr interaction', 'Dctx+kn Qctx+kn'], ['Drepr to Qrepr interaction', 'Dctx Qctx+kn'], ['Drepr to Qrepr interaction', 'Dctx+kn Qctx'], ['Drepr to Qrepr interaction', 'Full model'], ['Drepr to Qrepr interaction', 'w/o Dctx Qctx'], ['Drepr to Qrepr inte... | 2 | [['NE', 'Dev'], ['NE', 'Test'], ['CN', 'Dev'], ['CN', 'Test']] | [['75.5', '70.3', '68.2', '64.8'], ['76.45', '69.68', '70.85', '66.32'], ['77.1', '69.72', '70.8', '66.32'], ['75.65', '70.88', '71.2', '67.96'], ['76.8', '70.24', '71.85', '67.64'], ['75.95', '70.24', '70.65', '67.12'], ['76.2', '69.8', '70.75', '67'], ['76.55', '70.52', '71.75', '66.32'], ['76.05', '70.84', '70.8', '... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Dctx+kn Qctx+kn', 'Dctx Qctx+kn', 'Dctx+kn Qctx'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NE || Dev</th> <th>NE || Test</th> <th>CN || Dev</th> <th>CN || Test</th> </tr> </thead> <tbody> <tr> <td>Drepr to Qrepr interaction || Dctx Qctx (w/o know)</td> <td>75.5</td> <... | Table 4 | table_4 | P18-1076 | 7 | acl2018 | Table 4 shows that the combination of different interactions between ctx and ctx+kn representations leads to clear improvement over the w/o knowledge setup, in particular for the Common Nouns dataset. We also performed ablations for a model with 100 facts (see Supplement). | [1, 0] | ['Table 4 shows that the combination of different interactions between ctx and ctx+kn representations leads to clear improvement over the w/o knowledge setup, in particular for the Common Nouns dataset.', 'We also performed ablations for a model with 100 facts (see Supplement).'] | [['CN', 'Dctx+kn Qctx+kn', 'Dctx Qctx+kn', 'Dctx+kn Qctx'], None] | 1 |
P18-1076table_6 | Comparison of KnReader to existing endto-end neural models on the benchmark datasets. | 3 | [['Models', '-', 'Human (ctx + q)'], ['Models', 'Single interaction', 'LSTMs (ctx + q) (Hill et al. 2015)'], ['Models', 'Single interaction', 'AS Reader'], ['Models', 'Single interaction', 'AS Reader (our impl)'], ['Models', 'Single interaction', 'KnReader (ours)'], ['Models', 'Multiple interactions', 'MemNNs (Weston e... | 2 | [['NE', 'dev'], ['NE', 'test'], ['CN', 'dev'], ['CN', 'test']] | [['-', '81.6', '-', '81.6'], ['51.2', '41.8', '62.6', '56'], ['73.8', '68.6', '68.8', '63.4'], ['75.5', '70.3', '68.2', '64.8'], ['77.4', '71.4', '71.8', '67.6'], ['70.4', '66.6', '64.2', '63'], ['74.9', '69', '71.5', '67.4'], ['77.2', '71.4', '71.6', '68'], ['75.3', '69.7', '72.1', '69.2'], ['75.2', '68.6', '72.2', '6... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['KnReader (ours)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NE || dev</th> <th>NE || test</th> <th>CN || dev</th> <th>CN || test</th> </tr> </thead> <tbody> <tr> <td>Models || - || Human (ctx + q)</td> <td>-</td> <td>81.6</td> <td>-... | Table 6 | table_6 | P18-1076 | 7 | acl2018 | Table 6 compares our model (Knowledgeable Reader) to previous work on the CBT datasets. We show the results of our model with the settings that performed best on the Dev sets of the two datasets NE and CN: for NE, (Dctx+kn, Qctx) with 100 facts; for CN the Full model with 50 facts, both with CN5Sel. Note that our work ... | [1, 1, 2, 1, 1] | ['Table 6 compares our model (Knowledgeable Reader) to previous work on the CBT datasets.', 'We show the results of our model with the settings that performed best on the Dev sets of the two datasets NE and CN: for NE, (Dctx+kn, Qctx) with 100 facts; for CN the Full model with 50 facts, both with CN5Sel.', 'Note that o... | [['KnReader (ours)'], ['KnReader (ours)', 'NE', 'CN'], ['Single interaction'], ['KnReader (ours)', 'Single interaction', 'NE', 'CN'], ['KnReader (ours)', 'Multiple interactions']] | 1 |
P18-1084table_2 | Results on cross-lingual image description retrieval. NN-based models are above the dashed line. Best overall results are in bold. Best results with non-deep models are underlined. | 2 | [['Model', 'DPCCA (Variant A)'], ['Model', 'DPCCA (Variant B)'], ['Model', 'DPCCA(B)+DCCA NOI (concat)'], ['Model', 'DCCA NOI (Wang et al. 2015b)'], ['Model', 'DCCA SDL (Chang et al. 2017)'], ['Model', 'DCCA (Wang et al. 2015a)'], ['Model', 'DCCAE (Wang et al. 2015a)'], ['Model', 'IMG PIVOT (Gella et al. 2017)'], ['Mod... | 2 | [['R@1', 'EN→DE'], ['R@1', 'DE→EN'], ['BLEU+1', 'EN→DE'], ['BLEU+1', 'DE→EN']] | [['0.795', '0.779', '0.836', '0.827'], ['0.809', '0.794', '0.848', '0.839'], ['0.826', '0.791', '0.863', '0.837'], ['0.812', '0.788', '0.849', '0.83'], ['0.507', '0.487', '0.552', '0.533'], ['0.619', '0.621', '0.664', '0.673'], ['0.564', '0.542', '0.607', '0.598'], ['0.772', '0.763', '0.789', '0.781'], ['0.579', '0.57'... | column | ['R@1', 'R@1', 'BLEU+1', 'BLEU+1'] | ['DPCCA(B)+DCCA NOI (concat)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R@1 || EN→DE</th> <th>R@1 || DE→EN</th> <th>BLEU+1 || EN→DE</th> <th>BLEU+1 || DE→EN</th> </tr> </thead> <tbody> <tr> <td>Model || DPCCA (Variant A)</td> <td>0.795</td> <td>0.77... | Table 2 | table_2 | P18-1084 | 8 | acl2018 | We report two standard evaluation metrics: 1) Recall at 1 (R@1) scores, and 2) the sentence-level BLEU+1 metric (Lin and Och, 2004), a variant of BLEU which smooths terms for higher-order n-grams, making it more suitable for evaluating short sentences. The scores for the retrieval task with all models are summarized in... | [2, 1, 1] | ['We report two standard evaluation metrics: 1) Recall at 1 (R@1) scores, and 2) the sentence-level BLEU+1 metric (Lin and Och, 2004), a variant of BLEU which smooths terms for higher-order n-grams, making it more suitable for evaluating short sentences.', 'The scores for the retrieval task with all models are summariz... | [['R@1', 'BLEU+1'], None, ['DPCCA(B)+DCCA NOI (concat)']] | 1 |
P18-1084table_3 | Results on EN and DE SimLex-999 (POS-based evaluation). All scores are Spearman’s rank correlations. INIT EMB refers to initial pre-trained monolingual word embeddings (see §6). | 2 | [['Model', 'DPCCA (Variant A)'], ['Model', 'DPCCA (Variant B)'], ['Model', 'DCCA NOI (Wang et al. 2015b)'], ['Model', 'DCCA (Wang et al. 2015a)'], ['Model', 'PCCA (Rao 1969)'], ['Model', 'CCA (Hotelling 1936)'], ['Model', 'GCCA (Funaki and Nakayama 2015)'], ['Model', 'INIT EMB']] | 2 | [['English-German', 'EN-Adj'], ['English-German', 'EN-Verbs'], ['English-German', 'EN-Nouns'], ['English-German', 'DE-Adj'], ['English-German', 'DE-Verbs'], ['English-German', 'DE-Nouns']] | [['0.64', '0.311', '0.369', '0.43', '0.321', '0.404'], ['0.626', '0.316', '0.382', '0.462', '0.319', '0.399'], ['0.611', '0.308', '0.361', '0.441', '0.297', '0.398'], ['0.618', '0.261', '0.327', '0.404', '0.29', '0.362'], ['0.614', '0.296', '0.34', '0.305', '0.143', '0.34'], ['0.557', '0.297', '0.321', '0.284', '0.157'... | column | ['correlation', 'correlation', 'correlation', 'correlation', 'correlation', 'correlation'] | ['DPCCA (Variant A)', 'DPCCA (Variant B)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>English-German || EN-Adj</th> <th>English-German || EN-Verbs</th> <th>English-German || EN-Nouns</th> <th>English-German || DE-Adj</th> <th>English-German || DE-Verbs</th> <th>English-German... | Table 3 | table_3 | P18-1084 | 9 | acl2018 | The results on the POS classes represented in SimLex-999 (nouns, verbs, adjectives, Table 3) form our main finding: conditioning the multilingual representations on a shared image leads to improvements in verb and adjective representations. While for nouns one of the DPCCA variants is the best performing model for both... | [1, 1, 2] | ['The results on the POS classes represented in SimLex-999 (nouns, verbs, adjectives, Table 3) form our main finding: conditioning the multilingual representations on a shared image leads to improvements in verb and adjective representations.', 'While for nouns one of the DPCCA variants is the best performing model for... | [['EN-Adj', 'EN-Verbs', 'DE-Adj', 'DE-Verbs'], ['DPCCA (Variant A)', 'DPCCA (Variant B)', 'EN-Nouns', 'DE-Nouns'], None] | 1 |
P18-1084table_4 | Results (Spearman rank correlation) of our models and the strongest baselines on Multilingual SimLex-999 (all data). | 2 | [['Model', 'DPCCA (A)'], ['Model', 'DPCCA (B)'], ['Model', 'PCCA'], ['Model', 'DCCA NOI'], ['Model', 'GCCA'], ['Model', 'INIT EMB']] | 2 | [['EN-DE WIW', 'EN'], ['EN-DE WIW', 'DE'], ['EN-IT WIW', 'EN'], ['EN-IT WIW', 'IT'], ['EN-RU WIW', 'EN'], ['EN-RU WIW', 'RU']] | [['0.398', '0.4', '0.412', '0.429', '0.404', '0.407'], ['0.405', '0.4', '0.413', '0.427', '0.413', '0.402'], ['0.374', '0.301', '0.37', '0.386', '0.374', '0.374'], ['0.39', '0.398', '0.413', '0.422', '0.407', '0.398'], ['0.395', '0.386', '0.414', '0.407', '0.412', '0.396'], ['0.321', '0.278', '0.321', '0.361', '0.321',... | column | ['correlation', 'correlation', 'correlation', 'correlation', 'correlation', 'correlation'] | ['DPCCA (A)', 'DPCCA (B)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-DE WIW || EN</th> <th>EN-DE WIW || DE</th> <th>EN-IT WIW || EN</th> <th>EN-IT WIW || IT</th> <th>EN-RU WIW || EN</th> <th>EN-RU WIW || RU</th> </tr> </thead> <tbody> <tr> <td... | Table 4 | table_4 | P18-1084 | 9 | acl2018 | Further, Table 4 presents results on all SimLex word pairs. The POS class result patterns for EN-IT and EN-RU are very similar to the patterns in Table 3 and are provided in the supplementary material. First, the results over the initial monolingual embeddings before training (INIT EMB) clearly indicate that multilingu... | [1, 2, 1, 1, 2] | ['Further, Table 4 presents results on all SimLex word pairs.', 'The POS class result patterns for EN-IT and EN-RU are very similar to the patterns in Table 3 and are provided in the supplementary material.', 'First, the results over the initial monolingual embeddings before training (INIT EMB) clearly indicate that mu... | [None, ['EN', 'IT', 'RU'], ['INIT EMB'], ['DPCCA (A)', 'DPCCA (B)', 'PCCA', 'DCCA NOI', 'GCCA'], None] | 1 |
P18-1085table_3 | SimLex-999 results (Spearman’s ⇢). Best results overall are bolded. Best results per section are underlined. Bracketed numbers signify the number of images used. Some rows are copied across sections for ease of reading. | 2 | [['Model', 'Glove'], ['Model', 'Picturebook'], ['Model', 'Glove + Picturebook'], ['Model', 'Picturebook (Visual)'], ['Model', 'Picturebook (Semantic)'], ['Model', 'Picturebook (1)'], ['Model', 'Picturebook (2)'], ['Model', 'Picturebook (3)'], ['Model', 'Picturebook (5)'], ['Model', 'Picturebook (10)']] | 1 | [['all'], ['adjs'], ['nouns'], ['verbs'], ['conc-q1'], ['conc-q2'], ['conc-q3'], ['conc-q4'], ['hard']] | [['40.8', '62.2', '42.8', '19.6', '43.3', '41.6', '42.3', '40.2', '27.2'], ['37.3', '11.7', '48.2', '17.3', '14.4', '27.5', '46.2', '60.7', '28.8'], ['45.5', '46.2', '52.1', '22.8', '36.7', '41.7', '50.4', '57.3', '32.5'], ['31.3', '11.1', '38.8', '20.4', '13.9', '26.1', '38.7', '47.7', '23.9'], ['37.3', '11.7', '48.2'... | column | ['Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s', 'Spearman’s'] | ['Glove + Picturebook'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>all</th> <th>adjs</th> <th>nouns</th> <th>verbs</th> <th>conc-q1</th> <th>conc-q2</th> <th>conc-q3</th> <th>conc-q4</th> <th>hard</th> </tr> </thead> <tbody> <tr> ... | Table 3 | table_3 | P18-1085 | 5 | acl2018 | Table 3 displays our results, from which several observations can be made. First, we observe that combining Glove and Picturebook leads to improved similarity across most categories. For adjectives and the most abstract category, Glove performs significantly better, while for the most concrete category Picturebook is s... | [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2] | ['Table 3 displays our results, from which several observations can be made.', 'First, we observe that combining Glove and Picturebook leads to improved similarity across most categories.', 'For adjectives and the most abstract category, Glove performs significantly better, while for the most concrete category Pictureb... | [None, ['Glove + Picturebook'], ['adjs', 'Glove', 'conc-q4', 'Picturebook'], ['Glove', 'Picturebook'], ['Picturebook', 'conc-q1', 'conc-q2', 'conc-q3', 'conc-q4'], ['hard', 'Picturebook', 'all', 'Glove'], ['Picturebook (Visual)', 'Picturebook (Semantic)'], ['Picturebook (Visual)', 'Picturebook (Semantic)', 'all', 'adjs... | 1 |
P18-1085table_4 | Classification accuracies are reported for SNLI and MulitNLI. For SICK we report Pearson, Spearman and MSE. Higher is better for all metrics except MSE. Best results overall per column are bolded. Best results per section are underlined. | 2 | [['Model', 'Glove (bow)'], ['Model', 'Picturebook (bow)'], ['Model', 'Glove + Picturebook (bow)'], ['Model', 'BiLSTM-Max (Conneau et al. 2017a)'], ['Model', 'Glove'], ['Model', 'Picturebook'], ['Model', 'Glove + Picturebook'], ['Model', 'Glove + Picturebook + Contextual Gating']] | 2 | [['SNLI', 'dev'], ['SNLI', 'test'], ['MultiNLI', 'dev-mat'], ['MultiNLI', 'dev-mis'], ['SICK Relatedness', 'test-p'], ['SICK Relatedness', 'test-s'], ['SICK Relatedness', 'test-mse']] | [['85.2', '84.2', '70.5', '69.9', '86.8', '79.8', '25.2'], ['84', '83.8', '67.9', '67.1', '85.8', '79.3', '27'], ['86.2', '85.2', '71.3', '70.9', '87.2', '80.9', '24.4'], ['85', '84.5', '-', '-', '-', '-', '-'], ['86.8', '86.3', '74.1', '74.5', '-', '-', '-'], ['85.2', '85.1', '70.7', '70.3', '-', '-', '-'], ['86.7', '... | column | ['accuracy', 'acucracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Glove + Picturebook (bow)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SNLI || dev</th> <th>SNLI || test</th> <th>MultiNLI || dev-mat</th> <th>MultiNLI || dev-mis</th> <th>SICK Relatedness || test-p</th> <th>SICK Relatedness || test-s</th> <th>SICK Related... | Table 4 | table_4 | P18-1085 | 6 | acl2018 | Table 4 displays our results. For BoW models, adding Picturebook embeddings to Glove results in significant gains across all three tasks. For BiLSTM-Max, our contextual gating sets a new state-of-the-art on SNLI sentence encoding methods (methods without interaction layers), outperforming the recently proposed methods ... | [1, 1, 2, 2, 1, 1, 1] | ['Table 4 displays our results.', 'For BoW models, adding Picturebook embeddings to Glove results in significant gains across all three tasks.', 'For BiLSTM-Max, our contextual gating sets a new state-of-the-art on SNLI sentence encoding methods (methods without interaction layers), outperforming the recently proposed ... | [None, ['Glove (bow)', 'Picturebook (bow)', 'Glove + Picturebook (bow)'], ['BiLSTM-Max (Conneau et al. 2017a)', 'SNLI'], None, ['Glove (bow)', 'Picturebook (bow)', 'Glove + Picturebook (bow)', 'BiLSTM-Max (Conneau et al. 2017a)', 'Glove'], ['Glove + Picturebook + Contextual Gating', 'SNLI'], ['BiLSTM-Max (Conneau et al... | 1 |
P18-1085table_6 | COCO test-set results for image-sentence retrieval experiments. Our models use VSE++. R@K is Recall@K (high is good). Med r is the median rank (low is good). | 2 | [['Model', 'VSE++ (Faghri et al. 2017)'], ['Model', 'Glove'], ['Model', 'Picturebook'], ['Model', 'Glove + Picturebook'], ['Model', 'Glove + Picturebook + Contextual Gating']] | 2 | [['Image Annotation', 'R@1'], ['Image Annotation', 'R@5'], ['Image Annotation', 'R@10'], ['Image Annotation', 'Med r'], ['Image Search', 'R@1'], ['Image Search', 'R@5'], ['Image Search', 'R@10'], ['Image Search', 'Med r']] | [['64.6', '-', '95.7', '1', '52', '-', '92', '1'], ['64.6', '88.9', '95.5', '1', '53.7', '86.5', '94.4', '1'], ['62.4', '90.2', '95.3', '1', '54.2', '86.4', '94.3', '1'], ['61.8', '89.2', '95', '1', '54.1', '86.7', '94.7', '1'], ['63.4', '90.3', '96.5', '1', '55.2', '87.2', '94.4', '1']] | column | ['R@1', 'R@5', 'R@10', 'Med r', 'R@1', 'R@5', 'R@10', 'Med r'] | ['Glove', 'Glove + Picturebook', 'Glove + Picturebook + Contextual Gating'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Image Annotation || R@1</th> <th>Image Annotation || R@5</th> <th>Image Annotation || R@10</th> <th>Image Annotation || Med r</th> <th>Image Search || R@1</th> <th>Image Search || R@5</t... | Table 6 | table_6 | P18-1085 | 7 | acl2018 | Table 6 displays our results on this task. Our Glove baseline was able to match or outperform the reported results in Faghri et al. (2017) with the exception of Recall@10 for image annotation, where it performs slightly worse. Glove+Picturebook improves over the Glove baseline for image search but falls short on image ... | [1, 1, 1, 1] | ['Table 6 displays our results on this task.', 'Our Glove baseline was able to match or outperform the reported results in Faghri et al. (2017) with the exception of Recall@10 for image annotation, where it performs slightly worse.', 'Glove+Picturebook improves over the Glove baseline for image search but falls short o... | [None, ['Glove', 'VSE++ (Faghri et al. 2017)'], ['Glove + Picturebook', 'Image Search', 'Glove', 'R@10'], ['Glove + Picturebook + Contextual Gating', 'Image Annotation', 'R@5', 'R@10', 'Image Search', 'R@1']] | 1
P18-1085table_7 | Machine Translation results on the Multi30k English → German task. We note that our models do not use BPE, and we perform better in BLEU relative to METEOR. | 2 | [['Model', 'BPE (Caglayan et al. 2017)'], ['Model', 'Baseline'], ['Model', 'Picturebook'], ['Model', 'Picturebook + Inverse Picturebook'], ['Model', 'Picturebook + Inverse Picturebook + Gating']] | 2 | [['Test2016', 'BLEU'], ['Test2016', 'METEOR'], ['Test2017', 'BLEU'], ['Test2017', 'METEOR'], ['MSCOCO', 'BLEU'], ['MSCOCO', 'METEOR']] | [['38.1', '57.3', '30.8', '51.6', '26.4', '46.8'], ['38.9', '56.5', '32.6', '50.7', '26.8', '45.4'], ['39.6', '56.9', '31.8', '50.1', '27.7', '45.8'], ['40.2', '57.2', '32.3', '50.7', '27.8', '46.3'], ['40', '57.3', '33', '51.1', '27.9', '46.5']] | column | ['BLEU', 'METEOR', 'BLEU', 'METEOR', 'BLEU', 'METEOR'] | ['Picturebook'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test2016 || BLEU</th> <th>Test2016 || METEOR</th> <th>Test2017 || BLEU</th> <th>Test2017 || METEOR</th> <th>MSCOCO || BLEU</th> <th>MSCOCO || METEOR</th> </tr> </thead> <tbody> <tr> ... | Table 7 | table_7 | P18-1085 | 9 | acl2018 | On the English → German tasks, we find our Picturebook model to perform on average 0.8 BLEU or 0.7 METEOR over our baseline. On the German task, compared to the previously best published results (Caglayan et al., 2017) we do better in BLEU but slightly worse in METEOR. We suspect this is due to the fact that we did not... | [1, 1, 2] | ['On the English → German tasks, we find our Picturebook model to perform on average 0.8 BLEU or 0.7 METEOR over our baseline.', 'On the German task, compared to the previously best published results (Caglayan et al., 2017) we do better in BLEU but slightly worse in METEOR. We suspect this is due to the fact that... | [['Picturebook', 'BLEU', 'METEOR'], ['BPE (Caglayan et al. 2017)', 'Picturebook', 'Picturebook + Inverse Picturebook', 'Picturebook + Inverse Picturebook + Gating', 'BLEU', 'METEOR'], None] | 1
P18-1087table_3 | Experimental results (%). The results with symbol“♮” are retrieved from the original papers, and those starred (∗) ones are from Dong et al. (2014). The marker † refers to p-value < 0.01 when comparing with BILSTM-ATT-G, while the marker ‡ refers to p-value < 0.01 when comparing with RAM. | 3 | [['Baselines', 'Models', 'SVM'], ['Baselines', 'Models', 'AdaRNN'], ['Baselines', 'Models', 'AE-LSTM'], ['Baselines', 'Models', 'ATAE-LSTM'], ['Baselines', 'Models', 'IAN'], ['Baselines', 'Models', 'CNN-ASP'], ['Baselines', 'Models', 'TD-LSTM'], ['Baselines', 'Models', 'MemNet'], ['Baselines', 'Models', 'BILSTM-ATT-G']... | 2 | [['LAPTOP', 'ACC'], ['LAPTOP', 'Macro-F1'], ['REST', 'ACC'], ['REST', 'Macro-F1'], ['TWITTER', 'ACC'], ['TWITTER', 'Macro-F1']] | [['70.49♮', '-', '80.16♮', '-', '63.40∗', '63.30∗'], ['-', '-', '-', '-', '66.30♮', '65.90♮'], ['68.90♮', '-', '76.60♮', '-', '-', '-'], ['68.70♮', '-', '77.20♮', '-', '-', '-'], ['72.10♮', '-', '78.60♮', '-', '-', '-'], ['72.46', '65.31', '77.82', '65.11', '73.27', '71.77'], ['71.83', '68.43', '78', '66.... | column | ['ACC', 'Macro-F1', 'ACC', 'Macro-F1', 'ACC', 'Macro-F1'] | ['TNet-LF', 'TNet-AS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LAPTOP || ACC</th> <th>LAPTOP || Macro-F1</th> <th>REST || ACC</th> <th>REST || Macro-F1</th> <th>TWITTER || ACC</th> <th>TWITTER || Macro-F1</th> </tr> </thead> <tbody> <tr> <t... | Table 3 | table_3 | P18-1087 | 7 | acl2018 | As shown in Table 3, both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model. Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, and tw... | [1, 1, 2, 1, 2, 1, 2, 1, 1, 2, 1, 2, 1, 2] | ['As shown in Table 3, both TNet-LF and TNet-AS consistently achieve the best performance on all datasets, which verifies the efficacy of our whole TNet model.', 'Moreover, TNet can perform well for different kinds of user generated content, such as product reviews with relatively formal sentences in LAPTOP and REST, a... | [['TNet-LF', 'TNet-AS'], ['TNet-LF', 'TNet-AS', 'LAPTOP', 'REST', 'TWITTER'], ['TNet variants', 'CNN-ASP'], ['CNN-ASP', 'TWITTER', 'TNet-LF', 'TNet-AS'], None, ['BILSTM-ATT-G', 'RAM', 'LAPTOP', 'REST'], ['AdaRNN'], ['TNet w/o transformation', 'TNet w/o context', 'TNet-LF w/o position', 'TNet-AS w/o position'], ['TNet-L... | 1
P18-1090table_2 | Human evaluations of the proposed method and baselines. Sentiment evaluates sentiment transformation. Semantic evaluates content preservation. | 2 | [['Yelp', 'CAAE (Shen et al. 2017)'], ['Yelp', 'MDAL (Fu et al. 2018)'], ['Yelp', 'Proposed Method'], ['Amazon', 'CAAE (Shen et al. 2017)'], ['Amazon', 'MDAL (Fu et al. 2018)'], ['Amazon', 'Proposed Method']] | 1 | [['Sentiment'], ['Semantic'], ['G-score']] | [['7.67', '3.87', '5.45'], ['7.12', '3.68', '5.12'], ['6.99', '5.08', '5.96'], ['8.61', '3.15', '5.21'], ['7.93', '3.22', '5.05'], ['7.92', '4.67', '6.08']] | column | ['Sentiment', 'Semantic', 'G-score'] | ['Proposed Method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentiment</th> <th>Semantic</th> <th>G-score</th> </tr> </thead> <tbody> <tr> <td>Yelp || CAAE (Shen et al. 2017)</td> <td>7.67</td> <td>3.87</td> <td>5.45</td> </tr> <tr>... | Table 2 | table_2 | P18-1090 | 7 | acl2018 | Table 2 shows the human evaluation results. It can be clearly seen that the proposed method obviously improves semantic preservation. The semantic score is increased from 3.87 to 5.08 on the Yelp dataset, and from 3.22 to 4.67 on the Amazon dataset. In general, our proposed model achieves the best overall performance. ... | [1, 1, 1, 1, 1, 2] | ['Table 2 shows the human evaluation results.', 'It can be clearly seen that the proposed method obviously improves semantic preservation.', 'The semantic score is increased from 3.87 to 5.08 on the Yelp dataset, and from 3.22 to 4.67 on the Amazon dataset.', 'In general, our proposed model achieves the best overall pe... | [None, ['Proposed Method', 'CAAE (Shen et al. 2017)', 'MDAL (Fu et al. 2018)', 'Semantic'], ['Proposed Method', 'CAAE (Shen et al. 2017)', 'MDAL (Fu et al. 2018)', 'Semantic', 'Yelp', 'Amazon'], ['Proposed Method', 'Semantic', 'G-score'], ['Sentiment', 'CAAE (Shen et al. 2017)', 'Proposed Method'], ['Sentiment']] | 1 |
P18-1093table_3 | Experimental results on Reddit datasets. Best result is in boldface and second best is underlined. Best performing baseline is in italics. | 2 | [['Model', 'NBOW'], ['Model', 'Vanilla CNN'], ['Model', 'Vanilla LSTM'], ['Model', 'Attention LSTM'], ['Model', 'GRNN (Zhang et al.)'], ['Model', 'CNN-LSTM-DNN (Ghosh and Veale)'], ['Model', 'SIARN (this paper)'], ['Model', 'MIARN (this paper)']] | 2 | [['Reddit (/r/movies)', 'P'], ['Reddit (/r/movies)', 'R'], ['Reddit (/r/movies)', 'F1'], ['Reddit (/r/movies)', 'Acc'], ['Reddit (/r/technology)', 'P'], ['Reddit (/r/technology)', 'R'], ['Reddit (/r/technology)', 'F1'], ['Reddit (/r/technology)', 'Acc']] | [['67.33', '66.56', '66.82', '67.52', '65.45', '65.62', '65.52', '66.55'], ['65.97', '65.97', '65.97', '66.24', '65.88', '62.9', '62.85', '66.8'], ['67.57', '67.67', '67.32', '67.34', '66.94', '67.22', '67.03', '67.92'], ['68.11', '67.87', '67.94', '68.37', '68.2', '68.78', '67.44', '67.22'], ['66.16', '66.16', '66.16'... | column | ['P', 'R', 'F1', 'Acc', 'P', 'R', 'F1', 'Acc'] | ['SIARN (this paper)', 'MIARN (this paper)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Reddit (/r/movies) || P</th> <th>Reddit (/r/movies) || R</th> <th>Reddit (/r/movies) || F1</th> <th>Reddit (/r/movies) || Acc</th> <th>Reddit (/r/technology) || P</th> <th>Reddit (/r/technol... | Table 3 | table_3 | P18-1093 | 8 | acl2018 | Table 3 reports a performance comparison of all benchmarked models on the Reddit datasets. Our proposed SIARN and MIARN models achieve very competitive performance on the Reddit datasets, with an average of ∼2% margin improvement over the best baselines. Notably, the baselines we compare against are extremely competit... | [1, 1, 1, 1] | ['Table 3 reports a performance comparison of all benchmarked models on the Reddit datasets.', 'Our proposed SIARN and MIARN models achieve very competitive performance on the Reddit datasets, with an average of ∼2% margin improvement over the best baselines.', 'Notably, the baselines we compare against are extremely ... | [None, ['SIARN (this paper)', 'MIARN (this paper)', 'Reddit (/r/movies)', 'Reddit (/r/technology)'], ['SIARN (this paper)', 'MIARN (this paper)', 'CNN-LSTM-DNN (Ghosh and Veale)', 'GRNN (Zhang et al.)', 'Vanilla CNN', 'NBOW'], ['SIARN (this paper)', 'MIARN (this paper)']] | 1
P18-1093table_4 | Experimental results on Debates datasets. Best result is in boldface and second best is underlined. Best performing baseline is in italics. | 2 | [['Model', 'NBOW'], ['Model', 'Vanilla CNN'], ['Model', 'Vanilla LSTM'], ['Model', 'Attention LSTM'], ['Model', 'GRNN (Zhang et al.)'], ['Model', 'CNN-LSTM-DNN (Ghosh and Veale)'], ['Model', 'SIARN (this paper)'], ['Model', 'MIARN (this paper)']] | 2 | [['Debates (IAC-V1)', 'P'], ['Debates (IAC-V1)', 'R'], ['Debates (IAC-V1)', 'F1'], ['Debates (IAC-V1)', 'Acc'], ['Debates (IAC-V2)', 'P'], ['Debates (IAC-V2)', 'R'], ['Debates (IAC-V2)', 'F1'], ['Debates (IAC-V2)', 'Acc']] | [['57.17', '57.03', '57', '57.51', '66.01', '66.03', '66.02', '66.09'], ['58.21', '58', '57.95', '58.55', '68.45', '68.18', '68.21', '68.56'], ['54.87', '54.89', '54.84', '54.92', '68.3', '63.96', '60.78', '62.66'], ['58.98', '57.93', '57.23', '59.07', '70.04', '69.62', '69.63', '69.96'], ['56.21', '56.21', '55.96', '5... | column | ['P', 'R', 'F1', 'Acc', 'P', 'R', 'F1', 'Acc'] | ['MIARN (this paper)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Debates (IAC-V1) || P</th> <th>Debates (IAC-V1) || R</th> <th>Debates (IAC-V1) || F1</th> <th>Debates (IAC-V1) || Acc</th> <th>Debates (IAC-V2) || P</th> <th>Debates (IAC-V2) || R</th> ... | Table 4 | table_4 | P18-1093 | 8 | acl2018 | Table 4 reports a performance comparison of all benchmarked models on the Debates datasets. The performance improvement on Debates (long text) is significantly larger than short text (i.e., Twitter and Reddit). For example, MIARN outperforms GRNN and CNN-LSTM-DNN by 8% to 10% on both IAC-V1 and IAC-V2. | [1, 1, 1] | ['Table 4 reports a performance comparison of all benchmarked models on the Debates datasets.', 'The performance improvement on Debates (long text) is significantly larger than short text (i.e., Twitter and Reddit).', 'For example, MIARN outperforms GRNN and CNN-LSTM-DNN by 8% to 10% on both IAC-V1 and IAC-V2.'] | [None, ['Debates (IAC-V1)', 'Debates (IAC-V2)'], ['Debates (IAC-V1)', 'Debates (IAC-V2)', 'MIARN (this paper)', 'GRNN (Zhang et al.)', 'CNN-LSTM-DNN (Ghosh and Veale)']] | 1