| table_id_paper (string, len 15) | caption (string, 14–1.88k) | row_header_level (int32, 1–9) | row_headers (large_string, 15–1.75k) | column_header_level (int32, 1–6) | column_headers (large_string, 7–1.01k) | contents (large_string, 18–2.36k) | metrics_loc (string, 2 classes) | metrics_type (large_string, 5–532) | target_entity (large_string, 2–330) | table_html_clean (large_string, 274–7.88k) | table_name (string, 9 classes) | table_id (string, 9 classes) | paper_id (string, len 8) | page_no (int32, 1–13) | dir (string, 8 classes) | description (large_string, 103–3.8k) | class_sentence (string, 3–120) | sentences (large_string, 110–3.92k) | header_mention (string, 12–1.8k) | valid (int32, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
D16-1231table_3 | Benchmark results: accuracy on addressee-response selection (ADR-RES), addressee selection (ADR), and response selection (RES). Nc is the context window. Bolded are the best per column. | 3 | [['Chance', 'Nc', '-'], ['Chance', 'Nc', '5'], ['Baseline', 'Nc', '10'], ['Baseline', 'Nc', '15'], ['Baseline', 'Nc', '5'], ['Static', 'Nc', '10'], ['Static', 'Nc', '15'], ['Static', 'Nc', '5'], ['Dynamic', 'Nc', '10'], ['Dynamic', 'Nc', '15']] | 2 | [['RES-CAND = 2', 'ADR-RES'], ['RES-CAND = 2', 'ADR'], ['RES-CAND = 2', 'RES'], ['RES-CAND = 10', 'ADR-RES'], ['RES-CAND = 10', 'ADR'], ['RES-CAND = 10', 'RES']] | [['0.62', '1.24', '50.00', '0.12', '1.24', '10.00'], ['36.97', '55.73', '65.68', '16.34', '55.73', '28.19'], ['37.42', '55.63', '67.79', '16.11', '55.63', '29.48'], ['37.13', '55.62', '67.89', '15.44', '55.62', '29.19'], ['46.99', '60.39', '75.07', '21.98', '60.26', '33.27'], ['48.67', '60.97', '77.75', '23.31', '60.66... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Static', 'Dynamic'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RES-CAND = 2 || ADR-RES</th> <th>RES-CAND = 2 || ADR</th> <th>RES-CAND = 2 || RES</th> <th>RES-CAND = 10 || ADR-RES</th> <th>RES-CAND = 10 || ADR</th> <th>RES-CAND = 10 || RES</th> </tr> ... | Table 3 | table_3 | D16-1231 | 8 | emnlp2016 | Table 3 shows the empirical benchmark results. The dynamic model achieves the best results in all the metrics. The static model outperforms the baseline, but is inferior to the dynamic model. In addressee selection (ADR), the baseline model achieves around 55% in accuracy. This means that if you select the agents that ... | [1, 1, 1, 1, 2, 1, 1, 1, 1] | ['Table 3 shows the empirical benchmark results.', 'The dynamic model achieves the best results in all the metrics.', 'The static model outperforms the baseline, but is inferior to the dynamic model.', 'In addressee selection (ADR), the baseline model achieves around 55% in accuracy.', 'This means that if you select th... | [None, ['Dynamic', 'ADR-RES', 'ADR', 'RES'], ['Baseline', 'Static', 'Dynamic'], ['ADR', 'Baseline'], None, ['Baseline'], ['Dynamic', 'ADR', 'Static'], ['RES', 'Baseline', 'Dynamic', 'Static'], ['Static', 'Dynamic', 'RES']] | 1 |
D16-1231table_4 | Performance comparison for different numbers of agents appearing in the context. The numbers are accuracies on the test set with the number of candidate responses CAND-RES = 2 and the context window Nc = 15. | 2 | [['ADR-RES', 'Baseline'], ['ADR-RES', 'Static'], ['ADR-RES', 'Dynamic'], ['ADR', 'Baseline'], ['ADR', 'Static'], ['ADR', 'Dynamic'], ['RES', 'Baseline'], ['RES', 'Static'], ['RES', 'Dynamic']] | 4 | [['No. of Agents', '2-5', 'No. of Samples', '3731'], ['No. of Agents', '6-10', 'No. of Samples', '5962'], ['No. of Agents', '11-15', 'No. of Samples', '5475'], ['No. of Agents', '16-20', 'No. of Samples', '4495'], ['No. of Agents', '21-30', 'No. of Samples', '5619'], ['No. of Agents', '31-100', 'No. of Samples', '7956'... | [['52.13', '43.51', '39.98', '42.96', '39.70', '36.55', '29.22'], ['64.17', '55.92', '50.72', '53.04', '48.69', '49.61', '42.86'], ['66.9', '57.73', '54.32', '55.64', '51.61', '55.88', '52.14'], ['84.94', '70.82', '62.14', '65.52', '58.89', '51.28', '41.47'], ['86.33', '74.37', '66.12', '68.54', '63.43', '59.24', '50.9... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Static', 'Dynamic'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>No. of Agents || 2-5 || No. of Samples || 3731</th> <th>No. of Agents || 6-10 || No. of Samples || 5962</th> <th>No. of Agents || 11-15 || No. of Samples || 5475</th> <th>No. of Agents || 16-20 || No.... | Table 4 | table_4 | D16-1231 | 9 | emnlp2016 | To shed light on the relationship between the model performance and the number of agents in multi-party conversation, we investigate the effect of the number of agents participating in each context. Table 4 compares the performance of the models for different numbers of agents in a context. In addressee selection, the ... | [2, 1, 1, 1, 1, 1, 0] | ['To shed light on the relationship between the model performance and the number of agents in multi-party conversation, we investigate the effect of the number of agents participating in each context.', 'Table 4 compares the performance of the models for different numbers of agents in a context.', 'In addressee selecti... | [None, None, ['Baseline', 'Static', 'Dynamic', 'No. of Agents', 'ADR'], ['Baseline', 'Static', 'Dynamic'], ['Dynamic'], ['Baseline', 'Static', 'Dynamic', 'No. of Agents', 'RES'], None] | 1 |
D16-1237table_2 | F-score for headlines and images datasets. These tables show the result of our systems, baseline and top-ranked systems. DA is our strong baseline trained on interpretable STS dataset; DA + DS is trained on interpretable STS as well as STS dataset. The rank 1 system on headlines is Inspire (Kazmi and Schüller, 2016) a... | 2 | [['Headline results', 'Baseline'], ['Headline results', 'Rank 1'], ['Headline results', 'DA'], ['Headline results', 'DA +DS'], ['Images results', 'Baseline'], ['Images results', 'Rank 1'], ['Images results', 'DA'], ['Images results', 'DA +DS']] | 2 | [['untyped', 'ali'], ['untyped', 'score'], ['typed', 'ali'], ['typed', 'score']] | [['0.8462', '0.7610', '0.5462', '0.5461'], ['0.8194', '0.7865', '0.7031', '0.696'], ['0.9257', '0.8377', '0.735', '0.6776'], ['0.9235', '0.8591', '0.7281', '0.6948'], ['0.8556', '0.7456', '0.4799', '0.4799.1'], ['0.8922', '0.8408', '0.6867', '0.6708'], ['0.8689', '0.7905', '0.6933', '0.6411'], ['0.8738', '0.8193', '0.7... | column | ['F-score', 'F-score', 'F-score', 'F-score'] | ['DA', 'DA +DS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>untyped || ali</th> <th>untyped || score</th> <th>typed || ali</th> <th>typed || score</th> </tr> </thead> <tbody> <tr> <td>Headline results || Baseline</td> <td>0.8462</td> <td... | Table 2 | table_2 | D16-1237 | 7 | emnlp2016 | By comparing the rows labeled DA and DA +DS in Table 2 (a) and Table 2 (b), we see that in both the headlines and the images datasets, adding sentence level information improves the untyped score, lifting the stricter typed score F1. On the headlines dataset, incorporating sentence-level information degrades both the u... | [1, 1, 2, 1, 2, 1, 2, 0, 0] | ['By comparing the rows labeled DA and DA +DS in Table 2 (a) and Table 2 (b), we see that in both the headlines and the images datasets, adding sentence level information improves the untyped score, lifting the stricter typed score F1.', 'On the headlines dataset, incorporating sentence-level information degrades both ... | [['DA', 'DA +DS', 'untyped', 'typed'], ['Headline results', 'untyped', 'typed'], ['typed'], ['DA +DS', 'score', 'Rank 1'], None, ['DA', 'DA +DS'], None, None, None] | 1 |
D16-1238table_1 | Parsing accuracy on PTB test set. Our parser uses the same POS tagger as C&M (2014) and Dyer et al. (2015), whereas other parsers use a different POS tagger. Results with † and ∗ are provided in (Alberti et al., 2015) and (Andor et al., 2016), respectively. | 4 | [['Type', 'Trans.', 'Method', 'C&M (2014)'], ['Type', 'Trans.', 'Method', 'Dyer et al. (2015)'], ['Type', 'Trans.', 'Method', 'B&N (2012)'], ['Type', 'Trans.', 'Method', 'Alberti et al. (2015)'], ['Type', 'Trans.', 'Method', 'Weiss et al. (2015)'], ['Type', 'Trans.', 'Method', 'Andor et al. (2016)'], ['Type', 'Graph', ... | 1 | [['UAS'], ['LAS']] | [['91.8', '89.6'], ['93.2', '90.9'], ['93.33', '91.22'], ['94.23', '92.41'], ['94.26', '92.41'], ['94.41', '92.55'], ['92.88', '90.71'], ['92.89', '90.55'], ['93.22', '91.02'], ['94.10', '91.49']] | column | ['UAS', 'LAS'] | ['BiAtt-DP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UAS</th> <th>LAS</th> </tr> </thead> <tbody> <tr> <td>Type || Trans. || Method || C&M (2014)</td> <td>91.8</td> <td>89.6</td> </tr> <tr> <td>Type || Trans. || Method || Dye... | Table 1 | table_1 | D16-1238 | 7 | emnlp2016 | We first compare our parser with state-of-the-art neural transition-based dependency parsers on PTB and CTB. For English, we also compare with stateof-the-art graph-based dependency parsers. The results are shown in Table 1. It can be seen that the BiAtt-DP outperforms all other graph-based parsers on PTB. Compared wit... | [1, 1, 1, 1, 1, 1] | ['We first compare our parser with state-of-the-art neural transition-based dependency parsers on PTB and CTB.', 'For English, we also compare with state-of-the-art graph-based dependency parsers.', 'The results are shown in Table 1.', 'It can be seen that the BiAtt-DP outperforms all other graph-based parsers on PTB.'... | [None, None, None, ['BiAtt-DP', 'Graph'], ['Trans.', 'C&M (2014)', 'Dyer et al. (2015)'], ['BiAtt-DP', 'B&N (2012)', 'Alberti et al. (2015)']] | 1 |
D16-1238table_3 | UAS on 12 languages in the CoNLL 2006 shared task (Buchholz and Marsi, 2006). We also report corresponding LAS in squared brackets. The results of the 3rd-order RBGParser are reported in (Lei et al., 2014). Best published results on the same dataset in terms of UAS among (Pitler and McDonald, 2015), (Zhang and McDonald... | 2 | [['Language', 'Arabic'], ['Language', 'Bulgarian'], ['Language', 'Czech'], ['Language', 'Danish'], ['Language', 'Dutch'], ['Language', 'German'], ['Language', 'Japanese'], ['Language', 'Portuguese'], ['Language', 'Slovene'], ['Language', 'Spanish'], ['Language', 'Swedish'], ['Language', 'Turkish']] | 1 | [['BiAtt-DP'], ['RBGParser'], ['Best Published'], ['Crossed'], ['Uncrossed'], ['%Crossed']] | [['80.34 [68.58]', '79.95', '81.12 (Ma11)', '17.24', '80.71', '0.58'], ['93.96 [89.55]', '93.5', '94.02 (Zh14)', '79.59', '94.1', '0.98'], ['91.16 [85.14]', '90.5', '90.32 (Ma13)', '81.62', '91.63', '4.68'], ['91.56 [85.53]', '91.39', '92.00 (Zh13)', '73.33', '91.89', '1.8'], ['87.15 [82.41]', '86.41', '86.19 (Ma13)', ... | column | ['UAS', 'UAS', 'UAS', 'UAS', 'UAS', 'UAS'] | ['BiAtt-DP'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BiAtt-DP</th> <th>RBGParser</th> <th>Best Published</th> <th>Crossed</th> <th>Uncrossed</th> <th>%Crossed</th> </tr> </thead> <tbody> <tr> <td>Language || Arabic</td> <td>8... | Table 3 | table_3 | D16-1238 | 7 | emnlp2016 | It can be observed from Table 3 that the BiAttDP has highly competitive parsing accuracy as stateof-the-art parsers. Moreover, it achieves best UAS for 5 out of 12 languages. For the remaining seven languages, the UAS gaps between the BiAtt-DP and state-of-the-art parsers are within 1.0%, except Swedish. An arguably fa... | [1, 1, 1, 2, 0, 1, 2, 2, 1, 2, 1, 1] | ['It can be observed from Table 3 that the BiAttDP has highly competitive parsing accuracy as stateof-the-art parsers.', 'Moreover, it achieves best UAS for 5 out of 12 languages.', 'For the remaining seven languages, the UAS gaps between the BiAtt-DP and state-of-the-art parsers are within 1.0%, except Swedish.', 'An ... | [['BiAtt-DP'], ['BiAtt-DP', 'Language'], ['BiAtt-DP', 'Best Published', 'Swedish'], ['BiAtt-DP'], None, ['BiAtt-DP', 'RBGParser', 'Best Published'], ['BiAtt-DP'], None, ['Crossed', 'Uncrossed', '%Crossed'], None, ['BiAtt-DP', 'Dutch', 'German', 'Portuguese', 'Slovene'], ['BiAtt-DP', 'Crossed', 'Uncrossed']] | 1 |
D16-1242table_4 | The results on a subset of JSeM that is a translation of FraCaS. M15 refers to the accuracy of Mineshima et al. (2015) on the corresponding sections of FraCaS. | 2 | [['Section', 'Quantifier.1'], ['Section', 'Plural'], ['Section', 'Adjective'], ['Section', 'Verb'], ['Section', 'Attitude'], ['Section', 'Total']] | 1 | [['Gold'], ['System'], ['M15']] | [['92.5', '78.2', '78.4'], ['65.8', '52.6', '66.7'], ['57.1', '47.6', '68.2'], ['66.7', '66.7', '62.5'], ['78.6', '78.6', '76.9'], ['87.3', '74.1', '73.3']] | column | ['accuracy', 'accuracy', 'accuracy'] | ['Gold'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#Problem</th> <th>Gold</th> <th>System</th> <th>M15</th> </tr> </thead> <tbody> <tr> <td>Section || Quantifier.1</td> <td>335</td> <td>92.5</td> <td>78.2</td> <td>78.4... | Table 4 | table_4 | D16-1242 | 5 | emnlp2016 | Out of the 523 problems, 417 are Japanese translations of the FraCaS problems. Table 4 shows a comparison between the performance of our system on this subset of the JSeM problems and the performance of the RTE system for English in Mineshima et al. (2015) on the corresponding problems in the FraCaS dataset. Mineshima ... | [0, 1, 2, 1, 2, 1] | ['Out of the 523 problems, 417 are Japanese translations of the FraCaS problems.', 'Table 4 shows a comparison between the performance of our system on this subset of the JSeM problems and the performance of the RTE system for English in Mineshima et al. (2015) on the corresponding problems in the FraCaS dataset.', 'Mi... | [None, None, ['M15'], ['Gold', 'M15'], None, ['System', 'Gold']] | 1 |
D16-1243table_2 | Experimental results. Top: development set; bottom: test set. AIC is not comparable between the two splits. HM and LUX are from McMahan and Stone (2015). We reimplemented HM and re-ran LUX from publicly available code, confirming all results to the reported precision except perplexity of LUX, for which we obtained a fi... | 4 | [['Model', 'atomic', 'Feats.', 'raw'], ['Model', 'atomic', 'Feats.', 'buckets'], ['Model', 'atomic', 'Feats.', 'Fourier'], ['Model', 'RNN', 'Feats.', 'raw'], ['Model', 'RNN', 'Feats.', 'buckets'], ['Model', 'RNN', 'Feats.', 'Fourier'], ['Model', 'HM', 'Feats.', 'buckets'], ['Model', 'LUX', 'Feats.', 'raw'], ['Model', '... | 1 | [['Perp.'], ['AIC'], ['Acc.']] | [['28.31', '0.108×10^5', '28.75%'], ['16.01', '0.131×10^5', '38.59%'], ['15.05', '8.86×10^5', '38.97%'], ['13.27', '8.40×10^5', '40.11%'], ['13.03', '0.126×10^5', '39.94%'], ['12.35', '8.33×10^5', '40.40%'], ['14.41', '0.482×10^5', '39.40%'], ['13.61', '0.413×10^5', '39.55%'], ['12.58', '0.403×10^5', '40.22%']] | column | ['Perp.', 'AIC', 'Acc.'] | ['Fourier'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Perp.</th> <th>AIC</th> <th>Acc.</th> </tr> </thead> <tbody> <tr> <td>Model || atomic || Feats. || raw</td> <td>28.31</td> <td>0.108×10^5</td> <td>28.75%</td> </tr> <tr> ... | Table 2 | table_2 | D16-1243 | 3 | emnlp2016 | Results. The top section of Table 2 shows development set results comparing modeling effectiveness for atomic and sequence model architectures and different features. The Fourier feature transformation generally improves on raw HSV vectors and discretized embeddings. The value of modeling descriptions as sequences can... | [0, 1, 1, 2] | ['Results.', 'The top section of Table 2 shows development set results comparing modeling effectiveness for atomic and sequence model architectures and different features.', 'The Fourier feature transformation generally improves on raw HSV vectors and discretized embeddings.', 'The value of modeling descriptions as se... | [None, None, ['Fourier'], None] | 1 |
D16-1247table_4 | Detailed Ja → En insertion position selection experimental result. | 1 | [['PBSMT'], ['Hiero'], ['No Flexible'], ['Baseline'], ['Proposed']] | 2 | [['Ja → En', 'BLEU'], ['Ja → En', 'RIBES'], ['Ja → En', 'Time'], ['En → Ja', 'BLEU'], ['En → Ja', 'RIBES'], ['En → Ja', 'Time'], ['Ja → Zh', 'BLEU'], ['Ja → Zh', 'RIBES'], ['Ja → Zh', 'Time'], ['Zh → Ja', 'BLEU'], ['Zh → Ja', 'RIBES'], ['Zh → Ja', 'Time']] | [['18.45', '64.51', '-', '27.48', '68.37', '-', '27.96', '78.90', '-', '34.65', '77.25', '-'], ['18.72', '65.11', '-', '30.19', '73.47', '-', '27.71', '80.91', '-', '35.43', '81.04', '-'], ['20.28', '65.08', '1.00', '28.77', '75.21', '1.00', '24.85', '66.60', '1.00', '30.51', '73.08', '1.00'], ['21.61', '69.82', '6.28'... | column | ['BLEU', 'RIBES', 'Time', 'BLEU', 'RIBES', 'Time', 'BLEU', 'RIBES', 'Time', 'BLEU', 'RIBES', 'Time'] | ['Proposed'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ja → En || BLEU</th> <th>Ja → En || RIBES</th> <th>Ja → En || Time</th> <th>En → Ja || BLEU</th> <th>En → Ja || RIBES</th> <th>En → Ja || Time</th> <th>Ja → Zh || BLEU</th> <th>Ja ... | Table 4 | table_4 | D16-1247 | 5 | emnlp2016 | The results are shown in Table 4. The Proposed method achieved significantly better automatic evaluation scores than the Baseline for all the language pairs except the BLEU score of En → Ja direction. Also, the decoding time is reduced by about 60% relative to that of the Baseline. Our tree-based model is better than t... | [1, 1, 1, 1] | ['The results are shown in Table 4.', 'The Proposed method achieved significantly better automatic evaluation scores than the Baseline for all the language pairs except the BLEU score of En → Ja direction.', 'Also, the decoding time is reduced by about 60% relative to that of the Baseline.', 'Our tree-based model is be... | [None, ['Proposed', 'En → Ja', 'BLEU', 'Baseline', 'Ja → En', 'Ja → Zh', 'Zh → Ja'], ['Proposed', 'Time', 'Baseline'], ['Proposed', 'Zh → Ja', 'PBSMT', 'Hiero']] | 1 |
D16-1249table_1 | Single system results in terms of (TER-BLEU)/2 (T-B, the lower the better) on 5 million Chinese to English training set. BP denotes the brevity penalty. NMT results are on a large vocabulary (300k) and with UNK replaced. The second column shows different alignments (Zh → En (one direction), GDFA (“grow-diag-final-and”)... | 4 | [['single system', 'Tree-to-string', '-', '-'], ['single system', 'Cov. LVNMT (Mi et al. 2016b)', '-', '-'], ['single system', '+Alignment', 'Zh → En', 'A → J'], ['single system', '+Alignment', 'Zh → En', 'A → T'], ['single system', '+Alignment', 'Zh → En', 'A → T → J'], ['single system', '+Alignment', 'Zh → En', 'J'],... | 3 | [['MT06', '-', 'BP'], ['MT06', '-', 'BLEU'], ['MT06', '-', 'T-B'], ['MT08', 'News', 'BP'], ['MT08', 'News', 'BLEU'], ['MT08', 'News', 'T-B'], ['MT08', 'Web', 'BP'], ['MT08', 'Web', 'BLEU'], ['MT08', 'Web', 'T-B'], ['Avg', '-', 'T-B']] | [['0.95', '34.93', '9.45', '0.94', '31.12', '12.90', '0.90', '23.45', '17.72', '13.36'], ['0.92', '35.59', '10.71', '0.89', '30.18', '15.33', '0.97', '27.48', '16.67', '14.24'], ['0.95', '35.71', '10.38', '0.93', '30.73', '14.98', '0.96', '27.38', '16.24', '13.87'], ['0.95', '28.59', '16.99', '0.92', '24.09', '20.89', ... | column | ['BP', 'BLEU', 'T-B', 'BP', 'BLEU', 'T-B', 'BP', 'BLEU', 'T-B', 'T-B'] | ['+Alignment'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06 || - || BP</th> <th>MT06 || - || BLEU</th> <th>MT06 || - || T-B</th> <th>MT08 || News || BP</th> <th>MT08 || News || BLEU</th> <th>MT08 || News || T-B</th> <th>MT08 || Web || BP</t... | Table 1 | table_1 | D16-1249 | 4 | emnlp2016 | Experimental results in Table 1 show some interesting results. First, with the same alignment, J joint optimization works best than other optimization strategies (lines 3 to 6). Unfortunately, breaking down the network into two separate parts (A and T) and optimizing them separately do not help (lines 3 to 5). We have ... | [1, 1, 1, 2, 1, 1, 1, 1, 2] | ['Experimental results in Table 1 show some interesting results.', 'First, with the same alignment, J joint optimization works best than other optimization strategies (lines 3 to 6).', 'Unfortunately, breaking down the network into two separate parts (A and T) and optimizing them separately do not help (lines 3 to 5).'... | [None, ['Zh → En'], ['A → J', 'A → T', 'A → T → J'], ['A → J', 'A → T → J', 'J', 'Cov. LVNMT (Mi et al. 2016b)', 'Tree-to-string'], ['+Alignment', 'J'], ['J + Gau.', 'MaxEnt', 'Tree-to-string', 'Cov. LVNMT (Mi et al. 2016b)'], ['BLEU', 'J + Gau.', 'Cov. LVNMT (Mi et al. 2016b)'], ['BP', '+Alignment'], ['+Alignment']] | 1 |
D16-1250table_1 | Our results in bilingual and monolingual tasks. | 2 | [['Original embeddings', '-'], ['Unconstrained mapping', '-'], ['Unconstrained mapping', '+ length normalization'], ['Unconstrained mapping', '+ mean centering'], ['Orthogonal mapping', '-'], ['Orthogonal mapping', '+ length normalization'], ['Orthogonal mapping', '+ mean centering']] | 1 | [['EN-IT'], ['EN AN.']] | [['-', '76.66%'], ['34.93%', '73.80%'], ['33.80%', '73.61%'], ['38.47%', '73.71%'], ['36.73%', '76.66%'], ['36.87%', '76.66%'], ['39.27%', '76.59%']] | column | ['Accuracy', 'Accuracy'] | ['Orthogonal mapping', '+ length normalization', '+ mean centering'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-IT</th> <th>EN AN.</th> </tr> </thead> <tbody> <tr> <td>Original embeddings || -</td> <td>-</td> <td>76.66%</td> </tr> <tr> <td>Unconstrained mapping || -</td> <td>34.9... | Table 1 | table_1 | D16-1250 | 4 | emnlp2016 | The rows in Table 1 show, respectively, the results for the original embeddings, the basic mapping proposed by Mikolov et al. (2013b) (cf. Section 2) and the addition of orthogonality constraint (cf. Section 2.1), with and without length normalization and, incrementally, mean centering. In all the cases, length normali... | [1, 2, 1, 1] | ['The rows in Table 1 show, respectively, the results for the original embeddings, the basic mapping proposed by Mikolov et al. (2013b) (cf. Section 2) and the addition of orthogonality constraint (cf. Section 2.1), with and without length normalization and, incrementally, mean centering.', 'In all the cases, length no... | [['Original embeddings', 'Unconstrained mapping', 'Orthogonal mapping', '+ length normalization', '+ mean centering'], None, ['Orthogonal mapping', '+ length normalization', '+ mean centering', 'EN-IT', 'EN AN.'], ['+ length normalization', '+ mean centering', 'EN-IT', 'EN AN.']] | 1 |
D16-1250table_2 | Comparison of our method to other work. | 1 | [['Original embeddings'], ['Mikolov et al. (2013b)'], ['Xing et al. (2015)'], ['Faruqui and Dyer (2014)'], ['Our method']] | 1 | [['EN-IT'], ['EN AN.']] | [['-', '76.66%'], ['34.93%', '73.80%'], ['36.87%', '76.66%'], ['37.80%', '69.64%'], ['39.27%', '76.59%']] | column | ['Accuracy', 'Accuracy'] | ['Our method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-IT</th> <th>EN AN.</th> </tr> </thead> <tbody> <tr> <td>Original embeddings</td> <td>-</td> <td>76.66%</td> </tr> <tr> <td>Mikolov et al. (2013b)</td> <td>34.93%</td> ... | Table 2 | table_2 | D16-1250 | 4 | emnlp2016 | Table 2 shows the results for our best performing configuration in comparison to previous work. As discussed before, (Mikolov et al., 2013b) and (Xing et al., 2015) were implemented as part of our framework, so they correspond to our uncostrained mapping with no preprocessing and orthogonal mapping with length normali... | [1, 2] | ['Table 2 shows the results for our best performing configuration in comparison to previous work.', 'As discussed before, (Mikolov et al., 2013b) and (Xing et al., 2015) were implemented as part of our framework, so they correspond to our uncostrained mapping with no preprocessing and orthogonal mapping with length no... | [['Our method', 'Xing et al. (2015)', 'Original embeddings', 'Mikolov et al. (2013b)', 'Faruqui and Dyer (2014)'], ['Mikolov et al. (2013b)', 'Xing et al. (2015)']] | 1 |
D16-1253table_2 | Results of 4-way classification on the PDTB. | 2 | [['Temp', 'P'], ['Temp', 'R'], ['Temp', 'F1'], ['Comp', 'P'], ['Comp', 'R'], ['Comp', 'F1'], ['Cont', 'P'], ['Cont', 'R'], ['Cont', 'F1'], ['Expa', 'P'], ['Expa', 'R'], ['Expa', 'F1'], ['macro F1', '-']] | 1 | [['STN'], ['MT Nbi']] | [['33.33', '34.48'], ['14.55', '18.18'], ['20.25', '23.81'], ['38.54', '42.11'], ['25.52', '33.10'], ['30.71', '37.07'], ['38.36', '44.22'], ['41.03', '40.66'], ['39.65', '42.37'], ['59.60', '62.56'], ['66.36', '71.75'], ['62.80', '66.84'], ['38.35', '42.52']] | column | ['F1', 'F1'] | ['MT Nbi'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>STN</th> <th>MT Nbi</th> </tr> </thead> <tbody> <tr> <td>Temp || P</td> <td>33.33</td> <td>34.48</td> </tr> <tr> <td>Temp || R</td> <td>14.55</td> <td>18.18</td> <... | Table 2 | table_2 | D16-1253 | 3 | emnlp2016 | Table 2 shows the results of MT N combining our BiSynData (denoted as MT Nbi) on the PDTB. STN means we train MT N with only the main task. On the macro F1, MT Nbi gains an improvement of 4.17% over ST N. The improvement is significant under one-tailed t-test (p<0.05). A closer look into the results shows that MT Nbi p... | [1, 2, 1, 1, 1, 2, 1, 2, 2, 1, 2] | ['Table 2 shows the results of MT N combining our BiSynData (denoted as MT Nbi) on the PDTB.', 'STN means we train MT N with only the main task.', 'On the macro F1, MT Nbi gains an improvement of 4.17% over ST N.', 'The improvement is significant under one-tailed t-test (p<0.05).', 'A closer look into the results shows... | [['MT Nbi'], ['STN'], ['F1', 'MT Nbi'], ['F1'], ['MT Nbi', 'P', 'R', 'F1', 'Cont'], ['Cont', 'R'], ['Comp', 'F1'], ['Comp', 'Temp', 'Cont'], ['Comp'], ['MT Nbi'], None] | 1 |
D16-1255table_3 | Unlabeled attachment scores (UAS) on the PTB validation set after parsing and aligning the output. For ZGEN we also include a result using the tree z∗ produced directly by the system. For WORDS+BNPS, internal BNP arcs are always counted as correct. | 2 | [['Model', 'ZGEN-64(z ? )'], ['Model', 'ZGEN-64'], ['Model', 'NGRAM-64'], ['Model', 'NGRAM-512'], ['Model', 'LSTM-64'], ['Model', 'LSTM-512']] | 1 | [['WORDS'], ['WORDS+BNPS']] | [['39.7', '64.9'], ['40.8', '65.2'], ['46.1', '67.0'], ['47.2', '67.8'], ['51.3', '71.9'], ['52.8', '73.1']] | column | ['UAS', 'UAS'] | ['LSTM-64', 'LSTM-512'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WORDS</th> <th>WORDS+BNPS</th> </tr> </thead> <tbody> <tr> <td>Model || ZGEN-64(z ? )</td> <td>39.7</td> <td>64.9</td> </tr> <tr> <td>Model || ZGEN-64</td> <td>40.8</td> ... | Table 3 | table_3 | D16-1255 | 5 | emnlp2016 | One proposed advantage of syntax in linearization models is that it can better capture long-distance relationships. Figure 1 shows results by sentence length and distortion, which is defined as the absolute difference between a token’s index position in y? and yˆ, normalized by M. The LSTM model exhibits consistently b... | [0, 2, 1, 2, 2, 1] | ['One proposed advantage of syntax in linearization models is that it can better capture long-distance relationships.', 'Figure 1 shows results by sentence length and distortion, which is defined as the absolute difference between a token’s index position in y? and yˆ, normalized by M. The LSTM model exhibits consisten... | [None, None, None, None, None, None] | 1 |
D16-1260table_3 | Evaluation results on relation prediction. | 2 | [['Metric', 'TransE'], ['Metric', 'tTransE'], ['Metric', 'TransH'], ['Metric', 'tTransH'], ['Metric', 'TransR'], ['Metric', 'tTransR']] | 2 | [['Mean Rank', 'Raw'], ['Mean Rank', 'Filter'], ['Hits@1 (%)', 'Raw'], ['Hits@1 (%)', 'Filter']] | [['1.53', '1.48', '69.4', '73.0'], ['1.42', '1.35', '71.1', '75.7'], ['1.51', '1.37', '70.5', '72.2'], ['1.38', '1.30', '74.6', '76.9'], ['1.40', '1.28', '71.1', '74.3'], ['1.27', '1.12', '74.5', '78.9']] | column | ['Mean Rank', 'Mean Rank', 'Hits@1 (%)', 'Hits@1 (%)'] | ['Metric'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Mean Rank || Raw</th> <th>Mean Rank || Filter</th> <th>Hits@1 (%) || Raw</th> <th>Hits@1 (%) || Filter</th> </tr> </thead> <tbody> <tr> <td>Metric || TransE</td> <td>1.53</td> <... | Table 3 | table_3 | D16-1260 | 3 | emnlp2016 | Relation prediction aims to predict relations given two entities. Evaluation results are shown in Table 3 on only YG15K due to limited space, where we report Hits@1 instead of Hits@10. Example prediction results for TransE and tTransE are compared in Table 4. For example, when testing (Billy Hughes,?,London,1862), it’s... | [0, 1, 0, 0, 0] | ['Relation prediction aims to predict relations given two entities.', 'Evaluation results are shown in Table 3 on only YG15K due to limited space, where we report Hits@1 instead of Hits@10.', 'Example prediction results for TransE and tTransE are compared in Table 4.', 'For example, when testing (Billy Hughes,?,London,... | [None, ['Hits@1 (%)'], None, None, None] | 1 |
D16-1260table_5 | Evaluation results on triple classification (%). | 2 | [['Datasets', 'TransE'], ['Datasets', 'tTransE'], ['Datasets', 'TransH'], ['Datasets', 'tTransH'], ['Datasets', 'TransR'], ['Datasets', 'tTransR']] | 1 | [['YG15K'], ['YG36K']] | [['63.9', '71.9'], ['75.0', '82.7'], ['63.4', '72.1'], ['75.1', '82.3'], ['64.5', '74.9'], ['78.5', '83.9']] | column | ['Accuracy', 'Accuracy'] | ['tTransE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>YG15K</th> <th>YG36K</th> </tr> </thead> <tbody> <tr> <td>Datasets || TransE</td> <td>63.9</td> <td>71.9</td> </tr> <tr> <td>Datasets || tTransE</td> <td>75.0</td> <t... | Table 5 | table_5 | D16-1260 | 4 | emnlp2016 | Results. Table 5 reports the results on the test sets. The results indicate that time-aware embedding outperforms all the baselines consistently. Temporal order information may help to distinguish positive and negative triples as different head entities may have different temporally associated relations. If the tempora... | [2, 1, 1, 2, 2] | ['Results.', 'Table 5 reports the results on the test sets.', 'The results indicate that time-aware embedding outperforms all the baselines consistently.', 'Temporal order information may help to distinguish positive and negative triples as different head entities may have different temporally associated relations.', '... | [None, None, None, None, None] | 1 |
D16-1262table_4 | Parsing results trained with different update methods. Our system uses all-violations updates and is the most accurate. | 2 | [['Update', 'Greedy'], ['Update', 'Max-violation'], ['Update', 'All-violations']] | 1 | [['Dev F1'], ['Optimal'], ['Explored']] | [['87.9', '99.2%', '2313.8'], ['88.1', '99.9%', '217.3'], ['88.4', '99.8%', '309.6']] | column | ['Dev F1', 'Optimal', 'Explored'] | ['All-violations'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev F1</th> <th>Optimal</th> <th>Explored</th> </tr> </thead> <tbody> <tr> <td>Update || Greedy</td> <td>87.9</td> <td>99.2%</td> <td>2313.8</td> </tr> <tr> <td>Updat... | Table 4 | table_4 | D16-1262 | 8 | emnlp2016 | Table 4 compares the different violation-based learning objectives, as discussed in Section 5. Our novel all-violation updates outperform the alternatives. We attribute this improvement to the robustness over poor search spaces, which the greedy update lacks, and the incentive to explore good parses early, which the ma... | [1, 1, 0, 0] | ['Table 4 compares the different violation-based learning objectives, as discussed in Section 5.', 'Our novel all-violation updates outperform the alternatives.', 'We attribute this improvement to the robustness over poor search spaces, which the greedy update lacks, and the incentive to explore good parses early, whic... | [None, ['All-violations', 'Update'], None, None] | 1 |
D16-1264table_5 | Performance of various methods and humans. Logistic regression outperforms the baselines, while there is still a significant gap between humans. | 1 | [['Random Guess'], ['Sliding Window'], ['Sliding Win. + Dist.'], ['Logistic Regression'], ['Human']] | 2 | [['Exact Match', 'Dev'], ['Exact Match', 'Test'], ['F1', 'Dev'], ['F1', 'Test']] | [['1.1%', '1.3%', '4.1%', '4.3%'], ['13.2%', '12.5%', '20.2%', '19.7%'], ['13.3%', '13.0%', '20.2%', '20.0%'], ['40.0%', '40.4%', '51.0%', '51.0%'], ['80.3%', '77.0%', '90.5%', '86.8%']] | column | ['accuracy', 'accuracy', 'F1', 'F1'] | ['Logistic Regression'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact Match || Dev</th> <th>Exact Match || Test</th> <th>F1 || Dev</th> <th>F1 || Test</th> </tr> </thead> <tbody> <tr> <td>Random Guess</td> <td>1.1%</td> <td>1.3%</td> <t... | Table 5 | table_5 | D16-1264 | 8 | emnlp2016 | Table 5 shows the performance of our models alongside human performance on the v1.0 of development and test sets. The logistic regression model significantly outperforms the baselines, but underperforms humans. We note that the model is able to select the sentence containing the answer correctly with 79.3% accuracy, he... | [1, 1, 2] | ['Table 5 shows the performance of our models alongside human performance on the v1.0 of development and test sets.', 'The logistic regression model significantly outperforms the baselines, but underperforms humans.', 'We note that the model is able to select the sentence containing the answer correctly with 79.3% accu... | [None, ['Logistic Regression', 'Random Guess', 'Sliding Window', 'Sliding Win. + Dist.', 'Human'], None] | 1 |
D17-1001table_3 | Evaluation results on the test set, where ∗ represents p-value < 0.05 against our method. | 2 | [['Method', 'Human'], ['Method', 'Proposed'], ['Method', 'Monotonic'], ['Method', 'w/o EM'], ['Method', '1-best tree']] | 1 | [['Recall'], ['Prec.'], ['UAS'], ['%']] | [['90.65', '88.21', '–', '–'], ['83.64', '78.91', '93.49', '98'], ['82.86*', '77.97*', '93.49', '98'], ['81.33*', '75.09*', '92.91*', '86'], ['80.11*', '73.26*', '93.56', '100']] | column | ['Recall', 'Prec.', 'UAS', '%'] | ['Proposed'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Recall</th> <th>Prec.</th> <th>UAS</th> <th>%</th> </tr> </thead> <tbody> <tr> <td>Method || Human</td> <td>90.65</td> <td>88.21</td> <td>–</td> <td>–</td> </tr> ... | Table 3 | table_3 | D17-1001 | 8 | emnlp2017 | Table 3 shows the performance on the test set for variations of our method and that of the human annotators. The last column shows the percentage of pairs where a root pair is reached to be aligned, called reachability. Our method is denoted as Proposed, while its variations include a method with only monotonic alignme... | [1, 2, 1, 2, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1] | ['Table 3 shows the performance on the test set for variations of our method and that of the human annotators.', 'The last column shows the percentage of pairs where a root pair is reached to be aligned, called reachability.', 'Our method is denoted as Proposed, while its variations include a method with only monotonic... | [None, ['%'], ['Proposed', 'Monotonic', 'w/o EM', '1-best tree'], ['Human'], None, ['Proposed', 'Recall', 'Prec.'], ['Recall', 'Prec.', 'Human'], ['UAS'], ['Proposed', '1-best tree', 'UAS'], ['1-best tree', 'Recall', 'Prec.'], None, ['Proposed', '%'], ['Proposed', '%'], ['Proposed']] | 1 |
D17-1004table_4 | Model performance on the test set of TACRED, micro-averaged over instances. LR = Logistic Regression. | 3 | [['Traditional', 'Model', 'Patterns'], ['Traditional', 'Model', 'LR'], ['Traditional', 'Model', 'LR + Patterns'], ['Neural', 'Model', 'CNN'], ['Neural', 'Model', 'CNN-PE'], ['Neural', 'Model', 'SDP-LSTM'], ['Neural', 'Model', 'LSTM'], ['Neural', 'Model', 'Our model'], ['-', 'Model', 'Ensemble']] | 1 | [['P'], ['R'], ['F1']] | [['85.3', '23.4', '36.8'], ['72.0', '47.8', '57.5'], ['71.4', '50.1', '58.9'], ['72.1', '50.3', '59.2'], ['68.2', '55.4', '61.1'], ['62.0', '54.8', '58.2'], ['61.4', '61.7', '61.5'], ['67.7', '63.2', '65.4'], ['69.4', '64.8', '67.0']] | column | ['P', 'R', 'F1'] | ['Our model', 'Ensemble'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Traditional || Model || Patterns</td> <td>85.3</td> <td>23.4</td> <td>36.8</td> </tr> <tr> <td>Tradition... | Table 4 | table_4 | D17-1004 | 6 | emnlp2017 | Table 4 summarizes our results. We observe that all neural models achieve higher F1 scores than the logistic regression and patterns systems, which demonstrates the effectiveness of neural models for relation extraction. Although positional embeddings help increase the F1 by around 2% over the plain CNN model, a simple... | [1, 1, 1, 1, 1] | ['Table 4 summarizes our results.', 'We observe that all neural models achieve higher F1 scores than the logistic regression and patterns systems, which demonstrates the effectiveness of neural models for relation extraction.', 'Although positional embeddings help increase the F1 by around 2% over the plain CNN model, ... | [None, ['Neural', 'F1', 'LR + Patterns'], ['CNN-PE', 'F1', 'CNN', 'LSTM', 'SDP-LSTM'], ['Our model', 'F1', 'LSTM', 'LR'], ['Ensemble', 'Our model', 'F1']] | 1 |
D17-1004table_5 | Model performance on TAC KBP 2015 slot filling evaluation, micro-averaged over queries. Hop-0 scores are calculated on the simple single-hop slot filling results; hop-1 scores are calculated on slot filling results chained on systems’ hop-0 predictions; hop-all scores are calculated based on the combination of the two.... | 2 | [['Model', 'Patterns'], ['Model', 'LR'], ['Model', 'LR + Patterns (2015 winning system)'], ['Model', 'LR trained on TACRED'], ['Model', 'LR trained on TACRED + Patterns'], ['Model', 'Our model'], ['Model', 'Our model + Patterns']] | 2 | [['Hop-0', 'P'], ['Hop-0', 'R'], ['Hop-0', 'F1'], ['Hop-1', 'P'], ['Hop-1', 'R'], ['Hop-1', 'F1'], ['Hop-all', 'P'], ['Hop-all', 'R'], ['Hop-all', 'F1']] | [['63.8', '17.7', '27.7', '49.3', '8.6', '14.7', '58.9', '13.3', '21.8'], ['36.6', '21.9', '27.4', '15.1', '10.1', '12.2', '25.6', '16.3', '19.9'], ['37.5', '24.5', '29.7', '16.5', '12.8', '14.4', '26.6', '19.0', '22.2'], ['32.7', '20.6', '25.3', '7.9', '9.5', '8.6', '16.8', '15.3', '16.0'], ['36.5', '26.5', '30.7', '1... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1'] | ['Our model', 'Our model + Patterns', 'F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Hop-0 || P</th> <th>Hop-0 || R</th> <th>Hop-0 || F1</th> <th>Hop-1 || P</th> <th>Hop-1 || R</th> <th>Hop-1 || F1</th> <th>Hop-all || P</th> <th>Hop-all || R</th> <th>Hop-all |... | Table 5 | table_5 | D17-1004 | 7 | emnlp2017 | Table 5 presents our results. We find that: (1) by only training our logistic regression model on TACRED (in contrast to on the 2 million bootstrapped examples used in the 2015 Stanford system) and combining it with patterns, we obtain a higher hop-0 F1 score than the 2015 Stanford system, and a similar hop-all F1; (2)... | [1, 1, 1] | ['Table 5 presents our results.', 'We find that: (1) by only training our logistic regression model on TACRED (in contrast to on the 2 million bootstrapped examples used in the 2015 Stanford system) and combining it with patterns, we obtain a higher hop-0 F1 score than the 2015 Stanford system, and a similar hop-all F1... | [None, ['LR trained on TACRED + Patterns', 'Hop-all', 'F1', 'Our model', 'Hop-0', 'Hop-1'], ['Our model + Patterns', 'Hop-all', 'F1']] | 1 |
D17-1006table_3 | Final results. | 2 | [['Method', 'PMI'], ['Method', 'Bigram'], ['Method', 'Event-Comp'], ['Method', 'RNN'], ['Method', 'MemNet']] | 1 | [['G&C16'], ['C&J08']] | [['30.52', '30.92'], ['29.67', '25.43'], ['49.57', '43.28'], ['45.74', '43.17'], ['55.12', '46.67']] | column | ['accuracy', 'accuracy'] | ['MemNet'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>G&C16</th> <th>C&J08</th> </tr> </thead> <tbody> <tr> <td>Method || PMI</td> <td>30.52</td> <td>30.92</td> </tr> <tr> <td>Method || Bigram</td> <td>29.67</td> ... | Table 3 | table_3 | D17-1006 | 8 | emnlp2017 | 5.4 Final Results Table 3 shows the final results on the G&C 16 and C&J08 datasets, respectively. We compare the results of our final model with the following baselines:. • PMI is the co-occurrence based model of Chambers and Jurafsky (2008), who calculate event pair relations based on Pointwise Mutual Information (PMI... | [1, 1, 2, 2, 2, 2, 2, 2, 1, 1, 2, 1, 2, 2, 1] | ['5.4 Final Results Table 3 shows the final results on the G&C 16 and C&J08 datasets, respectively.', 'We compare the results of our final model with the following baselines:.', '• PMI is the co-occurrence based model of Chambers and Jurafsky (2008), who calculate event pair relations based on Pointwise Mutual Informat... | [['G&C16', 'C&J08'], None, ['PMI'], ['Bigram'], ['Event-Comp'], ['RNN'], ['MemNet'], ['PMI', 'Bigram'], ['PMI', 'Bigram', 'Event-Comp', 'RNN', 'MemNet'], ['Bigram', 'PMI'], None, ['Event-Comp', 'RNN', 'MemNet'], None, None, ['MemNet', 'Event-Comp', 'RNN']] | 1 |
D17-1214table_1 | Results for SWEAR compared to top published results on the WikiReading test set. | 2 | [['Model', 'Placeholder seq2seq (HE16)'], ['Model', 'SoftAttend (CH17)'], ['Model', 'Reinforce (CH17)'], ['Model', 'Placeholder seq2seq (CH17)'], ['Model', 'SWEAR (w/ zeros)'], ['Model', 'SWEAR']] | 1 | [['Mean F1']] | [['71.8'], ['71.6'], ['74.5'], ['75.6'], ['76.4'], ['76.8']] | column | ['Mean F1'] | ['SWEAR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Mean F1</th> </tr> </thead> <tbody> <tr> <td>Model || Placeholder seq2seq (HE16)</td> <td>71.8</td> </tr> <tr> <td>Model || SoftAttend (CH17)</td> <td>71.6</td> </tr> <tr> ... | Table 1 | table_1 | D17-1214 | 4 | emnlp2017 | Before exploring unsupervised pre-training, we present summary results for SWEAR in a fully supervised setting, for comparison to previous work on the WikiReading task, namely that of Hewlett et al.(2016) and Choi et al.(2017), which we refer to as HE16 and CH17 in tables. Table 1 shows that SWEAR outperforms the best,... | [2, 1, 1, 0] | ['Before exploring unsupervised pre-training, we present summary results for SWEAR in a fully supervised setting, for comparison to previous work on the WikiReading task, namely that of Hewlett et al.(2016) and Choi et al.(2017), which we refer to as HE16 and CH17 in tables.', 'Table 1 shows that SWEAR outperforms the ... | [['SWEAR', 'SWEAR (w/ zeros)', 'Placeholder seq2seq (HE16)', 'SoftAttend (CH17)', 'Reinforce (CH17)', 'Placeholder seq2seq (CH17)'], ['SWEAR', 'SoftAttend (CH17)', 'Reinforce (CH17)'], ['SWEAR', 'SoftAttend (CH17)', 'Mean F1'], None] | 1 |
D17-1214table_2 | Mean F1 for SWEAR on each type of property compared with the best results for each type reported in Hewlett et al. (2016), which come from different models. Other publications did not report these sub-scores. | 1 | [['Categorical'], ['Relational'], ['Date']] | 1 | [['HE16 Best'], ['SWEAR']] | [['88.6', '88.6'], ['56.5', '63.4'], ['73.8', '82.5']] | column | ['F1', 'F1'] | ['SWEAR', 'Relational', 'Date'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HE16 Best</th> <th>SWEAR</th> </tr> </thead> <tbody> <tr> <td>Categorical</td> <td>88.6</td> <td>88.6</td> </tr> <tr> <td>Relational</td> <td>56.5</td> <td>63.4</td> ... | Table 2 | table_2 | D17-1214 | 4 | emnlp2017 | To quantify the effect of initializing the window encoder with the question state, we report results for two variants of SWEAR: In SWEAR the window encoder is initialized with the question encoding, while in SWEAR w/ zeros, the window encoder is initialized with zeros. In both cases the question encoding is used for at... | [0, 0, 0, 0, 1, 1] | ['To quantify the effect of initializing the window encoder with the question state, we report results for two variants of SWEAR: In SWEAR the window encoder is initialized with the question encoding, while in SWEAR w/ zeros, the window encoder is initialized with zeros.', 'In both cases the question encoding is used f... | [None, None, None, None, ['Categorical', 'Relational', 'Date', 'HE16 Best'], ['SWEAR', 'HE16 Best', 'Relational', 'Date']] | 1 |
D17-1214table_4 | Mean F1 results for SWEAR (fully supervised) and SWEAR-SS (semi-supervised) trained on 1%, 0.5%, and 0.1% subsets, respectively. Variants of SWEAR-SS indicate different sources of fixed encoder weights. 2 | 2 | [['Model', 'SWEAR'], ['Model', 'SWEAR-SS (RAE)'], ['Model', 'SWEAR-SS (VRAE)']] | 1 | [['1%'], ['0.5%'], ['0.1%']] | [['63.5', '57.6', '39.5'], ['64.7', '62.8', '55.3'], ['65.7', '64.0', '60.7']] | column | ['F1', 'F1', 'F1'] | ['SWEAR-SS (VRAE)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>1%</th> <th>0.5%</th> <th>0.1%</th> </tr> </thead> <tbody> <tr> <td>Model || SWEAR</td> <td>63.5</td> <td>57.6</td> <td>39.5</td> </tr> <tr> <td>Model || SWEAR-SS (RA... | Table 4 | table_4 | D17-1214 | 8 | emnlp2017 | Table 4 and 5 show the results of SWEAR and semi-supervised models with pretrained and fixed embeddings. Results show that SWEAR-SS always improves over SWEAR at small data sizes, with the difference become dramatic as the dataset becomes very small. VRAE pretraining yields the best performance. As training and testing... | [1, 1, 1, 2, 1, 2] | ['Table 4 and 5 show the results of SWEAR and semi-supervised models with pretrained and fixed embeddings.', 'Results show that SWEAR-SS always improves over SWEAR at small data sizes, with the difference become dramatic as the dataset becomes very small.', 'VRAE pretraining yields the best performance.', 'As training ... | [None, ['SWEAR-SS (RAE)', 'SWEAR-SS (VRAE)', 'SWEAR'], ['SWEAR-SS (VRAE)'], ['SWEAR', '0.1%'], ['SWEAR-SS (VRAE)', '1%', '0.5%'], None] | 1 |
D17-1214table_6 | Results for semi-supervised reviewer models trained on the 1% subset of WikiReading. | 3 | [['Model', 'SWEAR-PR', '-'], ['Model', 'SWEAR-PR', 'dropout on input only'], ['Model', 'SWEAR-PR', 'no dropout'], ['Model', 'SWEAR-PR', 'shared reviewer cells'], ['Model', 'SWEAR-MLR', '-'], ['Model', 'SWEAR-MLR', 'w/o skip connections']] | 1 | [['Mean F1']] | [['66.5'], ['65.4'], ['64.6'], ['63.8'], ['63.0'], ['60.0']] | column | ['Mean F1'] | ['SWEAR-PR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Mean F1</th> </tr> </thead> <tbody> <tr> <td>Model || SWEAR-PR || -</td> <td>66.5</td> </tr> <tr> <td>Model || SWEAR-PR || dropout on input only</td> <td>65.4</td> </tr> <tr> ... | Table 6 | table_6 | D17-1214 | 8 | emnlp2017 | Table 6 shows the results of semi-supervised reviewer models. When trained on 1% of the training data, SWEAR-MLR and the supervised SWEAR model perform similarly. Without using skip connections between embedding and hidden layers, the performance drops. The SWEARPR model further improves Mean F1 and outperforms the str... | [1, 1, 1, 1] | ['Table 6 shows the results of semi-supervised reviewer models.', 'When trained on 1% of the training data, SWEAR-MLR and the supervised SWEAR model perform similarly.', 'Without using skip connections between embedding and hidden layers, the performance drops.', 'The SWEARPR model further improves Mean F1 and outperfo... | [None, ['SWEAR-MLR', 'SWEAR-PR', 'Mean F1'], ['w/o skip connections', 'Mean F1'], ['SWEAR-PR', 'Mean F1']] | 1 |
D17-1215table_5 | Transferability of adversarial examples across models. Each row measures performance on adversarial examples generated to target one particular model; each column evaluates one (possibly different) model on these examples. | 3 | [['Targeted Model', 'ADDSENT', 'ML Single'], ['Targeted Model', 'ADDSENT', 'ML Ens.'], ['Targeted Model', 'ADDSENT', 'BiDAF Single'], ['Targeted Model', 'ADDSENT', 'BiDAF Ens.'], ['Targeted Model', 'ADDANY', 'ML Single'], ['Targeted Model', 'ADDANY', 'ML Ens.'], ['Targeted Model', 'ADDANY', 'BiDAF Single'], ['Targeted ... | 3 | [['Model under Evaluation', 'ML', 'Single'], ['Model under Evaluation', 'ML', 'Ens.'], ['Model under Evaluation', 'BiDAF', 'Single'], ['Model under Evaluation', 'BiDAF', 'Ens.']] | [['27.3', '33.4', '40.3', '39.1'], ['31.6', '29.4', '40.2', '38.7'], ['32.7', '34.8', '34.3', '37.4'], ['32.7', '34.2', '38.3', '34.2'], ['7.6', '54.1', '57.1', '60.9'], ['44.9', '11.7', '50.4', '54.8'], ['58.4', '60.5', '4.8', '46.4'], ['48.8', '51.1', '25', '2.7']] | column | ['F1', 'F1', 'F1', 'F1'] | ['ADDSENT', 'ADDANY'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model under Evaluation || ML || Single</th> <th>Model under Evaluation || ML || Ens.</th> <th>Model under Evaluation || BiDAF || Single</th> <th>Model under Evaluation || BiDAF || Ens.</th> </tr> ... | Table 5 | table_5 | D17-1215 | 7 | emnlp2017 | Table 5 shows the results of evaluating the four main models on adversarial examples generated by running either ADDSENT or ADDANY against each model. ADDSENT adversarial examples transfer between models quite effectively; in particular, they are harder than ADDONESENT examples, which implies that exampl... | [1, 2, 2, 1] | ['Table 5 shows the results of evaluating the four main models on adversarial examples generated by running either ADDSENT or ADDANY against each model.', 'ADDSENT adversarial examples transfer between models quite effectively; in particular, they are harder than ADDONESENT examples, which implies that e... | [['ADDSENT', 'ADDANY'], ['ADDSENT'], ['ADDANY'], ['ML Single', 'BiDAF Single', 'ML Ens.', 'BiDAF Ens.']] | 1 |
D17-1216table_4 | Comparison of accuracy for our model and three baselines on RocStories Spring 2016 Test Set. The result of DSSM is adapted from (Mostafazadeh et al., 2016a). | 2 | [['System', 'Narrative Event Chain'], ['System', 'DSSM'], ['System', 'RNN Model'], ['Our Model', ' -']] | 1 | [['Accuracy']] | [['57.62%'], ['58.52%'], ['58.93%'], ['67.02%']] | column | ['Accuracy'] | ['Our Model'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Narrative Event Chain</td> <td>57.62%</td> </tr> <tr> <td>System || DSSM</td> <td>58.52%</td> </tr> <tr> <td>Syste... | Table 4 | table_4 | D17-1216 | 7 | emnlp2017 | Table 4 shows the results. From this table, we can see that :. 1) Our model outperforms all baselines significantly. Compared with baselines, the accuracy improvement on test set is at least 13.7%. This demonstrates the effectiveness of our model by mining and exploiting heteregenous knowledge. 2) The event narrative k... | [1, 2, 1, 1, 2, 2, 1, 2, 1] | ['Table 4 shows the results.', 'From this table, we can see that :.', '1) Our model outperforms all baselines significantly.', 'Compared with baselines, the accuracy improvement on test set is at least 13.7%.', 'This demonstrates the effectiveness of our model by mining and exploiting heteregenous knowledge.', '2) The ... | [None, None, ['Our Model', 'Accuracy'], ['Our Model', 'Accuracy'], ['Our Model'], ['Narrative Event Chain'], ['Narrative Event Chain', 'Accuracy'], ['DSSM', 'RNN Model'], ['DSSM', 'RNN Model', 'Our Model', 'Accuracy']] | 1 |
D17-1216table_5 | Comparison of the performance using single type of knowledge. | 2 | [['System', 'Event Narrative Knowledge'], ['System', 'Entity Semantic Knowledge'], ['System', 'Sentiment Coherent Knowledge'], ['Our Model (All Knowledge)', '-']] | 1 | [['Accuracy']] | [['60.98%'], ['57.14%'], ['61.30%'], ['67.02%']] | column | ['Accuracy'] | ['Our Model (All Knowledge)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Event Narrative Knowledge</td> <td>60.98%</td> </tr> <tr> <td>System || Entity Semantic Knowledge</td> <td>57.14%</td> </t... | Table 5 | table_5 | D17-1216 | 7 | emnlp2017 | The first group of experiments was conducted using only one kind of knowledge at a time in our model. Table 5 shows the results. We can see that using a single kind of knowledge is insufficient for commonsense machine comprehension: all single-knowledge settings cannot achieve competitive performance to the all-knowled... | [2, 1, 1] | ['The first group of experiments was conducted using only one kind of knowledge at a time in our model.', 'Table 5 shows the results.', 'We can see that using a single kind of knowledge is insufficient for commonsense machine comprehension: all single-knowledge settings cannot achieve competitive performance to the all... | [['Our Model (All Knowledge)'], None, ['Our Model (All Knowledge)', 'Accuracy']] | 1 |
D17-1216table_7 | Comparison of the performance using different inference rule selection mechanism. | 2 | [['System', 'Minimum Cost Mechanism'], ['System', 'Average Cost Mechanism'], ['Our Model (Attention Mechanism)', ' -']] | 1 | [['Accuracy']] | [['54.84%'], ['63.01%'], ['67.02%']] | column | ['Accuracy'] | ['Our Model (Attention Mechanism)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Minimum Cost Mechanism</td> <td>54.84%</td> </tr> <tr> <td>System || Average Cost Mechanism</td> <td>63.01%</td> </tr> ... | Table 7 | table_7 | D17-1216 | 10 | emnlp2017 | Table 7 show the results. We can see that: 1) the minimum cost mechanism cannot achieve competitive performance, we believe this is because the selection of rules should not depend on the cost of them, and considering all valid inferences is critical for reasoning; 2) our attention mechanism can effectively model the i... | [1, 1, 1, 2] | ['Table 7 show the results.', 'We can see that: 1) the minimum cost mechanism cannot achieve competitive performance, we believe this is because the selection of rules should not depend on the cost of them, and considering all valid inferences is critical for reasoning; 2) our attention mechanism can effectively model ... | [None, ['Minimum Cost Mechanism', 'Accuracy', 'Our Model (Attention Mechanism)'], ['Average Cost Mechanism', 'Accuracy'], None] | 1 |
D17-1216table_8 | Comparison of the performance by removing negation rules. | 2 | [['System', 'Our Model'], ['System', '-w/o Negation Rules']] | 1 | [['Accuracy']] | [['67.02%'], ['63.12%']] | column | ['Accuracy'] | ['-w/o Negation Rules'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || Our Model</td> <td>67.02%</td> </tr> <tr> <td>System || -w/o Negation Rules</td> <td>63.12%</td> </tr> </tbody></table> | Table 8 | table_8 | D17-1216 | 10 | emnlp2017 | Table 8 show the results. We can see that removing negation rules will significantly drop the system performance, which confirm the effectiveness of our proposed negation rules. | [1, 1] | ['Table 8 show the results.', 'We can see that removing negation rules will significantly drop the system performance, which confirm the effectiveness of our proposed negation rules.'] | [None, ['-w/o Negation Rules', 'Accuracy', 'Our Model']] | 1 |
D17-1218table_3 | Cross-domain experiments, best values per column are highlighted, in-domain results (for comparison) in italics; results only for selected systems. For each source/target combination we show two scores: Macro-F1 score (left-hand column) and F1 score for claims (right-hand column). | 4 | [['Source/Sys.', 'CNN-rand', '-', 'MT'], ['Source/Sys.', 'CNN-rand', '-', 'OC'], ['Source/Sys.', 'CNN-rand', '-', 'PE'], ['Source/Sys.', 'CNN-rand', '-', 'VG'], ['Source/Sys.', 'CNN-rand', '-', 'WD'], ['Source/Sys.', 'CNN-rand', '-', 'WTP'], ['Source/Sys.', 'CNN-rand', '-', ' Average'], ['Source/Sys.', 'LR All Features... | 3 | [['Target', 'MT', 'Macro-F1'], ['Target', 'MT', 'F1'], ['Target', 'OC', 'Macro-F1'], ['Target', 'OC', 'F1'], ['Target', 'PE', 'Macro-F1'], ['Target', 'PE', 'F1'], ['Target', 'VG', 'Macro-F1'], ['Target', 'VG', 'F1'], ['Target', 'WD', 'Macro-F1'], ['Target', 'WD', 'F1'], ['Target', 'WTP', 'Macro-F1'], ['Target', 'WTP', ... | [['78.6', '67.3', '51', '7.4', '56.9', '22.1', '57.2', '15.7', '52.4', '9.4', '49.4', '10.9', '53.4', '13.1'], ['57.1', '39.7', '60.5', '25.6', '56.4', '42.8', '58.9', '37.3', '54.6', '13.2', '58.4', '28.9', '57.1', '32.4'], ['59.8', '18', '54.2', '9.5', '73.6', '61.1', '57.5', '18.7', '55.5', '15.9', '54.7', '16', '56... | column | ['Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1', 'Macro-F1', 'F1'] | ['Target', 'MT'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Target || MT || Macro-F1</th> <th>Target || MT || F1</th> <th>Target || OC || Macro-F1</th> <th>Target || OC || F1</th> <th>Target || PE || Macro-F1</th> <th>Target || PE || F1</th> <th... | Table 3 | table_3 | D17-1218 | 7 | emnlp2017 | For all six datasets, training on different sources resulted in a performance drop. Table 3 lists the results of the best feature-based (LR All features) and deep learning (CNN-rand) systems, as well as single feature groups (averages over all source domains, results for individual source domains can be found in the su... | [2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1] | ['For all six datasets, training on different sources resulted in a performance drop.', 'Table 3 lists the results of the best feature-based (LR All features) and deep learning (CNN-rand) systems, as well as single feature groups (averages over all source domains, results for individual source domains can be found in t... | [None, ['LR All Features', 'CNN-rand', 'Single feature groups (averages across all source domains)'], ['MT', 'PE', 'CNN-rand', 'LR All Features'], ['Source/Sys.', 'CNN-rand', 'OC', 'Target', 'WTP', 'LR All Features', 'VG'], None, ['Target', 'MT', 'Source/Sys.'], ['Target', 'Average'], None, ['Single feature groups (ave... | 1 |
D17-1219table_2 | Results for the full QG systems using BLEU 1–4, METEOR. The first stage of the two pipeline systems are the feature-rich linear model (LREG) and our best performing selection model respectively. | 3 | [['Model', 'Conservative', 'LREG(C&L)+ NQG'], ['Model', 'Conservative', 'Ours + NQG'], ['Model', 'Liberal', 'LREG(C&L)+ NQG'], ['Model', 'Liberal', 'Ours + NQG']] | 1 | [['BLEU 1'], ['BLEU 2'], ['BLEU 3'], ['BLEU 4'], ['METEOR']] | [['38.3', '23.15', '15.64', '10.97', '15.09'], ['40.08', '24.26', '16.39', '11.5', '15.67'], ['51.55', '40.17', '34.35', '30.59', '24.17'], ['52.89', '41.16', '35.15', '31.25', '24.76']] | column | ['BLEU 1', 'BLEU 2', 'BLEU 3', 'BLEU 4', 'METEOR'] | ['Ours + NQG'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU 1</th> <th>BLEU 2</th> <th>BLEU 3</th> <th>BLEU 4</th> <th>METEOR</th> </tr> </thead> <tbody> <tr> <td>Model || Conservative || LREG(C&L)+ NQG</td> <td>38.3</td> <... | Table 2 | table_2 | D17-1219 | 4 | emnlp2017 | Table 2 shows that the QG system incorporating our best performing sentence extractor outperforms its LREG counterpart across metrics. Note that to calculate the score for the matching case, similar to our earlier work (Du et al., 2017), we adapt the image captioning evaluation scripts of Chen et al.(2015) since there ... | [1, 2] | ['Table 2 shows that the QG system incorporating our best performing sentence extractor outperforms its LREG counterpart across metrics.', 'Note that to calculate the score for the matching case, similar to our earlier work (Du et al., 2017), we adapt the image captioning evaluation scripts of Chen et al.(2015) since t... | [['Ours + NQG'], ['Ours + NQG']] | 1 |
D17-1224table_1 | Comparison results on overall evaluation | 2 | [['Methods', 'Lead'], ['Methods', 'Coverage'], ['Methods', 'TextRank'], ['Methods', 'Centroid'], ['Methods', 'ILP'], ['Methods', 'ClusterCMRW'], ['Methods', 'Submodular'], ['Methods', 'SenDivRank'], ['Methods', 'Our Approach']] | 1 | [['R-1'], ['R-2'], ['R-SU4']] | [['0.48029', '0.16183', '0.21156'], ['0.48085', '0.15849', '0.20615'], ['0.49453', '0.1637', '0.21457'], ['0.48582', '0.16099', '0.20919'], ['0.49302', '0.16651', '0.21493'], ['0.49363', '0.17205', '0.22033'], ['0.50273', '0.16963', '0.21775'], ['0.48701', '0.17491', '0.22382'], ['0.50215', '0.18631', '0.23426']] | column | ['R-1', 'R-2', 'R-SU4'] | ['Our Approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-SU4</th> </tr> </thead> <tbody> <tr> <td>Methods || Lead</td> <td>0.48029</td> <td>0.16183</td> <td>0.21156</td> </tr> <tr> <td>Methods ||... | Table 1 | table_1 | D17-1224 | 3 | emnlp2017 | Firstly, we perform evaluation on the whole articles and Table 1 shows the comparison results. We can see that our approach outperforms all the baseline methods with respect to ROUGE-2 and ROUGE-SU4. The Submodular method achieves the highest ROUGE-1 score, but our approach also achieves very high ROUGE-1 score, which ... | [1, 1, 1] | ['Firstly, we perform evaluation on the whole articles and Table 1 shows the comparison results.', 'We can see that our approach outperforms all the baseline methods with respect to ROUGE-2 and ROUGE-SU4.', 'The Submodular method achieves the highest ROUGE-1 score, but our approach also achieves very high ROUGE-1 score... | [None, ['Our Approach', 'R-2', 'R-SU4'], ['Submodular', 'R-1', 'Our Approach']] | 1 |
D17-1224table_2 | Comparison results on two-part evaluation I | 2 | [['Method', 'Lead'], ['Method', 'Coverage'], ['Method', 'TextRank'], ['Method', 'Centroid'], ['Method', 'ILP'], ['Method', 'ClusterCMRW'], ['Method', 'Submodular'], ['Method', 'SenDivRank'], ['Method', 'Our Approach']] | 1 | [['R-1'], ['R-2'], ['R-SU4']] | [['0.38757', '0.10631', '0.15138'], ['0.38932', '0.10399', '0.14714'], ['0.40246', '0.10651', '0.15327'], ['0.3891', '0.10297', '0.14774'], ['0.40004', '0.11256', '0.15641'], ['0.40565', '0.11855', '0.16195'], ['0.3999', '0.11044', '0.15442'], ['0.39462', '0.11575', '0.16028'], ['0.41913', '0.13369', '0.17735']] | column | ['R-1', 'R-2', 'R-SU4'] | ['Our Approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-SU4</th> </tr> </thead> <tbody> <tr> <td>Method || Lead</td> <td>0.38757</td> <td>0.10631</td> <td>0.15138</td> </tr> <tr> <td>Method || C... | Table 2 | table_2 | D17-1224 | 4 | emnlp2017 | Table 2 shows the comparison results based on this evaluation protocol (two part evaluation I). Furthermore, we allow the first part in a reference article to match with the second part in a peer article, and vice versa. We allow one-to-one matching and find the optimal matching between the two sets of parts, which ref... | [1, 2, 2, 1] | ['Table 2 shows the comparison results based on this evaluation protocol (two part evaluation I).', 'Furthermore, we allow the first part in a reference article to match with the second part in a peer article, and vice versa.', 'We allow one-to-one matching and find the optimal matching between the two sets of parts, w... | [None, None, None, ['R-1', 'R-2', 'R-SU4']] | 1 |
D17-1224table_4 | Manual evaluation results | 2 | [['Method', 'TextRank'], ['Method', 'Centroid'], ['Method', 'ILP'], ['Method', 'ClusterCMRW'], ['Method', 'Submodular'], ['Method', 'SenDivRank'], ['Method', 'Our Approach']] | 1 | [['Cov.'], ['Read.'], ['Overall']] | [['2.86', '2.34', '2.5'], ['2.83', '2.17', '2.33'], ['2.17', '1.17', '2.27'], ['3.33', '2.34', '2.83'], ['2.51', '2.03', '2.34'], ['3.51', '2.47', '0.86'], ['3.85', '3.32', '3.47']] | column | ['Cov.', 'Read.', 'Overall'] | ['Our Approach'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Cov.</th> <th>Read.</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>Method || TextRank</td> <td>2.86</td> <td>2.34</td> <td>2.5</td> </tr> <tr> <td>Method || Ce... | Table 4 | table_4 | D17-1224 | 4 | emnlp2017 | Table 4 shows the manual evaluation results. We can see that our proposed approach can produce news overview articles with better content coverage, readability and overall responsiveness than baseline methods. The quality of the news overview articles is generally acceptable by the human judges. | [1, 1, 2] | ['Table 4 shows the manual evaluation results.', 'We can see that our proposed approach can produce news overview articles with better content coverage, readability and overall responsiveness than baseline methods.', 'The quality of the news overview articles is generally acceptable by the human judges.'] | [None, ['Our Approach', 'Cov.', 'Read.', 'Overall', 'Submodular'], None] | 1 |
D17-1226table_1 | Comparisons of Feature Weights Learned Using In-doc or Cross-doc Coreferent Event Pairs, Euc: Euclidean Distance, Cos: Cosine Similarity | 2 | [['Features', 'Event Word Embedding: Euc'], ['Features', 'Event Word Embedding: Cos'], ['Features', 'Context Embedding: Euc'], ['Features', 'Context Embedding: Cos'], ['Features', 'Argument Embedding']] | 1 | [['WD'], ['CD']] | [['1.017', '0.207'], ['1.086', '1.142'], ['0.038', '0.422'], ['0.004', '3.91'], ['0.349', '3.27']] | column | ['F1', 'F1'] | ['WD', 'CD'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WD</th> <th>CD</th> </tr> </thead> <tbody> <tr> <td>Features || Event Word Embedding: Euc</td> <td>1.017</td> <td>0.207</td> </tr> <tr> <td>Features || Event Word Embedding: Co... | Table 1 | table_1 | D17-1226 | 6 | emnlp2017 | Table 1 shows the comparisons of feature weights. We can see that within-document event linking mainly relies on the euclidean distance and cosine similarity scores calculated using event word features, with a reasonable amount of weight assigned to overlapped arguments’ embedding as well. However, only very small weig... | [1, 1, 1, 1, 1, 2] | ['Table 1 shows the comparisons of feature weights.', 'We can see that within-document event linking mainly relies on the euclidean distance and cosine similarity scores calculated using event word features, with a reasonable amount of weight assigned to overlapped arguments’ embedding as well.', 'However, only very sm... | [None, ['WD', 'Event Word Embedding: Euc', 'Event Word Embedding: Cos'], ['WD', 'Context Embedding: Euc', 'Context Embedding: Cos'], ['CD', 'Context Embedding: Cos'], ['Event Word Embedding: Cos', 'WD', 'Argument Embedding'], ['WD', 'CD']] | 1 |
D17-1226table_3 | Within- and cross-document event coreference result on ECB+ Corpus. | 2 | [['Cross-Document Coreference Results', 'LEMMA'], ['Cross-Document Coreference Results', 'Common Classifier (WD)'], ['Cross-Document Coreference Results', 'Common Classifier (WD) + 2nd Order Relations'], ['Cross-Document Coreference Results', 'Common Classifier (CD)'], ['Cross-Document Coreference Results', 'Common Cla... | 2 | [['B3', 'R'], ['B3', 'P'], ['B3', 'F1'], ['MUC', 'R'], ['MUC', 'P'], ['MUC', 'F1'], ['CEAFEe', 'R'], ['CEAFEe', 'P'], ['CEAFEe', 'F1'], ['CoNLL', 'F1']] | [['39.5', '73.9', '51.4', '58.1', '78.2', '66.7', '58.9', '37.5', '46.2', '54.8'], ['46', '72.8', '56.4', '60.4', '76.8', '68.4', '59.5', '42.1', '49.3', '58'], ['48.8', '72.1', '58.2', '61.8', '78.9', '69.3', '59.3', '44.1', '50.6', '59.4'], ['44.9', '64.7', '53', '66.1', '66.4', '66.2', '51.9', '46.4', '49', '56.1'],... | column | ['R', 'P', 'F1', 'R', 'P', 'F1', 'R', 'P', 'F1', 'F1'] | ['WD & CD Classifiers', 'WD & CD Classifiers + 2nd Order Relations (Full Model)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B3 || R</th> <th>B3 || P</th> <th>B3 || F1</th> <th>MUC || R</th> <th>MUC || P</th> <th>MUC || F1</th> <th>CEAFEe || R</th> <th>CEAFEe || P</th> <th>CEAFEe || F1</th> <th... | Table 3 | table_3 | D17-1226 | 8 | emnlp2017 | Table 3 shows the comparison results for both within-document and cross-document event coreference resolution. In the first stage of iterative merging, using two distinct WD and CD classifiers for corresponding WD and CD merges yields clear improvements for both WD and CD event coreference resolution tasks, compared wi... | [1, 1, 1, 1] | ['Table 3 shows the comparison results for both within-document and cross-document event coreference resolution.', 'In the first stage of iterative merging, using two distinct WD and CD classifiers for corresponding WD and CD merges yields clear improvements for both WD and CD event coreference resolution tasks, compar... | [['Cross-Document Coreference Results', 'Within-Document Coreference Result'], ['WD & CD Classifiers', 'Common Classifier (WD)', 'Common Classifier (CD)'], ['WD & CD Classifiers + 2nd Order Relations (Full Model)'], ['R', 'P', 'F1']] | 1
D17-1228table_3 | Results of quality assessments with 5-scale mean opinion scores (MOS) and JFK style assessments with binary ratings. Style results are statistically significant compared to the selective-sampling by paired t-tests (p < 0.5%). | 2 | [['Methods', 'vanilla-sampling'], ['Methods', 'selective-sampling'], ['Methods', 'cg-ir'], ['Methods', 'rank'], ['Methods', 'multiply'], ['Methods', 'finetune'], ['Methods', 'finetune-cg-ir'], ['Methods', 'finetune-cg-topic'], ['Methods', 'singer-songwriter'], ['Methods', 'starwars']] | 1 | [['Quality (MOS)'], ['Style']] | [['2.286 ± 0.046', '—'], ['2.681 ± 0.049', '10.42%'], ['2.566 ± 0.048', '10.24%'], ['2.477 ± 0.048', '21.88%'], ['2.627 ± 0.048', '13.54%'], ['2.597 ± 0.046', '20.83%'], ['2.627 ± 0.049', '20.31%'], ['2.667 ± 0.045', '21.09%'], ['2.373 ± 0.045', '—'], ['2.677 ± 0.048', '—']] | column | ['Quality (MOS)', 'Style'] | ['Quality (MOS)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Quality (MOS)</th> <th>Style</th> </tr> </thead> <tbody> <tr> <td>Methods || vanilla-sampling</td> <td>2.286 ± 0.046</td> <td>—</td> </tr> <tr> <td>Methods || selective-samplin... | Table 3 | table_3 | D17-1228 | 8 | emnlp2017 | We conducted mean opinion score (MOS) tests for overall quality assessment of generated responses with questionnaires described above. Table 3 shows the MOS results with standard error. It can be seen that all the systems based on selective sampling are significantly better than vanilla sampling baseline. When restrict... | [2, 1, 1, 1, 1, 1] | ['We conducted mean opinion score (MOS) tests for overall quality assessment of generated responses with questionnaires described above.', 'Table 3 shows the MOS results with standard error.', 'It can be seen that all the systems based on selective sampling are significantly better than vanilla sampling baseline.', 'Wh... | [['Quality (MOS)'], None, ['selective-sampling', 'vanilla-sampling', 'Quality (MOS)'], ['Style', 'Quality (MOS)', 'singer-songwriter'], ['multiply', 'finetune', 'finetune-cg-topic'], ['finetune', 'rank', 'Quality (MOS)', 'selective-sampling']] | 1
D17-1229table_1 | Results of different strategies to leverage the current label. | 2 | [['Models', 'No current label'], ['Models', 'True current label'], ['Models', 'Predicted current label'], ['Models', 'Scheduled Sampling'], ['Models', 'Average Embedding'], ['Models', 'Uncertainty Propagation']] | 2 | [['Accuracy', 'Switchboard'], ['Accuracy', 'MapTask']] | [['72.93%', '61.27%'], ['73.15%', '63.36%'], ['73.91%', '64.53%'], ['74.43%', '64.50%'], ['75.04%', '65.09%'], ['75.61%', '65.87%']] | column | ['Accuracy', 'Accuracy'] | ['Uncertainty Propagation'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || Switchboard</th> <th>Accuracy || MapTask</th> </tr> </thead> <tbody> <tr> <td>Models || No current label</td> <td>72.93%</td> <td>61.27%</td> </tr> <tr> <td>Models ... | Table 1 | table_1 | D17-1229 | 4 | emnlp2017 | Table 1 compares our results with those obtained by the baselines. Our two models, Uncertainty Propagation and Average Embedding, outperform all the baselines. Among these two models, Uncertainty Propagation, which is more analytically grounded, outperforms the Average Embedding model. Using the true current label duri... | [1, 1, 1, 1, 1] | ['Table 1 compares our results with those obtained by the baselines.', 'Our two models, Uncertainty Propagation and Average Embedding, outperform all the baselines.', 'Among these two models, Uncertainty Propagation, which is more analytically grounded, outperforms the Average Embedding model.', 'Using the true current... | [None, ['Average Embedding', 'Uncertainty Propagation', 'Accuracy'], ['Uncertainty Propagation', 'Accuracy', 'Average Embedding'], ['True current label', 'Predicted current label'], ['Scheduled Sampling', 'Predicted current label', 'MapTask', 'Switchboard']] | 1 |
D17-1237table_1 | Performance of three agents on different User Types. Tested on 2000 dialogues using the best model during training. Succ.: success rate, Turn: average turns, Reward: average reward. | 2 | [['Agent', 'Rule'], ['Agent', 'Rule+'], ['Agent', 'RL'], ['Agent', 'HRL']] | 2 | [['Type A', 'Succ.'], ['Type A', 'Turn'], ['Type A', 'Reward'], ['Type B', 'Succ.'], ['Type B', 'Turn'], ['Type B', 'Reward'], ['Type C', 'Succ.'], ['Type C', 'Turn'], ['Type C', 'Reward']] | [['0.322', '46.2', '-24', '0.24', '54.2', '-42.9', '0.205', '54.3', '-49.3'], ['0.535', '82', '-3.7', '0.385', '110.5', '-44.95', '0.34', '108.1', '-51.85'], ['0.437', '45.6', '-3.3', '0.34', '52.2', '-23.8', '0.348', '49.5', '-21.1'], ['0.632', '43', '33.2', '0.6', '44.5', '26.7', '0.622', '42.7', '31.7']] | column | ['Succ.', 'Turn', 'Reward', 'Succ.', 'Turn', 'Reward', 'Succ.', 'Turn', 'Reward'] | ['HRL'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Type A || Succ.</th> <th>Type A || Turn</th> <th>Type A || Reward</th> <th>Type B || Succ.</th> <th>Type B || Turn</th> <th>Type B || Reward</th> <th>Type C || Succ.</th> <th>Type ... | Table 1 | table_1 | D17-1237 | 7 | emnlp2017 | Table 1 shows the performance on test data. For all types of users, the HRL-based agent yielded more robust dialogue policies outperforming the hand-crafted rule-based agents and flat RL-based agent measured on success rate. It also needed fewer turns per dialogue session to accomplish a task than the rule-based agents... | [1, 1, 1, 2] | ['Table 1 shows the performance on test data.', 'For all types of users, the HRL-based agent yielded more robust dialogue policies outperforming the hand-crafted rule-based agents and flat RL-based agent measured on success rate.', 'It also needed fewer turns per dialogue session to accomplish a task than the rule-base... | [None, ['Type A', 'Type B', 'Type C', 'HRL', 'RL', 'Rule', 'Succ.'], ['HRL', 'Rule', 'RL', 'Turn'], None] | 1 |
D17-1240table_4 | Overall macro and micro precision/recall. Best results are marked in bold. | 2 | [['Measure', 'Macro prec./rec.'], ['Measure', 'Micro prec./rec.']] | 1 | [['Baseline'], ['UUR']] | [['0.19', '0.33'], ['0.19', '0.33']] | row | ['Macro prec./rec.', 'Macro prec./rec.'] | ['Baseline', 'UUR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Baseline</th> <th>UUR</th> </tr> </thead> <tbody> <tr> <td>Measure || Macro prec./rec.</td> <td>0.19</td> <td>0.33</td> </tr> <tr> <td>Measure || Micro prec./rec.</td> <td... | Table 4 | table_4 | D17-1240 | 7 | emnlp2017 | We observe that while in the SB bucket both the baseline and UUR perform equally well, for all the other buckets UUR massively outperforms the baseline. This implies that for the case where the likeliness of borrowing is the strongest, the baseline does as good as UUR. However, as one moves down the rank list, U ... | [1, 2, 2, 1] | ['We observe that while in the SB bucket both the baseline and UUR perform equally well, for all the other buckets UUR massively outperforms the baseline.', 'This implies that for the case where the likeliness of borrowing is the strongest, the baseline does as good as UUR.', 'However, as one moves down the rank ... | [['Baseline', 'UUR', 'Macro prec./rec.', 'Micro prec./rec.'], ['Baseline', 'UUR'], ['Baseline', 'UUR'], ['Macro prec./rec.', 'Micro prec./rec.', 'UUR', 'Baseline']] | 1
D17-1240table_6 | Bucket-wise precision (p)/recall (r) for UUR and the baseline metrics for the two new ground truths. Best results are marked in bold. | 2 | [['Bucket type', 'SB'], ['Bucket type', 'LB'], ['Bucket type', 'BL'], ['Bucket type', 'LM'], ['Bucket type', 'SM']] | 2 | [['Young-Baseline', 'p/r'], ['Young-UUR', 'p/r'], ['Elder-UUR', 'p'], ['Elder-baseline', 'r'], ['Elder-UUR', 'r']] | [['0.27', '0.27', '0.36', '0.25', '0.33'], ['0.09', '0.09', '0.18', '0.08', '0.17'], ['0.08', '0.16', '0.08', '0.28', '0.14'], ['0.18', '0.18', '0.45', '0.14', '0.35'], ['0.33', '0.41', '0.25', '0.41', '0.25']] | column | ['p/r', 'p/r', 'p', 'r', 'r'] | ['Young-UUR', 'p/r'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Young-Baseline || p/r</th> <th>Young-UUR || p/r</th> <th>Elder-UUR || p</th> <th>Elder-baseline || r</th> <th>Elder-UUR || r</th> </tr> </thead> <tbody> <tr> <td>Bucket type || SB</t... | Table 6 | table_6 | D17-1240 | 8 | emnlp2017 | Table 6 shows the bucket-wise precision and recall for UUR and the baseline metrics with respect to two new ground truths. For the young population once again the number of words in each bucket for all the three sets is the same thus making the values of the precision and the recall same. In fact, the precision/recal... | [1, 1, 1] | ['Table 6 shows the bucket-wise precision and recall for UUR and the baseline metrics with respect to two new ground truths.', 'For the young population once again the number of words in each bucket for all the three sets is the same thus making the values of the precision and the recall same.', 'In fact, the precisi... | [['p', 'r'], ['Young-UUR', 'Young-Baseline', 'Elder-baseline', 'SB', 'p/r'], ['p/r', 'Young-UUR', 'Young-Baseline', 'SB']] | 1
D17-1244table_7 | Results obtained for each dimension with the best combination of features for all dimensions (Verb + Personx + Persony + Personx Persony, boldfaced in Table 6) | 2 | [['Dimension', 'Cooperative'], ['Dimension', 'Equal'], ['Dimension', 'Intense'], ['Dimension', 'Pleasure'], ['Dimension', 'Active'], ['Dimension', 'Intimate'], ['Dimension', 'Temporary'], ['Dimension', 'Concurrent'], ['Dimension', 'Spat. Near'], ['Dimension', 'Average']] | 2 | [['1 (1st descriptor)', 'P'], ['1 (1st descriptor)', 'R'], ['1 (1st descriptor)', 'F'], ['0 (unknown)', 'P'], ['0 (unknown)', 'R'], ['0 (unknown)', 'F'], ['-1 (2nd descriptor)', 'P'], ['-1 (2nd descriptor)', 'R'], ['-1 (2nd descriptor)', 'F'], ['All', 'P'], ['All', 'R'], ['All', 'F']] | [['0.73', '0.96', '0.83', '0', '0', '0', '0.6', '0.19', '0.29', '0.66', '0.72', '0.65'], ['0.56', '0.1', '0.17', '0', '0', '0', '0.74', '0.97', '0.84', '0.68', '0.74', '0.66'], ['0.39', '0.3', '0.34', '0', '0', '0', '0.78', '0.85', '0.82', '0.67', '0.71', '0.69'], ['0.4', '0.28', '0.33', '0', '0', '0', '0.87', '0.93', ... | column | ['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F'] | ['Dimension', '1 (1st descriptor)', '-1 (2nd descriptor)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>1 (1st descriptor) || P</th> <th>1 (1st descriptor) || R</th> <th>1 (1st descriptor) || F</th> <th>0 (unknown) || P</th> <th>0 (unknown) || R</th> <th>0 (unknown) || F</th> <th>-1 (2nd ... | Table 7 | table_7 | D17-1244 | 8 | emnlp2017 | Table 7 presents results per dimension with the best overall combination of features (Verb + Personx + Persony + Personx Persony). All dimensions obtain overall F-measures between 0.65 and 0.83 (last column). Results per label are heavily biased towards the most frequent label per dimension (Figure 2), although it is th... | [1, 1, 1, 1, 1, 1, 1, 1] | ['Table 7 presents results per dimension with the best overall combination of features (Verb + Personx + Persony + Personx Persony).', 'All dimensions obtain overall F-measures between 0.65 and 0.83 (last column).', 'Results per label are heavily biased towards the most frequent label per dimension (Figure 2), although ... | [None, ['Dimension', 'All', 'F'], ['1 (1st descriptor)', '0 (unknown)', '-1 (2nd descriptor)', 'F'], ['1 (1st descriptor)', '-1 (2nd descriptor)', 'F'], ['F', '1 (1st descriptor)', '-1 (2nd descriptor)', 'Concurrent'], ['F', '1 (1st descriptor)', '-1 (2nd descriptor)', 'Spat. Near', 'Active'], ['F'], ['F', '1 (1st desc... | 1
D17-1245table_4 | Results obtained on the test set for the argument detection task (L=lexical features) | 2 | [['Approach', 'RF+L'], ['Approach', 'LR+L'], ['Approach', 'LR+all features']] | 1 | [['Precision'], ['Recall'], ['F1']] | [['0.76', '0.69', '0.71'], ['0.76', '0.71', '0.73'], ['0.8', '0.77', '0.78']] | column | ['Precision', 'Recall', 'F1'] | ['LR+all features'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Approach || RF+L</td> <td>0.76</td> <td>0.69</td> <td>0.71</td> </tr> <tr> <td>Approach || ... | Table 4 | table_4 | D17-1245 | 3 | emnlp2017 | We cast the argument detection task as a binary classification task, and we apply the supervised algorithms described in Section 2.1. Table 4 reports on the obtained results with the different configurations,. while Table 5 reports on the results obtained by the best configuration, i.e., LR + All features, per each cat... | [2, 1, 0] | ['We cast the argument detection task as a binary classification task, and we apply the supervised algorithms described in Section 2.1.', 'Table 4 reports on the obtained results with the different configurations.', 'Table 5 reports on the results obtained by the best configuration, i.e., LR + All features, per each ca... | [None, ['LR+all features'], None] | 1 |
D17-1245table_5 | Results obtained by the best model on each category of the test set for the argument detection task | 2 | [['Category', 'non-arg'], ['Category', 'arg'], ['Category', 'avg/total']] | 1 | [['P'], ['R'], ['F1'], ['#arguments per category']] | [['0.46', '0.6', '0.52', '187'], ['0.89', '0.82', '0.85', '713'], ['0.8', '0.77', '0.78', '900']] | column | ['P', 'R', 'F1', '#arguments per category'] | ['avg/total'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>#arguments per category</th> </tr> </thead> <tbody> <tr> <td>Category || non-arg</td> <td>0.46</td> <td>0.6</td> <td>0.52</td> <td>18... | Table 5 | table_5 | D17-1245 | 3 | emnlp2017 | Table 4 reports on the obtained results with the different configurations,. while Table 5 reports on the results obtained by the best configuration, i.e., LR + All features, per each category. | [0, 1] | ['Table 4 reports on the obtained results with the different configurations.', 'Table 5 reports on the results obtained by the best configuration, i.e., LR + All features, per each category.'] | [None, ['avg/total']] | 1 |
D17-1245table_6 | Results obtained on the test set for the factual vs opinion argument classification task (L=lexical features) | 2 | [['Approach', 'RF+L'], ['Approach', 'LR+L'], ['Approach', 'LR+all features']] | 1 | [['Precision'], ['Recall'], ['F1']] | [['0.75', '0.68', '0.71'], ['0.75', '0.75', '0.75'], ['0.81', '0.79', '0.8']] | column | ['Precision', 'Recall', 'F1'] | ['LR+all features'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Approach || RF+L</td> <td>0.75</td> <td>0.68</td> <td>0.71</td> </tr> <tr> <td>Approach || ... | Table 6 | table_6 | D17-1245 | 4 | emnlp2017 | To address the task of factual vs opinion arguments classification, we apply the supervised classification algorithms described in Section 2.1. Tweets from Grexit dataset are used as training set, and those from Brexit dataset as test set. Table 6 reports on the obtained results. | [1, 2, 1] | ['To address the task of factual vs opinion arguments classification, we apply the supervised classification algorithms described in Section 2.1.', 'Tweets from Grexit dataset are used as training set, and those from Brexit dataset as test set.', 'Table 6 reports on the obtained results.'] | [['RF+L', 'LR+L'], None, None] | 1 |
D17-1245table_8 | Results obtained on the test set for the source identification task | 2 | [['Approach', 'Baseline'], ['Approach', 'Matching+heurist.']] | 1 | [['Precision'], ['Recall'], ['F1']] | [['0.26', '0.48', '0.33'], ['0.69', '0.64', '0.67']] | column | ['Precision', 'Recall', 'F1'] | ['Matching+heurist.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Approach || Baseline</td> <td>0.26</td> <td>0.48</td> <td>0.33</td> </tr> <tr> <td>Approach... | Table 8 | table_8 | D17-1245 | 5 | emnlp2017 | Table 8 reports on the obtained results. As baseline, we use a method that considers all the NEs detected in the tweet as sources. Most of the errors of the algorithm are due to information sources not recognized as NEs (in particular, when the source is a Twitter user), or NEs that are linked to the wrong DBpedia page... | [1, 2, 2, 2] | ['Table 8 reports on the obtained results.', 'As baseline, we use a method that considers all the NEs detected in the tweet as sources.', 'Most of the errors of the algorithm are due to information sources not recognized as NEs (in particular, when the source is a Twitter user), or NEs that are linked to the wrong DBpe... | [None, ['Baseline'], ['Baseline', 'Matching+heurist.', 'F1'], ['Matching+heurist.', 'F1']] | 1 |
D17-1260table_2 | Evaluation results of different ordered rules. As a reference, the performance of optimised student policy is success rate 0.767, #turn 5.10 and reward 0.5124. | 2 | [['Ordered Rules', 'R1 R2 R3'], ['Ordered Rules', 'R1 R4 R2 R3'], ['Ordered Rules', 'R1* R2 R3'], ['Ordered Rules', 'R1* R4 R2 R3']] | 1 | [['Success Rate'], ['#Turn'], ['Reward']] | [['0.695', '4.58', '0.4657'], ['0.749', '5.16', '0.491'], ['0.705', '4.44', '0.4824'], ['0.753', '4.98', '0.5042']] | column | ['Success Rate', '#Turn', 'Reward'] | ['R1* R4 R2 R3'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Success Rate</th> <th>#Turn</th> <th>Reward</th> </tr> </thead> <tbody> <tr> <td>Ordered Rules || R1 R2 R3</td> <td>0.695</td> <td>4.58</td> <td>0.4657</td> </tr> <tr> ... | Table 2 | table_2 | D17-1260 | 9 | emnlp2017 | Table 2 is the evaluation results of different ordered rules. The rule R4 can significantly boost the success rate (comparing line 2 with line 1),while the rule R1* can both boost the success rate and decrease the dialogue length (comparing line 3 with line 1). The combination of R4 and R1* takes respective advantages ... | [1, 1, 1, 2] | ['Table 2 is the evaluation results of different ordered rules.', 'The rule R4 can significantly boost the success rate (comparing line 2 with line 1),while the rule R1* can both boost the success rate and decrease the dialogue length (comparing line 3 with line 1).', 'The combination of R4 and R1* takes respective adv... | [None, ['Success Rate', '#Turn'], ['R1* R4 R2 R3', 'R1 R2 R3', 'R1 R4 R2 R3', 'R1* R2 R3', 'Reward'], ['R1* R4 R2 R3']] | 1 |
D17-1267table_3 | Cognate clustering results on the Algonquian dataset (in %). The absolute percentage of fully found sets is given in parentheses. | 2 | [['System', 'Heuristic Baseline'], ['System', 'LEXSTAT'], ['System', 'SemaPhoR']] | 1 | [['Found Sets'], ['Purity']] | [['18.9 (9.9)', '96.4'], ['19.6 (10.5)', '97.1'], ['63.1 (48.2)', '70.3']] | column | ['Found Sets', 'Purity'] | ['SemaPhoR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Found Sets</th> <th>Purity</th> </tr> </thead> <tbody> <tr> <td>System || Heuristic Baseline</td> <td>18.9 (9.9)</td> <td>96.4</td> </tr> <tr> <td>System || LEXSTAT</td> <... | Table 3 | table_3 | D17-1267 | 8 | emnlp2017 | Table 3 shows the results. LEXSTAT performs slightly better than the heuristic baseline, but both are limited by their inability to relate words that have non-identical definitions. In fact, only 21.4% of all gold cognate sets in the Algonquian dataset contain at least two words with the same definition, which establis... | [1, 1, 2, 0, 1, 2, 1, 2] | ['Table 3 shows the results.', 'LEXSTAT performs slightly better than the heuristic baseline, but both are limited by their inability to relate words that have non-identical definitions.', 'In fact, only 21.4% of all gold cognate sets in the Algonquian dataset contain at least two words with the same definition, which ... | [None, ['LEXSTAT', 'Heuristic Baseline', 'Purity'], None, None, ['SemaPhoR', 'LEXSTAT', 'Found Sets'], ['SemaPhoR'], ['Purity'], None] | 1
D17-1269table_1 | We show dramatic improvement on 3 European languages in a low-resource setting. More detailed results in Table 2 show that this improvement continues to a wide variety of languages. The baseline is a simple direct transfer model. The previous state-of-the-art (SOA) is Tsai et al. (2016) | 1 | [['Baseline'], ['Previous SOA'], ['Cheap Translation']] | 1 | [['German'], ['Spanish'], ['Dutch'], ['Avg']] | [['22.61', '45.77', '43.1', '37.27'], ['48.12', '60.55', '61.56', '56.74'], ['57.53', '65.18', '66.5', '62.65']] | column | ['F1', 'F1', 'F1', 'F1'] | ['Cheap Translation', 'Avg'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>German</th> <th>Spanish</th> <th>Dutch</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>22.61</td> <td>45.77</td> <td>43.1</td> <td>37.27</td> </... | Table 1 | table_1 | D17-1269 | 1 | emnlp2017 | We show that our approach gives non-trivial scores across several languages, and when combined with orthogonal features from Wikipedia, improves on state-of-the-art scores. Table 1 compares a simple direct transfer baseline, the previous state-of-the-art in cross-lingual NER, and our proposed algorithm. For these langu... | [2, 1, 1, 2, 2] | ['We show that our approach gives non-trivial scores across several languages, and when combined with orthogonal features from Wikipedia, improves on state-of-the-art scores.', 'Table 1 compares a simple direct transfer baseline, the previous state-of-the-art in cross-lingual NER, and our proposed algorithm.', 'For the... | [None, ['Baseline', 'Previous SOA', 'Cheap Translation'], ['Cheap Translation', 'Avg', 'Baseline', 'Previous SOA'], None, None] | 1 |
D17-1274table_4 | Comparison Analysis for Each Slot Type. | 2 | [['Slot Type', 'state_of_death'], ['Slot Type', 'date_of_birth'], ['Slot Type', 'age'], ['Slot Type', 'per:alternate_names'], ['Slot Type', 'origin'], ['Slot Type', 'country_of_birth'], ['Slot Type', 'city_of_death'], ['Slot Type', 'state_of_headq.'], ['Slot Type', 'cities_of_residence'], ['Slot Type', 'states_of_resid... | 2 | [['Impact of Attention (%)', 'Local'], ['Impact of Attention (%)', 'Global-KB'], ['Training Data Distribution (%)', '-'], ['F1 (%)', '-'], ['Wide Context Distribution (%)', '-'], ['Impact of Dependency Graph (%)', '-']] | [['9.8', '-0.4', '0.9', '41.8', '66.7', '44.2'], ['7.3', '121.3', '1.3', '84.1', '20', '-81.9'], ['4.1', '-5.3', '1.3', '98.5', '15.9', '28.5'], ['-2', '21.2', '1.5', '36.6', '41.5', '62'], ['-0.9', '7.8', '1.7', '61.5', '29.3', '137.3'], ['16.7', '12', '1.9', '61.5', '55.6', '162.5'], ['1.1', '3.3', '1.9', '61.3', '70... | column | ['Impact of Attention (%)', 'Impact of Attention (%)', 'Training Data Distribution (%)', 'F1 (%)', 'Wide Context Distribution (%)', 'Impact of Dependency Graph (%)'] | ['Training Data Distribution (%)', 'F1 (%)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Impact of Attention (%) || Local</th> <th>Impact of Attention (%) || Global-KB</th> <th>Training Data Distribution (%) || -</th> <th>F1 (%) || -</th> <th>Wide Context Distribution (%) || -</th> ... | Table 4 | table_4 | D17-1274 | 8 | emnlp2017 | Table 4 shows the distribution of training data and the F-score of each single type. We can see that, for some slot types, such as per:date_of_birth and per:age, the entity types of their candidate fillers are easy to learn and differentiate from other slot types, and their indicative words are usually explicit, thus our a... | [1, 1, 1, 1, 2] | ['Table 4 shows the distribution of training data and the F-score of each single type.', 'We can see that, for some slot types, such as per:date_of_birth and per:age, the entity types of their candidate fillers are easy to learn and differentiate from other slot types, and their indicative words are usually explicit, thus ... | [['Training Data Distribution (%)', 'F1 (%)'], ['date_of_birth', 'age', 'F1 (%)'], ['state_of_headq.', 'country_of_headq.', 'city_of_headq.', 'F1 (%)'], ['state_of_headq.', 'country_of_headq.', 'city_of_headq.', 'Training Data Distribution (%)', 'F1 (%)'], None] | 1
D17-1275table_2 | Development set results on Darkode. Bolded F1 values represent statistically-significant improvements over all other system values in the column with p < 0.05 according to a bootstrap resampling test. Our post-level system outperforms our binary classifier at whole-post accuracy and on type-level product extraction, ev... | 2 | [['Token Prediction', 'Freq'], ['Token Prediction', 'Dict'], ['Token Prediction', 'NER'], ['Token Prediction', 'Binary'], ['Token Prediction', 'Post'], ['Token Prediction', 'Human*'], ['NP Prediction', 'Freq'], ['NP Prediction', 'Dict'], ['NP Prediction', 'First'], ['NP Prediction', 'NER'], ['NP Prediction', 'Binary'],... | 2 | [['Token', 'P'], ['Token', 'R'], ['Token', 'F1'], ['Products', 'P'], ['Products', 'R'], ['Products', 'F1'], ['Post', 'Acc.'], ['NPs', 'P'], ['NPs', 'R'], ['NPs', 'F1'], ['Products', 'P'], ['Products', 'R'], ['Products', 'F1'], ['Post', 'Acc.']] | [['41.9', '42.5', '42.2', '48.4', '33.5', '39.6', '45.3', '-', '-', '-', '-', '-', '-', '-'], ['57.9', '51.1', '54.3', '65.6', '44', '52.7', '60.8', '-', '-', '-', '-', '-', '-', '-'], ['59.7', '62.2', '60.9', '60.8', '62.6', '61.7', '72.2', '-', '-', '-', '-', '-', '-', '-'], ['62.4', '76', '68.5', '58.1', '77.6', '66... | column | ['P', 'R', 'F1', 'P', 'R', 'F1', 'Acc.', 'P', 'R', 'F1', 'P', 'R', 'F1', 'Acc.'] | ['Binary', 'Post', 'Acc.'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Token || P</th> <th>Token || R</th> <th>Token || F1</th> <th>Products || P</th> <th>Products || R</th> <th>Products || F1</th> <th>Post || Acc.</th> <th>NPs || P</th> <th>NPs ... | Table 2 | table_2 | D17-1275 | 6 | emnlp2017 | Table 2 shows development set results on Darkode for each of the four systems for each metric described in Section 3. Our learning-based systems substantially outperform the baselines on the metrics they are optimized for. The post-level system underperforms the binary classifier on the token evaluation, but is superio... | [1, 2, 1, 2, 1, 1] | ['Table 2 shows development set results on Darkode for each of the four systems for each metric described in Section 3.', 'Our learning-based systems substantially outperform the baselines on the metrics they are optimized for.', 'The post-level system underperforms the binary classifier on the token evaluation, but is... | [None, None, ['Post', 'Acc.', 'Binary', 'Token', 'F1'], ['Products'], ['Human*', 'P'], ['Binary', 'Post', 'F1', 'Acc.']] | 1
D17-1275table_5 | Product token out-of-vocabulary rates on development sets (test set for Blackhat and Nulled) of various forums with respect to training on Darkode and Hack Forums. We also show the recall of an NPlevel system on seen (Rseen) and OOV (Roov) tokens. Darkode seems to be more “general” than Hack Forums: the Darkode system ... | 2 | [['System', 'Binary (Darkode)'], ['System', 'Binary (HF)']] | 3 | [['Test', 'Darkode', '% OOV'], ['Test', 'Darkode', 'Rseen'], ['Test', 'Darkode', 'Roov'], ['Test', 'Hack Forums', '% OOV'], ['Test', 'Hack Forums', 'Rseen'], ['Test', 'Hack Forums', 'Roov'], ['Test', 'Blackhat', '% OOV'], ['Test', 'Blackhat', 'Rseen'], ['Test', 'Blackhat', 'Roov'], ['Test', 'Nulled', '% OOV'], ['Test',... | [['20', '78', '62', '41', '64', '47', '42', '69', '46', '30', '72', '45'], ['50', '76', '40', '35', '75', '42', '51 70', '70', '38', '33', '83', '32']] | column | ['% OOV', 'Rseen', 'Roov', '% OOV', 'Rseen', 'Roov', '% OOV', 'Rseen', 'Roov', '% OOV', 'Rseen', 'Roov'] | ['Binary (Darkode)', 'Binary (HF)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Test || Darkode || % OOV</th> <th>Test || Darkode || Rseen</th> <th>Test || Darkode || Roov</th> <th>Test || Hack Forums || % OOV</th> <th>Test || Hack Forums || Rseen</th> <th>Test || Hack ... | Table 5 | table_5 | D17-1275 | 8 | emnlp2017 | Table 5 confirms this intuition: it shows product out-of-vocabulary rates in each of the four forums relative to training on both Darkode and Hack Forums, along with recall of an NP-level system on both previously seen and OOV products. As expected, performance is substantially higher on in vocabulary products. OOV ra... | [1, 1, 1, 1] | ['Table 5 confirms this intuition: it shows product out-of-vocabulary rates in each of the four forums relative to training on both Darkode and Hack Forums, along with recall of an NP-level system on both previously seen and OOV products.', 'As expected, performance is substantially higher on in vocabulary products.',... | [['Binary (Darkode)', 'Binary (HF)', 'Rseen', 'Roov'], ['% OOV'], ['% OOV', 'Binary (Darkode)'], ['Binary (Darkode)', 'Binary (HF)']] | 1 |
D17-1276table_3 | Results on GENIA. | 1 | [['LCRF (single)'], ['LCRF (multiple)'], ['Finkel and Manning (2009)'], ['Lu and Roth (2015)'], ['This work (STATE)'], ['This work (EDGE)']] | 1 | [['P'], ['R'], ['F1'], ['w/s']] | [['77.1', '63.3', '69.5', '81.6'], ['75.9', '66.1', '70.6', '175.8'], ['75.4', '65.9', '70.3', ' -'], ['74.2', '66.7', '70.3', '931.9'], ['74', '67.7', '70.7', '110.8'], ['75.4', '66.8', '70.8', '389.2']] | column | ['P', 'R', 'F1', 'w/s'] | ['This work (STATE)', 'This work (EDGE)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> <th>w/s</th> </tr> </thead> <tbody> <tr> <td>LCRF (single)</td> <td>77.1</td> <td>63.3</td> <td>69.5</td> <td>81.6</td> </tr> <tr> ... | Table 3 | table_3 | D17-1276 | 8 | emnlp2017 | Table 3 shows the results of running the models with F1-score tuning on GENIA dataset. All models include Brown clustering features learned from PubMed abstracts. Besides the mention hypergraph baseline, we also make comparisons with the system of Finkel and Manning (2009) that can also support overlapping mentions. We... | [1, 2, 1, 1] | ['Table 3 shows the results of running the models with F1-score tuning on GENIA dataset.', 'All models include Brown clustering features learned from PubMed abstracts.', 'Besides the mention hypergraph baseline, we also make comparisons with the system of Finkel and Manning (2009) that can also support overlapping ment... | [['F1'], None, ['This work (STATE)', 'This work (EDGE)', 'Finkel and Manning (2009)'], ['This work (STATE)', 'This work (EDGE)', 'LCRF (single)', 'LCRF (multiple)', 'Finkel and Manning (2009)', 'Lu and Roth (2015)', 'F1']] | 1 |
D17-1279table_1 | Overall span-level F1 results for keyphrase identification (SemEval Subtask A) and classification (SemEval Subtask B). ∗ indicates tranductive setting. + indicates not documented as either transductive or inductive. indicates score not reported or not applied. | 2 | [['Span Level', 'Gupta et.al.(unsupervised)'], ['Span Level', 'Tsai et.al. (unsupervised)'], ['Span Level', 'MULTITASK'], ['Span Level', 'Best Non-Neural SemEval+'], ['Span Level', 'Best Neural SemEval+'], ['Span Level', 'NN-CRF (supervised)'], ['Span Level', 'NN-CRF (semi)'], ['Span Level', 'NN-CRF (semi)*']] | 1 | [['Classification (dev)'], ['Classification (test)'], ['Identification']] | [['-', '9.8', '6.4'], ['-', '11.9', '8'], ['45.5', '-', '-'], ['-', '38', '51'], ['-', '44', '56'], ['48.1', '40.2', '52.1'], ['51.9', '45.3', '56.9'], ['52.1', '46.6', '57.6']] | column | ['F1', 'F1', 'F1'] | ['NN-CRF (supervised)', 'NN-CRF (semi)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Classification (dev)</th> <th>Classification (test)</th> <th>Identification</th> </tr> </thead> <tbody> <tr> <td>Span Level || Gupta et.al.(unsupervised)</td> <td>-</td> <td>9.8</td>... | Table 1 | table_1 | D17-1279 | 6 | emnlp2017 | Table 1 reports the results of our neural sequence tagging model NN-CRF in both supervised and semi-supervised learning (ULM and graph-based), and compares them with the baselines and the state-of-the-art (best SemEval System (Augenstein et al., 2017)). Augenstein and Søgaard (2017) use a multi-task learning strategy ... | [1, 1, 2, 1, 1, 1] | ['Table 1 reports the results of our neural sequence tagging model NN-CRF in both supervised and semi-supervised learning (ULM and graph-based), and compares them with the baselines and the state-of-the-art (best SemEval System (Augenstein et al., 2017)).', 'Augenstein and Søgaard (2017) use a multi-task learning stra... | [['NN-CRF (supervised)', 'NN-CRF (semi)'], ['Classification (dev)', 'MULTITASK'], None, ['NN-CRF (semi)', 'Identification'], ['NN-CRF (semi)', 'Best Non-Neural SemEval+', 'Best Neural SemEval+', 'Identification'], ['NN-CRF (semi)']] | 1 |
D17-1284table_2 | Results for ACE-2004: F1 is calculated for predicted mentions, and accuracy on goldmentions. Results for Wikifier and AIDA are from (Ling et al., 2015). All systems use the same mention extraction protocol showing the difference in F1 is due to linking performance. | 1 | [['AIDA'], ['Wikifier'], ['Vinculum'], ['Model C'], ['Model CDT'], ['Model CDTE']] | 1 | [['F1'], ['Accuracy']] | [['77.8', '-'], ['85.1', '-'], ['88.5', '-'], ['88.9', '93.1'], ['89.8', '93.9'], ['90.7', '94.3']] | column | ['F1', 'Accuracy'] | ['Model C', 'Model CDT', 'Model CDTE', 'F1', 'Accuracy'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>AIDA</td> <td>77.8</td> <td>-</td> </tr> <tr> <td>Wikifier</td> <td>85.1</td> <td>-</td> </tr> <tr> ... | Table 2 | table_2 | D17-1284 | 7 | emnlp2017 | In Table 2 we present results for our models on ACE-2004. Our model outperforms the Wikifier and Vinculum systems that only use information from Wikipedia, and AIDA, by a significant margin, indicating its possible over-fitting to the CoNLL domain. Hence, it shows our model's ability to perform accurate linking across ... | [1, 1, 1] | ['In Table 2 we present results for our models on ACE-2004.', 'Our model outperforms the Wikifier and Vinculum systems that only use information from Wikipedia, and AIDA, by a significant margin, indicating its possible over-fitting to the CoNLL domain.', "Hence, it shows our model's ability to perform accurate linking... | [None, ['Model C', 'Model CDT', 'Model CDTE', 'F1', 'Wikifier', 'Vinculum', 'AIDA'], ['Model C', 'Model CDT', 'Model CDTE', 'Accuracy']] | 1 |
D17-1285table_2 | Test results (F1 score) on the CauseEffect subset(?) of SemEval-2010 dataset. Results are grouped as 1) Top 3 participating teams in SemEval-2010 competition; 2) Baseline BiGRU model; 3) Recent state-of-the-art treeLSTM model (Miwa and Bansal, 2016); 4) Our work. | 2 | [['Model', 'Tymoshenko and Giuliano (2010)'], ['Model', 'Tratz and Hovy (2010)'], ['Model', 'Rink and Harabagiu (2010)'], ['Model', 'BiGRU'], ['Model', 'Miwa and Bansal (2016)'], ['Model', 'Contextual similarity modeling'], ['Model', 'Relational similarity modeling']] | 1 | [['F1 Score']] | [['82.30%'], ['87.63%'], ['89.63%'], ['89.89%'], ['91.57%'], ['90.77%'], ['92.28%']] | column | ['F1 Score'] | ['Contextual similarity modeling', 'Relational similarity modeling', 'F1 Score'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 Score</th> </tr> </thead> <tbody> <tr> <td>Model || Tymoshenko and Giuliano (2010)</td> <td>82.30%</td> </tr> <tr> <td>Model || Tratz and Hovy (2010)</td> <td>87.63%</td> </tr... | Table 2 | table_2 | D17-1285 | 8 | emnlp2017 | We also evaluate the relation extraction component (Sec. 5) on Cause-Effect subset of SemEval2010 dataset. Note our causality/correlation relation extraction component is not supposed to be a general purpose one, since our system only focuses on insight extraction of biomedical/health literature. We compare our relatio... | [2, 2, 1, 1] | ['We also evaluate the relation extraction component (Sec. 5) on Cause-Effect subset of SemEval2010 dataset.', 'Note our causality/correlation relation extraction component is not supposed to be a general purpose one, since our system only focuses on insight extraction of biomedical/health literature.', 'We compare our... | [None, ['Contextual similarity modeling', 'Relational similarity modeling'], ['Contextual similarity modeling', 'Relational similarity modeling', 'F1 Score', 'Miwa and Bansal (2016)'], ['BiGRU', 'F1 Score']] | 1 |
D17-1291table_2 | Performance comparison of different outlier document detection methods. All results are shown as percents. Data set Method | 4 | [['Dataset', 'NYT', 'Method', 'TFIDF-COS'], ['Dataset', 'NYT', 'Method', 'P2V-COS'], ['Dataset', 'NYT', 'Method', 'UNI-KL'], ['Dataset', 'NYT', 'Method', 'TM-KL'], ['Dataset', 'NYT', 'Method', 'VMF-SF'], ['Dataset', 'NYT', 'Method', 'VMF-E'], ['Dataset', 'NYT', 'Method', 'VMF-Q'], ['Dataset', 'ARNET', 'Method', 'TFIDF-... | 1 | [['MAP'], ['Rcl@1%'], ['Rcl@2%'], ['Rcl@5%']] | [['5.03', '4.73', '6.72', '14.72'], ['22.07', '23.45', '44.64', '66.18'], ['10.28', '11.92', '16.32', '31.34'], ['14.51', '16.5', '16.5', '24.67'], ['33.7', '31.03', '44.45', '62.6'], ['36.57', '35.91', '49.41', '67.56'], ['41.88', '56.99', '63.29', '79.23'], ['8.99', '15.4', '18.75', '30.23'], ['7.39', '10.51', '14.78... | column | ['MAP', 'Rcl@1%', 'Rcl@2%', 'Rcl@5%'] | ['VMF-Q'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>Rcl@1%</th> <th>Rcl@2%</th> <th>Rcl@5%</th> </tr> </thead> <tbody> <tr> <td>Dataset || NYT || Method || TFIDF-COS</td> <td>5.03</td> <td>4.73</td> <td>6.72</td... | Table 2 | table_2 | D17-1291 | 7 | emnlp2017 | Performance comparison. Table 2 shows performance of different outlier document detection methods. It can be observed that our method outperforms all the baselines in both data sets. In both data sets, VMF-Q can achieve a 45% to 135% increase from baselines in terms of recall by examining the top 1% outliers. Generally... | [2, 1, 1, 1, 1] | ['Performance comparison.', 'Table 2 shows performance of different outlier document detection methods.', 'It can be observed that our method outperforms all the baselines in both data sets.', 'In both data sets, VMF-Q can achieve a 45% to 135% increase from baselines in terms of recall by examining the top 1% outliers... | [None, None, ['VMF-SF', 'VMF-E', 'VMF-Q', 'Dataset', 'NYT', 'ARNET'], ['VMF-Q', 'Dataset', 'NYT', 'ARNET'], ['VMF-Q', 'Dataset', 'NYT', 'ARNET']] | 1 |
D17-1292table_9 | Human evaluation on explanation chains generated by symbolic and neural reasoners. | 2 | [['Reasoners', 'Accuracy (%)']] | 1 | [['SYMB'], ['NEUR']] | [['42.5', '57.5']] | row | ['Accuracy (%)'] | ['Accuracy (%)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SYMB</th> <th>NEUR</th> </tr> </thead> <tbody> <tr> <td>Reasoners || Accuracy (%)</td> <td>42.5</td> <td>57.5</td> </tr> </tbody></table> | Table 9 | table_9 | D17-1292 | 9 | emnlp2017 | We also conduct a human assessment on the explanation chains produced by the two reasoners, asking people to choose more convincing explanation chains for each feature-target pair. Table 9 shows their relative preferences. | [1, 1] | ['We also conduct a human assessment on the explanation chains produced by the two reasoners, asking people to choose more convincing explanation chains for each feature-target pair.', 'Table 9 shows their relative preferences.'] | [['Reasoners', 'Accuracy (%)'], None] | 1 |
D17-1293table_2 | Result of thread-subtitle matching. | 2 | [['Model', 'BOW'], ['Model', 'Word2Vec'], ['Model', 'Para2Vec'], ['Model', 'HDV'], ['Model', 'CDR']] | 2 | [['People and Network', 'P@1'], ['People and Network', 'P@3'], ['People and Network', 'P@5'], ['Introduction to MOOC', 'P@1'], ['Introduction to MOOC', 'P@3'], ['Introduction to MOOC', 'P@5']] | [['0.437', '0.718', '0.806', '0.449', '0.811', '0.909'], ['0.485', '0.699', '0.816', '0.453', '0.826', '0.89'], ['0.408', '0.612', '0.728', '0.504', '0.823', '0.894'], ['0.466', '0.621', '0.777', '0.496', '0.819', '0.913'], ['0.505', '0.689', '0.786', '0.52', '0.854', '0.941']] | column | ['P@1', 'P@3', 'P@5', 'P@1', 'P@3', 'P@5'] | ['CDR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>People and Network || P@1</th> <th>People and Network || P@3</th> <th>People and Network || P@5</th> <th>Introduction to MOOC || P@1</th> <th>Introduction to MOOC || P@3</th> <th>Introductio... | Table 2 | table_2 | D17-1293 | 5 | emnlp2017 | Result Firstly we use all the data to learn word embeddings by models. Then the learned word vectors are utilized to calculate the similarity between threads and subtitles, and rank the subtitles. Table 2 reports the results of thread-subtitle matching. We can notice that there are some anomalies in P@3 and P@5 results... | [2, 2, 1, 1, 2, 1, 1, 1, 2] | ['Result Firstly we use all the data to learn word embeddings by models.', 'Then the learned word vectors are utilized to calculate the similarity between threads and subtitles, and rank the subtitles.', 'Table 2 reports the results of thread-subtitle matching.', 'We can notice that there are some anomalies in P@3 and ... | [None, ['Word2Vec'], None, ['P@3', 'P@5'], None, ['People and Network', 'P@3', 'P@5'], ['CDR', 'People and Network', 'P@1'], ['HDV', 'CDR'], None] | 1 |
D17-1294table_2 | Results of ablation tests for the coarsegrained classifier. | 2 | [['Features/Models', 'All'], ['Features/Models', 'All - Unigrams'], ['Features/Models', 'All - Bigrams'], ['Features/Models', 'All - Rel. Location'], ['Features/Models', 'All - Topic Models'], ['Features/Models', 'All - Productions'], ['Features/Models', 'All - Nonterminals'], ['Features/Models', 'All - Max. Depth'], [... | 1 | [['Precision'], ['Recall'], ['F1']] | [['0.862', '0.641', '0.735'], ['0.731', '0.487', '0.585'], ['0.885', '0.59', '0.708'], ['0.889', '0.615', '0.727'], ['0.852', '0.59', '0.697'], ['0.957', '0.564', '0.71'], ['0.913', '0.538', '0.677'], ['0.857', '0.615', '0.716'], ['0.857', '0.615', '0.716'], ['0.425', '0.797', '0.554'], ['0.667', '0.211', '0.32'], ['0.... | column | ['Precision', 'Recall', 'F1'] | ['All', 'F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Features/Models || All</td> <td>0.862</td> <td>0.641</td> <td>0.735</td> </tr> <tr> <td>Fea... | Table 2 | table_2 | D17-1294 | 5 | emnlp2017 | We performed ablation tests excluding one feature at a time from the coarse-grained classifier. The results of these tests are presented in Table 2 as precision, recall, and F1 scores for the positive class, i.e., the opt-out class. Using the F1 scores as the primary evaluation metric, it appears that all features help... | [2, 1, 1, 1, 1, 1, 2] | ['We performed ablation tests excluding one feature at a time from the coarse-grained classifier.', 'The results of these tests are presented in Table 2 as precision, recall, and F1 scores for the positive class, i.e., the opt-out class.', 'Using the F1 scores as the primary evaluation metric, it appears that all featu... | [None, ['Precision', 'Recall', 'F1', 'All'], ['F1', 'All'], ['All - Unigrams', 'All - Topic Models', 'All - Nonterminals', 'F1'], ['All', 'F1'], ['All - Unigrams', 'F1'], None] | 1 |
D17-1296table_4 | Experiment results on the development and test data of English Switchboard data. | 2 | [['Method', 'CRF'], ['Method', 'Bi-LSTM'], ['Method', 'greedy'], ['Method', 'greedy + beam'], ['Method', 'greedy + scheduled']] | 2 | [['Dev', 'P'], ['Dev', 'R'], ['Dev', 'F1'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F1']] | [['93.9', '78.3', '85.4', '91.7', '75.1', '82.6'], ['94.1', '79.3', '86.1', '91.7', '80.6', '85.8'], ['91.4', '83.7', '87.4', '91.1', '83.3', '87.1'], ['93.6', '83.6', '88.3', '92.8', '82.7', '87.5'], ['92.3', '84.3', '88.1', '91.1', '84.1', '87.5']] | column | ['P', 'R', 'F1', 'P', 'R', 'F1'] | ['greedy + beam', 'greedy + scheduled', 'F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || P</th> <th>Dev || R</th> <th>Dev || F1</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F1</th> </tr> </thead> <tbody> <tr> <td>Method || CRF</td> <td>93.9</td... | Table 4 | table_4 | D17-1296 | 7 | emnlp2017 | We build two baseline systems using CRF and Bi-LSTM, respectively. The hand-crafted discrete features of CRF refer to those in Ferguson et al.(2015). For the Bi-LSTM model, the token embedding is the same with our transition-based method. Table 4 shows the result of our model on both the development and test sets. Beam... | [2, 2, 2, 1, 1, 1, 2] | ['We build two baseline systems using CRF and Bi-LSTM, respectively.', 'The hand-crafted discrete features of CRF refer to those in Ferguson et al.(2015).', 'For the Bi-LSTM model, the token embedding is the same with our transition-based method.', 'Table 4 shows the result of our model on both the development and test... | [['CRF', 'Bi-LSTM'], None, ['Bi-LSTM'], None, ['greedy + beam', 'Test', 'F1'], ['greedy + scheduled', 'Test', 'F1'], ['greedy + scheduled', 'F1']] | 1 |
D17-1296table_6 | Test result of our transition-based model using DPS files for training. | 2 | [['Method', 'Our'], ['Method', 'Bi-LSTM'], ['Method', 'M3N * (Qian and Liu, 2013)'], ['Method', 'CRF']] | 1 | [['P'], ['R'], ['F1']] | [['93.1', '83.5', '88.1'], ['92.4', '82', '86.9'], ['90.6', '78.7', '84.2'], ['91.8', '77.2', '83.9']] | column | ['P', 'R', 'F1'] | ['Our'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || Our</td> <td>93.1</td> <td>83.5</td> <td>88.1</td> </tr> <tr> <td>Method || Bi-LSTM</td> ... | Table 6 | table_6 | D17-1296 | 7 | emnlp2017 | As described in section 3.1, to directly compare with the transition-based parsing methods (Honnibal and Johnson, 2014; Wu et al., 2015), we only use MRG files, which are less than the DPS files. In fact, many methods, such as Qian and Liu (2013), have used all the DPS files as training data. We are curious about the p... | [2, 2, 1, 2, 1, 2, 1, 1] | ['As described in section 3.1, to directly compare with the transition-based parsing methods (Honnibal and Johnson, 2014; Wu et al., 2015), we only use MRG files, which are less than the DPS files.', 'In fact, many methods, such as Qian and Liu (2013), have used all the DPS files as training data.', 'We are curious abo... | [None, None, ['Our'], None, None, ['M3N*'], ['Our'], ['Our']] | 1 |
D17-1296table_7 | performance on Chinese annotated data The result of M3N∗ comes from our experiments with the toolkit4 released by Qian and Liu (2013), which use the same data set and pre-processing. Our model achieves a 88.1% F-score by using more training data, obtaining 0.6 point improvement compared with the system training on MRG ... | 2 | [['Method', 'Our'], ['Method', 'Bi-LSTM'], ['Method', 'CRF']] | 2 | [['Dev', 'P'], ['Dev', 'R'], ['Dev', 'F1'], ['Test', 'P'], ['Test', 'R'], ['Test', 'F1']] | [['68.9', '40.4', '50.9', '77.2', '37.7', '50.6'], ['60.1', '41.3', '48.9', '65.3', '38.2', '48.2'], ['73.7', '33.5', '46.1', '77.7', '32', '45.3']] | column | ['P', 'R', 'F1', 'P', 'R', 'F1'] | ['Our', 'F1'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || P</th> <th>Dev || R</th> <th>Dev || F1</th> <th>Test || P</th> <th>Test || R</th> <th>Test || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Our</td> <td>68.9</td... | Table 7 | table_7 | D17-1296 | 7 | emnlp2017 | Table 7 shows the results of Chinese disfluency detection. Our model obtains a 2.4 point improvement compared with the baseline Bi-LSTM model and a 5.3 point compared with the baseline CRF model. The performance on Chinese is much lower than that on English. Apart from the smaller training set, the main reason is that ... | [1, 1, 2, 2] | ['Table 7 shows the results of Chinese disfluency detection.', 'Our model obtains a 2.4 point improvement compared with the baseline Bi-LSTM model and a 5.3 point compared with the baseline CRF model.', 'The performance on Chinese is much lower than that on English.', 'Apart from the smaller training set, the main reas... | [None, ['Our', 'Test', 'F1', 'Bi-LSTM', 'CRF'], None, None] | 1 |
D17-1297table_6 | Error-type performance before and after re-ranking on the FCE test set (largest impact highlighted in bold; bottom part of the table displays negative effects on performance). | 2 | [['Type', 'M:ADV'], ['Type', 'M:VERB'], ['Type', 'R:NOUN:NUM'], ['Type', 'R:NOUN:POSS'], ['Type', 'R:OTHER'], ['Type', 'R:PRON'], ['Type', 'R:VERB:FORM'], ['Type', 'R:VERB:SVA'], ['Type', 'R:VERB:TENSE'], ['Type', 'U:ADV'], ['Type', 'U:DET'], ['Type', 'U:NOUN'], ['Type', 'U:PREP'], ['Type', 'U:PRON'], ['Type', 'U:PUNCT... | 2 | [['CAMB16SMT', 'F0.5'], ['CAMB16SMT + LSTMcamb', 'F0.5']] | [['25', '31.25'], ['25.42', '29.85'], ['56.6', '62.5'], ['35.71', '55.56'], ['34.99', '38.75'], ['26.88', '33.33'], ['53.62', '58.06'], ['58.38', '69.4'], ['31.94', '36.29'], ['13.51', '22.73'], ['46.27', '55.3'], ['10.1', '15.72'], ['47.62', '53.4'], ['30.77', '39.33'], ['51.22', '58.38'], ['28.41', '41.67'], ['43.69'... | column | ['F0.5', 'F0.5'] | ['R:NOUN:POSS', 'R:VERB:SVA', 'U:ADV', 'U:DET', 'U:VERB:TENSE', 'R:CONTR'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CAMB16SMT || F0.5</th> <th>CAMB16SMT + LSTMcamb || F0.5</th> </tr> </thead> <tbody> <tr> <td>Type || M:ADV</td> <td>25</td> <td>31.25</td> </tr> <tr> <td>Type || M:VERB</td> ... | Table 6 | table_6 | D17-1297 | 9 | emnlp2017 | Table 6 presents the performance for a subset of error types that are affected the most re-ranking CAMB16SMT on before and after the FCE test set. The error types are interpreted as follows: Missing error; Replace error; Unnecessary error. The largest improvement is observed in replacement errors referring to possessiv... | [1, 2, 1] | ['Table 6 presents the performance for a subset of error types that are affected the most re-ranking CAMB16SMT on before and after the FCE test set.', 'The error types are interpreted as follows: Missing error; Replace error; Unnecessary error.', 'The largest improvement is observed in replacement errors referring to p... | [['CAMB16SMT', 'CAMB16SMT + LSTMcamb'], None, ['R:NOUN:POSS', 'R:VERB:SVA', 'U:ADV', 'U:DET', 'U:VERB:TENSE', 'R:CONTR', 'CAMB16SMT + LSTMcamb', 'F0.5']] | 1 |
D17-1298table_1 | AESW development/test set correction results. GLEU and M 2 differences on test are statistically significant via paired bootstrap resampling (Koehn, 2004; Graham et al., 2014) at the 0.05 level, resampling the full set 50 times. | 2 | [['Model', 'No Change'], ['Model', 'SMT-DIFFS+M2'], ['Model', 'SMT-DIFFS+BLEU'], ['Model', 'WORD+BI-DIFFS'], ['Model', 'CHAR+BI-DIFFS'], ['Model', 'SMT+BLEU'], ['Model', 'WORD+BI'], ['Model', 'CHARCNN'], ['Model', 'CHAR+BI'], ['Model', 'WORD+DOM'], ['Model', 'WORD+BI+DOM'], ['Model', 'CHARCNN+BI+DOM'], ['Model', 'CHARC... | 2 | [['GLUE', 'Dev'], ['GLUE', 'Test'], ['M2', 'Dev'], ['M2', 'Test']] | [['89.68', '89.45', '0', '0'], ['90.44', '-', '38.55', '-'], ['90.9', '-', '37.66', '-'], ['91.18', '-', '38.88', '-'], ['91.28', '-', '40.11', '-'], ['90.95', '90.7', '38.99', '38.31'], ['91.34', '91.05', '43.61', '42.78'], ['91.23', '90.96', '42.02', '41.21'], ['91.46', '91.22', '44.67', '44.62'], ['91.25', '-', '43.... | column | ['correlation', 'correlation', 'correlation', 'correlation'] | ['CHAR+BI+DOM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>GLUE || Dev</th> <th>GLUE || Test</th> <th>M2 || Dev</th> <th>M2 || Test</th> </tr> </thead> <tbody> <tr> <td>Model || No Change</td> <td>89.68</td> <td>89.45</td> <td>0</t... | Table 1 | table_1 | D17-1298 | 3 | emnlp2017 | Table 1 shows the full set of experimental results on the AESW development and test data. The CHAR+BI+DOM model is stronger than the WORD+BI+DOM and CHARCNN+DOM models by 2.9 M2 (0.2 GLEU) and 3.3 M2 (0.3 GLEU), respectively. The sequence-to-sequence models were also more effective than the SMT models, as shown in Tabl... | [1, 1, 1, 1, 1] | ['Table 1 shows the full set of experimental results on the AESW development and test data.', 'The CHAR+BI+DOM model is stronger than the WORD+BI+DOM and CHARCNN+DOM models by 2.9 M2 (0.2 GLEU) and 3.3 M2 (0.3 GLEU), respectively.', 'The sequence-to-sequence models were also more effective than the SMT models, as shown... | [None, ['CHAR+BI+DOM', 'WORD+BI+DOM', 'CHARCNN+DOM', 'Dev'], None, ['WORD+BI-DIFFS', 'WORD+BI', 'Dev'], ['CHAR+BI+DOM']] | 1 |
D17-1299table_1 | Sentence-level formality quantifying evaluation (Spearman’s rho) among different models with different vector spaces. | 1 | [['SimDiff'], ['SVM'], ['PCA'], ['DENSIFIER'], ['baseline']] | 1 | [['LSA'], ['W2V']] | [['0.66', '0.654'], ['0.657', '0.585'], ['0.656', '0.663'], ['0.664', '0.644'], ['0.54', '0.54']] | column | ['rho', 'rho'] | ['W2V'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>LSA</th> <th>W2V</th> </tr> </thead> <tbody> <tr> <td>SimDiff</td> <td>0.66</td> <td>0.654</td> </tr> <tr> <td>SVM</td> <td>0.657</td> <td>0.585</td> </tr> <tr>... | Table 1 | table_1 | D17-1299 | 3 | emnlp2017 | Table 1 shows that all models based on the vector space achieve similar performance in terms of Spearman's ρ (except SVM-W2V which yields lower performance). The baseline method based on unigram models was outperformed by 0.1+ point. So we select DENSIFIER-LSA as a representative for our FSMT system. | [1, 1, 1] | ["Table 1 shows that all models based on the vector space achieve similar performance in terms of Spearman's ρ (except SVM-W2V which yields lower performance).", 'The baseline method based on unigram models was outperformed by 0.1+ point.', 'So we select DENSIFIER-LSA as a representative for our FSMT system.'] | [['SVM', 'W2V'], ['baseline'], ['DENSIFIER', 'LSA']] | 1 |
D17-1301table_1 | Evaluation of translation quality. “Init” denotes Initialization of encoder (“enc”), decoder (“dec”), or both (“enc+dec”), and “Auxi” denotes Auxiliary Context. “†” indicates statistically significant difference (P < 0.01) from the baseline NEMATUS. | 2 | [['System', 'MOSES'], ['System', 'NEMATUS'], ['System', '+Initenc'], ['System', '+Initdec'], ['System', '+Initenc+dec'], ['System', '+Auxi'], ['System', '+Gating Auxi'], ['System', '+Initenc+dec+Gating Auxi']] | 1 | [['MT05'], ['MT06'], ['MT08'], ['Ave.'], ['delta']] | [['33.08', '32.69', '23.78', '28.24', '–'], ['34.35', '35.75', '25.39', '30.57', '–'], ['36.05', '36.44†', '26.65†', '31.55', '+0.98'], ['36.27', '36.69†', '27.11†', '31.90', '+1.33'], ['36.34', '36.82†', '27.18†', '32.00', '+1.43'], ['35.26', '36.47†', '26.12†', '31.30', '+0.73'], ['36.64', '37.63†', '26.85†', '32.24'... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['+Initenc+dec+Gating Auxi'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT05</th> <th>MT06</th> <th>MT08</th> <th>Ave.</th> <th>delta</th> </tr> </thead> <tbody> <tr> <td>System || MOSES</td> <td>33.08</td> <td>32.69</td> <td>23.78</td> ... | Table 1 | table_1 | D17-1301 | 4 | emnlp2017 | Table 1 shows the translation performance in terms of BLEU score. Clearly, the proposed approaches significantly outperforms baseline in all cases. | [1, 1] | ['Table 1 shows the translation performance in terms of BLEU score.', 'Clearly, the proposed approaches significantly outperforms baseline in all cases.'] | [None, ['+Initenc+dec+Gating Auxi']] | 1 |
D17-1309table_6 | Preordering results for English → Japanese. FRS (in [0, 100]) is the fuzzy reordering score (Talbot et al., 2011). | 2 | [['Model', 'Nakagawa (2015)'], ['Model', 'Small FF'], ['Model', 'Small FF + POS tags'], ['Model', 'Small FF + Tagger input fts.']] | 1 | [['FRS'], ['Size']] | [['81.6', '-'], ['75.2', '0.5MB'], ['81.3', '1.3MB'], ['76.6', '3.7MB']] | column | ['FRS', 'Size'] | ['Small FF + POS tags'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FRS</th> <th>Size</th> </tr> </thead> <tbody> <tr> <td>Model || Nakagawa (2015)</td> <td>81.6</td> <td>-</td> </tr> <tr> <td>Model || Small FF</td> <td>75.2</td> <td>... | Table 6 | table_6 | D17-1309 | 5 | emnlp2017 | Table 6 shows results with or without using the predicted POS tags in the preorderer, as well as including the features used by the tagger in the reorderer directly and only training the downstream task. The preorderer that includes a separate network for POS tagging and then extracts features over the predicted tags i... | [1, 1, 2] | ['Table 6 shows results with or without using the predicted POS tags in the preorderer, as well as including the features used by the tagger in the reorderer directly and only training the downstream task.', 'The preorderer that includes a separate network for POS tagging and then extracts features over the predicted t... | [['Small FF + POS tags', 'Small FF + Tagger input fts.'], ['Small FF + POS tags', 'FRS'], ['Small FF + POS tags']] | 1 |
D17-1317table_5 | Model performance on the Politifact validation set. | 1 | [['Majority Baseline'], ['Naive Bayes'], ['MaxEnt'], ['LSTM']] | 2 | [['2-CLASS', 'text'], ['2-CLASS', '+LIWC'], ['6-CLASS', 'text'], ['6-CLASS', '+LIWC']] | [['0.39', '-', '0.6', '-'], ['0.44', '0.58', '0.16', '0.21'], ['0.55', '0.58', '0.2', '0.21'], ['0.58', '0.57', '0.21', '0.22']] | column | ['F1', 'F1', 'F1', 'F1'] | ['LSTM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>2-CLASS || text</th> <th>2-CLASS || +LIWC</th> <th>6-CLASS || text</th> <th>6-CLASS || +LIWC</th> </tr> </thead> <tbody> <tr> <td>Majority Baseline</td> <td>0.39</td> <td>-</td>... | Table 5 | table_5 | D17-1317 | 5 | emnlp2017 | Table 5 summarizes the performance on the development set. We report macro averaged F1 score in all tables. The LSTM outperforms the other models when only using text as input; however the other two models improve substantially with adding LIWC features, particularly in the case of the multinomial naive Bayes model. In... | [1, 2, 1, 1] | ['Table 5 summarizes the performance on the development set.', 'We report macro averaged F1 score in all tables.', 'The LSTM outperforms the other models when only using text as input; however the other two models improve substantially with adding LIWC features, particularly in the case of the multinomial naive Bayes m... | [None, None, ['LSTM', '2-CLASS', 'text', '+LIWC', 'Naive Bayes'], ['+LIWC', 'text']] | 1 |
D17-1318table_1 | Topics evaluation: accuracy in word intrusion task. The table reports the accuracy values on the first 4 and 8 key concepts in the clusters. | 2 | [['Method', 'Vanilla-LDA'], ['Method', 'Key concept-LDA'], ['Method', 'Graph-based Clusters'], ['Method', 'k-means Clusters'], ['Method', 'Key concept Clusters']] | 1 | [['Acc.@4'], ['Acc.@8']] | [['0.22', '0.35'], ['0.29', '0.36'], ['0.46', '0.44'], ['0.72', '0.67'], ['0.86', '0.67']] | column | ['Acc.@4', 'Acc.@8'] | ['Key concept Clusters'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.@4</th> <th>Acc.@8</th> </tr> </thead> <tbody> <tr> <td>Method || Vanilla-LDA</td> <td>0.22</td> <td>0.35</td> </tr> <tr> <td>Method || Key concept-LDA</td> <td>0.29</... | Table 1 | table_1 | D17-1318 | 4 | emnlp2017 | As shown in Table 1, our system outperforms the other methods with an accuracy of 0.86 in the word-intrusion task with four key concepts in each cluster, while it decreases to 0.67 if we extend the evaluation to include eight key concepts. | [1] | ['As shown in Table 1, our system outperforms the other methods with an accuracy of 0.86 in the word-intrusion task with four key concepts in each cluster, while it decreases to 0.67 if we extend the evaluation to include eight key concepts.'] | [['Key concept Clusters', 'Acc.@4', 'Acc.@8']] | 1 |
D17-1323table_2 | Number of violated constraints, mean amplified bias, and test performance before and after calibration using RBA. The test performances of vSRL and MLC are measured by top-1 semantic role accuracy and top-1 mean average precision, respectively. | 3 | [['Method', 'vSRL: Development Set', 'CRF'], ['Method', 'vSRL: Development Set', 'CRF + RBA']] | 1 | [['Viol.'], ['Amp. bias'], ['Perf.(%)']] | [['154', '0.05', '24.07'], ['107', '0.024', '23.97']] | column | ['Viol.', 'Amp. bias', 'Perf.(%)'] | ['CRF + RBA'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Viol.</th> <th>Amp. bias</th> <th>Perf.(%)</th> </tr> </thead> <tbody> <tr> <td>Method || vSRL: Development Set || CRF</td> <td>154</td> <td>0.05</td> <td>24.07</td> </tr> ... | Table 2 | table_2 | D17-1323 | 9 | emnlp2017 | Our quantitative results are summarized in the Table 2. On the development set, the number of verbs whose bias exceed the original bias by over 5% decreases 30.5% (Viol.). Overall, we are able to significantly reduce bias amplification in vSRL by 52% on the development set (Amp. bias). | [1, 1, 1] | ['Our quantitative results are summarized in the Table 2.', 'On the development set, the number of verbs whose bias exceed the original bias by over 5% decreases 30.5% (Viol.).', 'Overall, we are able to significantly reduce bias amplification in vSRL by 52% on the development set (Amp. bias).'] | [None, ['CRF + RBA', 'Viol.'], ['CRF + RBA', 'Amp. bias']] | 1 |
D18-1001table_3 | Results on the test sets of the Trustpilot dataset, +DEMO setting. Main is the accuracy on sentiment analysis. Priv. is the privacy measure (i.e. the inverse accuracy of the attacker: higher is better, see Section 4.2). The baselines are most-frequent class classifiers. The values reported for the defense methods indic... | 2 | [['Corpus', 'Germany'], ['Corpus', 'baseline'], ['Corpus', 'Denmark'], ['Corpus', 'baseline'], ['Corpus', 'France'], ['Corpus', 'baseline'], ['Corpus', 'UK'], ['Corpus', 'baseline'], ['Corpus', 'US'], ['Corpus', 'baseline']] | 3 | [['base model', 'Standard', 'Main'], ['base model', 'Standard', 'Priv.'], ['defense method', 'M-Detask.', 'Main'], ['defense method', 'M-Detask.', 'Priv.'], ['defense method', 'A-Gener.', 'Main'], ['defense method', 'A-Gener.', 'Priv.'], ['defense method', 'Decl. alpha = 0.1', 'Main'], ['defense method', 'Decl. alpha =... | [['85.1', '32.2', '-0.6', '-0.3', '-1.3', '0.6', '-0.8', '1.9'], ['78.6', '36.9', '', '', '', '', '', ''], ['82.6', '28.1', '-0.2', '4.4', '-0.1', '6', '-0.3', '7.6'], ['70.4', '40', '', '', '', '', '', ''], ['75.1', '41.1', '-0.8', '0.7', '-1.4', '-6.4', '-1.5', '-18.2'], ['69.2', '44.4', '', '', '', '', '', ''], ['87... | column | ['Main', 'Priv.', 'Main', 'Priv.', 'Main', 'Priv.', 'Main', 'Priv.'] | ['defense method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>base model || Standard || Main</th> <th>base model || Standard || Priv.</th> <th>defense method || M-Detask. || Main</th> <th>defense method || M-Detask. || Priv.</th> <th>defense method || A-Gen... | Table 3 | table_3 | D18-1001 | 7 | emnlp2018 | Effect of defenses. We report results for the main task accuracy and the representation privacy in Table 3 for the +DEMO setting and in Table 4 for the RAW setting. Recall that the privacy measure (Priv.) is computed by 1 - X where X is the average accuracy of the attacker on gender and age predictions. When this privac... | [2, 1, 2, 2, 1, 1, 2, 2, 1, 1] | ['Effect of defenses.', 'We report results for the main task accuracy and the representation privacy in Table 3 for the +DEMO setting and in Table 4 for the RAW setting.', 'Recall that the privacy measure (Priv.) is computed by 1 - X where X is the average accuracy of the attacker on gender and age predictions.', 'When ... | [None, None, ['Priv.'], ['Priv.'], ['Standard', 'Main', 'Priv.'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Main', 'Priv.'], None, ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Priv.'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Priv.', 'Germany', 'Denmark', 'UK', 'US'], ['M-Detask.', 'A-Gener.', 'Decl. al... | 1 |
D18-1001table_4 | Results on the test sets of the Trustpilot dataset, RAW setting. See Section 4.2 and caption of Table 3 for details about the metrics. | 2 | [['Corpus', 'Germany'], ['Corpus', 'baseline'], ['Corpus', 'Denmark'], ['Corpus', 'baseline'], ['Corpus', 'France'], ['Corpus', 'baseline'], ['Corpus', 'UK'], ['Corpus', 'baseline'], ['Corpus', 'US'], ['Corpus', 'baseline']] | 3 | [['base model', 'Standard', 'Main'], ['base model', 'Standard', 'Priv.'], ['defense method', 'M-Detask.', 'Main'], ['defense method', 'M-Detask.', 'Priv.'], ['defense method', 'A-Gener.', 'Main'], ['defense method', 'A-Gener.', 'Priv.'], ['defense method', 'Decl. alpha = 0.1', 'Main'], ['defense method', 'Decl. alpha =... | [['85.5', '32.1', '0.3', '0.5', '-0.8', '0.9', '-1.7', '2.2'], ['78.6', '36.9', '', '', '', '', '', ''], ['82.3', '37.3', '-0.6', '0.6', '-0.1', '-0.3', '-0.2', '-0.1'], ['70.4', '40', '', '', '', '', '', ''], ['72.7', '40.6', '1.8', '-0.1', '1.9', '-0.4', '-0.3', '-0.1'], ['69.2', '44.4', '', '', '', '', '', ''], ['86... | column | ['Main', 'Priv.', 'Main', 'Priv.', 'Main', 'Priv.', 'Main', 'Priv.'] | ['defense method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>base model || Standard || Main</th> <th>base model || Standard || Priv.</th> <th>defense method || M-Detask. || Main</th> <th>defense method || M-Detask. || Priv.</th> <th>defense method || A-Gen... | Table 4 | table_4 | D18-1001 | 7 | emnlp2018 | Effect of defenses. We report results for the main task accuracy and the representation privacy in Table 3 for the +DEMO setting and in Table 4 for the RAW setting. Recall that the privacy measure (Priv.) is computed by 1 - X where X is the average accuracy of the attacker on gender and age predictions. When this privac... | [2, 1, 2, 2, 1, 1, 2, 2, 1, 1] | ['Effect of defenses.', 'We report results for the main task accuracy and the representation privacy in Table 3 for the +DEMO setting and in Table 4 for the RAW setting.', 'Recall that the privacy measure (Priv.) is computed by 1 - X where X is the average accuracy of the attacker on gender and age predictions.', 'When ... | [None, None, ['Priv.'], ['Priv.'], ['Standard', 'Main', 'Priv.'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Main', 'Priv.'], None, ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Priv.'], ['M-Detask.', 'A-Gener.', 'Decl. alpha = 0.1', 'Priv.', 'Germany', 'Denmark', 'UK', 'US'], ['M-Detask.', 'A-Gener.', 'Decl. al... | 1 |
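Both D18-1001 rows define Priv. as 1 − X, where X is the attacker's average accuracy over gender and age prediction. A small sketch of that metric (the attacker accuracies below are made-up placeholders, not values from the paper):

```python
def privacy_score(attacker_accuracies):
    """Priv. = 1 - mean attacker accuracy; higher means a safer representation."""
    x = sum(attacker_accuracies) / len(attacker_accuracies)
    return 1.0 - x

# Hypothetical attacker accuracies on the two protected attributes.
priv = privacy_score([0.70, 0.66])  # 0.32, i.e. 32.0 on the tables' percentage scale
```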
D18-1002table_4 | Results of different adversarial configurations. Sentiment/Mention: main task accuracy. Race/Gender/Age: protected attribute recovery difference from 50% rate by the attacker (values below 50% are as informative as those above it). ∆: the difference between the attacker score and the corresponding adversary’s accuracy.... | 2 | [['Method', 'No Adversary Baseline'], ['Method', 'Standard Adversary'], ['Method', 'Adv-Capacity'], ['Method', 'Adv-Capacity'], ['Method', 'Adv-Capacity'], ['Method', 'Adv-Capacity'], ['Method', 'Adv-Capacity'], ['Method', 'lambda'], ['Method', 'lambda'], ['Method', 'lambda'], ['Method', 'lambda'], ['Method', 'lambda']... | 2 | [['Parameter', '-'], ['DIAL', 'Sentiment'], ['DIAL', 'Race'], ['DIAL', 'delta'], ['PAN16', 'Mention'], ['PAN16', 'Gender'], ['PAN16', 'delta'], ['PAN16', 'Mention'], ['PAN16', 'Age'], ['PAN16', 'delta']] | [['67.4', '14.5', '-', '77.5', '10.1', '-', '74.7', '9.4', '-'], ['64.7', '6.0', '5.0', '75.6', '8.5', '8.0', '72.5', '7.3', '6.9'], ['64.1', '6.7', '5.2', '73.8', '8.1', '6.7', '71.4', '4.3', '4.1'], ['63.4', '7.1', '4.9', '75.2', '8.9', '7.0', '71.6', '6.3', '4.0'], ['65.2', '8.1', '6.9', '76.1', '6.7', '6.4', '71.9'... | column | ['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy'] | ['Method'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Parameter || -</th> <th>DIAL || Sentiment</th> <th>DIAL || Race</th> <th>DIAL || delta</th> <th>PAN16 || Mention</th> <th>PAN16 || Gender</th> <th>PAN16 || delta</th> <th>PAN16 || ... | Table 4 | table_4 | D18-1002 | 7 | emnlp2018 | All methods are effective to some extent; Table 4 summarizes the results. Increasing the capacity of the adversarial network helped reduce the protected attribute’s leakage, though different capacities work best on each setup. On the Sentiment/Race task, none of the higher dimensional adversaries worked better than t... | [1, 1, 1, 1, 1, 2, 1, 1] | ['All methods are effective to some extent; Table 4 summarizes the results.', 'Increasing the capacity of the adversarial network helped reduce the protected attribute’s leakage, though different capacities work best on each setup.', 'On the Sentiment/Race task, none of the higher dimensional adversaries worked bette... | [None, ['Adv-Capacity'], ['Adv-Capacity', 'Standard Adversary', 'Sentiment', 'Race', 'PAN16'], ['PAN16', 'Gender', 'Age'], ['lambda', 'Standard Adversary', 'PAN16'], ['lambda'], ['Ensemble', 'Standard Adversary', 'Sentiment', 'Race'], ['Ensemble', 'PAN16']] | 1 |
D18-1006table_1 | Results on the prediction task (test set). | 1 | [['ProLocal'], ['QRN'], ['EntNet'], ['ProGlobal'], ['ProStruct']] | 1 | [['Precision'], ['Recall'], ['F1']] | [['77.4', '22.9', '35.3'], ['55.5', '31.3', '40.0'], ['50.2', '33.5', '40.2'], ['46.7', '52.4', '49.4'], ['74.2', '42.1', '53.7']] | column | ['Precision', 'Recall', 'F1'] | ['ProStruct'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>ProLocal</td> <td>77.4</td> <td>22.9</td> <td>35.3</td> </tr> <tr> <td>QRN</td> <td>55... | Table 1 | table_1 | D18-1006 | 8 | emnlp2018 | 7.1 Comparison with Baselines. We compare our model (which makes use of world knowledge) with the four baseline systems on the ProPara dataset. Table 1 shows the precision, recall, and F1 for all models on the test partition. PROSTRUCT significantly outperforms the baselines, suggesting that world knowledge helps Pr... | [2, 2, 1, 1, 1, 1] | ['7.1 Comparison with Baselines.', 'We compare our model (which makes use of world knowledge) with the four baseline systems on the ProPara dataset.', 'Table 1 shows the precision, recall, and F1 for all models on the test partition.', 'PROSTRUCT significantly outperforms the baselines, suggesting that world knowled... | [None, ['ProStruct', 'ProLocal', 'QRN', 'EntNet', 'ProGlobal'], ['ProStruct', 'ProLocal', 'QRN', 'EntNet', 'ProGlobal', 'Precision', 'Recall', 'F1'], ['ProStruct', 'ProLocal', 'QRN', 'EntNet', 'ProGlobal'], ['ProGlobal', 'Recall', 'Precision'], ['ProLocal', 'Precision', 'Recall']] | 1 |
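The Precision/Recall/F1 columns in the D18-1006 row above are internally consistent under the usual harmonic-mean definition of F1. A quick check with the standard formula (nothing model-specific assumed):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

# ProStruct row: P=74.2, R=42.1 -> ~53.7; ProLocal row: P=77.4, R=22.9 -> ~35.3.
prostruct_f1 = f1(74.2, 42.1)
prolocal_f1 = f1(77.4, 22.9)
```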
D18-1010table_2 | Wikipage retrieval evaluation on dev. “rate”: claim proportion, e.g., x%, if its gold passages are fully retrieved (for “SUPPORT” and “REFUTE” only); “acc ceiling”: x%·(#S+#R)+#N , the upper bound of accuracy for three classes if the coverage x% satisfies. | 2 | [['k', '1'], ['k', '5'], ['k', '10'], ['k', '25'], ['k', '50'], ['k', '100']] | 2 | [['(Thorne et al. 2018)', 'rate'], ['(Thorne et al. 2018)', 'acc ceiling'], ['ours', 'rate'], ['ours', 'acc ceiling']] | [['25.31', '50.21', '76.58', '84.38'], ['55.3', '70.2', '89.63', '93.08'], ['65.86', '77.24', '91.19', '94.12'], ['75.92', '83.95', '92.81', '95.2'], ['82.49', '90.13', '93.36', '95.57'], ['86.59', '91.06', '94.19', '96.12']] | column | ['rate', 'acc ceiling', 'rate', 'acc ceiling'] | ['ours'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>(Thorne et al. 2018) || rate</th> <th>(Thorne et al. 2018) || acc ceiling</th> <th>ours || rate</th> <th>ours || acc ceiling</th> </tr> </thead> <tbody> <tr> <td>k || 1</td> <td>25.3... | Table 2 | table_2 | D18-1010 | 7 | emnlp2018 | Performance of passage retrieval. Table 2 compares our wikipage retriever with the one in (Thorne et al., 2018), which used a document retriever from DrQA (Chen et al., 2017). Our document retrieval module surpasses the competitor by a big margin in terms of the coverage of gold passages: 89.63% vs. 55.30% (k = 5 in al... | [2, 1, 1, 2, 2, 2] | ['Performance of passage retrieval.', 'Table 2 compares our wikipage retriever with the one in (Thorne et al., 2018), which used a document retriever from DrQA (Chen et al., 2017).', 'Our document retrieval module surpasses the competitor by a big margin in terms of the coverage of gold passages: 89.63% vs. 55.30% (k =... | [None, ['ours', '(Thorne et al. 2018)'], ['ours', '(Thorne et al. 2018)', 'rate'], None, None, None] | 1 |
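The "acc ceiling" column of the D18-1010 row above follows the caption's formula, x%·(#S+#R)+#N over the total claim count. A sketch, assuming the standard balanced FEVER dev split of 6,666 claims per class (an assumption; those counts are not stated in the row itself):

```python
def acc_ceiling(rate_pct: float, n_support_refute: int, n_nei: int) -> float:
    """Upper bound on 3-way accuracy: fully retrieved S/R claims plus all NEI claims."""
    covered = rate_pct / 100.0 * n_support_refute + n_nei
    return 100.0 * covered / (n_support_refute + n_nei)

# "ours" at k=5: rate 89.63 -> ceiling ~93.08, matching the table.
ceiling = acc_ceiling(89.63, 2 * 6666, 6666)
```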
D18-1010table_3 | Performance on dev and test of FEVER. TWOWINGOS outperforms prior systems if vanilla CNN parameters are shared by evidence identification and claim verification subsystems. It gains more if fine-grained representations are adopted in both subtasks. | 4 | [['system', 'dev', 'MLP', '-'], ['system', 'dev', 'Decomp-Att', '-'], ['system', 'dev', 'TWOWINGOS', 'coarse & coarse'], ['system', 'dev', 'TWOWINGOS', 'pipeline'], ['system', 'dev', 'TWOWINGOS', 'diff-CNN'], ['system', 'dev', 'TWOWINGOS', 'share-CNN'], ['system', 'dev', 'TWOWINGOS', 'coarse & fine (single)'], ['system... | 2 | [['claim verification', 'NOSCOREEV'], ['claim verification', 'SCOREEV'], ['evidence identification', 'recall'], ['evidence identification', 'precision'], ['evidence identification', 'F1']] | [['41.86', '19.04', '44.22', '10.44', '16.89'], ['52.09', '32.57', '44.22', '10.44', '16.89'], ['', '', '', '', ''], ['35.72', '22.26', '53.75', '29.42', '33.80'], ['39.22', '21.04', '46.88', '43.01', '44.86'], ['72.32', '50.12', '45.55', '40.77', '43.03'], ['75.65', '52.65', '45.81', '42.53', '44.11'], ['78.77', '53.6... | column | ['NOSCOREEV', 'SCOREEV', 'recall', 'precision', 'F1'] | ['TWOWINGOS'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>claim verification || NOSCOREEV</th> <th>claim verification || SCOREEV</th> <th>evidence identification || recall</th> <th>evidence identification || precision</th> <th>evidence identification ||... | Table 3 | table_3 | D18-1010 | 8 | emnlp2018 | Performance on FEVER. Table 3 lists the performances of baselines and the TWOWINGOS variants on FEVER (dev&test). From the dev block, we observe that:. TWOWINGOS (from "share-CNN") surpasses prior systems in big margins. Overall, fine-grained schemes in each subtask contribute more than the coarse-grained counterparts;.... | [2, 1, 1, 1, 2, 1, 1, 2, 1, 1, 2, 2, 1, 2, 0, 0, 0, 2, 1, 1, 1, 2, 2] | ['Performance on FEVER.', 'Table 3 lists the performances of baselines and the TWOWINGOS variants on FEVER (dev&test).', 'From the dev block, we observe that:.', 'TWOWINGOS (from "share-CNN") surpasses prior systems in big margins.', 'Overall, fine-grained schemes in each subtask contribute more than the coarse-grained ... | [None, ['TWOWINGOS', 'MLP', 'Decomp-Att'], None, ['share-CNN', 'MLP', 'Decomp-Att'], None, ['pipeline', 'diff-CNN', 'share-CNN', '(Thorne et al. 2018)', 'evidence identification'], ['share-CNN', 'diff-CNN', 'NOSCOREEV', 'SCOREEV', 'F1'], ['claim verification', 'evidence identification'], ['pipeline', 'diff-CNN', 'share... | 1 |
D18-1013table_2 | Performance on the COCO Karpathy test split. Symbols, ∗ and †, are defined similarly. Our model outperforms the current state-of-the-art Up-Down substantially in terms of SPICE. | 2 | [['COCO', 'HardAtt (Xu et al. 2015)'], ['COCO', 'ATT-FCN (You et al. 2016)'], ['COCO', 'SCA-CNN (Chen et al. 2017)'], ['COCO', 'LSTM-A (Yao et al. 2017)'], ['COCO', 'SCN-LSTM (Gan et al. 2017)'], ['COCO', 'Skeleton (Wang et al. 2017)'], ['COCO', 'AdaAtt (Lu et al. 2017)'], ['COCO', 'NBT (Lu et al. 2018)'], ['COCO', 'DR... | 1 | [['SPICE'], ['CIDEr'], ['METEOR'], ['ROUGE-L'], ['BLEU-4']] | [['-', '-', '0.230', '-', '0.250'], ['-', '-', '0.243', '-', '0.304'], ['-', '0.952', '0.250', '0.531', '0.311'], ['0.186', '1.002', '0.254', '0.540', '0.326'], ['-', '1.012', '0.257', '-', '0.330'], ['-', '1.069', '0.268', '0.552', '0.336'], ['0.195', '1.085', '0.266', '0.549', '0.332'], ['0.201', '1.072', '0.271', '-... | column | ['SPICE', 'CIDEr', 'METEOR', 'ROUGE-L', 'BLEU-4'] | ['simNet'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SPICE</th> <th>CIDEr</th> <th>METEOR</th> <th>ROUGE-L</th> <th>BLEU-4</th> </tr> </thead> <tbody> <tr> <td>COCO || HardAtt (Xu et al. 2015)</td> <td>-</td> <td>-</td> ... | Table 2 | table_2 | D18-1013 | 6 | emnlp2018 | Table 2 shows the results on COCO. Among the directly comparable models, our model is arguably the best and outperforms the existing models except in terms of BLEU-4. Most encouragingly, our model is also competitive with Up-Down (Anderson et al. 2018), which uses much larger dataset, Visual Genome (Krishna et al. 201... | [1, 1, 2, 1] | ['Table 2 shows the results on COCO.', 'Among the directly comparable models, our model is arguably the best and outperforms the existing models except in terms of BLEU-4.', 'Most encouragingly, our model is also competitive with Up-Down (Anderson et al. 2018), which uses much larger dataset, Visual Genome (Krishna et... | [['COCO'], ['simNet', 'HardAtt (Xu et al. 2015)', 'ATT-FCN (You et al. 2016)', 'SCA-CNN (Chen et al. 2017)', 'LSTM-A (Yao et al. 2017)', 'SCN-LSTM (Gan et al. 2017)', 'Skeleton (Wang et al. 2017)', 'AdaAtt (Lu et al. 2017)', 'NBT (Lu et al. 2018)', 'DRL (Ren et al. 2017b)', 'TD-M-ATT (Chen et al. 2018)', 'SCST (Rennie ... | 1 |
D18-1013table_6 | Performance on the online COCO evaluation server. The SPICE metric is unavailable for our model, thus not reported. c5 means evaluating against 5 references, and c40 means evaluating against 40 references. The symbol ∗ denotes directly optimizing CIDEr. The symbol † denotes model ensemble. The symbol ‡ denotes using ex... | 2 | [['COCO', 'HardAtt (Xu et al. 2015)'], ['COCO', 'ATT-FCN (You et al. 2016)'], ['COCO', 'SCA-CNN (Chen et al. 2017)'], ['COCO', 'LSTM-A (Yao et al. 2017)'], ['COCO', 'SCN-LSTM (Gan et al. 2017)'], ['COCO', 'AdaAtt (Lu et al. 2017)'], ['COCO', 'TD-M-ATT (Chen et al. 2018)'], ['COCO', 'SCST (Rennie et al. 2017)'], ['COCO'... | 2 | [['BLEU-1', 'c5'], ['BLEU-1', 'c40'], ['BLEU-2', 'c5'], ['BLEU-2', 'c40'], ['BLEU-3', 'c5'], ['BLEU-3', 'c40'], ['BLEU-4', 'c5'], ['BLEU-4', 'c40'], ['METEOR', 'c5'], ['METEOR', 'c40'], ['ROUGE-L', 'c5'], ['ROUGE-L', 'c40'], ['CIDEr', 'c5'], ['CIDEr', 'c40']] | [['0.705', '0.881', '0.528', '0.779', '0.383', '0.658', '0.277', '0.537', '0.241', '0.322', '0.516', '0.654', '0.865', '0.893'], ['0.731', '0.900', '0.565', '0.815', '0.424', '0.709', '0.316', '0.599', '0.250', '0.335', '0.535', '0.682', '0.943', '0.958'], ['0.712', '0.894', '0.542', '0.802', '0.404', '0.691', '0.302',... | column | ['BLEU-1', 'BLEU-1', 'BLEU-2', 'BLEU-2', 'BLEU-3', 'BLEU-3', 'BLEU-4', 'BLEU-4', 'METEOR', 'METEOR', 'ROUGE-L', 'ROUGE-L', 'CIDEr', 'CIDEr'] | ['simNet'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1 || c5</th> <th>BLEU-1 || c40</th> <th>BLEU-2 || c5</th> <th>BLEU-2 || c40</th> <th>BLEU-3 || c5</th> <th>BLEU-3 || c40</th> <th>BLEU-4 || c5</th> <th>BLEU-4 || c40</th> ... | Table 6 | table_6 | D18-1013 | 13 | emnlp2018 | A Supplementary Material. A.1 Results on COCO Evaluation Server. Table 6 shows the performance on the online COCO evaluation server. We put it in the appendix because the results are incomplete and the SPICE metric is not available for our submission, which correlates the best with human evaluation. The SPICE metrics a... | [2, 2, 1, 2, 0, 2, 0, 2, 1] | ['A Supplementary Material.', 'A.1 Results on COCO Evaluation Server.', 'Table 6 shows the performance on the online COCO evaluation server.', 'We put it in the appendix because the results are incomplete and the SPICE metric is not available for our submission, which correlates the best with human evaluation.', 'The S... | [None, None, ['COCO'], None, None, None, None, ['simNet'], ['simNet', 'Up-Down (Anderson et al. 2018)', 'c40', 'HardAtt (Xu et al. 2015)', 'ATT-FCN (You et al. 2016)', 'SCA-CNN (Chen et al. 2017)', 'LSTM-A (Yao et al. 2017)', 'SCN-LSTM (Gan et al. 2017)', 'AdaAtt (Lu et al. 2017)', 'TD-M-ATT (Chen et al. 2018)', 'SCST ... | 1 |
D18-1015table_1 | Performance comparisons of different methods on DiDeMo. The best performance for each metric entry is highlighted in boldface. | 2 | [['Method', 'MFP'], ['Method', 'MCN-VGG16'], ['Method', 'MCN-Flow'], ['Method', 'MCN-Fusion'], ['Method', 'MCN-Fusion+TEF'], ['Method', 'TGN-VGG16'], ['Method', 'TGN-Flow'], ['Method', 'TGN-Fusion']] | 2 | [['R@1', 'IoU=1'], ['R@5', 'IoU=1'], ['mIoU', '-']] | [['19.40', '66.38', '26.65'], ['13.10', '44.82', '25.13'], ['18.35', '56.25', '31.46'], ['19.88', '62.39', '33.51'], ['28.10', '78.21', '41.08'], ['24.28', '71.43', '38.62'], ['27.52', '76.94', '42.84'], ['28.23', '79.26', '42.97']] | column | ['R@1', 'R@5', 'mIoU'] | ['TGN-VGG16', 'TGN-Flow', 'TGN-Fusion'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R@1 || IoU=1</th> <th>R@5 || IoU=1</th> <th>mIoU || -</th> </tr> </thead> <tbody> <tr> <td>Method || MFP</td> <td>19.40</td> <td>66.38</td> <td>26.65</td> </tr> <tr> ... | Table 1 | table_1 | D18-1015 | 7 | emnlp2018 | 4.3 Experimental Results and Analysis. 4.3.1 Comparisons with State-of-the-Arts. Experiments on DiDeMo. Table 1 illustrates the performance comparisons on the DiDeMo dataset. In addition to MCN, we also compare with the baseline Moment Frequency Prior (MFP) in (Hendricks et al., 2017), which selects segments correspond... | [2, 2, 2, 1, 1, 1, 1, 1, 2, 2, 0, 1, 2, 1, 1] | ['4.3 Experimental Results and Analysis.', '4.3.1 Comparisons with State-of-the-Arts.', 'Experiments on DiDeMo.', 'Table 1 illustrates the performance comparisons on the DiDeMo dataset.', 'In addition to MCN, we also compare with the baseline Moment Frequency Prior (MFP) in (Hendricks et al., 2017), which selects segme... | [None, None, None, None, ['MFP', 'MCN-VGG16', 'MCN-Flow', 'MCN-Fusion', 'MCN-Fusion+TEF'], ['MFP', 'TGN-VGG16', 'TGN-Flow', 'TGN-Fusion'], ['TGN-VGG16', 'TGN-Flow', 'MCN-VGG16', 'MCN-Flow'], ['TGN-Flow', 'TGN-VGG16'], None, None, None, ['TGN-Fusion', 'MCN-Fusion'], ['MCN-Fusion+TEF'], ['MCN-Fusion+TEF'], ['TGN-Fusion',... | 1 |
D18-1017table_2 | NER results for named entities on the original WeiboNER dataset (Peng and Dredze, 2015). There are three blocks. The first two blocks contain the main and simplified models proposed by Peng and Dredze (2015) and Peng and Dredze (2016), respectively. The last block lists the performance of our proposed model. | 2 | [['Models', 'CRF (Peng and Dredze 2015)'], ['Models', 'CRF+word (Peng and Dredze 2015)'], ['Models', 'CRF+character (Peng and Dredze 2015)'], ['Models', 'CRF+character+position (Peng and Dredze 2015)'], ['Models', 'Joint(cp) (main) (Peng and Dredze 2015)'], ['Models', 'Pipeline Seg.Repr.+NER (Peng and Dredze 2016)'], [... | 1 | [['P(%)'], ['R(%)'], ['F1(%)']] | [['56.98', '25.26', '35.00'], ['64.94', '25.77', '36.90'], ['57.89', '34.02', '42.86'], ['57.26', '34.53', '43.09'], ['57.98', '35.57', '44.09'], ['64.22', '36.08', '46.20'], ['63.16', '37.11', '46.75'], ['63.03', '38.66', '47.92'], ['63.33', '39.18', '48.41'], ['55.72', '50.68', '53.08']] | column | ['P(%)', 'R(%)', 'F1(%)'] | ['BiLSTM+CRF+adversarial+self-attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F1(%)</th> </tr> </thead> <tbody> <tr> <td>Models || CRF (Peng and Dredze 2015)</td> <td>56.98</td> <td>25.26</td> <td>35.00</td> </tr> <tr> ... | Table 2 | table_2 | D18-1017 | 6 | emnlp2018 | 4.3.1 Evaluation on WeiboNER. We compare our proposed model with the latest models on WeiboNER dataset. Table 2 shows the experimental results for named entities on the original WeiboNER dataset. In the first block of Table 2, we give the performance of the main model and baselines proposed by Peng and Dredze (2015). ... | [2, 2, 1, 1, 2, 2, 1, 1, 1, 1, 1, 2, 0, 2] | ['4.3.1 Evaluation on WeiboNER.', 'We compare our proposed model with the latest models on WeiboNER dataset.', 'Table 2 shows the experimental results for named entities on the original WeiboNER dataset.', 'In the first block of Table 2, we give the performance of the main model and baselines proposed by Peng and Dred... | [None, None, None, ['CRF (Peng and Dredze 2015)', 'CRF+word (Peng and Dredze 2015)', 'CRF+character (Peng and Dredze 2015)', 'CRF+character+position (Peng and Dredze 2015)', 'Joint(cp) (main) (Peng and Dredze 2015)'], ['CRF (Peng and Dredze 2015)'], ['CRF+word (Peng and Dredze 2015)', 'CRF+character (Peng and Dredze 20... | 1 |
D18-1017table_3 | Experimental results on the updated WeiboNER dataset (He and Sun, 2017a). There are two blocks. The first block is the performance of latest models. The second block reports the performance of our proposed model. With the limited length of the page, we use “adv” to denote “adversarial”. | 2 | [['Models', 'Peng and Dredze (2015)'], ['Models', 'Peng and Dredze (2016)'], ['Models', 'He and Sun (2017a)'], ['Models', 'He and Sun (2017b)'], ['Models', 'BiLSTM+CRF+adv+self-attention']] | 2 | [['Named Entity', 'P(%)'], ['Named Entity', 'R(%)'], ['Named Entity', 'F1(%)'], ['Nominal Mention', 'P(%)'], ['Nominal Mention', 'R(%)'], ['Nominal Mention', 'F1(%)'], ['Overall', 'F1(%)']] | [['74.78', '39.81', '51.96', '71.92', '53.03', '61.05', '56.05'], ['66.67', '47.22', '55.28', '74.48', '54.55', '62.97', '58.99'], ['66.93', '40.67', '50.6', '66.46', '53.57', '59.32', '54.82'], ['61.68', '48.82', '54.5', '74.13', '53.54', '62.17', '58.23'], ['59.51', '50', '54.34', '71.43', '47.9', '57.35', '58.7']] | column | ['P(%)', 'R(%)', 'F1(%)', 'P(%)', 'R(%)', 'F1(%)', 'F1(%)'] | ['BiLSTM+CRF+adv+self-attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Named Entity || P(%)</th> <th>Named Entity || R(%)</th> <th>Named Entity || F1(%)</th> <th>Nominal Mention || P(%)</th> <th>Nominal Mention || R(%)</th> <th>Nominal Mention || F1(%)</th> ... | Table 3 | table_3 | D18-1017 | 7 | emnlp2018 | We also conduct an experiment on the updated WeiboNER dataset. Table 3 lists the performance of the latest models and our proposed model on the updated dataset. In the first block of Table 3, we report the performance of the latest models. The model proposed by Peng and Dredze (2015) achieves F1 score of 56.05% on over... | [2, 1, 1, 1, 2, 1, 1, 1, 1, 2] | ['We also conduct an experiment on the updated WeiboNER dataset.', 'Table 3 lists the performance of the latest models and our proposed model on the updated dataset.', 'In the first block of Table 3, we report the performance of the latest models.', 'The model proposed by Peng and Dredze (2015) achieves F1 score of 56.... | [None, ['Peng and Dredze (2015)', 'Peng and Dredze (2016)', 'He and Sun (2017a)', 'He and Sun (2017b)', 'BiLSTM+CRF+adv+self-attention'], ['Peng and Dredze (2015)', 'Peng and Dredze (2016)', 'He and Sun (2017a)', 'He and Sun (2017b)'], ['Peng and Dredze (2015)', 'Overall', 'F1(%)'], ['He and Sun (2017b)'], ['He and Sun... | 1 |
D18-1017table_4 | Results on SighanNER dataset. There are two blocks. The first block reports the result of previous methods. The second block gives the performance of our proposed model. | 2 | [['Models', 'Chen et al. (2006)'], ['Models', 'Zhou et al. (2006)'], ['Models', 'Luo and Yang (2016)'], ['Models', 'BiLSTM+CRF+adversarial+self-attention']] | 1 | [['P(%)'], ['R(%)'], ['F1(%)']] | [['91.22', '81.71', '86.2'], ['88.94', '84.2', '86.51'], ['91.3', '87.22', '89.21'], ['91.73', '89.58', '90.64']] | column | ['P(%)', 'R(%)', 'F1(%)'] | ['BiLSTM+CRF+adversarial+self-attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F1(%)</th> </tr> </thead> <tbody> <tr> <td>Models || Chen et al. (2006)</td> <td>91.22</td> <td>81.71</td> <td>86.2</td> </tr> <tr> <td>Mo... | Table 4 | table_4 | D18-1017 | 7 | emnlp2018 | 4.3.2 Evaluation on SighanNER. Table 4 lists the comparisons on SighanNER dataset. We observe that our proposed model achieves new state-of-the-art performance. In the first block, we give the performance of previous methods for Chinese NER task on SighanNER dataset. Chen et al. (2006) propose a character-based CRF mod... | [2, 1, 1, 1, 2, 2, 2, 1, 1, 1] | ['4.3.2 Evaluation on SighanNER.', 'Table 4 lists the comparisons on SighanNER dataset.', 'We observe that our proposed model achieves new state-of-the-art performance.', 'In the first block, we give the performance of previous methods for Chinese NER task on SighanNER dataset.', 'Chen et al. (2006) propose a character... | [None, None, ['BiLSTM+CRF+adversarial+self-attention'], ['Chen et al. (2006)', 'Zhou et al. (2006)', 'Luo and Yang (2016)'], ['Chen et al. (2006)'], ['Zhou et al. (2006)'], None, ['Luo and Yang (2016)', 'F1(%)'], ['BiLSTM+CRF+adversarial+self-attention'], ['BiLSTM+CRF+adversarial+self-attention', 'Luo and Yang (2016)',... | 1 |
D18-1017table_5 | Comparison between our proposed model and simplified models on SighanNER dataset and original WeiboNER dataset. | 2 | [['Models', 'BiLSTM+CRF'], ['Models', 'BiLSTM+CRF+transfer'], ['Models', 'BiLSTM+CRF+adversarial'], ['Models', 'BiLSTM+CRF+self-attention'], ['Models', 'BiLSTM+CRF+adversarial+self-attention']] | 2 | [['SighanNER', 'P(%)'], ['SighanNER', 'R(%)'], ['SighanNER', 'F1(%)'], ['WeiboNER', 'P(%)'], ['WeiboNER', 'R(%)'], ['WeiboNER', 'F1(%)']] | [['89.84', '88.42', '89.13', '58.99', '44.93', '51.01'], ['90.60', '89.19', '89.89', '60.00', '46.03', '52.09'], ['90.52', '89.56', '90.04', '61.94', '45.48', '52.45'], ['90.62', '88.81', '89.71', '57.81', '47.67', '52.25'], ['91.73', '89.58', '90.64', '55.72', '50.68', '53.08']] | column | ['P(%)', 'R(%)', 'F1(%)', 'P(%)', 'R(%)', 'F1(%)'] | ['BiLSTM+CRF+adversarial+self-attention'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SighanNER || P(%)</th> <th>SighanNER || R(%)</th> <th>SighanNER || F1(%)</th> <th>WeiboNER || P(%)</th> <th>WeiboNER || R(%)</th> <th>WeiboNER || F1(%)</th> </tr> </thead> <tbody> <t... | Table 5 | table_5 | D18-1017 | 8 | emnlp2018 | Table 5 provides the experimental results of our proposed model and baseline as well as its simplified models on SighanNER dataset and WeiboNER dataset. The simplified models are described as follows:. BiLSTM+CRF: The model is used as strong baseline in our work, which is trained using Chinese NER training data. BiLSTM... | [1, 2, 2, 2, 2, 2, 1, 2, 1, 2, 1, 2, 2, 1, 1] | ['Table 5 provides the experimental results of our proposed model and baseline as well as its simplified models on SighanNER dataset and WeiboNER dataset.', 'The simplified models are described as follows:.', 'BiLSTM+CRF: The model is used as strong baseline in our work, which is trained using Chinese NER training data... | [['BiLSTM+CRF', 'BiLSTM+CRF+transfer', 'BiLSTM+CRF+adversarial', 'BiLSTM+CRF+self-attention', 'BiLSTM+CRF+adversarial+self-attention', 'SighanNER', 'WeiboNER'], None, ['BiLSTM+CRF'], ['BiLSTM+CRF+transfer'], ['BiLSTM+CRF+adversarial', 'BiLSTM+CRF+transfer'], ['BiLSTM+CRF+self-attention', 'BiLSTM+CRF'], None, None, ['Bi... | 1 |
D18-1020table_2 | Tagging accuracies (%) on UD test sets. For each language, we show test accuracy (“acc.”) when only using labeled data and the change in test accuracy (“UL∆”) when adding unlabeled data. Results for NCRF and NCRF-AE are from Zhang et al. (2017), though results are not strictly comparable because we used pretrained word... | 1 | [['NCRF'], ['NCRF-AE'], ['BiGRU baseline'], ['VSL-G'], ['VSL-GG-Flat'], ['VSL-GG-Hier']] | 2 | [['French', 'acc.'], ['French', 'ULΔ'], ['German', 'acc.'], ['German', 'ULΔ'], ['Indonesian', 'acc.'], ['Indonesian', 'ULΔ'], ['Spanish', 'acc.'], ['Spanish', 'ULΔ'], ['Russian', 'acc.'], ['Russian', 'ULΔ'], ['Croatian', 'acc.'], ['Croatian', 'ULΔ']] | [['93.4', '-', '90.4', '-', '88.4', '-', '91.2', '-', '86.6', '-', '86.1', '-'], ['93.7', '+0.2', '90.8', '+0.2', '89.1', '+0.3', '91.7', '+0.5', '87.8', '+1.1', '87.9', '+1.2'], ['95.9', '-', '92.6', '-', '92.2', '-', '94.7', '-', '95.2', '-', '95.6', '-'], ['96.1', '+0.0', '92.8', '+0.0', '92.3', '+0.0', '94.8', '+0.... | column | ['acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ'] | ['VSL-G'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>French || acc.</th> <th>French || ULΔ</th> <th>German || acc.</th> <th>German || ULΔ</th> <th>Indonesian || acc.</th> <th>Indonesian || ULΔ</th> <th>Spanish || acc.</th> <th>Spanis... | Table 2 | table_2 | D18-1020 | 7 | emnlp2018 | Table 2 shows our results on the UD datasets. The trends are broadly consistent with those of Table 1a and 1b. The best performing models use hierarchical structure in the latent variables. There are some differences across languages. For French, German, Indonesian and Russian, VSL-G does not show improvement when using... | [1, 2, 2, 1, 1, 1, 1, 2, 1] | ['Table 2 shows our results on the UD datasets.', 'The trends are broadly consistent with those of Table 1a and 1b.', 'The best performing models use hierarchical structure in the latent variables.', 'There are some differences across languages.', 'For French, German, Indonesian and Russian, VSL-G does not show improvem... | [None, None, None, ['French', 'German', 'Indonesian', 'Spanish', 'Russian', 'Croatian'], ['French', 'German', 'Indonesian', 'Russian', 'VSL-G', 'ULΔ'], ['VSL-GG-Flat', 'VSL-GG-Hier'], ['NCRF', 'NCRF-AE', 'VSL-G'], None, ['BiGRU baseline']] | 1 |
D18-1020table_3 | Twitter and NER dev results (%), UD averaged test accuracies (%) for two choices of attaching the classification loss to latent variables in the VSLGG-Hier model. All previous results for VSL-GG-Hier used the classification loss on y. | 1 | [['classifier on y'], ['classifier on z']] | 2 | [['Twitter', 'acc.'], ['Twitter', 'ULΔ'], ['NER', 'acc.'], ['NER', 'ULΔ'], ['UD average', 'acc.'], ['UD average', 'ULΔ']] | [['91.6', '+0.3', '88.4', '+0.2', '95.0', '+0.1'], ['91.1', '+0.2', '87.8', '+0.1', '94.4', '+0.0']] | column | ['acc.', 'ULΔ', 'acc.', 'ULΔ', 'acc.', 'ULΔ'] | ['classifier on y', 'classifier on z'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter || acc.</th> <th>Twitter || ULΔ</th> <th>NER || acc.</th> <th>NER || ULΔ</th> <th>UD average || acc.</th> <th>UD average || ULΔ</th> </tr> </thead> <tbody> <tr> <td>clas... | Table 3 | table_3 | D18-1020 | 7 | emnlp2018 | 6.1 Effect of Position of Classification Loss. We investigate the effect of attaching the classifier to different latent variables. In particular, for the VSL-GG-Hier model, we compare the attachment of the classifier between z and y. See Figure 2. The results in Table 3 suggest that attaching the reconstruction and cl... | [2, 2, 2, 2, 1] | ['6.1 Effect of Position of Classification Loss.', 'We investigate the effect of attaching the classifier to different latent variables.', 'In particular, for the VSL-GG-Hier model, we compare the attachment of the classifier between z and y.', 'See Figure 2.', 'The results in Table 3 suggest that attaching the reconst... | [None, None, None, None, ['classifier on z', 'acc.']] | 1 |
D18-1023table_4 | Results using bilingual lexicons with varying sizes (40,000, 10,000, 2,000, 1,000, 500, 250) and three languages. CorrNet W+N+C+L is the proposed approach with all the cluster types. | 2 | [['40000', 'multiCCA'], ['40000', 'multiCluster'], ['40000', 'CorrNet W'], ['40000', 'CorrNet W+N+C+L'], ['10000', 'multiCCA'], ['10000', 'multiCluster'], ['10000', 'CorrNet W'], ['10000', 'CorrNet W+N+C+L'], ['2000', 'multiCCA'], ['2000', 'multiCluster'], ['2000', 'CorrNet W'], ['2000', 'CorrNet W+N+C+L'], ['1000', 'm... | 2 | [['QVEC', 'Monolingual'], ['QVEC', 'Multilingual'], ['QVEC-CCA', 'Monolingual'], ['QVEC-CCA', 'Multilingual']] | [['10.8', '8.5', '63.8', '43.9'], ['10.8', '9.1', '63.6', '45.8'], ['14.8', '11.3', '63.6', '43.4'], ['16.2', '12.4', '67.3', '45.4'], ['9.8', '6.5', '63.6', '42.3'], ['10.6', '9.5', '62.4', '44.7'], ['14.8', '11.3', '63.4', '43.0'], ['15.7', '12.4', '68.0', '45.1'], ['9.9', '6.2', '63.6', '40.9'], ['10.5', '9.3', '62.... | column | ['QVEC', 'QVEC', 'QVEC-CCA', 'QVEC-CCA'] | ['CorrNet W+N+C+L'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>QVEC || Monolingual</th> <th>QVEC || Multilingual</th> <th>QVEC-CCA || Monolingual</th> <th>QVEC-CCA || Multilingual</th> </tr> </thead> <tbody> <tr> <td>40000 || multiCCA</td> <td>1... | Table 4 | table_4 | D18-1023 | 8 | emnlp2018 | For following experiments, we use MultiCluster and MultiCCA as baselines 14. Table 4 shows the results. We observe that both MultiCCA and CorrNet approaches are sensitive to the size of the bilingual lexicons. Our approach on the other hand can maintain high performance, even when the size of bilingual lexicons is redu... | [1, 1, 1, 1, 1] | ['For following experiments, we use MultiCluster and MultiCCA as baselines 14.', 'Table 4 shows the results.', 'We observe that both MultiCCA and CorrNet approaches are sensitive to the size of the bilingual lexicons.', 'Our approach on the other hand can maintain high performance, even when the size of bilingual lexic... | [['multiCCA', 'multiCluster'], None, ['multiCCA', 'CorrNet W'], ['CorrNet W+N+C+L', '250'], ['multiCluster', '40000', '10000', '2000', '1000', '500', '250']] | 1 |
D18-1023table_7 | Comparison on Cross-lingual Direct Transfer: name tagging performance (F-score, %) when the tagger was trained on 1-2 source languages and tested on a target language. | 4 | [['Train', 'Amh', 'Test', 'Tig'], ['Train', 'Tig', 'Test', 'Amh'], ['Train', 'Eng', 'Test', 'Uig'], ['Train', 'Tur', 'Test', 'Uig'], ['Train', 'Eng+Tur', 'Test', 'Uig'], ['Train', 'Eng', 'Test', 'Tur'], ['Train', 'Uig', 'Test', 'Tur'], ['Train', 'Eng+Uig', 'Test', 'Tur']] | 2 | [['MultiCCA', '-'], ['MultiCluster', '-'], ['CorrNet', 'W'], ['CorrNet', 'W+N+C+L']] | [['15.5', '29.7', '28.3', '33.7'], ['11.1', '24.7', '12.8', '23.3'], ['4.8', '9.1', '13.3', '15.5'], ['0.4', '11.4', '19.8', '25.0'], ['8.3', '10.5', '17.3', '23.3'], ['17.6', '21.4', '18.3', '22.4'], ['6.9', '12.8', '13.2', '10.7'], ['20.4', '23.3', '14.5', '27.0']] | column | ['F-score', 'F-score', 'F-score', 'F-score'] | ['W+N+C+L'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MultiCCA || -</th> <th>MultiCluster || -</th> <th>CorrNet || W</th> <th>CorrNet || W+N+C+L</th> </tr> </thead> <tbody> <tr> <td>Train || Amh || Test || Tig</td> <td>15.5</td> <t... | Table 7 | table_7 | D18-1023 | 9 | emnlp2018 | Cross-lingual direct transfer. In this setting, we train a name tagger on one or two languages using multilingual embeddings and test it on a new language without any annotated data. Table 7 shows the performance. For most testing languages, our approach achieves better performance than MultiCCA and MultiCluster. The c... | [2, 2, 1, 1, 1] | ['Cross-lingual direct transfer.', 'In this setting, we train a name tagger on one or two languages using multilingual embeddings and test it on a new language without any annotated data.', 'Table 7 shows the performance.', 'For most testing languages, our approach achieves better performance than MultiCCA and MultiClu... | [None, ['Train', 'Test'], None, ['CorrNet', 'W+N+C+L', 'MultiCCA', 'MultiCluster'], ['CorrNet', 'W+N+C+L', 'Amh', 'Tig']] | 1 |
D18-1027table_4 | Results on the hypernym discovery task. | 4 | [['Train data', '-', 'Model', 'BestUns'], ['Train data', 'EN', 'Model', 'VecMap'], ['Train data', 'EN', 'Model', 'VecMapμ'], ['Train data', 'EN', 'Model', 'MUSE'], ['Train data', 'EN', 'Model', 'MUSEμ'], ['Train data', 'EN+500', 'Model', 'VecMap'], ['Train data', 'EN+500', 'Model', 'VecMapμ'], ['Train data', 'EN+500', ... | 2 | [['Spanish', 'MAP'], ['Spanish', 'MRR'], ['Spanish', 'P@5'], ['Italian', 'MAP'], ['Italian', 'MRR'], ['Italian', 'P@5']] | [['2.4', '5.5', '2.5', '3.9', '8.7', '3.9'], ['6.4', '16.5', '6.0', '4.5', '10.6', '4.3'], ['6.1', '15.4', '5.7', '5.6', '13.3', '5.4'], ['5.9', '14.1', '5.5', '4.9', '11.1', '4.7'], ['6.2', '14.8', '5.8', '5.1', '11.7', '4.9'], ['7.3', '18.2', '7.0', '6.1', '14.0', '5.8'], ['7.0', '17.6', '6.6', '6.8', '16.2', '6.4'],... | column | ['MAP', 'MRR', 'P@5', 'MAP', 'MRR', 'P@5'] | ['VecMapμ', 'MUSEμ'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Spanish || MAP</th> <th>Spanish || MRR</th> <th>Spanish || P@5</th> <th>Italian || MAP</th> <th>Italian || MRR</th> <th>Italian || P@5</th> </tr> </thead> <tbody> <tr> <td>Train... | Table 4 | table_4 | D18-1027 | 8 | emnlp2018 | The results listed in Table 4 indicate several trends. First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations. In Italian our proposed model shows an improvement ac... | [1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 2, 2, 2, 1, 1] | ['The results listed in Table 4 indicate several trends.', 'First and foremost, in terms of model-wise comparisons, we observe that our proposed alterations of both VecMap and MUSE improve their quality in a consistent manner, across most metrics and data configurations.', 'In Italian our proposed model shows an improv... | [None, ['VecMap', 'VecMapμ', 'MUSE', 'MUSEμ'], ['VecMapμ', 'MUSEμ', 'EN', 'EN+500', 'EN+1k', 'EN+2k', 'Spanish'], ['Spanish', 'VecMap', 'VecMapμ', 'MUSEμ', 'EN+2k', 'MUSE', 'MRR'], None, ['MUSE', 'MUSEμ', 'Spanish', 'Italian', 'MRR'], ['BestUns', 'VecMap', 'VecMapμ', 'MUSE', 'MUSEμ'], None, ['VecMap', 'VecMapμ'], ['Vec... | 1 |
D18-1033table_1 | Evaluation results of cross-lingual lexical sememe prediction with different seed lexicon sizes. | 4 | [['Method', 'BiLex', 'Seed Lexicon', '1000'], ['Method', 'BiLex', 'Seed Lexicon', '2000'], ['Method', 'BiLex', 'Seed Lexicon', '4000'], ['Method', 'BiLex', 'Seed Lexicon', '6000'], ['Method', 'CLSP-WR', 'Seed Lexicon', '1000'], ['Method', 'CLSP-WR', 'Seed Lexicon', '2000'], ['Method', 'CLSP-WR', 'Seed Lexicon', '4000']... | 2 | [['Sememe Prediction', 'MAP'], ['Sememe Prediction', 'F1 Score']] | [['27.57', '16.08'], ['33.79', '22.33'], ['35.78', '25.74'], ['38.29', '28.71'], ['38.12', '18.55'], ['33.78', '23.64'], ['38.30', '27.74'], ['41.23', '30.64'], ['31.78', '18.22'], ['37.70', '24.31'], ['40.77', '29.33'], ['43.16', '32.49']] | column | ['MAP', 'F1 Score'] | ['CLSP-WR', 'CLSP-SE'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sememe Prediction || MAP</th> <th>Sememe Prediction || F1 Score</th> </tr> </thead> <tbody> <tr> <td>Method || BiLex || Seed Lexicon || 1000</td> <td>27.57</td> <td>16.08</td> </tr> ... | Table 1 | table_1 | D18-1033 | 7 | emnlp2018 | Table 1 exhibits the evaluation results of crosslingual lexical sememe prediction with different seed lexicon sizes in {1000, 2000, 4000, 6000}. From the table, we can clearly see that:. (1) Our two models perform much better compared with BiLex in all the seed lexicon size settings. It indicates that incorporating sem... | [1, 1, 1, 2, 2, 1, 2] | ['Table 1 exhibits the evaluation results of crosslingual lexical sememe prediction with different seed lexicon sizes in {1000, 2000, 4000, 6000}.', 'From the table, we can clearly see that:.', '(1) Our two models perform much better compared with BiLex in all the seed lexicon size settings.', 'It indicates that incorp... | [['Seed Lexicon'], None, ['CLSP-WR', 'CLSP-SE', 'BiLex', '1000', '2000', '4000', '6000'], None, ['CLSP-WR', 'CLSP-SE'], ['CLSP-SE', 'CLSP-WR'], ['CLSP-SE']] | 1 |
D18-1036table_1 | The main results of CH-EN translation. △ shows the BLEU points improvement of system “X+MEM” than system X. “*” indicates that system “X+MEM” is statistically significant better (p < 0.05) than system X and “†” indicates p < 0.01. | 1 | [['Baseline'], ['Arthur(test)'], ['Arthur(train+test)'], ['Baseline(sub-word)'], ['Baseline+MEM'], ['Arthur(train+test)+MEM'], ['Baseline(sub-word)+MEM']] | 2 | [['Model', '03'], ['Model', '04'], ['Model', '05'], ['Model', '06'], ['Model', '08'], ['Model', 'Avg.'], ['Model', '△']] | [['41.01', '42.94', '40.31', '40.57', '30.96', '39.16', '-'], ['41.34', '43.31', '40.79', '40.84', '31.11', '39.48', '-'], ['41.88', '43.75', '41.16', '41.63', '31.47', '39.98', '-'], ['43.93', '44.74', '42.46', '43.01', '32.53', '41.33', '-'], ['42.74', '43.94', '42.15', '41.94', '31.86', '40.53', '+1.37'], ['43.04', ... | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['Baseline+MEM', 'Arthur(train+test)+MEM', 'Baseline(sub-word)+MEM'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model || 03</th> <th>Model || 04</th> <th>Model || 05</th> <th>Model || 06</th> <th>Model || 08</th> <th>Model || Avg.</th> <th>Model || △</th> </tr> </thead> <tbody> <tr> ... | Table 1 | table_1 | D18-1036 | 6 | emnlp2018 | 5 Results on CH-EN Translation. 5.1 Our methods vs. Baseline. Table 1 reports the main translation results of CHEN translation. We first compare Baseline+MEM with Baseline. As shown in row 1 and row 5 in Table 1, Baseline+MEM can improve over Baseline on all test datasets, and the average improvement is 1.37 BLEU point... | [2, 2, 1, 1, 1, 1, 2, 2, 1, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 1] | ['5 Results on CH-EN Translation.', '5.1 Our methods vs. Baseline.', 'Table 1 reports the main translation results of CHEN translation.', 'We first compare Baseline+MEM with Baseline.', 'As shown in row 1 and row 5 in Table 1, Baseline+MEM can improve over Baseline on all test datasets, and the average improvement is 1... | [None, None, None, ['Baseline', 'Baseline+MEM'], ['Baseline', 'Baseline+MEM', '03', '04', '05', '06', '08', '△'], ['Baseline', 'Baseline+MEM'], None, ['Baseline(sub-word)', 'Baseline(sub-word)+MEM'], ['Baseline(sub-word)', 'Baseline(sub-word)+MEM'], ['Baseline(sub-word)', 'Baseline(sub-word)+MEM', '△'], None, None, Non... | 1 |
D18-1037table_4 | News Commentary v8 Experiment results. Seq2Seq and NMT+RNNG results are taken from Eriguchi et al. (2017), Str2Tree (string-to-linearised-tree) results (no RIBES scores) come from Aharoni and Goldberg (2017) All numbers reported here are of non-ensemble models. | 2 | [['Dataset', 'Seq2Seq'], ['Dataset', 'Str2Tree'], ['Dataset', 'NMT+RNNG'], ['Dataset', 'Seq2DRNN'], ['Dataset', 'Seq2DRNN+SynC']] | 2 | [['DE-EN', 'BLEU'], ['DE-EN', 'RIBES'], ['CS-EN', 'BLEU'], ['CS-EN', 'RIBES'], ['RU-EN', 'BLEU'], ['RU-EN', 'RIBES']] | [['16.61', '73.8', '11.22', '69.6', '12.03', '69.6'], ['16.13', '-', '11.65', '-', '11.94', '-'], ['16.41', '75.0', '12.06', '70.4', '12.46', '71.0'], ['16.90', '75.1', '11.84', '67.3', '12.04', '69.7'], ['17.21', '75.8', '12.11', '70.3', '12.96', '71.1']] | column | ['BLEU', 'RIBES', 'BLEU', 'RIBES', 'BLEU', 'RIBES'] | ['Seq2DRNN+SynC'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DE-EN || BLEU</th> <th>DE-EN || RIBES</th> <th>CS-EN || BLEU</th> <th>CS-EN || RIBES</th> <th>RU-EN || BLEU</th> <th>RU-EN || RIBES</th> </tr> </thead> <tbody> <tr> <td>Dataset ... | Table 4 | table_4 | D18-1037 | 7 | emnlp2018 | 3.3 Results. Table 3 and Table 4 has the BLEU (Papineni et al., 2002) and RIBES (Isozaki et al., 2010) scores. In our IWSLT2017 tests, both Seq2DRNN and Seq2DRNN+SynC produce better results than the Seq2Seq baseline model in terms of BLEU scores, while Seq2DRNN+SynC also produces better RIBES scores indicating better r... | [2, 1, 0, 0, 0, 1, 1, 2] | ['3.3 Results.', 'Table 3 and Table 4 has the BLEU (Papineni et al., 2002) and RIBES (Isozaki et al., 2010) scores.', 'In our IWSLT2017 tests, both Seq2DRNN and Seq2DRNN+SynC produce better results than the Seq2Seq baseline model in terms of BLEU scores, while Seq2DRNN+SynC also produces better RIBES scores indicating ... | [None, ['BLEU', 'RIBES'], None, None, None, ['Seq2DRNN+SynC'], ['Seq2DRNN+SynC', 'Str2Tree', 'NMT+RNNG'], ['NMT+RNNG', 'Str2Tree']] | 1 |
D18-1045table_2 | Perplexity of source data as assigned by a language model (5-gram Kneser–Ney). Data generated by beam search is most predictable. | 1 | [['human data'], ['beam'], ['sampling'], ['top10'], ['beam+noise']] | 1 | [['Perplexity']] | [['75.34'], ['72.42'], ['500.17'], ['87.15'], ['2823.73']] | column | ['Perplexity'] | ['beam'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Perplexity</th> </tr> </thead> <tbody> <tr> <td>human data</td> <td>75.34</td> </tr> <tr> <td>beam</td> <td>72.42</td> </tr> <tr> <td>sampling</td> <td>500.17</td> ... | Table 2 | table_2 | D18-1045 | 5 | emnlp2018 | We report the perplexity of our language model on all versions of the source data in Table 2. The results show that beam outputs receive higher probability by the language model compared to sampling, beam+noise and real source sentences. This indicates that beam search outputs are not as rich as sampling outputs or bea... | [1, 1, 1, 2] | ['We report the perplexity of our language model on all versions of the source data in Table 2.', 'The results show that beam outputs receive higher probability by the language model compared to sampling, beam+noise and real source sentences.', 'This indicates that beam search outputs are not as rich as sampling outpu... | [None, ['beam', 'sampling', 'beam+noise', 'human data'], ['beam', 'sampling', 'beam+noise'], None] | 1 |
D18-1045table_6 | BLEU on newstest2014 for WMT English-German (En–De) and English-French (En–Fr). The first four results use only WMT bitext (WMT’14, except for b, c, d in En–De which train on WMT’16). DeepL uses proprietary high-quality bitext and our result relies on back-translation with 226M newscrawl sentences for En–De and 31M for... | 1 | [['a. Gehring et al. (2017)'], ['b. Vaswani et al. (2017)'], ['c. Ahmed et al. (2017)'], ['d. Shaw et al. (2018)'], ['DeepL'], ['Our result'], ['detok. sacreBLEU']] | 1 | [['En–De'], ['En–Fr']] | [['25.2', '40.5'], ['28.4', '41'], ['28.9', '41.4'], ['29.2', '41.5'], ['33.3', '45.9'], ['35', '45.6'], ['33.8', '43.8']] | column | ['BLEU', 'BLEU'] | ['Our result'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>En–De</th> <th>En–Fr</th> </tr> </thead> <tbody> <tr> <td>a. Gehring et al. (2017)</td> <td>25.2</td> <td>40.5</td> </tr> <tr> <td>b. Vaswani et al. (2017)</td> <td>28.4</... | Table 6 | table_6 | D18-1045 | 9 | emnlp2018 | We upsample the bitext with a rate of 16 so that we observe every bitext sentence 16 times more often than each monolingual sentence. This results in a new state of the art of 35 BLEU on newstest2014 by using only WMT benchmark data. For comparison, DeepL, a commercial translation engine relying on high quality bilingu... | [2, 1, 1, 1, 2] | ['We upsample the bitext with a rate of 16 so that we observe every bitext sentence 16 times more often than each monolingual sentence.', 'This results in a new state of the art of 35 BLEU on newstest2014 by using only WMT benchmark data.', 'For comparison, DeepL, a commercial translation engine relying on high quality... | [None, ['Our result'], ['DeepL'], ['Our result'], None] | 1 |
D18-1046table_2 | Comparing different approaches on the NEWS 2015 dataset using acc@1 as the evaluation metric. “Ours” denotes the Seq2Seq(HMA) model, with (.) denoting the inference strategy. Numbers for RPI-ISI are from Lin et al. (2016). | 3 | [['Approach', 'Full Supervision Setting (5-10k examples)', 'Seq2Seq w/ Att (U)'], ['Approach', 'Full Supervision Setting (5-10k examples)', 'P&R (U)'], ['Approach', 'Full Supervision Setting (5-10k examples)', 'DirecTL+ (U)'], ['Approach', 'Full Supervision Setting (5-10k examples)', 'RPI-ISI (U)'], ['Approach', 'Full ... | 2 | [['Lang.', 'hi'], ['Lang.', 'kn'], ['Lang.', 'bn'], ['Lang.', 'ta'], ['Lang.', 'he'], ['Lang.', 'Avg.']] | [['35.5', '33.4', '46.1', '17.2', '20.3', '30.5'], ['37.4', '31.6', '45.4', '20.2', '18.7', '30.7'], ['38.9', '34.7', '48.4', '19.9', '16.8', '31.7'], ['40.3', '29.8', '49.4', '20.2', '21.5', '32.2'], ['42.8', '38.9', '52.4', '20.5', '23.4', '35.6'], ['44.8', '37.6', '52.0', '29.0', '37.2', '40.1'], ['51.8', '43.3', '5... | column | ['acc@1', 'acc@1', 'acc@1', 'acc@1', 'acc@1', 'acc@1'] | ['Ours(U) + Boot.', 'Ours(U)'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Lang. || hi</th> <th>Lang. || kn</th> <th>Lang. || bn</th> <th>Lang. || ta</th> <th>Lang. || he</th> <th>Lang. || Avg.</th> </tr> </thead> <tbody> <tr> <td>Approach || Full Supe... | Table 2 | table_2 | D18-1046 | 5 | emnlp2018 | 6.1 Full Supervision Setting. We compare Seq2Seq(HMA) with previous approaches when provided all available supervision, to see how it fares under standard evaluation. Results in the unconstrained inference (U) setting (Table 2 top 5 rows) shows Seq2Seq(HMA),denoted by “Ours”, outperforms previous approaches on Hindi, K... | [2, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1] | ['6.1 Full Supervision Setting.', 'We compare Seq2Seq(HMA) with previous approaches when provided all available supervision, to see how it fares under standard evaluation.', 'Results in the unconstrained inference (U) setting (Table 2 top 5 rows) shows Seq2Seq(HMA),denoted by “Ours”, outperforms previous approaches on ... | [None, ['Full Supervision Setting (5-10k examples)', 'Ours(U)'], ['Seq2Seq w/ Att (U)', 'P&R (U)', 'DirecTL+ (U)', 'RPI-ISI (U)', 'Ours(U)'], ['Seq2Seq w/ Att (U)'], ['Ours(U)', 'ta', 'he', 'Seq2Seq w/ Att (U)', 'P&R (U)', 'DirecTL+ (U)', 'RPI-ISI (U)'], ['Ours(U)', 'Seq2Seq w/ Att (U)', 'P&R (U)', 'DirecTL+ (U)', 'RPI... | 1 |
D18-1046table_3 | Acc@1 for native and foreign words for four languages (§7.2). Ratio is native performance relative to foreign. | 1 | [['Hindi'], ['Bengali'], ['Kannada'], ['Tamil']] | 1 | [['Native'], ['Foreign'], ['Ratio']] | [['45.1', '31.4', '1.44'], ['63.1', '20.1', '3.14'], ['42.6', '23.1', '1.84'], ['24.3', '05.2', '4.67']] | column | ['Acc@1', 'Acc@1', 'Acc@1'] | ['Native', 'Foreign'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Native</th> <th>Foreign</th> <th>Ratio</th> </tr> </thead> <tbody> <tr> <td>Hindi</td> <td>45.1</td> <td>31.4</td> <td>1.44</td> </tr> <tr> <td>Bengali</td> <td>... | Table 3 | table_3 | D18-1046 | 7 | emnlp2018 | To quantify the effect of this, we annotate native and foreign names in the test split of the four Indian languages, and evaluate performance for both categories. Table 3 shows that our model performs significantly better on native names for all the languages. A possible reason for is that the source scripts were desig... | [2, 1, 2, 2, 2] | ['To quantify the effect of this, we annotate native and foreign names in the test split of the four Indian languages, and evaluate performance for both categories.', 'Table 3 shows that our model performs significantly better on native names for all the languages.', 'A possible reason for is that the source scripts we... | [['Native', 'Foreign', 'Hindi', 'Bengali', 'Kannada', 'Tamil'], ['Native', 'Hindi', 'Bengali', 'Kannada', 'Tamil'], ['Tamil'], None, ['Tamil']] | 1 |
D18-1047table_3 | Performance for en-pt on rare words (RARE), and the en-pt MUSE dataset, which as shown in Figure 3 contains a lot of frequent words. | 1 | [['NORMA-Linear'], ['NORMA-Highway-NN'], ['1 layer-NN'], ['1 layer-Highway-NN'], ['Artetxe et al . 2018'], ['Lazaridou et al 2015']] | 2 | [['en-pt', 'RARE'], ['en-pt', 'MUSE']] | [['57.67', '72.60'], ['49.33', '71.73'], ['48.67', '72.13'], ['49.33', '72.10'], ['47.00', '77.73'], ['48.00', '72.27']] | column | ['accuracy', 'accuracy'] | ['NORMA-Linear', 'NORMA-Highway-NN'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en-pt || RARE</th> <th>en-pt || MUSE</th> </tr> </thead> <tbody> <tr> <td>NORMA-Linear</td> <td>57.67</td> <td>72.60</td> </tr> <tr> <td>NORMA-Highway-NN</td> <td>49.33</t... | Table 3 | table_3 | D18-1047 | 7 | emnlp2018 | Table 3 shows that NORMA-Linear outperforms (Artetxe et al., 2018a) by over 10 points on the RARE words dataset. On the regular MUSE dictionary, (Artetxe et al., 2018a) is ahead by about 5 points. On RARE, (Lazaridou et al., 2015) is behind NORMA-Linear by 9 points, whereas on the MUSE dictionary performance of (Lazari... | [1, 1, 1] | ['Table 3 shows that NORMA-Linear outperforms (Artetxe et al., 2018a) by over 10 points on the RARE words dataset.', 'On the regular MUSE dictionary, (Artetxe et al., 2018a) is ahead by about 5 points.', 'On RARE, (Lazaridou et al., 2015) is behind NORMA-Linear by 9 points, whereas on the MUSE dictionary performance of... | [['NORMA-Linear', 'Artetxe et al . 2018', 'RARE'], ['Artetxe et al . 2018', 'NORMA-Linear', 'MUSE'], ['Lazaridou et al 2015', 'NORMA-Linear', 'RARE', 'MUSE']] | 1 |
D18-1049table_3 | Comparison with previous works on Chinese-English translation task. The evaluation metric is caseinsensitive BLEU score. (Wang et al., 2017) use a hierarchical RNN to incorporate document-level context into RNNsearch. (Kuang et al., 2017) use a cache to exploit document-level context for RNNsearch. (Kuang et al., 2017)... | 4 | [['Method', '(Wang et al. 2017)', 'Model', 'RNNsearch'], ['Method', '(Kuang et al. 2017)', 'Model', 'RNNsearch'], ['Method', '(Vaswani et al. 2017)', 'Model', 'Transformer'], ['Method', '(Kuang et al. 2017)*', 'Model', 'Transformer'], ['Method', 'this work', 'Model', 'Transformer']] | 1 | [['MT06'], ['MT02'], ['MT03'], ['MT04'], ['MT05'], ['MT08'], ['All']] | [['37.76', '-', '-', '-', '36.89', '27.57', '-'], ['-', '34.41', '-', '38.40', '32.90', '31.86', '-'], ['48.09', '48.63', '47.54', '47.79', '48.34', '38.31', '45.97'], ['48.14', '48.97', '48.05', '47.91', '48.53', '38.38', '46.37'], ['46.69', '50.96', '50.21', '49.73', '49.46', '39.69', '47.93']] | column | ['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU'] | ['this work'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06</th> <th>MT02</th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT08</th> <th>All</th> </tr> </thead> <tbody> <tr> <td>Method || (Wang et al. 2017) || Model || RNNsea... | Table 3 | table_3 | D18-1049 | 7 | emnlp2018 | As shown in Table 3, using the same data, our approach achieves significant improvements over the original Transformer model (Vaswani et al. 2017) (p < 0.01). The gain on the concatenated test set (i.e., “All”) is 1.96 BLEU points. It also outperforms the cache-based method (Kuang et al. 2017) adapted for Transformer s... | [1, 1, 1] | ['As shown in Table 3, using the same data, our approach achieves significant improvements over the original Transformer model (Vaswani et al. 2017) (p < 0.01).', 'The gain on the concatenated test set (i.e., “All”) is 1.96 BLEU points.', 'It also outperforms the cache-based method (Kuang et al. 2017) adapted for Trans... | [['this work', '(Vaswani et al. 2017)'], ['this work', '(Vaswani et al. 2017)', 'All'], ['this work', '(Kuang et al. 2017)*']] | 1 |
D18-1049table_5 | Subjective evaluation of the comparison between the original Transformer model and our model. “>” means that Transformer is better than our model, “=” means equal, and “<” means worse. | 1 | [['Human 1'], ['Human 2'], ['Human 3'], ['Overall']] | 1 | [['>'], ['='], ['<']] | [['24%', '45%', '31%'], ['20%', '55%', '25%'], ['12%', '52%', '36%'], ['19%', '51%', '31%']] | column | ['percentage', 'percentage', 'percentage'] | ['Overall'] | <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>></th> <th>=</th> <th><</th> </tr> </thead> <tbody> <tr> <td>Human 1</td> <td>24%</td> <td>45%</td> <td>31%</td> </tr> <tr> <td>Human 2</td> <td>20%</td> ... | Table 5 | table_5 | D18-1049 | 7 | emnlp2018 | Table 5 shows the results of subjective evaluation. Three human evaluators generally made consistent judgements. On average, around 19% of Transformer’s translations are better than that of our model, 51% are equal, and 31% are worse. This evaluation confirms that exploiting document-level context helps to improve tr... | [1, 2, 1, 2] | ['Table 5 shows the results of subjective evaluation.', 'Three human evaluators generally made consistent judgements.', 'On average, around 19% of Transformer’s translations are better than that of our model, 51% are equal, and 31% are worse.', 'This evaluation confirms that exploiting document-level context helps to... | [None, ['Human 1', 'Human 2', 'Human 3'], ['Overall', '>', '=', '<'], None] | 1 |
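The list-valued fields in the rows above (`row_headers`, `column_headers`, `contents`) are serialized as Python literals, so a table can be rebuilt without touching the HTML field. A minimal sketch, using the literal strings from the D18-1045 table_2 (perplexity) row above; the `" || "` joiner mirrors the multi-level header convention used in the `table_html_clean` cells:

```python
import ast

# Parse the Python-literal fields of one dataset row
# (values copied from the D18-1045 table_2 row above).
row_headers = ast.literal_eval(
    "[['human data'], ['beam'], ['sampling'], ['top10'], ['beam+noise']]"
)
column_headers = ast.literal_eval("[['Perplexity']]")
contents = ast.literal_eval(
    "[['75.34'], ['72.42'], ['500.17'], ['87.15'], ['2823.73']]"
)

# Rebuild a {row header: {column header: value}} mapping; multi-level
# headers are flattened with ' || ', matching the HTML field's convention.
table = {
    " || ".join(r): {" || ".join(c): v for c, v in zip(column_headers, vals)}
    for r, vals in zip(row_headers, contents)
}

print(table["beam"]["Perplexity"])  # prints "72.42"
```

The same loop works for multi-level rows (e.g. `['Train', 'Amh', 'Test', 'Tig']` becomes the key `'Train || Amh || Test || Tig'`), since each inner list is joined uniformly.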