Fields (name: type, observed range):

table_id_paper: string (length 15)
caption: string (length 14–1.88k)
row_header_level: int32 (range 1–9)
row_headers: large_string (length 15–1.75k)
column_header_level: int32 (range 1–6)
column_headers: large_string (length 7–1.01k)
contents: large_string (length 18–2.36k)
metrics_loc: string (2 classes)
metrics_type: large_string (length 5–532)
target_entity: large_string (length 2–330)
table_html_clean: large_string (length 274–7.88k)
table_name: string (9 classes)
table_id: string (9 classes)
paper_id: string (length 8)
page_no: int32 (range 1–13)
dir: string (8 classes)
description: large_string (length 103–3.8k)
class_sentence: string (length 3–120)
sentences: large_string (length 110–3.92k)
header_mention: string (length 12–1.8k)
valid: int32 (range 0–1)
P19-1359table_5
Results reported in the embedding scores, BLEU, diversity, and the quality of emotional expression.
2
[['Models', 'Seq2Seq'], ['Models', 'EmoEmb'], ['Models', 'ECM'], ['Models', 'EmoDS-MLE'], ['Models', 'EmoDS-EV'], ['Models', 'EmoDS-BS'], ['Models', 'EmoDS']]
2
[['Embedding', 'Average'], ['Embedding', 'Greedy'], ['Embedding', 'Extreme'], ['BLEU Score', 'BLEU'], ['Diversity', 'distinct-1'], ['Diversity', 'distinct-2'], ['Emotional Expression', 'emotion-a'], ['Emotional Expression', 'emotion-w']]
[['0.523', '0.376', '0.35', '1.5', '0.0038', '0.012', '0.335', '0.371'], ['0.524', '0.381', '0.355', '1.69', '0.0054', '0.0484', '0.72', '0.512'], ['0.624', '0.434', '0.409', '1.68', '0.009', '0.0735', '0.765', '0.58'], ['0.548', '0.367', '0.374', '1.6', '0.0053', '0.067', '0.721', '0.556'], ['0.571', '0.39', '0.384', ...
column
['Average', 'Greedy', 'Extreme', 'BLEU', 'distinct-1', 'distinct-2', 'emotion-a', 'emotion-w']
['EmoDS-MLE', 'EmoDS-EV', 'EmoDS-BS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Embedding || Average</th> <th>Embedding || Greedy</th> <th>Embedding || Extreme</th> <th>BLEU Score || BLEU</th> <th>Diversity || distinct-1</th> <th>Diversity || distinct-2</th> <th>Em...
Table 5
table_5
P19-1359
6
acl2019
The bottom half of Table 5 shows the results of ablation tests. As we can see, after removing the emotion classification term (EmoDS-MLE), the performance decreased most significantly. Our interpretation is that without the emotion classification term, the model can only express the desired emotion explicitly in the ge...
[1, 1, 2, 1, 2, 1]
['The bottom half of Table 5 shows the results of ablation tests.', 'As we can see, after removing the emotion classification term (EmoDS-MLE), the performance decreased most significantly.', 'Our interpretation is that without the emotion classification term, the model can only express the desired emotion explicitly i...
[None, ['EmoDS-MLE'], None, ['EmoDS-EV', 'emotion-w'], None, ['EmoDS-BS', 'distinct-1', 'distinct-2']]
1
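Each record's table_html_clean field (truncated in this dump) is pandas-style to_html output whose multi-level column headers are joined with ' || ', as in the record above. A minimal stdlib sketch of splitting those headers back into levels; the HTML fragment below is hand-made in the same layout for illustration, not a verbatim dump value:

```python
from html.parser import HTMLParser

class HeaderGrab(HTMLParser):
    """Collect <th> texts and split 'Group || Metric' headers into levels."""
    def __init__(self):
        super().__init__()
        self.in_th = False
        self.headers = []

    def handle_starttag(self, tag, attrs):
        if tag == "th":
            self.in_th = True

    def handle_endtag(self, tag):
        if tag == "th":
            self.in_th = False

    def handle_data(self, data):
        # Empty corner cells (<th></th>) emit no data event, so they are skipped.
        if self.in_th and data.strip():
            self.headers.append([part.strip() for part in data.split("||")])

# Illustrative fragment mirroring the table_html_clean layout above.
html = ("<table border='1' class='dataframe'><thead><tr>"
        "<th></th><th>Twitter || BLEU</th><th>Weibo || ROUGE</th>"
        "</tr></thead><tbody></tbody></table>")
p = HeaderGrab()
p.feed(html)
print(p.headers)  # [['Twitter', 'BLEU'], ['Weibo', 'ROUGE']]
```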
P19-1359table_6
The results of human evaluation. Cont. and Emot. denote content and emotion, respectively.
2
[['Models', 'Seq2Seq'], ['Models', 'EmoEmb'], ['Models', 'ECM'], ['Models', 'EmoDS']]
2
[['Joy', 'Cont.'], ['Joy', 'Emot.'], ['Contentment', 'Cont.'], ['Contentment', 'Emot.'], ['Disgust', 'Cont.'], ['Disgust', 'Emot.'], ['Anger', 'Cont.'], ['Anger', 'Emot.'], ['Sadness', 'Cont.'], ['Sadness', 'Emot.'], ['Overall', 'Cont.'], ['Overall', 'Emot.']]
[['1.35', '0.455', '1.445', '0.325', '1.18', '0.095', '1.15', '0.115', '1.09', '0.1', '1.243', '0.216'], ['1.285', '0.655', '1.32', '0.565', '1.015', '0.225', '1.16', '0.4', '0.995', '0.19', '1.155', '0.407'], ['1.395', '0.69', '1.4', '0.615', '1.13', '0.425', '1.19', '0.33', '1.195', '0.335', '1.262', '0.479'], ['1.26...
column
['Joy', 'Joy', 'Contentment', 'Contentment', 'Disgust', 'Disgust', 'Anger', 'Anger', 'Sadness', 'Sadness', 'Overall', 'Overall']
['EmoDS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Joy || Cont.</th> <th>Joy || Emot.</th> <th>Contentment || Cont.</th> <th>Contentment || Emot.</th> <th>Disgust || Cont.</th> <th>Disgust || Emot.</th> <th>Anger || Cont.</th> <th>...
Table 6
table_6
P19-1359
7
acl2019
It is shown in Table 6 that EmoDS achieved the highest performance in most cases (Sign Test, with p-value < 0.05). Specifically, for content coherence, there was no obvious difference among most models, but for emotional expression, the EmoDS yielded a significant performance boost. As we can see from Table 6, EmoDS pe...
[1, 1, 1, 1, 1]
['It is shown in Table 6 that EmoDS achieved the highest performance in most cases (Sign Test, with p-value < 0.05).', 'Specifically, for content coherence, there was no obvious difference among most models, but for emotional expression, the EmoDS yielded a significant performance boost.', 'As we can see from Table 6, ...
[['EmoDS'], ['EmoDS'], ['EmoDS'], ['Seq2Seq'], ['EmoDS']]
1
P19-1367table_2
Performances on whether using multi-level vocabularies or not, where “SV” represents single vocabulary (from raw words), and “MVs” means multilevel vocabularies obtained from hierarchical clustering. “enc” and “dec” denote encoder and decoder, respectively, and numbers after them represent how many passes. For example,...
2
[['Models', 'enc3-dec1 (SV)'], ['Models', 'enc3-dec1 (MVs)'], ['Models', 'enc1-dec3 (SV)'], ['Models', 'enc1-dec3 (MVs)'], ['Models', 'enc3-dec3 (SV)'], ['Models', 'enc3-dec3 (MVs)']]
2
[['Twitter', 'BLEU'], ['Twitter', 'ROUGE'], ['Weibo', 'BLEU'], ['Weibo', 'ROUGE']]
[['6.27', '6.29', '6.61', '7.08'], ['7.16', '8.01', '9.15', '10.63'], ['7.43', '7.54', '9.92', '10.24'], ['6.75', '7.78', '12.01', '10.86'], ['7.44', '7.56', '9.95', '9.7'], ['8.58', '7.88', '12.51', '11.76']]
column
['BLEU', 'ROUGE', 'BLEU', 'ROUGE']
['enc3-dec1 (MVs)', 'enc1-dec3 (MVs)', 'enc3-dec3 (MVs)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter || BLEU</th> <th>Twitter || ROUGE</th> <th>Weibo || BLEU</th> <th>Weibo || ROUGE</th> </tr> </thead> <tbody> <tr> <td>Models || enc3-dec1 (SV)</td> <td>6.27</td> <td>6.2...
Table 2
table_2
P19-1367
7
acl2019
Comparison Results. Table 2 demonstrates performances on whether using multi-level vocabularies. We can observe that incorporating multilevel vocabularies could improve performances on almost all of the metrics. For example, enc3-dec3 (MVs) improves relative performance up to 25.73% in BLEU score compared with enc3-dec...
[2, 1, 1, 1, 1]
['Comparison Results.', 'Table 2 demonstrates performances on whether using multi-level vocabularies.', 'We can observe that incorporating multilevel vocabularies could improve performances on almost all of the metrics.', 'For example, enc3-dec3 (MVs) improves relative performance up to 25.73% in BLEU score compared wi...
[None, ['enc3-dec1 (MVs)', 'enc1-dec3 (MVs)', 'enc3-dec3 (MVs)'], ['enc3-dec1 (MVs)', 'enc1-dec3 (MVs)', 'enc3-dec3 (MVs)', 'ROUGE', 'BLEU'], ['enc3-dec3 (MVs)', 'enc3-dec3 (SV)', 'Weibo', 'BLEU'], ['enc1-dec3 (MVs)', 'enc1-dec3 (SV)', 'Twitter', 'BLEU']]
1
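The row_headers, column_headers and contents fields are string-serialized Python literals, with contents[i][j] holding the cell at row header i and column header j. A minimal sketch (plain Python, no dataset loader assumed) of rebuilding the grid for record P19-1367table_2 above:

```python
import ast

# List-valued fields are stored as serialized Python literals;
# ast.literal_eval safely recovers the nested structure.
row_headers = ast.literal_eval(
    "[['Models', 'enc3-dec1 (SV)'], ['Models', 'enc3-dec1 (MVs)'], "
    "['Models', 'enc1-dec3 (SV)'], ['Models', 'enc1-dec3 (MVs)'], "
    "['Models', 'enc3-dec3 (SV)'], ['Models', 'enc3-dec3 (MVs)']]")
column_headers = ast.literal_eval(
    "[['Twitter', 'BLEU'], ['Twitter', 'ROUGE'], "
    "['Weibo', 'BLEU'], ['Weibo', 'ROUGE']]")
contents = ast.literal_eval(
    "[['6.27', '6.29', '6.61', '7.08'], ['7.16', '8.01', '9.15', '10.63'], "
    "['7.43', '7.54', '9.92', '10.24'], ['6.75', '7.78', '12.01', '10.86'], "
    "['7.44', '7.56', '9.95', '9.7'], ['8.58', '7.88', '12.51', '11.76']]")

# (row header path, column header path) -> cell value lookup.
grid = {
    (tuple(r), tuple(c)): contents[i][j]
    for i, r in enumerate(row_headers)
    for j, c in enumerate(column_headers)
}
print(grid[(('Models', 'enc3-dec3 (MVs)'), ('Weibo', 'BLEU'))])  # 12.51
```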
P19-1368table_2
On-device Results and Comparison on Multiple Datasets and Languages
2
[['Model', 'SGNN++ (our on-device)'], ['Model', 'SGNN(Ravi and Kozareva, 2018)(sota on-device)'], ['Model', 'RNN(Khanpour et al., 2016)'], ['Model', 'RNN+Attention(Ortega and Vu, 2017)'], ['Model', 'CNN(Lee and Dernoncourt, 2016)'], ['Model', 'GatedAtten.(Goo et al., 2018)'], ['Model', 'JointBiLSTM(Hakkani-Tur et al., ...
1
[['MRDA'], ['SwDA'], ['ATIS'], ['CF-EN'], ['CF-JP'], ['CF-FR'], ['CF-SP']]
[['87.3', '88.43', '93.73', '65', '74.33', '70.93', '83.95'], ['86.7', '83.1', '-', '-', '-', '-', '-'], ['86.8', '80.1', '-', '-', '-', '-', '-'], ['84.3', '73.9', '-', '-', '-', '-', '-'], ['84.6', '73.1', '-', '-', '-', '-', '-'], ['-', '-', '93.6', '-', '-', '-', '-'], ['-', '-', '92.6', '-', '-', '-', ''], ['-', '...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
None
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MRDA</th> <th>SwDA</th> <th>ATIS</th> <th>CF-EN</th> <th>CF-JP</th> <th>CF-FR</th> <th>CF-SP</th> </tr> </thead> <tbody> <tr> <td>Model || SGNN++ (our on-device)</td> ...
Table 2
table_2
P19-1368
7
acl2019
Taking these major differences into consideration, we still compare results against prior non-ondevice state-of-art neural networks. As shown in Table 2 only (Khanpour et al., 2016; Ortega and Vu, 2017; Lee and Dernoncourt, 2016) have evaluated on more than one task, while the rest of the methods target specific one. W...
[0, 1, 2, 1, 1, 2, 1, 1, 2]
['Taking these major differences into consideration, we still compare results against prior non-ondevice state-of-art neural networks.', 'As shown in Table 2 only (Khanpour et al., 2016; Ortega and Vu, 2017; Lee and Dernoncourt, 2016) have evaluated on more than one task, while the rest of the methods target specific o...
[None, None, ['SGNN++ (our on-device)'], ['SGNN++ (our on-device)', 'CNN(Lee and Dernoncourt, 2016)', 'RNN(Khanpour et al., 2016)', 'JointBiLSTM(Hakkani-Tur et al., 2016)', 'Atten.RNN(Liu and Lane, 2016)', 'RNN+Attention(Ortega and Vu, 2017)', 'ADAPT-Run1(Dzendzik et al., 2017)', 'Bingo-logistic-reg(Elfardy et al., 201...
1
P19-1372table_1
Automatic evaluation results of different models where the best results are bold. The G, A and E of Embedding represent Greedy, Average, Extreme embedding-based metrics, respectively.
2
[['Method', 'S2S'], ['Method', 'S2S+DB'], ['Method', 'MMS'], ['Method', 'CVAE'], ['Method', 'CVAE+BOW'], ['Method', 'WAE'], ['Method', 'Ours-First'], ['Method', 'Ours-Disc'], ['Method', 'Ours-MBOW'], ['Method', 'Ours'], ['Method', 'Ours+GMP']]
2
[['Multi-BLEU', 'BLEU-1'], ['Multi-BLEU', 'BLEU-2'], ['EMBEDDING', 'G'], ['EMBEDDING', 'A'], ['EMBEDDING', 'E'], ['Intra-Dist', 'Dist-1'], ['Intra-Dist', 'Dist-2'], ['Inter-Dist', 'Dist-1'], ['Inter-Dist', 'Dist-2']]
[['21.49', '9.498', '0.567', '0.677', '0.415', '0.311', '0.447', '0.027', '0.127'], ['20.2', '9.445', '0.561', '0.682', '0.422', '0.324', '0.457', '0.028', '0.13'], ['21.4', '9.398', '0.569', '0.691', '0.427', '0.561', '0.697', '0.033', '0.158'], ['22.71', '8.923', '0.601', '0.73', '0.452', '0.628', '0.801', '0.035', '...
column
['BLEU-1', 'BLEU-2', 'G', 'A', 'E', 'Dist-1', 'Dist-2', 'Dist-1', 'Dist-2']
['Ours-First', 'Ours-Disc', 'Ours-MBOW', 'Ours', 'Ours+GMP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Multi-BLEU || BLEU-1</th> <th>Multi-BLEU || BLEU-2</th> <th>EMBEDDING || G</th> <th>EMBEDDING || A</th> <th>EMBEDDING || E</th> <th>Intra-Dist || Dist-1</th> <th>Intra-Dist || Dist-2</t...
Table 1
table_1
P19-1372
6
acl2019
5.1 Comparison against Baselines. Table 1 shows our main experimental results, with baselines shown in the top and our models at the bottom. The results show that our model (Ours) outperforms competitive baselines on various evaluation metrics. The Seq2seq based models (S2S, S2S-DB and MMS) tend to generate fluent utte...
[2, 1, 1, 2]
['5.1 Comparison against Baselines.', 'Table 1 shows our main experimental results, with baselines shown in the top and our models at the bottom.', 'The results show that our model (Ours) outperforms competitive baselines on various evaluation metrics.', 'The Seq2seq based models (S2S, S2S-DB and MMS) tend to generate ...
[None, ['Ours-First', 'Ours-Disc', 'Ours-MBOW', 'Ours', 'Ours+GMP'], ['Ours-First', 'Ours-Disc', 'Ours-MBOW', 'Ours', 'Ours+GMP'], ['S2S', 'MMS', 'BLEU-2']]
1
P19-1374table_4
Conversation results on the Ubuntu test set. Our new model is substantially better than prior work. Significance is not measured as we are unaware of methods for set structured data.
2
[['System', 'Previous'], ['System', 'Linear'], ['System', 'Feedforward'], ['System', 'x10 union'], ['System', 'x10 vote'], ['System', 'x10 intersect'], ['System', 'Lowe (2017)'], ['System', 'Elsner (2008)']]
1
[['VI'], ['1-1'], ['P'], ['R'], ['F']]
[['66.1', '27.6', '0', '0', '0'], ['88.9', '69.5', '19.3', '24.9', '21.8'], ['91.3', '75.6', '34.6', '38', '36.2'], ['86.2', '62.5', '40.4', '28.5', '33.4'], ['91.5', '76', '36.3', '39.7', '38'], ['69.3', '26.6', '67', '21.1', '32.1'], ['80.6', '53.7', '10.8', '7.6', '8.9'], ['82.1', '51.4', '12.1', '21.5', '15.5']]
column
['VI', '1-1', 'P', 'R', 'F']
['x10 vote']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>VI</th> <th>1-1</th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>System || Previous</td> <td>66.1</td> <td>27.6</td> <td>0</td> <td>0</td> ...
Table 4
table_4
P19-1374
6
acl2019
Conversations: Table 4 presents results on the metrics defined in Section 4.3. There are three regions of performance. First, the baseline has consistently low scores since it forms a single conversation containing all messages. Second, Elsner and Charniak (2008) and Lowe et al. (2017) perform similarly, with one doing...
[1, 1, 1, 1, 1]
['Conversations: Table 4 presents results on the metrics defined in Section 4.3.', 'There are three regions of performance.', 'First, the baseline has consistently low scores since it forms a single conversation containing all messages.', 'Second, Elsner and Charniak (2008) and Lowe et al. (2017) perform similarly, wit...
[None, None, None, ['VI', '1-1', 'Elsner (2008)', 'Lowe (2017)'], ['x10 vote']]
1
P19-1374table_5
Performance with different training conditions on the Ubuntu test set. For Graph-F, * indicates a significant difference at the 0.01 level compared to Standard. Results are averages over 10 runs, varying the data and random seeds. The standard deviation is shown in parentheses.
2
[['Training Condition', 'Standard'], ['Training Condition', 'No context'], ['Training Condition', '1k random msg'], ['Training Condition', '2x 500 msg samples']]
1
[['Graph-F'], ['Conv-F']]
[['72.3 (0.4)', '36.2 (1.7)'], ['72.3 (0.2)', '37.6 (1.6)'], ['63.0* (0.4)', '21 (2.3)'], ['61.4* (1.8)', '20.4 (3.2)']]
column
['accuracy', 'accuracy']
['Training Condition']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Graph-F</th> <th>Conv-F</th> </tr> </thead> <tbody> <tr> <td>Training Condition || Standard</td> <td>72.3 (0.4)</td> <td>36.2 (1.7)</td> </tr> <tr> <td>Training Condition || No...
Table 5
table_5
P19-1374
6
acl2019
Dataset Variations: Table 5 shows results for the feedforward model with several modifications to the training set, designed to test corpus design decisions. Removing context does not substantially impact results. Decreasing the data size to match Elsner and Charniak (2008)’s training set leads to worse results, both i...
[1, 1, 1, 1]
['Dataset Variations: Table 5 shows results for the feedforward model with several modifications to the training set, designed to test corpus design decisions.', 'Removing context does not substantially impact results.', 'Decreasing the data size to match Elsner and Charniak (2008)’s training set leads to worse results...
[None, ['No context'], ['1k random msg', '2x 500 msg samples'], None]
1
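The per-sentence annotation fields appear to run in parallel: class_sentence[i] labels sentence i of the description, and header_mention[i] lists the table headers that sentence cites (None when it cites no header). A small sketch using the values from record P19-1374table_5 above; the sentence texts themselves are truncated in this dump, so only the two parallel lists are shown:

```python
# Parallel per-sentence annotations from record P19-1374table_5.
class_sentence = [1, 1, 1, 1]
header_mention = [None, ['No context'],
                  ['1k random msg', '2x 500 msg samples'], None]

# Pair each sentence label with its cited headers, normalizing None to [].
aligned = [
    (label, mentions or [])
    for label, mentions in zip(class_sentence, header_mention)
]
print(aligned)
```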
P19-1389table_2
Results (%) on 10,000 test query segments on the Classification-for-Modeling task.
2
[['Method', 'CNN-encoder (separated)'], ['Method', 'RNN-encoder (separated)'], ['Method', 'CNN-encoder (joint)'], ['Method', 'RNN-encoder (joint)']]
2
[['level-1 sentence functions', 'Accuracy'], ['level-1 sentence functions', 'Macro-F1'], ['level-1 sentence functions', 'Micro-F1'], ['level-2 sentence functions', 'Accuracy'], ['level-2 sentence functions', 'Macro-F1'], ['level-2 sentence functions', 'Micro-F1']]
[['97.5', '87.6', '97.5', '86.2', '52', '86.2'], ['97.6', '90.9', '97.6', '87.2', '65.8', '87.1'], ['97.4', '87.3', '97.3', '86.5', '51.8', '86.4'], ['97.6', '91.2', '97.5', '87.6', '64.2', '87.6']]
column
['Accuracy', 'Macro-F1', 'Micro-F1', 'Accuracy', 'Macro-F1', 'Micro-F1']
['RNN-encoder (joint)', 'RNN-encoder (separated)', 'CNN-encoder (separated)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>level-1 sentence functions || Accuracy</th> <th>level-1 sentence functions || Macro-F1</th> <th>level-1 sentence functions || Micro-F1</th> <th>level-2 sentence functions || Accuracy</th> <th>lev...
Table 2
table_2
P19-1389
6
acl2019
We randomly sample 10,000 query and response segments respectively from the STCSeFun dataset for testing. Results on test query is summarized in Table 2. As stated in Section 4.1, we train different models with query/response data only (denoted as separated), as well as query and response data jointly (denoted as joint...
[2, 1, 2, 1]
['We randomly sample 10,000 query and response segments respectively from the STCSeFun dataset for testing.', 'Results on test query is summarized in Table 2.', 'As stated in Section 4.1, we train different models with query/response data only (denoted as separated), as well as query and response data jointly (denoted ...
[None, None, None, ['RNN-encoder (joint)', 'RNN-encoder (separated)', 'CNN-encoder (separated)']]
1
P19-1389table_4
Results(%) on 5,000 test queries on the Classification-for-Testing task.
2
[['Method', 'CNN-encoder (without query SeFun)'], ['Method', 'RNN-encoder (without query SeFun)'], ['Method', 'CNN-encoder (with query SeFun)'], ['Method', 'RNN-encoder (with query SeFun)']]
2
[['level-1', 'Accuracy'], ['level-1', 'Macro-F1'], ['level-1', 'Micro-F1'], ['level-2', 'Accuracy'], ['level-2', 'Macro-F1'], ['level-2', 'Micro-F1']]
[['81.2', '15.1', '81.1', '55.7', '23.5', '55.7'], ['77.9', '30.3', '77.9', '65.6', '25.8', '65.5'], ['81.2', '17.4', '81.1', '65.6', '21.1', '65.6'], ['81.3', '28.5', '81.5', '65.5', '25.7', '65.7']]
column
['Accuracy', 'Macro-F1', 'Micro-F1', 'Accuracy', 'Macro-F1', 'Micro-F1']
['CNN-encoder (with query SeFun)', 'RNN-encoder (with query SeFun)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>level-1 || Accuracy</th> <th>level-1 || Macro-F1</th> <th>level-1 || Micro-F1</th> <th>level-2 || Accuracy</th> <th>level-2 || Macro-F1</th> <th>level-2 || Micro-F1</th> </tr> </thead> ...
Table 4
table_4
P19-1389
7
acl2019
We utilize classifiers for this task to estimate the proper response sentence function given the query with/without the query sentence functions. We also implement the RNN-based and CNN-based encoders for the query representation for comparison. Table 4 shows the results on 5,000 test queries by comparing the predicted...
[2, 1, 1, 1]
['We utilize classifiers for this task to estimate the proper response sentence function given the query with/without the query sentence functions.', 'We also implement the RNN-based and CNN-based encoders for the query representation for comparison.', 'Table 4 shows the results on 5,000 test queries by comparing the p...
[None, None, None, ['CNN-encoder (with query SeFun)', 'RNN-encoder (with query SeFun)']]
1
P19-1402table_2
Performance on Named Entity Recognition and Part-of-Speech Tagging tasks. All methods are evaluated on test data containing OOV words. Results demonstrate that the proposed approach, HiCE + Morph + MAML, improves the downstream model by learning better representations for OOV words.
2
[['Methods', 'Word2vec'], ['Methods', 'FastText'], ['Methods', 'Additive'], ['Methods', 'nonce2vec'], ['Methods', 'à la carte'], ['Methods', 'HiCE w/o Morph'], ['Methods', 'HiCE + Morph'], ['Methods', 'HiCE + Morph + MAML']]
3
[['Named Entity Recognition', 'F1-score', 'Rare-NER'], ['Named Entity Recognition', 'F1-score', 'Bio-NER'], ['POS Tagging', 'Acc', 'Twitter POS']]
[['0.1862', '0.7205', '0.7649'], ['0.1981', '0.7241', '0.8116'], ['0.2021', '0.7034', '0.7576'], ['0.2096', '0.7289', '0.7734'], ['0.2153', '0.7423', '0.7883'], ['0.2394', '0.7486', '0.8194'], ['0.2375', '0.7522', '0.8227'], ['0.2419', '0.7636', '0.8286']]
column
['F1-score', 'F1-score', 'Acc']
['HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Named Entity Recognition (F1-score) || Rare-NER</th> <th>Named Entity Recognition (F1-score) || Bio-NER</th> <th>POS Tagging (Acc) || Twitter POS</th> </tr> </thead> <tbody> <tr> <td>Methods |...
Table 2
table_2
P19-1402
7
acl2019
Results. Table 2 illustrates the results evaluated on the downstream tasks. HiCE outperforms the baselines in all the settings. Compared to the best baseline `a la carte, the relative improvements are 12.4%, 2.9% and 5.1% for Rare-NER, BioNER, and Twitter POS, respectively. As aforementioned, the ratio of OOV words in ...
[2, 1, 1, 1, 2, 1, 2, 2]
['Results.', 'Table 2 illustrates the results evaluated on the downstream tasks.', 'HiCE outperforms the baselines in all the settings.', 'Compared to the best baseline `a la carte, the relative improvements are 12.4%, 2.9% and 5.1% for Rare-NER, BioNER, and Twitter POS, respectively.', 'As aforementioned, the ratio of...
[None, None, ['HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML'], ['HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML', 'Rare-NER', 'Twitter POS'], None, ['Rare-NER', 'HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML'], None, ['HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML']]
1
P19-1403table_5
Performance gains of two neural temporality adaptation models when they are initialized by diachronic word embeddings as compared to initialization with standard non-diachronic word embeddings. Subword refers to our proposed diachronic word embedding in this paper (Section 3). We report absolute percentage increases in...
2
[['Data', 'Twitter'], ['Data', 'Economy'], ['Data', 'Yelp-rest'], ['Data', 'Yelp-hotel'], ['Data', 'Amazon'], ['Data', 'Dianping'], ['Data', 'Average'], ['Data', 'Median']]
2
[['RCNN', 'Incre'], ['RCNN', 'Linear'], ['RCNN', 'Procrustes'], ['RCNN', 'Subword'], ['NTAM', 'Incre'], ['NTAM', 'Linear'], ['NTAM', 'Procrutes'], ['NTAM', 'Subword']]
[['-0.7', '1.4', '-0.2', '-0.8', '1.4', '-0.3', '1.7', '3.5'], ['0.5', '0', '-0.7', '0.4', '-0.3', '-1', '-0.5', '0.3'], ['1.4', '0.1', '-1.9', '2.3', '1.9', '1.6', '1.4', '4.3'], ['-1.5', '-1.2', '-0.5', '-0.2', '-0.7', '-2', '-1.8', '0.8'], ['0.2', '0.2', '-2', '0.5', '-0.8', '-0.7', '-0.8', '2.1'], ['0.4', '1.6', '0...
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['RCNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RCNN || Incre</th> <th>RCNN || Linear</th> <th>RCNN || Procrustes</th> <th>RCNN || Subword</th> <th>NTAM || Incre</th> <th>NTAM || Linear</th> <th>NTAM || Procrutes</th> <th>NTAM |...
Table 5
table_5
P19-1403
9
acl2019
Table 5 shows the absolute percentage improvement in classification performance when using each diachronic embedding compared to a classifier without diachronic embeddings. Overall, diachronic embeddings improve classification models. The diachronic embedding appears to be particularly important for NTAM, improving per...
[1, 2, 2, 1, 1, 2]
['Table 5 shows the absolute percentage improvement in classification performance when using each diachronic embedding compared to a classifier without diachronic embeddings.', 'Overall, diachronic embeddings improve classification models.', 'The diachronic embedding appears to be particularly important for NTAM, impro...
[None, None, ['NTAM'], ['RCNN', 'Incre', 'Linear', 'Procrustes', 'Subword'], ['NTAM', 'Incre', 'Linear', 'Procrutes', 'Subword'], None]
1
P19-1407table_1
Performance for non-scratchpad models are taken from He et al. (2018) except Stanford NMT (Luong and Manning, 2015). ∗: model is 2 layers.
2
[['Model', 'MIXER'], ['Model', 'AC + LL'], ['Model', 'NPMT'], ['Model', 'Stanford NMT'], ['Model', 'Transformer (6 layer)'], ['Model', 'Layer-Coord (14 layer)'], ['Model', 'Scratchpad (3 layer)']]
2
[['IWSLT14', 'De-En'], ['IWSLT15', 'Es-En'], ['IWSLT15', 'En-Vi']]
[['21.83', '-', '-'], ['28.53', '-', '-'], ['29.96', '-', '28.07'], ['-', '-', '26.1'], ['32.86', '38.57', '-'], ['35.07', '40.5', '-'], ['35.08', '40.92', '29.59']]
column
['BLEU', 'BLEU', 'BLEU']
['Scratchpad (3 layer)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IWSLT14 || De-En</th> <th>IWSLT15 || Es-En</th> <th>IWSLT15 || En-Vi</th> </tr> </thead> <tbody> <tr> <td>Model || MIXER</td> <td>21.83</td> <td>-</td> <td>-</td> </tr> <t...
Table 1
table_1
P19-1407
3
acl2019
4.1 Translation . We evaluate on the IWLST 14 English to German and Spanish to English translation datasets (Cettolo et al., 2015) as well as the IWSLT 15 (Cettolo et al., 2015) English to Vietnamese translation dataset. For IWSLT14 (Cettolo et al., 2015), we compare to the models evaluated by He et al. (2018), which i...
[2, 2, 2, 2, 1, 1]
['4.1 Translation .', 'We evaluate on the IWLST 14 English to German and Spanish to English translation datasets (Cettolo et al., 2015) as well as the IWSLT 15 (Cettolo et al., 2015) English to Vietnamese translation dataset.', 'For IWSLT14 (Cettolo et al., 2015), we compare to the models evaluated by He et al. (2018),...
[None, ['IWSLT14', 'IWSLT15'], ['IWSLT14'], ['IWSLT15'], None, ['Scratchpad (3 layer)']]
1
P19-1408table_5
CoNLL-2012 shared task systems evaluations based on maximum spans, MINA spans, and head words. The rankings based on the CoNLL scores are included in parentheses for maximum and MINA spans. Rankings which are different based on maximum vs. MINA spans are highlighted.
1
[['fernandes'], ['martschat'], ['bjorkelund'], ['chang'], ['chen'], ['chunyuang'], ['shou'], ['yuan'], ['xu'], ['uryupina'], ['songyang']]
2
[['CoNLL', 'max'], ['CoNLL', 'MINA'], ['CoNLL', 'head'], ['LEA', 'max'], ['LEA', 'MINA'], ['LEA', 'head']]
[['60.6 (1)', ' 62.2 (1)', ' 63.9', ' 53.3', ' 55.1', ' 57.0'], ['57.7 (2)', ' 59.7 (2)', ' 61.0', ' 50.0', ' 52.4', ' 53.9'], ['57.4 (3)', ' 58.9 (3)', ' 60.7', ' 50.0', ' 51.6', ' 53.6'], ['56.1 (4)', ' 58.0 (4)', ' 59.6', ' 48.5', ' 50.7', ' 52.5'], ['54.5 (5)', ' 56.5 (5)', ' 58.2', ' 46.2', ' 48.6', ' 50.4'], ['54...
column
['CoNLL', 'CoNLL', 'CoNLL', 'LEA', 'LEA', 'LEA']
['MINA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CoNLL || max</th> <th>CoNLL || MINA</th> <th>CoNLL || head</th> <th>LEA || max</th> <th>LEA || MINA</th> <th>LEA || head</th> </tr> </thead> <tbody> <tr> <td>fernandes</td> ...
Table 5
table_5
P19-1408
11
acl2019
A Appendix . Table 5 shows CoNLL scores and the LEA F1 values of the participating systems in the CoNLL2012 shared task (closed task with predicted syntax and mentions) based on both maximum and minimum span evaluations. Minimum spans are detected using both MINA and Collins’ head finding rules using gold parse trees...
[2, 1, 2, 1, 1]
['A Appendix .', 'Table 5 shows CoNLL scores and the LEA F1 values of the participating systems in the CoNLL2012 shared task (closed task with predicted syntax and mentions) based on both maximum and minimum span evaluations.', 'Minimum spans are detected using both MINA and Collins’ head finding rules using gold par...
[None, ['CoNLL', 'LEA'], ['MINA'], ['MINA', 'max'], ['MINA', 'head']]
1
P19-1409table_3
Combined withinand cross-document event coreference results on the ECB+ test set.
3
[['Model', 'Baselines', 'CLUSTER+LEMMA'], ['Model', 'Baselines', 'CV (Cybulska and Vossen 2015a)'], ['Model', 'Baselines', 'KCP (Kenyon-Dean et al. 2018)'], ['Model', 'Baselines', 'CLUSTER+KCP'], ['Model', 'Model', 'Variants DISJOINT'], ['Model', 'Model', 'Variants JOINT']]
2
[['MUC', 'R'], ['MUC', 'P'], ['MUC', 'F1'], ['B 3', 'R'], ['B 3', 'P'], ['B 3', 'F1'], ['CEAF-e', 'R'], ['CEAF-e', 'P'], ['CEAF-e', 'F1'], ['CoNLL', 'F1']]
[['76.5', '79.9', '78.1', '71.7', '85', '77.8', '75.5', '71.7', '73.6', '76.5'], ['71', '75', '73', '71', '78', '74', '-', '-', '64', '73'], ['67', '71', '69', '71', '67', '69', '71', '67', '69', '69'], ['68.4', '79.3', '73.4', '67.2', '87.2', '75.9', '77.4', '66.4', '71.5', '73.6'], ['75.5', '83.6', '79.4', '75.4', '86', '80.4', '80.3', '71.9', '75.9', '78.5'], ['77.6', '84.5'
column
['R', 'P', 'F1', 'R', 'P', 'F1', 'R', 'P', 'F1', 'F1']
['Variants JOINT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MUC || R</th> <th>MUC || P</th> <th>MUC || F1</th> <th>B 3 || R</th> <th>B 3 || P</th> <th>B 3 || F1</th> <th>CEAF-e || R</th> <th>CEAF-e || P</th> <th>CEAF-e || F1</th> ...
Table 3
table_3
P19-1409
7
acl2019
Table 3 presents the results on event coreference. Our joint model outperforms all the baselines with a gap of 10.5 CoNLL F1 points from the last published results (KCP), while surpassing our strong lemma baseline by 3 points.
[1, 1]
['Table 3 presents the results on event coreference.', 'Our joint model outperforms all the baselines with a gap of 10.5 CoNLL F1 points from the last published results (KCP), while surpassing our strong lemma baseline by 3 points.']
[None, ['Variants JOINT', 'CoNLL', 'F1', 'KCP (Kenyon-Dean et al. 2018)', 'CLUSTER+KCP', 'Baselines']]
1
P19-1411table_2
System performance for the multi-class classification settings (i.e., F1 for 4-way and Accuracy for PDTB-Lin and PDTB-Ji as in the prior work). Our model is significantly better than the others (p < 0.05).
2
[['System', '(Lin et al. 2009)'], ['System', '(Ji and Eisenstein 2015b)'], ['System', '(Qin et al. 2016)'], ['System', '(Liu and Li 2016b)'], ['System', '(Qin et al. 2017)'], ['System', '(Lan et al. 2017)'], ['System', '(Dai and Huang 2018)'], ['System', '(Lei et al. 2018)'], ['System', '(Guo et al. 2018)'], ['System',...
2
[['4-way', 'F1'], ['PDTB-Lin', 'Accuracy'], ['PDTB-Ji', 'Accuracy']]
[['-', '40.2', '-0.1'], ['-', '-', '44.59'], ['-', '43.81', '45.04'], ['46.29', '-', '-'], ['-', '44.65', '46.23'], ['47.8', '-', '-'], ['51.84', '-', '-'], ['47.15', '-', '-'], ['47.59', '-', '-'], ['51.06', '45.73', '48.22'], ['53', '46.48', '49.95']]
column
['F1', 'Accuracy', 'Accuracy']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>4-way || F1</th> <th>PDTB-Lin || Accuracy</th> <th>PDTB-Ji || Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || (Lin et al. 2009)</td> <td>-</td> <td>40.2</td> <td>-0.1</t...
Table 2
table_2
P19-1411
5
acl2019
4.2 Comparing to the State of the Art . This section compares our proposed model with the current state-of-the-art models for IDRR. In particular, Table 2 shows the performance of the models for the multi-class classification settings (i.e., 4-way and 11-way with PDTB-Lin and PDTB-Ji) on the corresponding test sets. Th...
[2, 1, 1, 1, 2, 1, 1]
['4.2 Comparing to the State of the Art .', 'This section compares our proposed model with the current state-of-the-art models for IDRR.', 'In particular, Table 2 shows the performance of the models for the multi-class classification settings (i.e., 4-way and 11-way with PDTB-Lin and PDTB-Ji) on the corresponding test ...
[None, None, ['4-way', 'PDTB-Lin', 'PDTB-Ji'], ['This work', '(Bai and Zhao 2018)'], ['This work', '(Bai and Zhao 2018)'], ['This work'], ['This work']]
1
P19-1411table_3
System performance with different combinations of L1, L2 and L3 (i.e., F1 for 4-way and Accuracy for PDTB-Lin and PDTB-Ji as in prior work). “None”: not using any term.
2
[['System', 'L1 + L2 + L3'], ['System', 'L1 + L2'], ['System', 'L1 + L3'], ['System', 'L2 + L3'], ['System', 'L1'], ['System', 'L2'], ['System', 'L3'], ['System', 'None']]
2
[['4-way', 'F1'], ['PDTB-Lin', 'Accuracy'], ['PDTB-Ji', 'Accuracy']]
[['53', '46.48', '49.95'], ['52.18', '46.08', '49.28'], ['52.31', '45.3', '49.57'], ['52.57', '44.91', '49.86'], ['51.11', '46.21', '49.09'], ['50.38', '45.56', '47.83'], ['52.52', '45.69', '49.09'], ['51.62', '45.82', '48.6']]
column
['F1', 'Accuracy', 'Accuracy']
['L1 + L2 + L3']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>4-way || F1</th> <th>PDTB-Lin || Accuracy</th> <th>PDTB-Ji || Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || L1 + L2 + L3</td> <td>53</td> <td>46.48</td> <td>49.95</td>...
Table 3
table_3
P19-1411
5
acl2019
4.3 Ablation Study . The multi-task learning framework in this work involves three penalization terms (i.e., L1, L2 and L3 in Equations 2, 3 and 4). In order to illustrate the contribution of these terms, Table 3 presents the test set performance of the proposed model when different combinations of the terms are employ...
[0, 2, 1, 2, 1, 1, 1]
['4.3 Ablation Study .', 'The multi-task learning framework in this work involves three penalization terms (i.e., L1, L2 and L3 in Equations 2, 3 and 4).', 'In order to illustrate the contribution of these terms, Table 3 presents the test set performance of the proposed model when different combinations of the terms ar...
[None, ['L1', 'L2', 'L3'], None, ['L1', 'L2', 'L3'], ['L1 + L2 + L3'], ['L1', 'L2', 'L3', 'None'], ['L1 + L2 + L3']]
1
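The record above stores its table in flattened form: `row_headers`, `column_headers`, and `contents` are parallel lists that recombine into the rendered table. A minimal sketch of that recombination, using the P19-1411 Table 3 record's own values (the helper name `build_table` and the dict layout are our assumptions, not part of the dataset):

```python
# Field values copied verbatim from the P19-1411 Table 3 record above.
row_headers = [['System', 'L1 + L2 + L3'], ['System', 'L1 + L2'],
               ['System', 'L1 + L3'], ['System', 'L2 + L3'],
               ['System', 'L1'], ['System', 'L2'],
               ['System', 'L3'], ['System', 'None']]
column_headers = [['4-way', 'F1'], ['PDTB-Lin', 'Accuracy'],
                  ['PDTB-Ji', 'Accuracy']]
contents = [['53', '46.48', '49.95'], ['52.18', '46.08', '49.28'],
            ['52.31', '45.3', '49.57'], ['52.57', '44.91', '49.86'],
            ['51.11', '46.21', '49.09'], ['50.38', '45.56', '47.83'],
            ['52.52', '45.69', '49.09'], ['51.62', '45.82', '48.6']]

def build_table(row_headers, column_headers, contents):
    """Map 'row || labels' -> {'col || labels': cell}, mirroring the ' || '
    separator the table_html_clean field already uses for joined headers."""
    cols = [' || '.join(c) for c in column_headers]
    return {' || '.join(r): dict(zip(cols, row))
            for r, row in zip(row_headers, contents)}

table = build_table(row_headers, column_headers, contents)
print(table['System || L1 + L2 + L3']['4-way || F1'])  # -> 53
```

The `' || '` join matches the header spelling used in the record's HTML field, so cells looked up this way agree with `table_html_clean`.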
P19-1412table_3
Performance and number of items per feature. The scores in bold indicate the classes on which each model has the best performance (with respect to both metrics). † marks statistical significance of Pearson’s correlation (p < 0.05).
3
[['Feature', 'Embedding', 'Cond.'], ['Feature', 'Embedding', 'Modal'], ['Feature', 'Embedding', 'Negation'], ['Feature', 'Embedding', 'Question']]
2
[['r', 'Rule'], ['r', 'Hybr.'], ['MAE', 'Rule'], ['MAE', 'Hybr.']]
[['', '0.02', '2.08', '1.50'], ['-0.01', '0.21', '1.37', '1.08'], ['0.45', '0.22', '2.26', '2.40'], ['-0.22', '0.29', '2.35', '1.25']]
column
['r', 'r', 'MAE', 'MAE']
['Embedding']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>r || Rule</th> <th>r || Hybr.</th> <th>MAE || Rule</th> <th>MAE || Hybr.</th> </tr> </thead> <tbody> <tr> <td>Feature || Embedding || Cond.</td> <td></td> <td>0.02</td> <td...
Table 3
table_3
P19-1412
4
acl2019
Focusing on the restricted set, we perform detailed error analysis of the outputs of the rule-based and hybrid biLSTM models, which achieved the best correlation. Table 3 shows performance for the following linguistic features. The rule-based model can only capture inferences involving negation (r = 0.45), while the hy...
[2, 1, 1, 1]
['Focusing on the restricted set, we perform detailed error analysis of the outputs of the rule-based and hybrid biLSTM models, which achieved the best correlation.', 'Table 3 shows performance for the following linguistic features.', 'The rule-based model can only capture inferences involving negation (r = 0.45), whil...
[None, None, ['r', 'Rule', 'Negation', 'Modal', 'Question'], ['r', 'Rule', 'Cond.']]
1
P19-1414table_3
Why-QA performances
1
[['Oh et al.(2013)'], ['Sharp et al.(2016)'], ['Tan et al.(2016)'], ['Oh et al.(2017)'], ['BASE'], ['BASE+AddTr'], ['BASE+CAns'], ['BASE+CEnc'], ['BASE+Enc'], ['BERT'], ['BERT+AddTr'], ['BERT+FOP'], ['BERT+FRV'], ['Ours (OP)'], ['Ours (RP)'], ['Ours (RV)'], ['Oracle']]
1
[['P@1'], ['MAP']]
[['41.8', '41.0'], ['33.2', '32.2'], ['34.0', '33.4'], ['47.6', '45.0'], ['51.4', '50.4'], ['52.0', '49.3'], ['51.8', '50.3'], ['52.4', '51.5'], ['52.2', '50.6'], ['51.2', '50.8'], ['51.8', '51.0'], ['53.4', '51.2'], ['53.2', '50.9'], ['54.8', '52.4'], ['53.4', '51.5'], ['54.6', '51.8'], ['60.4', '60.4']]
column
['P@1', 'map']
['Ours (OP)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P@1</th> <th>MAP</th> </tr> </thead> <tbody> <tr> <td>Oh et al.(2013)</td> <td>41.8</td> <td>41.0</td> </tr> <tr> <td>Sharp et al.(2016)</td> <td>33.2</td> <td>32.2</...
Table 3
table_3
P19-1414
7
acl2019
4.4 Results . Table 3 shows the performances of all the methods in the Precision of the top answer (P@1) and the Mean Average Precision (MAP) (Oh et al., 2013). Note that the Oracle method indicates the performance of a fictional method that ranks the answer passages perfectly, i.e., it locates all the m correct answer...
[2, 1, 2, 2, 1, 1, 1, 1, 2, 1, 2]
['4.4 Results .', 'Table 3 shows the performances of all the methods in the Precision of the top answer (P@1) and the Mean Average Precision (MAP) (Oh et al., 2013).', 'Note that the Oracle method indicates the performance of a fictional method that ranks the answer passages perfectly, i.e., it locates all the m correc...
[None, ['P@1', 'MAP'], ['Oracle'], None, ['Ours (OP)'], ['BASE'], ['Ours (OP)', 'BASE', 'BASE+AddTr', 'P@1'], ['Ours (OP)', 'BASE+CAns', 'BASE+CEnc', 'BASE+Enc'], None, ['Ours (OP)', 'BERT'], None]
1
P19-1415table_2
Experimental results of applying data augmentation to reading comprehension models on the SQuAD 2.0 dataset. “(cid:52)” indicates absolute improvement.
1
[['BNA'], ['BNA + UNANSQ'], ['DocQA'], ['DocQA + UNANSQ'], ['BERTBase'], ['BERTBase + UNANSQ'], ['BERT Large'], ['BERT Large+ UNANSQ']]
1
[['EM'], ['F1']]
[['59.7', '62.7'], ['61.0', '63.5'], ['61.9', '64.5'], ['62.4', '65.3'], ['74.3', '77.4'], ['76.4', '79.3'], ['78.2', '81.3'], ['80.0', '83.0']]
column
['EM', 'F1']
['BERTBase + UNANSQ', 'BERT Large+ UNANSQ']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>BNA</td> <td>59.7</td> <td>62.7</td> </tr> <tr> <td>BNA + UNANSQ</td> <td>61.0</td> <td>63.5</td> </tr> <tr>...
Table 2
table_2
P19-1415
6
acl2019
Table 2 shows the exact match and F1 scores of multiple reading comprehension models with and without data augmentation. We can see that the generated unanswerable questions can improve both specifically designed reading comprehension models and strong BERT fine-tuning models, yielding 1.9 absolute F1 improvement with ...
[1, 1, 2]
['Table 2 shows the exact match and F1 scores of multiple reading comprehension models with and without data augmentation.', 'We can see that the generated unanswerable questions can improve both specifically designed reading comprehension models and strong BERT fine-tuning models, yielding 1.9 absolute F1 improvement ...
[['EM', 'F1'], ['F1', 'BERTBase + UNANSQ', 'BERT Large+ UNANSQ'], None]
1
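Each record also aligns two parallel per-sentence fields: a list of class labels and a list of the table headers each sentence mentions (`None` when a sentence mentions no header). A small sketch of that alignment, using the values from the P19-1415 Table 2 record above (the variable names and the filtering step are ours):

```python
# Per-sentence fields copied from the P19-1415 Table 2 record above.
sentence_classes = [1, 1, 2]
header_mentions = [['EM', 'F1'],
                   ['F1', 'BERTBase + UNANSQ', 'BERT Large+ UNANSQ'],
                   None]

# Keep only sentences that explicitly mention table headers.
pairs = [(cls, mentions)
         for cls, mentions in zip(sentence_classes, header_mentions)
         if mentions is not None]
print(pairs)
```

Here the third sentence drops out because its mention field is `None`, leaving the two sentences that are grounded in specific table headers.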
P19-1415table_3
Human evaluation results. Unanswerability (UNANS): 1 for unanswerable, 0 otherwise. Relatedness (RELA): 3 for relevant to both answerable question and paragraph, 2 for relevant to only one, 1 for irrelevant. Readability (READ): 3 for fluent, 2 for minor grammatical errors, 1 for incomprehensible.
1
[['TFIDF'], ['SEQ2SEQ'], ['PAIR2SEQ'], ['Human']]
1
[['UNANS'], ['RELA'], ['READ']]
[['0.96', ' 1.52', ' 2.98'], ['0.62', '2.88', '2.39'], ['0.65', '2.95', '2.61'], ['0.95', '2.96', '3']]
column
['UNANS', 'RELA', 'READ']
['PAIR2SEQ']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UNANS</th> <th>RELA</th> <th>READ</th> </tr> </thead> <tbody> <tr> <td>TFIDF</td> <td>0.96</td> <td>1.52</td> <td>2.98</td> </tr> <tr> <td>SEQ2SEQ</td> <td>0.62<...
Table 3
table_3
P19-1415
6
acl2019
Table 3 shows the human evaluation results of generated unanswerable questions. We compare with the baseline method TFIDF, which uses the input answerable question to retrieve similar questions towards other articles as outputs. The retrieved questions are mostly unanswerable and readable, but they are not quite releva...
[1, 1, 1, 1, 1, 2]
['Table 3 shows the human evaluation results of generated unanswerable questions.', 'We compare with the baseline method TFIDF, which uses the input answerable question to retrieve similar questions towards other articles as outputs.', 'The retrieved questions are mostly unanswerable and readable, but they are not quit...
[None, ['TFIDF'], ['UNANS', ' READ'], [' RELA'], ['PAIR2SEQ', 'SEQ2SEQ'], None]
1
P19-1425table_2
Comparison with baseline methods trained on different backbone models (second column). * indicates the method trained using an extra corpus.
3
[['Method', 'Vaswani et al. (2017)', 'Trans.-Base'], ['Method', 'Miyato et al. (2017)', 'Trans.-Base'], ['Method', 'Sennrich et al. (2016a)', 'Trans.-Base'], ['Method', 'Wang et al. (2018)', 'Trans.-Base'], ['Method', 'Cheng et al. (2018)', 'RNMT lex.'], ['Method', 'Cheng et al. (2018)', 'RNMT feat.'], ['Method', 'Chen...
1
[['MT06'], ['MT02'], ['MT03'], ['MT04'], ['MT05'], ['MT08']]
[['44.59', '44.82', '43.68', '45.6', '44.57', '35.07'], ['45.11', '45.95', '44.68', '45.99', '45.32', '35.84'], ['44.96', '46.03', '44.81', '46.01', '45.69', '35.32'], ['45.47', '46.31', '45.3', '46.45', '45.62', '35.66'], ['43.57', '44.82', '42.95', '45.05', '43.45', '34.85'], ['44.44', '46.1', '44.07', '45.61', '44.0...
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['Ours', 'Ours + BackTranslation*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06</th> <th>MT02</th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT08</th> </tr> </thead> <tbody> <tr> <td>Method || Vaswani et al. (2017) || Trans.-Base</td> <td>44.5...
Table 2
table_2
P19-1425
6
acl2019
Table 2 shows the comparisons to the above five baseline methods. Among all methods trained without extra corpora, our approach achieves the best result across datasets. After incorporating the back-translated corpus, our method yields an additional gain of 1-3 points over (Sennrich et al., 2016b) trained on the same b...
[1, 1, 1, 2, 2]
['Table 2 shows the comparisons to the above five baseline methods.', 'Among all methods trained without extra corpora, our approach achieves the best result across datasets.', 'After incorporating the back-translated corpus, our method yields an additional gain of 1-3 points over (Sennrich et al., 2016b) trained on th...
[None, ['Ours', 'Trans.-Base'], ['Ours + BackTranslation*', 'Trans.-Base'], None, None]
1
P19-1425table_3
Results on NIST Chinese-English translation.
3
[['Method', 'Vaswani et al. (2017)', 'Trans.-Base'], ['Method', 'Ours', ' Trans.-Base']]
1
[['MT06'], ['MT02'], ['MT03'], ['MT04'], ['MT05'], ['MT08']]
[['44.59', '44.82', '43.68', '45.60', '44.57', '35.07'], ['46.95', '47.06', '46.48', '47.39', '46.58', '37.38']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06</th> <th>MT02</th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT08</th> </tr> </thead> <tbody> <tr> <td>Method || Vaswani et al. (2017) || Trans.-Base</td> <td>44.5...
Table 3
table_3
P19-1425
6
acl2019
Table 3 shows the BLEU scores on the NIST Chinese-English translation task. We first compare our approach with the Transformer model (Vaswani et al., 2017) on which our model is built. As we see, the introduction of our method to the standard backbone model (Trans.-Base) leads to substantial improvements across the val...
[1, 1, 1, 1]
['Table 3 shows the BLEU scores on the NIST Chinese-English translation task.', 'We first compare our approach with the Transformer model (Vaswani et al., 2017) on which our model is built.', 'As we see, the introduction of our method to the standard backbone model (Trans.-Base) leads to substantial improvements across...
[None, ['Ours', 'Vaswani et al. (2017)'], ['Ours'], ['Ours']]
1
P19-1429table_2
Experiment results on ACE 2005. For a fair comparison, the results of baselines are adapted from their original papers.
2
[['Feature based Approaches', 'MaxEnt'], ['Feature based Approaches', 'Combined-PSL'], ['Representation Learning based Approaches', 'DMCNN'], ['Representation Learning based Approaches', 'Bi-RNN'], ['Representation Learning based Approaches', 'NC-CNN'], ['External Resource based Approaches', 'SA-ANN-Arg (+Arguments)'],...
1
[['P'], [' R'], [' F1']]
[['74.5', '59.1', '65.9'], ['75.3', '64.4', '69.4'], ['75.6', '63.6', '69.1'], ['66', '73', '69.3'], [' -', ' -', '71.3'], ['78', '66.3', '71.7'], ['78.9', '66.9', '72.4'], ['77.9', '68.8', '73.1'], ['77.9', '69.1', '73.3'], ['75.6', '62.3', '68.3'], ['71.8', '70.8', '71.3'], ['73.7', '71.9', '72.8'], ['74', '70.5', '7...
column
['P', 'R', 'F1']
['Our Approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Feature based Approaches || MaxEnt</td> <td>74.5</td> <td>59.1</td> <td>65.9</td> </tr> <tr> <td>Feature...
Table 2
table_2
P19-1429
7
acl2019
4.2 Overall Performance . Table 2 shows the overall ACE2005 results of all baselines and our approach. For our approach, we show the results of four settings: our approach using word embedding as its word representation rw – ∆w2v; our approach using ELMo as rw ∆ELM o; our approach simply concatenating [rd, rg, rw] as i...
[2, 1, 1, 1, 1, 1, 1]
['4.2 Overall Performance .', 'Table 2 shows the overall ACE2005 results of all baselines and our approach.', 'For our approach, we show the results of four settings: our approach using word embedding as its word representation rw – ∆w2v; our approach using ELMo as rw ∆ELM o; our approach simply concatenating [rd, rg, ...
[None, None, None, ['Our Approach'], ['Feature based Approaches', '∆w2v', '∆ELM o', ' F1'], ['Representation Learning based Approaches', '∆w2v', '∆ELM o'], ['∆ELM o', 'External Resource based Approaches']]
1
P19-1435table_1
The accuracy scores of predicting the label with unlexicalized features, leakage features, and advanced graph-based features and the relative improvements. Result with ∗ is from Bowman et al. (2015). Results with † are from Williams et al. (2018). Result with ‡ is from Wang et al. (2017). Result with (cid:5) is from Sh...
2
[['Method', 'Majority'], ['Method', 'Unlexicalized'], ['Method', 'LSTM'], ['Method', 'Leakage'], ['Method', 'Advanced'], ['Method', 'Leakage vs Majority'], ['Method', 'Advanced vs Majority']]
1
[[' SNLI'], [' MultiNLI Matched'], ['MultiNLI Mismatched'], [' QuoraQP'], [' MSRP'], ['SICK NLI'], ['SICK STS'], [' ByteDance']]
[['33.7', '35.6', '36.5', '50', '66.5', '56.7', '50.3', '68.59'], ['47.7', '44.9', '45.5', '68.2', '73.9', '70.1', '70.2', '75.23'], ['77.6', '66.9', '66.9', '82.58', '70.6', '71.3', '70.2', '86.45'], ['36.6', '32.1', '31.1', '79.63', '66.7', '56.7', '55.5', '78.24'], ['39.1', '32.7', '33.8', '80.47', '67.9', '57.5', '...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Leakage', 'Advanced', 'Majority']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SNLI</th> <th>MultiNLI Matched</th> <th>MultiNLI Mismatched</th> <th>QuoraQP</th> <th>MSRP</th> <th>SICK NLI</th> <th>SICK STS</th> <th>ByteDance</th> </tr> </thead> <tbody> ...
Table 1
table_1
P19-1435
3
acl2019
Predicting semantic relationships without using sentence contents seems impossible. However, we find that the graph-based features (Leakage and Advanced) make the problem feasible on a wide range of datasets. Specifically, on the datasets like QuoraQP and ByteDance, the leakage features are even more effective than the...
[2, 1, 1, 1, 1, 1, 2, 1]
['Predicting semantic relationships without using sentence contents seems impossible.', 'However, we find that the graph-based features (Leakage and Advanced) make the problem feasible on a wide range of datasets.', 'Specifically, on the datasets like QuoraQP and ByteDance, the leakage features are even more effective ...
[None, ['Leakage', 'Advanced'], [' QuoraQP', ' ByteDance', 'Leakage', 'Unlexicalized'], [' MultiNLI Matched', 'MultiNLI Mismatched', 'Majority', 'Leakage', 'Advanced'], [' SNLI', ' ByteDance', ' QuoraQP', 'Advanced', 'Leakage'], [' MSRP', 'SICK NLI', 'Leakage'], None, None]
1
P19-1435table_4
Evaluation Results with the synthetic dataset, MSRP and SICKSTS dataset. We report the accuracy scores and “%” is omitted.
2
[['Method', 'Biased Model'], ['Method', 'Debiased Model']]
1
[['Synthetic'], ['MSRP'], ['SICK STS']]
[['89.46', '51.94', '64.95'], ['92.62', '56.77', '66.05']]
column
['accuracy', 'accuracy', 'accuracy']
['Debiased Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Synthetic</th> <th>MSRP</th> <th>SICK STS</th> </tr> </thead> <tbody> <tr> <td>Method || Biased Model</td> <td>89.46</td> <td>51.94</td> <td>64.95</td> </tr> <tr> <td...
Table 4
table_4
P19-1435
7
acl2019
Table 4 reports the results on the datasets that are not biased to the leakage pattern of QuoraQP. We find that the Debiased Model significantly outperforms the Biased Model on all three datasets. This indicates that the Debiased Model better captures the true semantic similarities of the input sentences. We further vi...
[1, 1, 1, 2, 2, 1]
['Table 4 reports the results on the datasets that are not biased to the leakage pattern of QuoraQP.', 'We find that the Debiased Model significantly outperforms the Biased Model on all three datasets.', 'This indicates that the Debiased Model better captures the true semantic similarities of the input sentences.', 'We...
[None, ['Debiased Model', 'Biased Model'], ['Debiased Model'], None, None, ['Debiased Model', 'Synthetic', 'MSRP', 'SICK STS']]
1
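The claim in the record above ("the Debiased Model significantly outperforms the Biased Model on all three datasets") can be checked directly against the `contents` field. A short sketch computing the absolute accuracy gains (variable names are ours; scores are the record's):

```python
# Accuracy scores copied from the P19-1435 Table 4 record above.
datasets = ['Synthetic', 'MSRP', 'SICK STS']
biased = [89.46, 51.94, 64.95]
debiased = [92.62, 56.77, 66.05]

# Absolute improvement of the Debiased Model per dataset.
gains = {d: round(db - b, 2)
         for d, b, db in zip(datasets, biased, debiased)}
print(gains)
```

All three gains are positive, which is consistent with the record's description sentence.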
P19-1436table_1
Results on SQuAD v1.1. ‘W/s’ indicates number of words the model can process (read) per second on a CPU in a batch mode (multiple queries at a time). DrQA (Chen et al., 2017) and BERT (Devlin et al., 2019) are from SQuAD leaderboard, and LSTM+SA and LSTM+SA+ELMo are query-agnostic baselines from Seo et al. (2018).
3
[['Original', 'Model', 'DrQA'], ['Original', 'Model', 'BERT-Large'], ['Query-Agnostic', 'Model', 'LSTM+SA'], ['Query-Agnostic', 'Model', 'LSTM+SA+ELMo'], ['Query-Agnostic', 'Model', 'DENSPI (dense only)'], ['Query-Agnostic', 'Model', '+ Linear layer'], ['Query-Agnostic', 'Model', '+ Indep. encoders'], ['Query-Agnostic'...
1
[['EM'], ['F1'], ['W/s']]
[['69.5', '78.8', '4.8K'], ['84.1', '90.9', '51'], ['49', '59.8', '-'], ['52.7', '62.7', '-'], ['73.6', '81.7', '28.7M'], ['66.9', '76.4', '-'], ['65.4', '75.1', '-'], ['71.5', '81.5', '-']]
column
['EM', 'F1', 'W/s']
['DENSPI (dense only)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> <th>W/s</th> </tr> </thead> <tbody> <tr> <td>Original || Model || DrQA</td> <td>69.5</td> <td>78.8</td> <td>4.8K</td> </tr> <tr> <td>Original || M...
Table 1
table_1
P19-1436
7
acl2019
Results . Table 1 compares the performance of our system with different baselines in terms of efficiency and accuracy. We note the following observations from the result table. (1) DENSPI outperforms the query-agnostic baseline (Seo et al., 2018) by a large margin, 20.1% EM and 18.5% F1. This is largely credited toward...
[2, 1, 1, 1, 2, 1, 2, 1, 2, 2, 2]
['Results .', 'Table 1 compares the performance of our system with different baselines in terms of efficiency and accuracy.', 'We note the following observations from the result table.', '(1) DENSPI outperforms the query-agnostic baseline (Seo et al., 2018) by a large margin, 20.1% EM and 18.5% F1.', 'This is largely c...
[None, None, None, ['DENSPI (dense only)', 'LSTM+SA+ELMo'], None, ['DENSPI (dense only)', 'DrQA'], None, ['DENSPI (dense only)', 'BERT-Large'], None, ['Query-Agnostic'], ['DENSPI (dense only)', 'DrQA']]
1
P19-1441table_2
GLUE test set results scored using the GLUE evaluation server. The number below each task denotes the number of training examples. The state-of-the-art results are in bold, and the results on par with or pass human performance are in bold. MT-DNN uses BERTLARGE to initialize its shared layers. All the results are obtai...
2
[['Model', 'BiLSTM+ELMo+Attn 1'], ['Model', 'Singletask Pretrain Transformer 2'], ['Model', 'GPT on STILTs 3'], ['Model', 'BERT LARGE 4'], ['Model', 'MT-DNNno-fine-tune'], ['Model', 'MT-DNN'], ['Model', 'Human Performance']]
2
[[' CoLA', '8.5k'], [' SST-2', '67k'], [' MRPC', '3.7k'], ['STS-B', '7k'], ['QQP', '364k'], [' MNLI-m/mm', '393k'], [' QNLI', '108k'], [' RTE', '2.5k'], [' WNLI', '634'], [' AX', '-'], [' Score', '-']]
[['36', '90.4', ' 84.9/77.9', ' 75.1/73.3', ' 64.8/84.7', ' 76.4/76.1', ' -', '56.8', '65.1', '26.5', '70.5'], ['45.4', '91.3', ' 82.3/75.7', ' 82.0/80.0', ' 70.3/88.5', ' 82.1/81.4', ' -', '56', '53.4', '29.8', '72.8'], ['47.2', '93.1', ' 87.7/83.7', ' 85.3/84.8', ' 70.1/88.1', ' 80.8/80.6', ' -', '69.1', '65.1', '29....
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['MT-DNNno-fine-tune', 'MT-DNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CoLA || 8.5k</th> <th>SST-2 || 67k</th> <th>MRPC || 3.7k</th> <th>STS-B || 7k</th> <th>QQP || 364k</th> <th>MNLI-m/mm || 393k</th> <th>QNLI || 108k</th> <th>RTE || 2.5k</th> <...
Table 2
table_2
P19-1441
6
acl2019
MT-DNNno-fine-tune. Since the MTL of MT-DNN uses all GLUE tasks, it is possible to directly apply MT-DNN to each GLUE task without finetuning. The results in Table 2 show that MT-DNNno-fine-tune still outperforms BERTLARGE consistently among all tasks but CoLA. Our analysis shows that CoLA is a challenge task with much ...
[1, 1, 1, 1, 1, 1, 1, 1, 0]
['MT-DNNno-fine-tune.', 'Since the MTL of MT-DNN uses all GLUE tasks, it is possible to directly apply MT-DNN to each GLUE task without finetuning.', 'The results in Table 2 show that MTDNNno-fine-tune still outperforms BERTLARGE consistently among all tasks but CoLA.', 'Our analysis shows that CoLA is a challenge task...
[['MT-DNNno-fine-tune'], ['MT-DNN'], ['MT-DNNno-fine-tune', 'BERT LARGE 4', ' CoLA'], [' CoLA'], [' CoLA', 'MT-DNN'], None, ['MT-DNNno-fine-tune', 'MT-DNN'], ['MT-DNN', 'BERT LARGE 4', ' CoLA'], None]
1
P19-1443table_8
Performance stratified by question difficulty on the development set. The performances of the two models decrease as questions are more difficult.
2
[['Goal Difficulty', 'Easy (483)'], ['Goal Difficulty', 'Medium (441)'], ['Goal Difficulty', 'Hard (145)'], ['Goal Difficulty', 'Extra hard (134)']]
1
[[' CD-Seq2Seq'], [' SyntaxSQL-con']]
[['35.1', '38.9'], ['7', '7.3'], ['2.8', '1.4'], ['0.8', '0.7']]
column
['accuracy', 'accuracy']
['Goal Difficulty']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CD-Seq2Seq</th> <th>SyntaxSQL-con</th> </tr> </thead> <tbody> <tr> <td>Goal Difficulty || Easy (483)</td> <td>35.1</td> <td>38.9</td> </tr> <tr> <td>Goal Difficulty || Medium (...
Table 8
table_8
P19-1443
9
acl2019
Performance stratified by SQL difficulty. We group individual questions in SParC into different difficulty levels based on the complexity of their corresponding SQL representations using the criteria proposed in Yu et al. (2018c). As shown in Figure 3, the questions tend to get harder as interaction proceeds, more ques...
[2, 2, 1, 1, 1, 2]
['Performance stratified by SQL difficulty. We group individual questions in SParC into different difficulty levels based on the complexity of their corresponding SQL representations using the criteria proposed in Yu et al. (2018c).', 'As shown in Figure 3, the questions tend to get harder as interaction proceeds, more...
[None, None, [' CD-Seq2Seq', ' SyntaxSQL-con', 'Goal Difficulty'], ['Easy (483)'], ['Hard (145)', 'Extra hard (134)'], None]
1
P19-1446table_4
SEMBLEU and SMATCH scores for several recent models. † indicates previously reported result.
4
[['Data', 'LDC2015E86', ' Model', ' Lyu'], ['Data', 'LDC2015E86', ' Model', ' Guo'], ['Data', 'LDC2015E86', ' Model', ' Gros'], ['Data', 'LDC2015E86', ' Model', ' JAMR'], ['Data', 'LDC2015E86', ' Model', ' CAMR'], ['Data', 'LDC2016E25', ' Model', ' Lyu'], ['Data', 'LDC2016E25', ' Model', ' van Nood'], ['Data', 'LDC2017...
1
[[' SEMBLEU'], [' SMATCH']]
[['52.7', ' 73.7†'], ['50.1', ' 68.7†'], ['50', ' 70.2†'], ['46.8', '67'], ['37.2', '62'], ['54.3', ' 74.4†'], ['49.2', ' 71.0†'], ['52', ' 69.8†'], ['50.7', ' 71.0†'], ['47', '66'], ['36.6', '61']]
column
['SEMBLEU', 'SMATCH']
[' SEMBLEU']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SEMBLEU</th> <th>SMATCH</th> </tr> </thead> <tbody> <tr> <td>Data || LDC2015E86 || Model || Lyu</td> <td>52.7</td> <td>73.7†</td> </tr> <tr> <td>Data || LDC2015E86 || Model ...
Table 4
table_4
P19-1446
5
acl2019
3.4 Evaluating with SEMBLEU . Table 4 shows the SEMBLEU and SMATCH scores of several recent models. In particular, we asked for the outputs of Lyu (Lyu and Titov, 2018), Gros (Groschwitz et al., 2018), van Nood (van Noord and Bos, 2017) and Guo (Guo and Lu, 2018) to evaluate on our SEMBLEU. For CAMR and JAMR, we obtain th...
[2, 1, 1, 1, 1, 2]
['3.4 Evaluating with SEMBLEU .', 'Table 4 shows the SEMBLEU and SMATCH scores of several recent models.', 'In particular, we asked for the outputs of Lyu (Lyu and Titov, 2018), Gros (Groschwitz et al., 2018), van Nood (van Noord and Bos, 2017) and Guo (Guo and Lu, 2018) to evaluate on our SEMBLEU.', 'For CAMR and JAMR, w...
[[' SEMBLEU'], [' SEMBLEU', ' SMATCH'], [' Gros', ' Guo', ' SEMBLEU'], [' CAMR', ' JAMR'], [' SEMBLEU', ' SMATCH', ' Guo'], [' Guo']]
1
P19-1453table_1
Comparison between training on 1 million examples from a backtranslated English-English corpus (En-En) and the original bitext corpus (En-Cs) sampling 1 million and 2 million sentence pairs (the latter equalizes the amount of English text with the En-En setting). Performance is the average Pearson’s r over the 2012-201...
2
[['Model', 'LSTM-SP (20k)'], ['Model', 'SP (20k)'], ['Model', 'WORD'], ['Model', 'TRIGRAM']]
1
[['En-En'], ['En-Cs (1M)'], ['En-Cs (2M)']]
[['66.7', '65.7', '66.6'], ['68.3', '68.6', '70'], ['66', '63.8', '65.9'], ['69.2', '68.6', '69.9']]
column
['r', 'r', 'r']
['En-Cs (1M)', 'En-Cs (2M)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>En-En</th> <th>En-Cs (1M)</th> <th>En-Cs (2M)</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM-SP (20k)</td> <td>66.7</td> <td>65.7</td> <td>66.6</td> </tr> <tr> <t...
Table 1
table_1
P19-1453
3
acl2019
Results in Table 1 show two observations. First, models trained on En-En, in contrast to those trained on En-Cs, have higher correlation for all encoders except SP. However, when the same number of English sentences is used, models trained on bitext have greater than or equal performance across all encoders. Second, SP...
[1, 1, 1, 1, 2, 2]
['Results in Table 1 show two observations.', 'First, models trained on En-En, in contrast to those trained on En-CS, have higher correlation for all encoders except SP.', 'However, when the same number of English sentences is used, models trained on bitext have greater than or equal performance across all encoders.', ...
[None, ['En-En', 'En-Cs (1M)', 'En-Cs (2M)', 'Model', 'SP (20k)'], ['En-Cs (2M)', 'En-En'], ['LSTM-SP (20k)', 'SP (20k)', 'En-Cs (1M)', 'En-Cs (2M)'], ['LSTM-SP (20k)', 'SP (20k)', 'TRIGRAM'], None]
1
P19-1457table_2
Experimental results with constituent Tree-LSTMs.
2
[['Model', 'ConTree (Le and Zuidema, 2015)'], ['Model', 'ConTree (Tai et al., 2015)'], ['Model', 'ConTree (Zhu et al., 2015)'], ['Model', 'ConTree (Li et al., 2015)'], ['Model', 'ConTree (Our implementation)'], ['Model', 'ConTree + WG'], ['Model', 'ConTree + LVG4'], ['Model', 'ConTree + LVeG']]
1
[[' SST-5 Root'], [' SST-5 Phrase'], [' SST-2 Root'], [' SST-2 Phrase']]
[[' 49.9', ' -', ' 88.0', ' -'], [' 51.0', ' -', ' 88.0', ' -'], [' 50.1', ' -', ' -', ' -'], [' 50.4', ' 83.4', ' 86.7', ' -'], [' 51.5', ' 82.8', ' 89.4', ' 86.9'], [' 51.7', ' 83.0', ' 89.7', ' 88.9'], [' 52.2', ' 83.2', ' 89.8', ' 89.1'], [' 52.9', ' 83.4', ' 89.8', ' 89.5']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['ConTree (Our implementation)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-5 Root</th> <th>SST-5 Phrase</th> <th>SST-2 Root</th> <th>SST-2 Phrase</th> </tr> </thead> <tbody> <tr> <td>Model || ConTree (Le and Zuidema, 2015)</td> <td>49.9</td> <td>-<...
Table 2
table_2
P19-1457
7
acl2019
We re-implement constituent Tree-LSTM (ConTree) of Tai et al. (2015) and obtain better results than their original implementation. We then integrate ConTree with Weighted Grammars (ConTree+WG), Latent Variable Grammars with a subtype number of 4 (ConTree+LVG4), and Latent Variable Grammars (ConTree+LVeG), respectively....
[1, 2, 1]
['We re-implement constituent Tree-LSTM (ConTree) of Tai et al. (2015) and obtain better results than their original implementation.', 'We then integrate ConTree with Weighted Grammars (ConTree+WG), Latent Variable Grammars with a subtype number of 4 (ConTree+LVG4), and Latent Variable Grammars (ConTree+LVeG), respecti...
[['ConTree (Our implementation)', 'ConTree (Tai et al., 2015)', 'ConTree (Le and Zuidema, 2015)', 'ConTree (Zhu et al., 2015)', 'ConTree (Li et al., 2015)'], ['ConTree + WG', 'ConTree + LVG4', 'ConTree + LVeG'], [' SST-5 Root', ' SST-5 Phrase', ' SST-2 Root', ' SST-2 Phrase']]
1
P19-1457table_3
Experimental results with ELMo. BCN(P) is the BCN implemented by Peters et al. (2018). BCN(O) is the BCN implemented by ourselves.
2
[['Model', 'BCN(P)'], ['Model', 'BCN(O)'], ['Model', 'BCN+WG'], ['Model', 'BCN+LVG4'], ['Model', 'BCN+LVeG']]
2
[[' SST-5', 'Root'], [' SST-5', ' Phrase'], [' SST-2', ' Root'], [' SST-2', ' Phrase']]
[['54.7', ' -', ' -', ' -'], ['54.6', '83.3', '91.4', '88.8'], ['55.1', '83.5', '91.5', '90.5'], ['55.5', '83.5', '91.7', '91.3'], ['56', '83.5', '92.1', '91.6']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['BCN+WG', 'BCN+LVG4', 'BCN+LVeG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-5 || Root</th> <th>SST-5 || Phrase</th> <th>SST-2 || Root</th> <th>SST-2 || Phrase</th> </tr> </thead> <tbody> <tr> <td>Model || BCN(P)</td> <td>54.7</td> <td>-</td> ...
Table 3
table_3
P19-1457
7
acl2019
There has also been work using large-scale external datasets to improve performances of sentiment classification. Peters et al. (2018) combined bi-attentive classification network (BCN, McCann et al. (2017)) with a pretrained language model with character convolutions on a large-scale corpus (ELMo) and reported an accur...
[2, 2, 2, 1, 1, 1, 1, 1]
['There has also been work using large-scale external datasets to improve performances of sentiment classification.', 'Peters et al. (2018) combined bi-attentive classification network (BCN, McCann et al. (2017)) with a pretrained language model with character convolutions on a large-scale corpus (ELMo) and reported an ...
[None, ['BCN(P)', ' SST-5'], None, ['BCN+WG', 'BCN+LVG4', 'BCN+LVeG'], ['BCN+WG'], ['BCN+LVG4', 'BCN+LVeG'], ['BCN+LVeG', 'BCN(O)'], [' SST-5', ' SST-2']]
1
P19-1458table_6
Comparison of results using large and small corpora. The small corpus is uniformly sampled from the Japanese Wikipedia (100MB). The large corpus is the entire Japanese Wikipedia (2.9GB).
2
[['Model', 'BCN+ELMo'], ['Model', 'ULMFiT'], ['Model', 'ULMFiT Adapted'], ['Model', 'BERTBASE'], ['Model', 'BCN+ELMo [100MB]'], ['Model', 'ULMFiT Adapted [100MB]'], ['Model', 'BERTBASE [100MB]']]
1
[[' Yahoo Binary']]
[['10.24'], ['12.2'], ['8.52'], ['8.42'], ['10.32'], ['8.57'], ['14.26']]
column
['error']
['BCN+ELMo [100MB]', 'BERTBASE [100MB]']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Yahoo Binary</th> </tr> </thead> <tbody> <tr> <td>Model || BCN+ELMo</td> <td>10.24</td> </tr> <tr> <td>Model || ULMFiT</td> <td>12.2</td> </tr> <tr> <td>Model || ULMFiT A...
Table 6
table_6
P19-1458
4
acl2019
7.3 Size of Pre-Training Corpus. We also investigate whether the size of the source language model affects the sentiment analysis performance on the Yahoo dataset. This is especially important for low-resource languages that do not usually have large amounts of data available for training. We used the ja.text816 small...
[2, 2, 2, 2, 1, 2]
['7.3 Size of Pre-Training Corpus.', ' We also investigate whether the size of the source language model affects the sentiment analysis performance on the Yahoo dataset.', 'This is especially important for low-resource languages that do not usually have large amounts of data available for training.', 'We used the ja.te...
[None, None, None, None, ['BCN+ELMo', 'BCN+ELMo [100MB]', 'ULMFiT', 'ULMFiT Adapted [100MB]', 'BERTBASE', 'BERTBASE [100MB]'], None]
1
P19-1465table_3
Experimental results on Quora test set.
2
[['Model', 'BiMPM (Wang et al., 2017)'], ['Model', 'pt-DecAttn-word (Tomar et al., 2017)'], ['Model', 'pt-DecAttn-char (Tomar et al., 2017)'], ['Model', 'DIIN (Gong et al., 2018)'], ['Model', 'MwAN (Tan et al., 2018)'], ['Model', 'CSRAN (Tay et al., 2018a)'], ['Model', 'SAN (Liu et al., 2018)'], ['Model', 'RE2 (ours)']...
1
[['Acc.(%)']]
[['88.2'], ['87.5'], ['88.4'], ['89.1'], ['89.1'], ['89.2'], ['89.4'], [' 89.2±0.2']]
column
['Acc.(%)']
['RE2 (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.(%)</th> </tr> </thead> <tbody> <tr> <td>Model || BiMPM (Wang et al., 2017)</td> <td>88.2</td> </tr> <tr> <td>Model || pt-DecAttn-word (Tomar et al., 2017)</td> <td>87.5</td> ...
Table 3
table_3
P19-1465
5
acl2019
Results on Quora dataset are listed in Table 3. Since paraphrase identification is a symmetric task where two input sequences can be swapped with no effect to the label of the text pair, in hyperparameter tuning we validate between two symmetric versions of the prediction layer (Equation 6 and Equation 7) and use no ad...
[1, 2, 1]
['Results on Quora dataset are listed in Table 3.', 'Since paraphrase identification is a symmetric task where two input sequences can be swapped with no effect to the label of the text pair, in hyperparameter tuning we validate between two symmetric versions of the prediction layer (Equation 6 and Equation 7) and use ...
[None, None, ['RE2 (ours)']]
1
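Each record above stores its table twice: once as parallel lists (row-header paths, column-header paths, and cell rows) and once as rendered HTML in table_html_clean, where header paths are joined by " || ". A minimal Python sketch of how the flattened lists recombine into a cell lookup, using two rows copied from the P19-1465 Table 3 record above; the variable names (row_headers, column_headers, contents) are my own labels for the record's unnamed list fields:

```python
# Two rows copied from the P19-1465 Table 3 record in this file.
row_headers = [
    ['Model', 'BiMPM (Wang et al., 2017)'],
    ['Model', 'RE2 (ours)'],
]
column_headers = [['Acc.(%)']]
contents = [['88.2'], ['89.2±0.2']]

def join_paths(headers):
    # Each header is a path over header levels; " || " matches the
    # separator used in the table_html_clean and header_mention fields.
    return [' || '.join(path) for path in headers]

# {row_label: {column_label: cell}} reconstruction of the table.
table = {
    row: dict(zip(join_paths(column_headers), cells))
    for row, cells in zip(join_paths(row_headers), contents)
}

print(table['Model || RE2 (ours)']['Acc.(%)'])  # 89.2±0.2
```

The joined labels deliberately mirror the `<td>Model || RE2 (ours)</td>` cells in table_html_clean, so lookups built this way can be checked against the HTML rendering.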
P19-1469table_1
Semi-supervised classification results on the SNLI dataset. (a) Zhao et al. (2018); (b) Shen et al. (2018a).
2
[['Model', 'LSTM(a)'], ['Model', 'CNN(b)'], ['Model', 'LSTM-AE(a)'], ['Model', 'LSTM-ADAE(a)'], ['Model', 'DeConv-AE(b)'], ['Model', 'LSTM-VAE(b)'], ['Model', 'DeConv-VAE(b)'], ['Model', 'LSTM-vMF-VAE (ours)'], ['Model', 'CS-LVM (ours)']]
1
[['28k'], ['59k'], ['120k']]
[['57.9', '62.5', '65.9'], ['58.7', '62.7', '65.6'], ['59.9', '64.6', '68.5'], ['62.5', '66.8', '70.9'], ['62.1', '65.5', '68.7'], ['64.7', '67.5', '71.1'], ['67.2', '69.3', '72.2'], ['65.6', '68.7', '71.1'], ['68.4', '73.5', '76.9']]
column
['accuracy', 'accuracy', 'accuracy']
['CS-LVM (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>28k</th> <th>59k</th> <th>120k</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM(a)</td> <td>57.9</td> <td>62.5</td> <td>65.9</td> </tr> <tr> <td>Model || CNN(b)</td...
Table 1
table_1
P19-1469
6
acl2019
Table 1 summarizes the result of experiments. We can clearly see that the proposed CS-LVM architecture substantially outperforms other models based on auto-encoding. Also, the semantic constraints brought additional boost in performance, achieving the new state of the art in semisupervised classification of the SNLI da...
[1, 1, 1, 1]
['Table 1 summarizes the result of experiments.', 'We can clearly see that the proposed CS-LVM architecture substantially outperforms other models based on auto-encoding.', 'Also, the semantic constraints brought additional boost in performance, achieving the new state of the art in semisupervised classification of the...
[None, ['CS-LVM (ours)'], ['CS-LVM (ours)'], ['CS-LVM (ours)', 'LSTM(a)', 'LSTM-AE(a)', 'LSTM-VAE(b)', 'DeConv-VAE(b)']]
1
P19-1470table_1
Automatic evaluations of quality and novelty for generations of ATOMIC commonsense. No novelty scores are reported for the NearestNeighbor baseline because all retrieved sequences are in the training set.
2
[['Model', '9ENC9DEC (Sap et al., 2019)'], ['Model', 'NearestNeighbor (Sap et al., 2019)'], ['Model', 'Event2(IN)VOLUN (Sap et al., 2019)'], ['Model', 'Event2PERSONX/Y (Sap et al., 2019)'], ['Model', 'Event2PRE/POST (Sap et al., 2019)'], ['Model', 'COMET (- pretrain)'], ['Model', 'COMET']]
1
[['PPL'], ['BLEU-2'], ['N/T sro'], ['N/T o'], ['N/U o']]
[['-', '10.01', '100', '8.61', '40.77'], ['-', '6.61', '-', '-', '-'], ['-', '9.67', '100', '9.52', '45.06'], ['-', '9.24', '100', '8.22', '41.66'], ['-', '9.93', '100', '7.38', '41.99'], ['15.42', '13.88', '100', '7.25', '45.71'], ['11.14', '15.1', '100', '9.71', '51.2']]
column
['PPL', 'BLEU-2', 'N/T sro', 'N/T o', 'N/U o']
['COMET']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL</th> <th>BLEU-2</th> <th>N/T sro</th> <th>N/T o</th> <th>N/U o</th> </tr> </thead> <tbody> <tr> <td>Model || 9ENC9DEC (Sap et al., 2019)</td> <td>-</td> <td>10.01</td...
Table 1
table_1
P19-1470
5
acl2019
4.2 Results. The BLEU-2 results in Table 1 indicate that COMET exceeds the performance of all baselines, achieving a 51% relative improvement over the top performing model of Sap et al. (2019). More interesting, however, is the result of the human evaluation, where COMET reported a statistically significant relative Av...
[2, 1, 1, 1, 1]
['4.2 Results.', 'The BLEU-2 results in Table 1 indicate that COMET exceeds the performance of all baselines, achieving a 51% relative improvement over the top performing model of Sap et al. (2019).', 'More interesting, however, is the result of the human evaluation, where COMET reported a statistically significant rel...
[None, ['COMET', 'BLEU-2'], ['COMET', 'Event2(IN)VOLUN (Sap et al., 2019)'], ['COMET', 'N/T sro', 'N/T o', 'N/U o'], ['COMET']]
1
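The P19-1470 description claims COMET's BLEU-2 yields "a 51% relative improvement over the top performing model of Sap et al. (2019)". That arithmetic can be checked directly against the BLEU-2 column of the Table 1 record above (values copied from its contents list):

```python
# BLEU-2 values from the P19-1470 Table 1 record.
comet = 15.1           # COMET row
best_baseline = 10.01  # 9ENC9DEC (Sap et al., 2019), best baseline BLEU-2

# Relative improvement of COMET over the strongest baseline.
relative_gain = (comet - best_baseline) / best_baseline
print(f"{relative_gain:.0%}")  # 51%
```

(15.1 − 10.01) / 10.01 ≈ 0.508, which rounds to the 51% figure quoted in the description.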
P19-1470table_4
Effect of amount of training data on automatic evaluation of commonsense generations
2
[['% train data', '1% train'], ['% train data', '10% train'], ['% train data', '50% train'], ['% train data', 'FULL (- pretrain)'], ['% train data', 'FULL train']]
1
[['PPL'], ['BLEU-2'], ['N/T o'], ['N/U o']]
[['23.81', '5.08', '7.24', '49.36'], ['13.74', '12.72', '9.54', '58.34'], ['11.82', '13.97', '9.32', '50.37'], ['15.18', '13.22', '7.14', '44.55'], ['11.13', '14.34', '9.51', '50.05']]
column
['PPL', 'BLEU-2', 'N/T o', 'N/U o']
['% train data']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL</th> <th>BLEU-2</th> <th>N/T o</th> <th>N/U o</th> </tr> </thead> <tbody> <tr> <td>% train data || 1% train</td> <td>23.81</td> <td>5.08</td> <td>7.24</td> <td>49....
Table 4
table_4
P19-1470
6
acl2019
Efficiency of learning from seed tuples . Because not all domains will have large available commonsense KBs on which to train, we explore how varying the amount of training data available for learning affects the quality and novelty of the knowledge that is produced. Our results in Table 4 indicate that even with only ...
[2, 2, 1, 1, 1]
['Efficiency of learning from seed tuples .', 'Because not all domains will have large available commonsense KBs on which to train, we explore how varying the amount of training data available for learning affects the quality and novelty of the knowledge that is produced.', 'Our results in Table 4 indicate that even wi...
[None, None, None, ['1% train'], ['10% train']]
1
P19-1470table_6
ConceptNet generation Results
2
[['Model', 'LSTM - s'], ['Model', 'CKBG (Saito et al., 2018)'], ['Model', 'COMET (- pretrain)'], ['Model', 'COMET - RELTOK'], ['Model', 'COMET']]
1
[['PPL'], ['Score'], ['N/T sro'], ['N/T o'], ['Human']]
[['-', '60.83', '86.25', '7.83', '63.86'], ['-', '57.17', '86.25', '8.67', '53.95'], ['8.05', '89.25', '36.17', '6', '83.49'], ['4.39', '95.17', '56.42', '2.62', '92.11'], ['4.32', '95.25', '59.25', '3.75', '91.69']]
column
['PPL', 'Score', 'N/T sro', 'N/T o', 'Human']
['COMET']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL</th> <th>Score</th> <th>N/T sro</th> <th>N/T o</th> <th>Human</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM - s</td> <td>-</td> <td>60.83</td> <td>86.25</td> ...
Table 6
table_6
P19-1470
7
acl2019
5.2 Results . Quality . Our results indicate that high-quality knowledge can be generated by the model: the low perplexity scores in Table 6 indicate high model confidence in its predictions, while the high classifier score (95.25%) indicates that the KB completion model of Li et al. (2016) scores the generated tuples ...
[2, 2, 1, 1, 0]
['5.2 Results .', 'Quality .', 'Our results indicate that high-quality knowledge can be generated by the model: the low perplexity scores in Table 6 indicate high model confidence in its predictions, while the high classifier score (95.25%) indicates that the KB completion model of Li et al. (2016) scores the generated...
[None, None, ['COMET', 'Score'], ['COMET', 'Human'], None]
1
P19-1478table_1
Results on WSC273 and its subsets. The comparison between each language model and its WSCR-tuned model is given. For each column, the better result of the two is in bold. The best result in the column overall is underlined. Results for the LM ensemble and Knowledge Hunter are taken from Trichelair et al. (2018). All mo...
1
[['BERT_WIKI'], ['BERT_WIKI_WSCR'], ['BERT'], ['BERT_WSCR'], ['BERT-base'], ['BERT-base_WSCR'], ['GPT'], ['GPT_WSCR'], ['BERT_WIKI_WSCR_no_pairs'], ['BERT_WIKI_WSCR_pairs'], ['LM ensemble'], ['Knowledge Hunter']]
1
[['WSC273'], ['non-assoc.'], ['assoc.'], ['unswitched'], ['switched'], ['consist.'], ['WNLI']]
[['0.619', '0.597', '0.757', '0.573', '0.603', '0.389', '0.712'], ['0.725', '0.72', '0.757', '0.732', '0.71', '0.55', '0.747'], ['0.619', '0.602', '0.73', '0.595', '0.573', '0.458', '0.658'], ['0.714', '0.699', '0.811', '0.695', '0.702', '0.55', '0.719'], ['0.564', '0.551', '0.649', '0.527', '0.565', '0.443', '0.63'], ...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['BERT_WIKI_WSCR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WSC273</th> <th>non-assoc.</th> <th>assoc.</th> <th>unswitched</th> <th>switched</th> <th>consist.</th> <th>WNLI</th> </tr> </thead> <tbody> <tr> <td>BERT_WIKI</td> <t...
Table 1
table_1
P19-1478
5
acl2019
We evaluate all models on WSC273 and the WNLI test dataset, as well as the various subsets of WSC273, as described in Section 2. The results are reported in Table 1 and will be discussed next. We note that models that are fine-tuned on the WSCR dataset consistently outperform their non-fine-tuned counterparts. The BERT...
[1, 1, 1, 1, 1, 1]
['We evaluate all models on WSC273 and the WNLI test dataset, as well as the various subsets of WSC273, as described in Section 2.', 'The results are reported in Table 1 and will be discussed next.', 'We note that models that are fine-tuned on the WSCR dataset consistently outperform their non-fine-tuned counterparts.'...
[['WSC273', 'WNLI'], None, ['BERT_WIKI_WSCR', 'BERT_WSCR', 'BERT-base_WSCR', 'GPT_WSCR', 'BERT_WIKI_WSCR_no_pairs', 'BERT_WIKI_WSCR_pairs'], ['BERT_WIKI_WSCR'], ['LM ensemble', 'assoc.', 'non-assoc.', 'switched'], ['LM ensemble']]
1
P19-1479table_4
Comparison between our graph2seq model and baseline models for the topic of entertainment. T, C, B, K represents title, content, bag of words, keywords separately. Total is the average of other three metrics
2
[['Models', 'seq2seq-T (Qin et al., 2018)'], ['Models', 'seq2seq-C (Qin et al., 2018)'], ['Models', 'seq2seq-TC (Qin et al., 2018)'], ['Models', 'self-attention-B (Chen et al., 2018)'], ['Models', 'self-attention-K (Chen et al., 2018)'], ['Models', 'hierarchical-attention (Yang et al., 2016)'], ['Models', 'graph2seq (p...
1
[['Coherence'], ['Informativeness'], ['Fluency'], ['Total']]
[['5.38', '3.7', '8.22', '5.77'], ['4.87', '3.72', '8.53', '5.71'], ['3.28', '4.02', '8.68', '5.33'], ['6.72', '5.05', '8.27', '6.68'], ['6.62', '4.73', '8.28', '6.54'], ['1.38', '2.97', '8.65', '4.33'], ['8.23', '5.27', '8.08', '7.19']]
column
['Coherence', 'Informativeness', 'Fluency', 'Total']
['graph2seq (proposed)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Coherence</th> <th>Informativeness</th> <th>Fluency</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>Models || seq2seq-T (Qin et al., 2018)</td> <td>5.38</td> <td>3.7</td> ...
Table 4
table_4
P19-1479
7
acl2019
4.5 Results. In Table 4, we show the results of different baseline models and our graph2seq model for the topic of entertainment. From the results we can see that our proposed graph2seq model beats all the baselines in both coherence and informativeness. Our model receives much higher scores in coherence compared with ...
[2, 1, 1, 1]
['4.5 Results.', 'In Table 4, we show the results of different baseline models and our graph2seq model for the topic of entertainment.', 'From the results we can see that our proposed graph2seq model beats all the baselines in both coherence and informativeness.', 'Our model receives much higher scores in coherence com...
[None, ['graph2seq (proposed)'], ['graph2seq (proposed)', 'Coherence', 'Informativeness'], ['graph2seq (proposed)', 'Coherence']]
1
P19-1481table_2
BLEU, METEOR and ROUGE-L scores on the test set for Hindi and Chinese question generation. Best results for each metric (column) are highlighted in bold.
4
[['Language', 'Hindi', 'Model', 'Transformer'], ['Language', 'Hindi', 'Model', 'Transformer+pretraining'], ['Language', 'Hindi', 'Model', 'CLQG'], ['Language', 'Hindi', 'Model', 'CLQG+parallel'], ['Language', 'Chinese', 'Model', 'Transformer'], ['Language', 'Chinese', 'Model', 'Transformer+pretraining'], ['Languang...
1
[['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['METEOR'], ['ROUGE-L']]
[['28.414', '18.493', '12.356', '8.644', '23.803', '29.893'], ['41.059', '29.294', '21.403', '16.047', '28.159', '39.395'], ['41.034', '29.792', '22.038', '16.598', '27.581', '39.852'], ['42.281', '32.074', '25.182', '20.242', '29.143', '40.643'], ['25.52', '9.22', '5.14', '3.25', '7.64', '27.4'], ['30.38', '14.01', '8...
column
['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'METEOR', 'ROUGE-L']
['CLQG+parallel']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>METEOR</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Language || Hindi || Model || Transformer</td> ...
Table 2
table_2
P19-1481
6
acl2019
CLQG+parallel: The CLQG model undergoes further training using a parallel corpus (with primary language as source and secondary language as target). After unsupervised pretraining, the encoder and decoder weights are fine-tuned using the parallel corpus. This fine-tuning further refines the language models for both lan...
[2, 2, 2, 1, 2]
['CLQG+parallel: The CLQG model undergoes further training using a parallel corpus (with primary language as source and secondary language as target).', 'After unsupervised pretraining, the encoder and decoder weights are fine-tuned using the parallel corpus.', 'This fine-tuning further refines the language models for ...
[['CLQG+parallel'], None, None, ['CLQG+parallel'], ['CLQG+parallel', 'CLQG']]
1
P19-1482table_4
Automatic evaluation results for classification accuracy and BLEU with human reference. Human denotes human references. Note that Acc for human references are relatively low; thus, we do not consider it as a valid metric for comparison.
1
[['CrossAligned'], ['MultiDecoder'], ['StyleEmbedding'], ['TemplateBased'], ['DeleteOnly'], ['Del-Ret-Gen'], ['BackTranslate'], ['UnpairedRL'], ['UnsuperMT'], ['Human'], ['Point-Then-Operate']]
2
[['Yelp', 'Acc'], ['Yelp', 'BLEU'], ['Amazon', 'Acc'], ['Amazon', 'BLEU']]
[['74.7', '9.06', '75.1', '1.9'], ['50.6', '14.54', '69.9', '9.07'], ['8.4', '21.06', '38.2', '15.07'], ['81.2', '22.57', '64.3', '34.79'], ['86', '14.64', '47', '33'], ['88.6', '15.96', '51', '30.09'], ['94.6', '2.46', '76.7', '1.04'], ['57.5', '18.81', '56.3', '15.93'], ['97.8', '22.75', '72.4', '33.95'], ['74.7', '-...
column
['Acc', 'BLEU', 'Acc', 'BLEU']
['Point-Then-Operate']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Yelp || Acc</th> <th>Yelp || BLEU</th> <th>Amazon || Acc</th> <th>Amazon || BLEU</th> </tr> </thead> <tbody> <tr> <td>CrossAligned</td> <td>74.7</td> <td>9.06</td> <td>75.1...
Table 4
table_4
P19-1482
7
acl2019
5.4 Evaluation Results . Table 4 shows the results of automatic evaluation. It should be noted that the classification accuracy for human reference is relatively low (74.7% on Yelp and 43.2% on Amazon); thus, we do not consider it as a valid metric for comparison. For BLEU score, our method outperforms recent systems b...
[0, 1, 1, 1]
['5.4 Evaluation Results .', 'Table 4 shows the results of automatic evaluation.', 'It should be noted that the classification accuracy for human reference is relatively low (74.7% on Yelp and 43.2% on Amazon); thus, we do not consider it as a valid metric for comparison.', 'For BLEU score, our method outperforms recen...
[None, None, ['Acc', 'Human', 'Yelp', 'Amazon'], ['BLEU', 'Point-Then-Operate']]
1
P19-1487table_3
Test accuracy on CQA v1.0. The addition of CoS-E-open-ended during training dramatically improves performance. Replacing CoS-E during training with CAGE reasoning during both training and inference leads to an absolute gain of 10% over the previous state-of-the-art.
2
[['Method', 'RC (Talmor et al., 2019)'], ['Method', 'GPT (Talmor et al., 2019)'], ['Method', 'CoS-E-open-ended'], ['Method', 'CAGE-reasoning'], ['Method', 'Human (Talmor et al., 2019)']]
1
[['Accuracy (%)']]
[['47.7'], ['54.8'], ['60.2'], ['64.7'], ['95.3']]
column
['Accuracy (%)']
['CoS-E-open-ended', 'CAGE-reasoning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Method || RC (Talmor et al., 2019)</td> <td>47.7</td> </tr> <tr> <td>Method || GPT (Talmor et al., 2019)</td> <td>54.8</td> </tr...
Table 3
table_3
P19-1487
6
acl2019
Table 3 shows the results obtained on the CQA test split. We report our two best models that represent using human explanations (CoS-E-open-ended) for training only and using language model explanations (CAGE-reasoning) during both train and test. We compare our approaches to the best reported models for the CQA task (T...
[1, 1, 2, 1, 2, 2, 1, 1]
['Table 3 shows the results obtained on the CQA test split.', 'We report our two best models that represent using human explanations (CoS-E-open-ended) for training only and using language model explanations (CAGE-reasoning) during both train and test.', 'We compare our approaches to the best reported models for the CQA...
[None, ['CoS-E-open-ended', 'CAGE-reasoning'], None, ['CoS-E-open-ended'], ['RC (Talmor et al., 2019)', 'GPT (Talmor et al., 2019)', 'Human (Talmor et al., 2019)'], ['RC (Talmor et al., 2019)', 'GPT (Talmor et al., 2019)', 'Human (Talmor et al., 2019)'], ['CAGE-reasoning'], ['CoS-E-open-ended', 'CAGE-reasoning']]
1
P19-1487table_4
Oracle results on CQA dev-random-split using different variants of CoS-E for both training and validation. * indicates CoS-E-open-ended used during both training and validation to contrast with CoS-E-open-ended used only during training in Table 2.
2
[['Method', 'CoS-E-selected w/o ques'], ['Method', 'CoS-E-limited-open-ended'], ['Method', 'CoS-E-selected'], ['Method', 'CoS-E-open-ended w/o ques'], ['Method', 'CoS-E-open-ended*']]
1
[['Accuracy (%)']]
[['53'], ['67.6'], ['70'], ['84.5'], ['89.8']]
column
['Accuracy (%)']
['CoS-E-open-ended*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Method || CoS-E-selected w/o ques</td> <td>53</td> </tr> <tr> <td>Method || CoS-E-limited-open-ended</td> <td>67.6</td> </tr> ...
Table 4
table_4
P19-1487
6
acl2019
Table 4 also contains results that use only the explanation and exclude the original question from CQA denoted by ‘w/o question’. These variants also use explanation during both train and validation. For these experiments we give the explanation in place of the question followed by the answer choices as input to the mo...
[1, 2, 2, 1, 1, 1]
['Table 4 also contains results that use only the explanation and exclude the original question from CQA denoted by ‘w/o question’.', 'These variants also use explanation during both train and validation.', 'For these experiments we give the explanation in place of the question followed by the answer choices as input t...
[None, None, None, ['CoS-E-selected', 'CoS-E-open-ended*'], ['CoS-E-selected', 'CoS-E-open-ended*'], ['CoS-E-open-ended*']]
1
P19-1493table_6
M-BERT’s POS accuracy on the code-switched Hindi/English dataset from Bhat et al. (2018), on script-corrected and original (transliterated) tokens, and comparisons to existing work on code-switch POS.
2
[['Train on monolingual HI+EN', 'M-BERT'], ['Train on monolingual HI+EN', 'Ball and Garrette (2018)'], ['Train on code-switched HI/EN', 'M-BERT'], ['Train on code-switched HI/EN', 'Bhat et al. (2018)']]
1
[['Corrected'], ['Transliterated']]
[['86.59', '50.41'], ['-', '77.4'], ['90.56', '85.64'], ['-', '90.53']]
column
['accuracy', 'accuracy']
['Train on monolingual HI+EN', 'Train on code-switched HI/EN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Corrected</th> <th>Transliterated</th> </tr> </thead> <tbody> <tr> <td>Train on monolingual HI+EN || M-BERT</td> <td>86.59</td> <td>50.41</td> </tr> <tr> <td>Train on monolingu...
Table 6
table_6
P19-1493
4
acl2019
We test M-BERT on the CS Hindi/English UD corpus from Bhat et al. (2018), which provides texts in two formats: transliterated, where Hindi words are written in Latin script, and corrected, where annotators have converted them back to Devanagari script. Table 6 shows the results for models fine-tuned using a combination...
[2, 1]
['We test M-BERT on the CS Hindi/English UD corpus from Bhat et al. (2018), which provides texts in two formats: transliterated, where Hindi words are written in Latin script, and corrected, where annotators have converted them back to Devanagari script.', 'Table 6 shows the results for models fine-tuned using a combin...
[['M-BERT', 'Train on code-switched HI/EN', 'Transliterated', 'Corrected'], ['Train on monolingual HI+EN', 'Train on code-switched HI/EN', 'Corrected', 'Transliterated']]
1
P19-1495table_7
Complaint prediction results using the original data set and distantly supervised data. All models are based on logistic regression with bag-of-word and Partof-Speech tag features.
1
[['Most Frequent Class'], ['LR-All Features – Original Data'], ['Dist. Supervision + Pooling'], ['Dist. Supervision + EasyAdapt']]
2
[['Model', 'Acc'], ['Model', 'F1'], ['Model', 'AUC']]
[['64.2', '39.1', '0.5'], ['80.5', '78', '0.873'], ['77.2', '75.7', '0.853'], ['81.2', '79', '0.885']]
column
['Acc', 'F1', 'AUC']
['Dist. Supervision + EasyAdapt']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model || Acc</th> <th>Model || F1</th> <th>Model || AUC</th> </tr> </thead> <tbody> <tr> <td>Most Frequent Class</td> <td>64.2</td> <td>39.1</td> <td>0.5</td> </tr> <tr> ...
Table 7
table_7
P19-1495
8
acl2019
Results presented in Table 7 show that the domain adaptation approach further boosts F1 by 1 point to 79 (t-test, p<0.05) and ROC AUC by 0.012. However, simply pooling the data actually hurts predictive performance leading to a drop of more than 2 points in F1.
[1, 1]
['Results presented in Table 7 show that the domain adaptation approach further boosts F1 by 1 point to 79 (t-test, p<0.05) and ROC AUC by 0.012.', 'However, simply pooling the data actually hurts predictive performance leading to a drop of more than 2 points in F1.']
[['Dist. Supervision + EasyAdapt', 'F1', 'AUC'], ['Dist. Supervision + Pooling', 'F1']]
1
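Records also carry a metrics_loc axis ('column' here), a metrics_type list naming the metric for each column, and a target_entity naming the row(s) the description focuses on. A small sketch of how these fields combine, using values copied from the P19-1495 Table 7 record above; the variable names are my own labels for the record's unnamed fields:

```python
# Fields copied from the P19-1495 Table 7 record; metrics_loc is
# 'column', so each entry of metrics_type labels one column.
row_headers = ['Most Frequent Class', 'LR-All Features – Original Data',
               'Dist. Supervision + Pooling', 'Dist. Supervision + EasyAdapt']
metrics = ['Acc', 'F1', 'AUC']
contents = [['64.2', '39.1', '0.5'], ['80.5', '78', '0.873'],
            ['77.2', '75.7', '0.853'], ['81.2', '79', '0.885']]
target = 'Dist. Supervision + EasyAdapt'  # the record's target_entity

# Score lookup for the target row, keyed by metric name.
scores = dict(zip(metrics, contents[row_headers.index(target)]))
print(scores)  # {'Acc': '81.2', 'F1': '79', 'AUC': '0.885'}
```

The resulting mapping matches the description's claims for this record (F1 of 79, ROC AUC of 0.885 for the EasyAdapt model).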
P19-1503table_1
Experimental results of abstractive summarization on Gigaword test set with ROUGE metric. The top section is prefix baselines, the second section is recent unsupervised methods and ours, the third section is state-of-the-art supervised method along with our implementation of a seq-to-seq model with attention, and the b...
2
[['Model', 'Lead-75C'], ['Model', 'Lead-8'], ['Model', 'Schumann (2018)'], ['Model', 'Wang and Lee (2018)'], ['Model', 'Contextual Match'], ['Model', 'Cao et al. (2018)'], ['Model', 'seq2seq'], ['Model', 'Contextual Oracle']]
1
[['R1'], ['R2'], ['RL']]
[['23.69', '7.93', '21.5'], ['21.3', '7.34', '19.94'], ['22.19', '4.56', '19.88'], ['27.09', '9.86', '24.97'], ['26.48', '10.05', '24.41'], ['37.04', '19.03', '34.46'], ['33.5', '15.85', '31.44'], ['37.03', '15.46', '33.23']]
column
['R1', 'R2', 'RL']
['Contextual Match', 'Contextual Oracle']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>RL</th> </tr> </thead> <tbody> <tr> <td>Model || Lead-75C</td> <td>23.69</td> <td>7.93</td> <td>21.5</td> </tr> <tr> <td>Model || Lead-8</td> ...
Table 1
table_1
P19-1503
4
acl2019
The automatic evaluation scores are presented in Table 1. For abstractive sentence summarization, we report the ROUGE F1 scores compared with baselines and previous unsupervised methods. Our method outperforms commonly used prefix baselines for this task which take the first 75 characters or 8 words of the source as a ...
[1, 1, 1, 1, 2, 1]
['The automatic evaluation scores are presented in Table 1.', 'For abstractive sentence summarization, we report the ROUGE F1 scores compared with baselines and previous unsupervised methods.', 'Our method outperforms commonly used prefix baselines for this task which take the first 75 characters or 8 words of the sour...
[None, ['R1', 'R2', 'RL'], ['Contextual Match', 'Lead-75C', 'Lead-8'], ['Contextual Match', 'Wang and Lee (2018)'], None, ['seq2seq', 'Contextual Oracle']]
1
P19-1514table_3
Performance comparison between our model and three baselines on four frequent attributes. For baselines, only the performance on AE-110K is reported since they do not scale up to large set of attributes; while for our model, the performances on both AE-110K and AE-650K are reported.
4
[['Attributes', 'Brand Name', 'Models', 'BiLSTM'], ['Attributes', 'Brand Name', 'Models', 'BiLSTM-CRF'], ['Attributes', 'Brand Name', 'Models', 'OpenTag'], ['Attributes', 'Brand Name', 'Models', 'Our model-110k'], ['Attributes', 'Brand Name', 'Models', 'Our model-650k'], ['Attributes', 'Material', 'Models', 'BiLSTM'], ...
1
[['P (%)'], ['R (%)'], ['F1 (%)']]
[['95.08', '96.81', '95.94'], ['95.45', '97.17', '96.3'], ['95.18', '97.55', '96.35'], ['97.21', '96.68', '96.94'], ['96.94', '97.14', '97.04'], ['78.26', '78.54', '78.4'], ['77.15', '78.12', '77.63'], ['78.69', '78.62', '78.65'], ['82.76', '83.57', '83.16'], ['83.3', '82.94', '83.12'], ['68.08', '68', '68.04'], ['68.1...
column
['P (%)', 'R (%)', 'F1 (%)']
['Our model-110k', 'Our model-650k']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P (%)</th> <th>R (%)</th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Attributes || Brand Name || Models || BiLSTM</td> <td>95.08</td> <td>96.81</td> <td>95.94</td> </tr>...
Table 3
table_3
P19-1514
6
acl2019
5.1 Results on Frequent Attributes . The first experiment is conducted on four frequent attributes (i.e., with sufficient data) on AE-110k and AE-650k datasets. Table 3 reports the comparison results of our two models (on AE-110k and AE-650k datasets) and three baselines. It is observed that our models are consistently...
[2, 2, 1, 1, 2, 2]
['5.1 Results on Frequent Attributes .', 'The first experiment is conducted on four frequent attributes (i.e., with sufficient data) on AE-110k and AE-650k datasets.', 'Table 3 reports the comparison results of our two models (on AE-110k and AE-650k datasets) and three baselines.', 'It is observed that our models are c...
[None, None, ['BiLSTM', 'BiLSTM-CRF', 'OpenTag', 'Our model-110k', 'Our model-650k'], ['Our model-110k', 'Our model-650k'], None, ['OpenTag', 'Our model-110k', 'Our model-650k']]
1
P19-1516table_1
Result comparison of the proposed method with the state-of-art baseline methods. Here, ‘P’, ‘R’, ‘F1’ represents Precision, Recall and F1-Score. The results on CADEC and MEDLINE are on 10-fold cross validation; for the twitter dataset, we use the train and test sets as provided by the PSB 2016 shared task.
2
[['Models', 'ST-BLSTM'], ['Models', 'ST-CNN'], ['Models', 'CRNN (Huynh et al., 2016)'], ['Models', 'RCNN (Huynh et al., 2016)'], ['Models', 'MT-BLSTM (Chowdhury et al., 2018)'], ['Models', 'MT-Atten-BLSTM (Chowdhury et al., 2018)'], ['Models', 'Proposed Model']]
2
[['Twitter', 'P'], ['Twitter', 'R'], ['Twitter', 'F1'], ['CADEC', 'P'], ['CADEC', 'R'], ['CADEC', 'F1'], ['MEDLINE', 'P'], ['MEDLINE', 'R'], ['MEDLINE', 'F1']]
[['57.7', '56.8', '57.3', '52.9', '49.4', '51.1', '71.65', '72.19', '71.91'], ['63.8', '65.8', '67.1', '39.7', '42.7', '42', '66.88', '73.81', '70.17'], ['61.1', '62.4', '64.9', '49.5', '46.9', '48.2', '71', '77.3', '75.5'], ['57.6', '58.7', '63.6', '42.4', '44.9', '43.6', '73.5', '72', '74'], ['65.57', '61.02', '63.19...
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['MT-BLSTM (Chowdhury et al., 2018)', 'MT-Atten-BLSTM (Chowdhury et al., 2018)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter || P</th> <th>Twitter || R</th> <th>Twitter || F1</th> <th>CADEC || P</th> <th>CADEC || R</th> <th>CADEC || F1</th> <th>MEDLINE || P</th> <th>MEDLINE || R</th> <th>MED...
Table 1
table_1
P19-1516
8
acl2019
The extensive results of our proposed model with comparisons to the state-of-the-art baselines techniques are reported in Table 1. Our proposed model outperforms the state-of-the-art baselines techniques by fair margins in terms of precision, recall and F1-Score for all the datasets. In our first experiment, we train t...
[1, 1, 2, 1, 1, 2]
['The extensive results of our proposed model with comparisons to the state-of-the-art baselines techniques are reported in Table 1.', 'Our proposed model outperforms the state-of-the-art baselines techniques by fair margins in terms of precision, recall and F1-Score for all the datasets.', 'In our first experiment, we...
[None, ['Proposed Model'], ['ST-BLSTM', 'MT-BLSTM (Chowdhury et al., 2018)', 'MT-Atten-BLSTM (Chowdhury et al., 2018)'], ['MT-BLSTM (Chowdhury et al., 2018)', 'MT-Atten-BLSTM (Chowdhury et al., 2018)'], ['MT-BLSTM (Chowdhury et al., 2018)', 'MT-Atten-BLSTM (Chowdhury et al., 2018)'], None]
1
P19-1520table_2
Aspect and opinion term extraction performance of different approaches. F 1 score is reported. IHS RD, DLIREC, Elixa and WDEmb* use manually designed features. For different versions of RINANTE, “Shared” and “Double” means shared BiLSTM model and double BiLSTM model, respectively; “Alt” and “Pre” means the first and th...
2
[['Approach', 'DP (Qiu et al. 2011)'], ['Approach', 'IHS RD (Chernyshevich 2014)'], ['Approach', 'DLIREC (Toh and Wang 2014)'], ['Approach', 'Elixa (Vicente et al. 2017)'], ['Approach', 'WDEmb (Yin et al. 2016)'], ['Approach', 'WDEmb* (Yin et al. 2016)'], ['Approach', 'RNCRF (Wang et al. 2016)'], ['Approach', 'CMLA (Wa...
2
[['SE14-R', 'Aspect'], ['SE14-R', 'Opinion'], ['SE14-L', 'Aspect'], ['SE14-L', 'Opinion'], ['SE15-R', 'Aspect'], ['SE15-R', 'Opinion']]
[['38.72', '65.94', '19.19', '55.29', '27.32', '46.31'], ['79.62', ' -', '74.55', ' -', ' -', ' -'], ['84.01', ' -', '73.78', ' -', ' -', ' -'], [' -', ' -', ' -', ' -', '70.04', ' -'], ['84.31', ' -', '74.68', ' -', '69.12', ' -'], ['84.97', ' -', '75.16', ' -', '69.73', ' -'], ['82.23', '83.93', '75.28', '77.03', '65...
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['Mined Rules']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SE14-R || Aspect</th> <th>SE14-R || Opinion</th> <th>SE14-L || Aspect</th> <th>SE14-L || Opinion</th> <th>SE15-R || Aspect</th> <th>SE15-R || Opinion</th> </tr> </thead> <tbody> <tr>...
Table 2
table_2
P19-1520
7
acl2019
The experimental results are shown in Table 2. From the results, we can see that the mined rules alone do not perform well. However, by learning from the data automatically labeled by these rules, all four versions of RINANTE achieves better performances than RINANTE (no rule). This verifies that we can indeed use the ...
[1, 1, 1, 2, 1, 2, 1, 2]
['The experimental results are shown in Table 2.', 'From the results, we can see that the mined rules alone do not perform well.', 'However, by learning from the data automatically labeled by these rules, all four versions of RINANTE achieves better performances than RINANTE (no rule).', 'This verifies that we can inde...
[None, ['Mined Rules'], ['RINANTE-Shared-Alt', 'RINANTE-Shared-Pre', 'RINANTE-Double-Alt', 'RINANTE-Double-Pre', 'RINANTE (No Rule)'], None, ['RINANTE (No Rule)', 'SE14-L', 'SE15-R'], ['SE14-L', 'SE15-R'], ['Mined Rules', 'DP (Qiu et al. 2011)'], ['DP (Qiu et al. 2011)']]
1
P19-1524table_1
Results on CoNLL 2003 and OntoNotes 5.0
2
[['Model', 'Ma and Hovy (2016)'], ['Model', 'Lample et al. (2016)'], ['Model', 'Liu et al. (2018)'], ['Model', 'Devlin et al. (2018)'], ['Model', 'Chiu and Nichols (2016)'], ['Model', 'Ghaddar and Langlais ’18'], ['Model', 'Peters et al. (2018)'], ['Model', 'Clark et al. (2018)'], ['Model', 'Akbik et al. (2018)'], ['Mo...
2
[['F1-score', 'CoNLL'], ['F1-score', 'OntoNotes']]
[['91.21', ' -'], ['90.94', ' -'], ['91.24±0.12', ' -'], ['92.8', ' -'], ['91.62±0.33', ' 86.28±0.26'], ['91.73±0.10', ' 87.95±0.13'], ['92.22±0.10', ' 89.04±0.27'], ['92.6 ±0.1', ' 88.8±0.1'], ['93.09±0.12', '89.71'], ['92.54±0.11', ' 89.38±0.11'], ['92.52±0.09', ' 89.73±0.19'], ['92.63±0.08', ' 89.77±0.20'], ['92.75±...
column
['F1-score', 'F1-score']
['HSCRF + softdict']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1-score || CoNLL</th> <th>F1-score || OntoNotes</th> </tr> </thead> <tbody> <tr> <td>Model || Ma and Hovy (2016)</td> <td>91.21</td> <td>-</td> </tr> <tr> <td>Model || Lample ...
Table 1
table_1
P19-1524
4
acl2019
3.5 Results . Table 1 shows the results on the CoNLL 2003 dataset and OntoNotes 5.0 dataset respectively. HSCRFs using gazetteer-enhanced sub-tagger outperform the baselines, achieving comparable results with those of more complex or larger models on CoNLL 2003 and new state-of-the-art results on OntoNotes 5.0. We also...
[2, 1, 1, 2]
['3.5 Results .', 'Table 1 shows the results on the CoNLL 2003 dataset and OntoNotes 5.0 dataset respectively.', 'HSCRFs using gazetteer-enhanced sub-tagger outperform the baselines, achieving comparable results with those of more complex or larger models on CoNLL 2003 and new state-of-the-art results on OntoNotes 5.0....
[None, ['CoNLL', 'OntoNotes'], ['HSCRF + softdict', 'CoNLL', 'OntoNotes'], None]
1
P19-1526table_6
Comparison of different sentence encoders in D-NDMV.
2
[['SENTENCE ENCODER', 'Bag-of-Tags Method'], ['SENTENCE ENCODER', 'Anchored Words Method'], ['SENTENCE ENCODER', 'LSTM'], ['SENTENCE ENCODER', 'Attention-Based LSTM'], ['SENTENCE ENCODER', 'Bi-LSTM']]
1
[[' DDA']]
[['74.1'], ['75.1'], ['75.9'], ['75.5'], ['74.2']]
column
['DDA']
['LSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DDA</th> </tr> </thead> <tbody> <tr> <td>SENTENCE ENCODER || Bag-of-Tags Method</td> <td>74.1</td> </tr> <tr> <td>SENTENCE ENCODER || Anchored Words Method</td> <td>75.1</td> </t...
Table 6
table_6
P19-1526
8
acl2019
Besides LSTM, there are a few other methods of producing the sentence representation. Table 6 compares the experimental results of these methods. The bag-of-tags method simply computes the average of all the POS tag embeddings and has the lowest accuracy, showing that the word order is informative for sentence encoding...
[1, 1, 1, 1, 1]
['Besides LSTM, there are a few other methods of producing the sentence representation.', 'Table 6 compares the experimental results of these methods.', 'The bag-of-tags method simply computes the average of all the POS tag embeddings and has the lowest accuracy, showing that the word order is informative for sentence ...
[['LSTM'], None, ['Bag-of-Tags Method'], ['LSTM', 'Bag-of-Tags Method'], ['LSTM']]
1
P19-1527table_1
Nested NER results (F1) for ACE-2004, ACE-2005, GENIA and CNEC 1.0 (Czech) corpora. Bold indicates the best result, italics results above SoTA and gray background indicates the main contribution. * uses different data split in ACE-2005. ** non-neural model
2
[['model', '(Finkel and Manning, 2009)**'], ['model', '(Lu and Roth, 2015)**'], ['model', '(Muis and Lu, 2017)**'], ['model', '(Katiyar and Cardie, 2018)'], ['model', '(Ju et al., 2018)*'], ['model', '(Wang and Lu, 2018)'], ['model', '(Straková et al., 2016)'], ['model', 'LSTM-CRF'], ['model', 'LSTM-CRF+ELMo'], ['mode...
1
[['ACE-2004'], ['ACE-2005'], ['GENIA'], ['CNEC 1.0']]
[['-', '-', '70.3', '-'], ['62.8', '62.5', '70.3', '-'], ['64.5', '63.1', '70.8', '-'], ['72.7', '70.5', '73.6', '-'], ['-', '72.2', '74.7', '-'], ['75.1', '74.5', '75.1', '-'], ['-', '-', '-', '81.2'], ['72.26', '71.62', '76.23', '80.28'], ['78.72', '78.36', '75.94', '-'], ['81.48', '79.95', '77.8', '85.67'], ['77.65'...
column
['F1', 'F1', 'F1', 'F1']
['LSTM-CRF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ACE-2004</th> <th>ACE-2005</th> <th>GENIA</th> <th>CNEC 1.0</th> </tr> </thead> <tbody> <tr> <td>model || (Finkel and Manning, 2009)**</td> <td>-</td> <td>-</td> <td>70.3</...
Table 1
table_1
P19-1527
4
acl2019
5 Results . Table 1 shows the F1 score for the nested NER. When comparing the results for the nested NER in the baseline models (without the contextual word embeddings) to the previous results in literature, we see that LSTM-CRF reaches comparable, but suboptimal results in three out of four nested NE corpora, while se...
[2, 1, 1, 1, 1]
['5 Results .', 'Table 1 shows the F1 score for the nested NER.', 'When comparing the results for the nested NER in the baseline models (without the contextual word embeddings) to the previous results in literature, we see that LSTM-CRF reaches comparable, but suboptimal results in three out of four nested NE corpora, ...
[None, None, ['LSTM-CRF', 'seq2seq'], ['seq2seq'], ['ACE-2004', 'ACE-2005']]
1
P19-1531table_2
Results on the PTB and SPMRL test sets.
3
[['English (PTB)', 'Model', 'S-S'], ['English (PTB)', 'Model', 'S-MTL'], ['English (PTB)', 'Model', 'D-MTL-AUX'], ['English (PTB)', 'Model', 'D-MTL'], ['Basque', 'Model', 'S-S'], ['Basque', 'Model', 'S-MTL'], ['Basque', 'Model', 'D-MTL-AUX'], ['Basque', 'Model', 'D-MTL'], ['French', 'Model', 'S-S'], ['French', 'Model',...
2
[['Dependency Parsing', 'UAS'], ['Dependency Parsing', 'LAS'], ['Constituency Parsing', 'F1']]
[['93.6', '91.74', '90.14'], ['93.84', '91.83', '90.32'], ['94.05', '92.01', '90.39'], ['93.96', '91.9', '89.81'], ['86.2', '81.7', '89.54'], ['87.42', '81.71', '90.86'], ['87.19', '81.73', '91.12'], ['87.09', '81.77', '90.76'], ['89.13', '85.03', '80.68'], ['89.54', '84.89', '81.34'], ['89.52', '84.97', '81.33'], ['89...
column
['UAS', 'LAS', 'F1']
['D-MTL-AUX']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dependency Parsing || UAS</th> <th>Dependency Parsing || LAS</th> <th>Constituency Parsing || F1</th> </tr> </thead> <tbody> <tr> <td>English (PTB) || Model || S-S</td> <td>93.6</td> ...
Table 2
table_2
P19-1531
4
acl2019
4.2 Results . Table 2 compares single-paradigm models against their double-paradigm MTL versions. On average, MTL models with auxiliary losses achieve the best performance for both parsing abstractions. They gain 1.05 F1 points on average in comparison with the single model for constituency parsing, and 0.62 UAS and 0....
[2, 1, 1, 1, 1]
['4.2 Results .', 'Table 2 compares single-paradigm models against their double-paradigm MTL versions.', 'On average, MTL models with auxiliary losses achieve the best performance for both parsing abstractions.', 'They gain 1.05 F1 points on average in comparison with the single model for constituency parsing, and 0.62...
[None, None, ['D-MTL-AUX'], ['D-MTL-AUX', 'F1', 'UAS', 'LAS'], ['S-MTL', 'F1', 'UAS', 'LAS']]
1
P19-1542table_1
Results of automatic and human evaluation: PAML vs Dialogue+Persona shows that our approach can achieve good consistency by using few dialogues instead of conditioning on the persona description; PAML vs Dialogue+Fine-tuning shows the effectiveness of the meta-learning approach in personalizing the dialogue model.
1
[['Human'], ['Dialogue+Persona'], ['Dialogue'], ['Dialogue+Fine-tuning'], ['PAML']]
2
[['Automatic', 'PPL'], ['Automatic', 'BLEU'], ['Automatic', 'C'], ['Human', 'Fluency'], ['Human', 'Consistency']]
[['-', '-', '0.33', '3.434', '0.234'], ['30.42', '1', '0.07', '3.053', '0.011'], ['36.75', '0.64', '-0.03', '-', '-'], ['32.96', '0.9', '0', '3.103', '0.038'], ['41.64', '0.74', '0.2', '3.185', '0.197']]
column
['PPL', 'BLEU', 'C', 'Fluency', 'Consistency']
['PAML']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Automatic || PPL</th> <th>Automatic || BLEU</th> <th>Automatic || C</th> <th>Human || Fluency</th> <th>Human || Consistency</th> </tr> </thead> <tbody> <tr> <td>Human</td> <td>-...
Table 1
table_1
P19-1542
3
acl2019
3.2 Results. Table 1 shows both automatic and human evaluation results. PAML achieves consistently better results in terms of dialogue consistency in both automatic and human evaluation. The latter also shows that all the experimental settings have comparable fluency scores, where instead perplexity and BLEU score are l...
[0, 1, 1, 1, 1, 2]
['3.2 Results.', ' Table 1 shows both automatic and human evaluation results.', 'PAML achieves consistently better results in terms of dialogue consistency in both automatic and human evaluation.', 'The latter also shows that all the experimental settings have comparable fluency scores, where instead perplexity and BLEU ...
[None, None, ['PAML'], ['Fluency', 'PPL', 'BLEU', 'PAML'], ['Human'], ['PAML']]
1
P19-1543table_1
Comparison with baseline models.
2
[['Model', 'Pointer LSTM'], ['Model', 'Bi-DAF'], ['Model', 'R-Net'], ['Model', 'Utterance-based HA'], ['Model', 'Turn-based HA (Proposed)']]
1
[['EM Score'], ['F1 Score']]
[['77.85', '82.73'], ['87.24', '88.67'], ['88.93', '90.41'], ['88.59', '90.12'], ['91.07', '92.39']]
column
['EM Score', 'F1 Score']
['Turn-based HA (Proposed)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM Score</th> <th>F1 Score</th> </tr> </thead> <tbody> <tr> <td>Model || Pointer LSTM</td> <td>77.85</td> <td>82.73</td> </tr> <tr> <td>Model || Bi-DAF</td> <td>87.24</td>...
Table 1
table_1
P19-1543
4
acl2019
We adopted Exact Match (EM) and F1 score in SQuAD as metrics (Rajpurkar et al., 2016). Results in Table 1 show that while the utterance-based HA network is on par with established baselines, the proposed turn-based HA model obtains more gains, achieving the best EM and F1 scores.
[2, 1]
['We adopted Exact Match (EM) and F1 score in SQuAD as metrics (Rajpurkar et al., 2016).', 'Results in Table 1 show that while the utterance-based HA network is on par with established baselines, the proposed turn-based HA model obtains more gains, achieving the best EM and F1 scores.']
[['EM Score', 'F1 Score'], ['Utterance-based HA', 'Turn-based HA (Proposed)', 'EM Score', 'F1 Score']]
1
P19-1557table_4
Competitive results on DBpedia and AG News reported in accuracy (%) without any hyper-parameter tuning.
1
[['Bi-BloSAN(Shen et al., 2018)'], ['LEAM(Wang et al., 2018a)'], ['This work']]
1
[['DBpedia(%)'], ['AG News (%)']]
[['98.77', '93.32'], ['99.02', '92.45'], ['98.9', '92.05']]
column
['accuracy', 'accuracy']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DBpedia(%)</th> <th>AG News (%)</th> </tr> </thead> <tbody> <tr> <td>Bi-BloSAN(Shen et al., 2018)</td> <td>98.77</td> <td>93.32</td> </tr> <tr> <td>LEAM(Wang et al., 2018a)</td...
Table 4
table_4
P19-1557
5
acl2019
Table 3 shows that the system obtains superior results in the Hate Speech dataset and yields competitive results on the Kaggle data in comparison to some state-of-the-art baseline systems. Table 4 shows the results of our system on the DBpedia and AG News datasets. Using the same model without any tuning, we managed to ...
[0, 1, 1]
['Table 3 shows that the system obtains superior results in the Hate Speech dataset and yields competitive results on the Kaggle data in comparison to some state-of-the-art baseline systems.', 'Table 4 shows the results of our system on the DBpedia and AG News datasets.', 'Using the same model without any tuning, we man...
[None, ['DBpedia(%)', 'AG News (%)'], ['This work']]
1
P19-1564table_2
Comparison of MTN (Base) to state-of-the-art visual dialogue models on the test-std v1.0. The best measure is highlighted in bold.
2
[['Model', 'MTN (Base)'], ['Model', 'CorefNMN (Kottur et al., 2018)'], ['Model', 'MN (Das et al., 2017a)'], ['Model', 'HRE (Das et al., 2017a)'], ['Model', 'LF (Das et al., 2017a)']]
1
[['NDCG']]
[['55.33'], ['54.7'], ['47.5'], ['45.46'], ['45.31']]
column
['NDCG']
['MTN (Base)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NDCG</th> </tr> </thead> <tbody> <tr> <td>Model || MTN (Base)</td> <td>55.33</td> </tr> <tr> <td>Model || CorefNMN (Kottur et al., 2018)</td> <td>54.7</td> </tr> <tr> <td...
Table 2
table_2
P19-1564
8
acl2019
We trained MTN with the Base parameters on the Visual Dialogue v1.0 2 training data and evaluate on the test-std v1.0 set. The image features are extracted by a pre-trained object detection model (Refer to the appendix Section A.2 for data preprocessing). We evaluate our model with Normalized Discounted Cumulative Gain...
[2, 2, 2, 2, 1, 1, 0]
['We trained MTN with the Base parameters on the Visual Dialogue v1.0 2 training data and evaluate on the test-std v1.0 set.', 'The image features are extracted by a pre-trained object detection model (Refer to the appendix Section A.2 for data preprocessing).', 'We evaluate our model with Normalized Discounted Cumulat...
[None, None, None, None, None, ['MTN (Base)'], ['CorefNMN (Kottur et al., 2018)', 'MN (Das et al., 2017a)', 'HRE (Das et al., 2017a)', 'LF (Das et al., 2017a)']]
1
P19-1565table_3
Results of Turn-level Evaluation.
2
[['System', 'Retrieval'], ['System', 'Ours-Random'], ['System', 'Ours-PMI'], ['System', 'Ours-Neural'], ['System', 'Ours-Kernel']]
2
[['Keyword Prediction', 'Rw@1'], ['Keyword Prediction', 'Rw@3'], ['Keyword Prediction', 'Rw@5'], ['Keyword Prediction', 'P@1'], ['Keyword Prediction', 'Cor.'], ['Response Retrieval', 'R20@1'], ['Response Retrieval', 'R20@3'], ['Response Retrieval', 'R20@5'], ['Response Retrieval', 'MRR']]
[['-', '-', '-', '-', '-', '0.5196', '0.7636', '0.8622', '0.6661'], ['0.0005', '0.0015', '0.0025', '0.0009', '0.4995', '0.5187', '0.7619', '0.8631', '0.665'], ['0.0585', '0.1351', '0.1872', '0.0871', '0.7974', '0.5441', '0.7839', '0.8716', '0.6847'], ['0.0609', '0.1324', '0.1825', '0.1006', '0.8075', '0.5395', '0.7801'...
column
['Rw@1', 'Rw@3', 'Rw@5', 'P@1', 'Cor.', 'R20@1', 'R20@3', 'R20@5', 'MRR']
['Ours-Random', 'Ours-PMI', 'Ours-Neural', 'Ours-Kernel']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Keyword Prediction || Rw@1</th> <th>Keyword Prediction || Rw@3</th> <th>Keyword Prediction || Rw@5</th> <th>Keyword Prediction || P@1</th> <th>Keyword Prediction || Cor.</th> <th>Response Re...
Table 3
table_3
P19-1565
8
acl2019
Results . Table 3 shows the evaluation results. Our system with Kernel transition module outperforms all other systems in terms of all metrics on both tasks, except for R20@3, where the system with PMI transition performs best. The Kernel approach can predict the next keywords more precisely. In the task of response...
[2, 1, 1, 1, 1, 1]
['Results .', 'Table 3 shows the evaluation results.', 'Our system with Kernel transition module outperforms all other systems in terms of all metrics on both tasks, except for R20@3, where the system with PMI transition performs best.', 'The Kernel approach can predict the next keywords more precisely.', 'In the ta...
[None, None, ['Ours-Kernel', 'Keyword Prediction', 'Response Retrieval', 'Ours-PMI', 'R20@3'], ['Ours-Kernel'], ['Response Retrieval', 'Ours-PMI', 'Retrieval'], ['Response Retrieval', 'Ours-Random', 'Retrieval']]
1
P19-1569table_3
Comparison with other works on the test sets of Raganato et al. (2017a). All works used sense annotations from SemCor as supervision, although often different pretrained embeddings. † reproduced from Raganato et al. (2017a); * used as a development set; bold new state-of-the-art (SOTA); underlined previous SOTA.
2
[['Model', 'MFS† (Most Frequent Sense)'], ['Model', 'IMS† (2010)'], ['Model', 'IMS + embeddings† (2016)'], ['Model', 'context2vec k-NN† (2016)'], ['Model', 'word2vec k-NN (2016)'], ['Model', 'LSTM-LP (Label Prop.) (2016)'], ['Model', 'Seq2Seq (Task Modelling) (2017b)'], ['Model', 'BiLSTM (Task Modelling) (2017b)'], ['M...
1
[['Senseval2'], ['Senseval3'], ['SemEval2007'], ['SemEval2013'], ['SemEval2015'], ['ALL']]
[['65.6', '66', '54.5', '63.8', '67.1', '64.8'], ['70.90', '69.30', '61.30', '65.3', '69.5', '68.4'], ['72.2', '70.4', '62.6', '65.9', '71.5', '69.6'], ['71.80', '69.10', '61.30', '65.60', '71.9', '69'], ['67.80', '62.10', '58.50', '66.10', '66.7', '-'], ['73.80', '71.80', '63.50', '69.50', '72.6', '-'], ['70.10', '68....
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['LMMS2348 (ELMo)', 'LMMS2348 (BERT)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Senseval2</th> <th>Senseval3</th> <th>SemEval2007</th> <th>SemEval2013</th> <th>SemEval2015</th> <th>ALL</th> </tr> </thead> <tbody> <tr> <td>Model || MFS† (Most Frequent Sense)...
Table 3
table_3
P19-1569
7
acl2019
5.1 All-Words Disambiguation . In Table 3 we show our results for all tasks of Raganato et al. (2017a)’s evaluation framework. We used the framework’s scoring scripts to avoid any discrepancies in the scoring methodology. Note that the k-NN referred to in Table 3 always refers to the closest neighbor, and relies on MFS fa...
[2, 1, 2, 2, 1, 1, 1]
['5.1 All-Words Disambiguation .', 'In Table 3 we show our results for all tasks of Raganato et al. (2017a)’s evaluation framework.', 'We used the framework’s scoring scripts to avoid any discrepancies in the scoring methodology.', 'Note that the k-NN referred to in Table 3 always refers to the closest neighbor, and relie...
[None, None, None, ['context2vec k-NN† (2016)', 'word2vec k-NN (2016)', 'ELMo k-NN (2018)', 'BERT k-NN'], ['LMMS2348 (BERT)'], ['LMMS2348 (BERT)', 'BERT k-NN', 'ALL', 'SemEval2013'], ['LMMS2348 (ELMo)', 'ELMo k-NN (2018)']]
1
P19-1570table_6
Comparison of WordCtx2Sense with the state-of-the-art methods for Word Sense Induction on the MakeSense-2016 and SemEval-2010 datasets. We report F-score and V-measure scores multiplied by 100.
4
[['Method', '(Huang et al., 2012)', 'K', '-'], ['Method', '(Neelakantan et al., 2015) 300D.30K.key', 'K', '-'], ['Method', '(Neelakantan et al., 2015) 300D.6K.key', 'K', '-'], ['Method', '(Mu et al., 2017)', 'K', '2'], ['Method', '(Mu et al., 2017)', 'K', '5'], ['Method', '(Arora et al., 2018)', 'K', '2'], ['Method', '...
2
[['MakeSense-2016', 'F-scr'], ['MakeSense-2016', 'V-msr'], ['SemEval-2010', 'F-scr'], ['SemEval-2010', 'V-msr']]
[['47.4', '15.5', '38.05', '10.6'], ['54.49', '19.4', '47.26', '9'], ['57.91', '14.4', '48.43', '6.9'], ['64.66', '28.8', '57.14', '7.1'], ['58.25', '34.3', '44.07', '14.5'], ['-', '-', '58.55', '6.1'], ['-', '-', '46.38', '11.5'], ['63.71', '22.2', '59.38', '6.8'], ['59.75', '32.9', '46.47', '13.2'], ['59.13', '34.2',...
column
['F-scr', 'V-msr', 'F-scr', 'V-msr']
['WordCtx2Sense (? = 0.0)', 'WordCtx2Sense (? = 10^?2)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MakeSense-2016 || F-scr</th> <th>MakeSense-2016 || V-msr</th> <th>SemEval-2010 || F-scr</th> <th>SemEval-2010 || V-msr</th> </tr> </thead> <tbody> <tr> <td>Method || (Huang et al., 2012) ...
Table 6
table_6
P19-1570
9
acl2019
Results . Table 6 shows the results of clustering on WSI SemEval-2010 dataset. WordCtx2Sense outperforms (Arora et al., 2018) and (Mu et al., 2017) on both F-score and V-measure scores by a considerable margin. We observe similar improvements on the MakeSense-2016 dataset.
[2, 1, 1, 1]
['Results .', 'Table 6 shows the results of clustering on WSI SemEval-2010 dataset.', 'WordCtx2Sense outperforms (Arora et al., 2018) and (Mu et al., 2017) on both F-score and V-measure scores by a considerable margin.', 'We observe similar improvements on the MakeSense-2016 dataset.']
[None, ['SemEval-2010'], ['WordCtx2Sense (? = 0.0)', 'WordCtx2Sense (? = 10^?2)', '(Arora et al., 2018)', '(Mu et al., 2017)'], ['MakeSense-2016']]
1
P19-1584table_2
Binary HIPAA F1 scores of our non-private (top) and private (bottom) de-identification approaches on the i2b2 2014 test set in comparison to the non-private state of the art. Our private approaches use N = 100 neighbors as a privacy criterion.
2
[['Model', 'Our non-private FastText'], ['Model', 'Our non-private GloVe'], ['Model', 'Our non-private GloVe + casing'], ['Model', 'Dernoncourt et al. (LSTM-CRF)'], ['Model', 'Liu et al. (ensemble + rules)']]
1
[['F1 (%)']]
[['97.67'], ['97.24'], ['97.62'], ['97.85'], ['98.27']]
column
['F1 (%)']
['Our non-private FastText', 'Our non-private GloVe', 'Our non-private GloVe + casing']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Our non-private FastText</td> <td>97.67</td> </tr> <tr> <td>Model || Our non-private GloVe</td> <td>97.24</td> </tr> <tr> ...
Table 2
table_2
P19-1584
6
acl2019
7 Result. Table 2 shows de-identification performance results for the non-private de-identification classifier in comparison to the state of the art. The results are average values out of five experiment runs. When trained on the raw i2b2 2014 data, our models achieve F1 scores that are comparable to Dernoncourt et al....
[2, 1, 1, 1, 1]
['7 Result.', 'Table 2 shows de-identification performance results for the non-private de-identification classifier in comparison to the state of the art.', 'The results are average values out of five experiment runs.', 'When trained on the raw i2b2 2014 data, our models achieve F1 scores that are comparable to Dernonc...
[None, None, None, ['Our non-private FastText', 'Our non-private GloVe', 'Our non-private GloVe + casing', 'Dernoncourt et al. (LSTM-CRF)'], ['Our non-private GloVe + casing', 'Our non-private GloVe']]
1
P19-1595table_2
Comparison of test set results. *MT-DNNKD is distilled from a diverse ensemble of models.
2
[['Model', 'BERT-Base (Devlin et al., 2019)'], ['Model', 'BERT-Large (Devlin et al., 2019)'], ['Model', 'BERT on STILTs (Phang et al., 2018)'], ['Model', 'MT-DNN (Liu et al., 2019b)'], ['Model', 'Span-Extractive BERT on STILTs (Keskar et al., 2019)'], ['Model', 'Snorkel MeTaL ensemble (Hancock et al., 2019)'], ['Model'...
1
[['GLUE score']]
[['78.5'], ['80.5'], ['82'], ['82.2'], ['82.3'], ['83.2'], ['83.7'], ['82.3']]
column
['GLUE score']
['BERT-Large + BAM (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>GLUE score</th> </tr> </thead> <tbody> <tr> <td>Model || BERT-Base (Devlin et al., 2019)</td> <td>78.5</td> </tr> <tr> <td>Model || BERT-Large (Devlin et al., 2019)</td> <td>80.5</t...
Table 2
table_2
P19-1595
4
acl2019
We compare against recent work by submitting to the GLUE leaderboard. We use Single→Multi distillation. Following the procedure used by BERT, we train multiple models and submit the one with the highest average dev set score to the test set. BERT trained 10 models for each task (80 total); we trained 20 multi-task mod...
[2, 2, 2, 2, 2, 1, 1, 2, 2]
['We compare against recent work by submitting to the GLUE leaderboard.', 'We use Single→Multi distillation.', 'Following the procedure used by BERT, we train multiple models and submit the one with the highest average dev set score to the test set.', 'BERT trained 10 models for each task (80 total);', 'we trained 20 ...
[['GLUE score'], None, None, None, None, None, ['BERT-Large + BAM (ours)'], None, None]
1
P19-1599table_3
Test results with WPL at different positions.
1
[['VGVAE w/o WPL'], ['Dec. hidden state'], ['Enc. emb.'], ['Dec. emb.'], ['Enc. & Dec. emb.']]
1
[['BL'], ['R-1'], ['R-2'], ['R-L'], ['MET'], ['ST']]
[['3.5', '24.8', '7.3', '29.7', '12.6', '10.6'], ['3.6', '24.9', '7.3', '29.7', '12.6', '10.5'], ['3.9', '26.1', '7.8', '31', '12.9', '10.2'], ['4.1', '26.3', '8.1', '31.3', '13.1', '10.1'], ['4.5', '26.5', '8.2', '31.5', '13.3', '10']]
column
['BL', 'R-1', 'R-2', 'R-L', 'MET', 'ST']
['Dec. hidden state']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BL</th> <th>R-1</th> <th>R-2</th> <th>R-L</th> <th>MET</th> <th>ST</th> </tr> </thead> <tbody> <tr> <td>VGVAE w/o WPL</td> <td>3.5</td> <td>24.8</td> <td>7.3</td> <td>29.7</td> <td...
Table 3
table_3
P19-1599
7
acl2019
Effect of Position of Word Position Loss. We also study the effect of the position of WPL by (1) using the decoder hidden state, (2) using the concatenation of word embeddings in the syntactic encoder and the syntactic variable, (3) using the concatenation of word embeddings in the decoder and the syntactic variable, o...
[0, 0, 1, 1, 1]
['Effect of Position of Word Position Loss.', 'We also study the effect of the position of WPL by (1) using the decoder hidden state, (2) using the concatenation of word embeddings in the syntactic encoder and the syntactic variable, (3) using the concatenation of word embeddings in the decoder and the syntactic variab...
[None, None, ['Dec. hidden state'], ['Dec. hidden state'], ['Dec. hidden state']]
1
P19-1599table_7
Test results when using a single code.
1
[['LC'], ['Single LC']]
1
[['BL'], ['R-1'], ['R-2'], ['R-L'], ['MET'], ['ST']]
[['13.6', '44.7', '21', '48.3', '24.8', '6.7'], ['12.9', '44.2', '20.3', '47.4', '24.1', '6.9']]
column
['BL', 'R-1', 'R-2', 'R-L', 'MET', 'ST']
['LC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BL</th> <th>R-1</th> <th>R-2</th> <th>R-L</th> <th>MET</th> <th>ST</th> </tr> </thead> <tbody> <tr> <td>LC</td> <td>13.6</td> <td>44.7</td> <td>21</td> <td>4...
Table 7
table_7
P19-1599
8
acl2019
We also compare the performance of LC by using a single latent code that has 50 classes. The results in Table 7 show that it is better to use a smaller number of classes for each cluster instead of using a cluster with a large number of classes.
[2, 1]
['We also compare the performance of LC by using a single latent code that has 50 classes.', 'The results in Table 7 show that it is better to use a smaller number of classes for each cluster instead of using a cluster with a large number of classes.']
[['Single LC'], ['LC', 'Single LC']]
1
P19-1602table_3
Performance of paraphrase generation. The larger↑ (or lower↓), the better. Some results are quoted from †Miao et al. (2019) and ‡Gupta et al. (2018).
2
[['Model', 'Origin Sentence'], ['Model', 'VAE-SVG-eq (supervised)'], ['Model', 'VAE (unsupervised)'], ['Model', 'CGMH'], ['Model', 'DSS-VAE']]
1
[['BLEU-ref'], ['BLEU-ori']]
[['30.49', '100'], ['22.9', '–'], ['9.25', '27.23'], ['18.85', '50.18'], ['20.54', '52.77']]
column
['BLEU-ref', 'BLEU-ori']
['DSS-VAE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-ref</th> <th>BLEU-ori</th> </tr> </thead> <tbody> <tr> <td>Model || Origin Sentence</td> <td>30.49</td> <td>100</td> </tr> <tr> <td>Model || VAE-SVG-eq (supervised)</td> ...
Table 3
table_3
P19-1602
7
acl2019
Results Table 3 shows the performance of unsupervised paraphrase generation. In the first row of Table 3, simply copying the original sentences yields the highest BLEU-ref, but is meaningless as it has a BLEU-ori score of 100. We see that DSS-VAE outperforms the CGMH and the original VAE in BLEU-ref. Especially, DSS-VA...
[1, 1, 1, 1, 2, 2, 2, 2, 1]
['Results Table 3 shows the performance of unsupervised paraphrase generation.', 'In the first row of Table 3, simply copying the original sentences yields the highest BLEU-ref, but is meaningless as it has a BLEU-ori score of 100.', 'We see that DSS-VAE outperforms the CGMH and the original VAE in BLEU-ref.', 'Especia...
[None, ['Origin Sentence', 'BLEU-ref', 'BLEU-ori'], ['DSS-VAE', 'CGMH', 'VAE (unsupervised)', 'BLEU-ref'], ['DSS-VAE', 'BLEU-ref'], None, None, None, ['DSS-VAE'], ['VAE (unsupervised)', 'CGMH', 'DSS-VAE']]
1
P19-1603table_2
Automatic evaluation of generation models.
2
[['Model', 'Seq2Seq + SentiMod'], ['Model', 'SIC-Seq2Seq + RB'], ['Model', 'SIC-Seq2Seq + RM'], ['Model', 'SIC-Seq2Seq + DA']]
1
[['BLEU-1'], ['BLEU-2'], ['I-O SentiCons']]
[['10.7', '3.2', '0.788'], ['19.3', '6.3', '0.879'], ['19.5', '6.2', '0.83'], ['19.8', '6.7', '0.794']]
column
['BLEU-1', 'BLEU-2', 'I-O SentiCons']
['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>I-O SentiCons</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq + SentiMod</td> <td>10.7</td> <td>3.2</td> <td>0.788</td> </tr> <tr> ...
Table 2
table_2
P19-1603
4
acl2019
The automatic results of four generation models are shown in Table 2. We have the following observations: (1) Three models based on our proposed framework do not have obvious performance difference in terms of BLEU. Meanwhile, all of them can largely outperform the Seq2Seq+SentiMod baseline which does not follow our fr...
[1, 1, 1, 1, 1, 2]
['The automatic results of four generation models are shown in Table 2.', 'We have the following observations: (1) Three models based on our proposed framework do not have obvious performance difference in terms of BLEU.', 'Meanwhile, all of them can largely outperform the Seq2Seq+SentiMod baseline which does not follo...
[None, ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA', 'BLEU-2'], ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA', 'Seq2Seq + SentiMod'], ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA'], ['I-O SentiCons'], None]
1
P19-1603table_3
Human evaluation of generation models.
2
[['Model', 'Seq2Seq + SentiMod'], ['Model', 'SIC-Seq2Seq + RB'], ['Model', 'SIC-Seq2Seq + RM'], ['Model', 'SIC-Seq2Seq + DA']]
1
[['Coherency'], ['Fluency'], ['Sentiment']]
[['1.5', '2.5', '3.68'], ['2.65', '4.75', '4.09'], ['2.15', '4.6', '3.65'], ['2.2', '4.5', '3.71']]
column
['Coherency', 'Fluency', 'Sentiment']
['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Coherency</th> <th>Fluency</th> <th>Sentiment</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq + SentiMod</td> <td>1.5</td> <td>2.5</td> <td>3.68</td> </tr> <tr> ...
Table 3
table_3
P19-1603
4
acl2019
The automatic and human evaluation results of four generation models are shown in Table 2 and Table 3 respectively. We have the following observations: (1) Three models based on our proposed framework do not have obvious performance difference in terms of BLEU, Coherency, and Fluency. Meanwhile, all of them can largely...
[1, 1, 1, 2, 2, 2]
['The automatic and human evaluation results of four generation models are shown in Table 2 and Table 3 respectively.', 'We have the following observations: (1) Three models based on our proposed framework do not have obvious performance difference in terms of BLEU, Coherency, and Fluency.', 'Meanwhile, all of them can...
[None, ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA'], ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA', 'Seq2Seq + SentiMod'], None, None, None]
1
P19-1607table_3
Comparison with previous models on text simplification in Newsela dataset and formality transfer in GYAFC dataset. Our models achieved the best BLEU scores across styles and domains.
1
[['Source'], ['Reference'], ['Dress-LS'], ['BiFT-Ens'], ['Ours (RNN)'], ['Ours (SAN)']]
2
[['Newsela', 'Add'], ['Newsela', 'Keep'], ['Newsela', 'Del'], ['Newsela', 'BLEU'], ['Newsela', 'SARI'], ['GYAFC-E&M', 'Add'], ['GYAFC-E&M', 'Keep'], ['GYAFC-E&M', 'Del'], ['GYAFC-E&M', 'BLEU'], ['GYAFC-F&R', 'Add'], ['GYAFC-F&R', 'Keep'], ['GYAFC-F&R', 'Del'], ['GYAFC-F&R', 'BLEU']]
[['0', '60.3', '0', '21.4', '2.8', '0', '85.4', '0', '49.1', '0', '85.8', '0', '51'], ['100', '100', '100', '100', '70.3', '57.2', '82.9', '61.2', '100', '56.5', '82.7', '60.6', '100'], ['2.4', '60.7', '44.9', '24.3', '26.6', '', '', '', '', '', '', '', ''], ['', '', '', '', '', '32.1', '90', '58.2', '71.4', '32.6', '9...
column
['Add', 'Keep', 'Del', 'BLEU', 'SARI', 'Add', 'Keep', 'Del', 'BLEU', 'Add', 'Keep', 'Del', 'BLEU']
['Ours (RNN)', 'Ours (SAN)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Newsela || Add</th> <th>Newsela || Keep</th> <th>Newsela || Del</th> <th>Newsela || BLEU</th> <th>Newsela || SARI</th> <th>GYAFC-E&amp;M || Add</th> <th>GYAFC-E&amp;M || Keep</th> ...
Table 3
table_3
P19-1607
4
acl2019
Table 3 shows a comparison between our models and comparative models. Whereas Dress-LS has a higher SARI score because it directly optimizes SARI using reinforcement learning, our models achieved the best BLEU scores across styles and domains.
[1, 1]
['Table 3 shows a comparison between our models and comparative models.', 'Whereas Dress-LS has a higher SARI score because it directly optimizes SARI using reinforcement learning, our models achieved the best BLEU scores across styles and domains.']
[None, ['Dress-LS', 'SARI', 'Ours (RNN)', 'Ours (SAN)', 'BLEU']]
1
P19-1623table_2
Human evaluation results on the Chinese-toEnglish task. “Flu.” denotes fluency and “Ade.” denotes adequacy. Two human evaluators who can read both Chinese and English were asked to assess the fluency and adequacy of the translations. The scores of fluency and adequacy range from 1 to 5.
3
[['Method', 'Evaluator 1', 'MLE'], ['Method', 'Evaluator 1', 'MLE + CP'], ['Method', 'Evaluator 1', 'WordDropout'], ['Method', 'Evaluator 1', 'CLone'], ['Method', 'Evaluator 2', 'MLE'], ['Method', 'Evaluator 2', 'MLE + CP'], ['Method', 'Evaluator 2', 'WordDropout'], ['Method', 'Evaluator 2', 'CLone']]
1
[['Flu.'], ['Ade.']]
[['4.31', '4.25'], ['4.31', '4.31'], ['4.29', '4.25'], ['4.32', '4.58'], ['4.27', '4.22'], ['4.26', '4.25'], ['4.25', '4.23'], ['4.27', '4.53']]
column
['Flu.', 'Ade.']
['CLone']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Flu.</th> <th>Ade.</th> </tr> </thead> <tbody> <tr> <td>Method || Evaluator 1 || MLE</td> <td>4.31</td> <td>4.25</td> </tr> <tr> <td>Method || Evaluator 1 || MLE + CP</td> ...
Table 2
table_2
P19-1623
4
acl2019
Table 2 shows the results of human evaluation on the Chinese-to-English task. We asked two human evaluators who can read both Chinese and English to evaluate the fluency and adequacy of the translations generated by MLE, MLE + CP, MLE + data, and CLone. The scores of fluency and adequacy range from 1 to 5. The translatio...
[1, 1, 2, 2, 1, 2, 2]
['Table 2 shows the results of human evaluation on the Chinese-to-English task.', 'We asked two human evaluators who can read both Chinese and English to evaluate the fluency and adequacy of the translations generated by MLE, MLE + CP, MLE + data, and CLone.', 'The scores of fluency and adequacy range from 1 to 5.', 'The...
[None, ['WordDropout', 'CLone'], ['Flu.', 'Ade.'], None, ['CLone', 'Ade.'], ['Ade.'], ['CLone']]
1
P19-1628table_2
Test set results on the NYT and CNNDailyMail datasets using ROUGE F1 (R-1 and R-2 are shorthands for unigram and bigram overlap, R-L is the longest common subsequence).
2
[['Method', 'ORACLE'], ['Method', 'REFRESH (Narayan et al., 2018b)'], ['Method', 'POINTER-GENERATOR (See et al., 2017)'], ['Method', 'LEAD-3'], ['Method', 'DEGREE (tf-idf)'], ['Method', 'TEXTRANK (tf-idf)'], ['Method', 'TEXTRANK (skip-thought vectors)'], ['Method', 'TEXTRANK (BERT)'], ['Method', 'PACSUM (tf-idf)'], [...
2
[['NYT', 'R-1'], ['NYT', 'R-2'], ['NYT', 'R-L'], ['CNN+DM', 'R-1'], ['CNN+DM', 'R-2'], ['CNN+DM', 'R-L']]
[['61.9', '41.7', '58.3', '54.7', '30.4', '50.8'], ['41.3', '22', '37.8', '41.3', '18.4', '37.5'], ['42.7', '22.1', '38', '39.5', '17.3', '36.4'], ['35.5', '17.2', '32', '40.5', '17.7', '36.7'], ['33.2', '13.1', '29', '33.0', '11.7', '29.5'], ['33.2', '13.1', '29', '33.2', '11.8', '29.6'], ['30.1', '9.6', '26.1', '31.4...
column
['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L']
['PACSUM (tf-idf)', 'DEGREE (tf-idf)', 'TEXTRANK (tf-idf)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NYT || R-1</th> <th>NYT || R-2</th> <th>NYT || R-L</th> <th>CNN+DM || R-1</th> <th>CNN+DM || R-2</th> <th>CNN+DM || R-L</th> </tr> </thead> <tbody> <tr> <td>Method || ORACLE</td...
Table 2
table_2
P19-1628
6
acl2019
As can be seen in Table 2, DEGREE (tf-idf) is very close to TEXTRANK (tf-idf). Due to space limitations, we only show comparisons between DEGREE and TEXTRANK with tf-idf, however, we observed similar trends across sentence representations. These results indicate that considering global struc...
[1, 1, 1, 1, 2, 2, 2]
['As can be seen in Table 2, DEGREE (tf-idf) is very close to TEXTRANK (tf-idf).', 'Due to space limitations, we only show comparisons between DEGREE and TEXTRANK with tf-idf, however, we observed similar trends across sentence representations.', 'These results indicate that considering globa...
[['DEGREE (tf-idf)', 'TEXTRANK (tf-idf)'], ['DEGREE (tf-idf)', 'TEXTRANK (tf-idf)'], ['NYT', 'CNN+DM'], ['PACSUM (tf-idf)', 'TEXTRANK (tf-idf)'], None, None, None]
1
P19-1629table_3
Results (dev set) on document-level GMB benchmark.
2
[['DRTS parser', 'Shallow'], ['DRTS parser', 'Deep'], ['DRTS parser', 'DeepFeat'], ['DRTS parser', 'DeepCopy']]
1
[['par-F1'], ['exa-F1']]
[['66.63', '61.74'], ['71.01', '65.42'], ['71.44', '66.43'], ['75.89', '69.45']]
column
['par-F1', 'exa-F1']
['Deep', 'DeepFeat', 'DeepCopy']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>par-F1</th> <th>exa-F1</th> </tr> </thead> <tbody> <tr> <td>DRTS parser || Shallow</td> <td>66.63</td> <td>61.74</td> </tr> <tr> <td>DRTS parser || Deep</td> <td>71.01</td...
Table 3
table_3
P19-1629
8
acl2019
Parsing Documents . Table 3 presents various ablation studies for the document-level model on the development set. Deep sentence representations when combined with multi-attention bring improvements over shallow representations (+3.68 exact-F1). Using alignments as features and as a way of highlighting where to copy f...
[2, 1, 1, 1, 1, 0, 0]
['Parsing Documents .', 'Table 3 presents various ablation studies for the document-level model on the development set.', 'Deep sentence representations when combined with multi-attention bring improvements over shallow representations (+3.68 exact-F1).', ' Using alignments as features and as a way of highlighting wher...
[None, None, ['exa-F1', 'Deep'], ['Deep', 'par-F1', 'exa-F1'], ['DeepCopy'], None, None]
1
P19-1629table_4
Results (test set) on document-level GMB benchmark.
2
[['Models', 'DocSent'], ['Models', 'DocTree'], ['Models', 'DeepCopy']]
1
[['par-F1'], ['exa-F1']]
[['57.1', '53.27'], ['62.83', '58.22'], ['70.83', '66.56']]
column
['par-F1', 'exa-F1']
['DeepCopy']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>par-F1</th> <th>exa-F1</th> </tr> </thead> <tbody> <tr> <td>Models || DocSent</td> <td>57.1</td> <td>53.27</td> </tr> <tr> <td>Models || DocTree</td> <td>62.83</td> <...
Table 4
table_4
P19-1629
8
acl2019
Parsing Documents. Table 3 presents various ablation studies for the document-level model on the development set. Deep sentence representations when combined with multi-attention bring improvements over shallow representations (+3.68 exact-F1). Using alignments as features and as a way of highlighting where...
[2, 0, 0, 0, 0, 1, 1]
['Parsing Documents.', 'Table 3 presents various ablation studies for the document-level model on the development set.', 'Deep sentence representations when combined with multi-attention bring improvements over shallow representations (+3.68 exact-F1).', 'Using alignments as features and as a way of highlig...
[None, None, None, None, None, ['DeepCopy', 'DocSent', 'DocTree'], ['DeepCopy']]
1
P19-1631table_5
Performance statistics of all approaches on the Wikipedia dataset filtered on samples including identity terms. Numbers represent the mean of 5 runs. Maximum variance is .001.
2
[['Identity', 'Baseline'], ['Identity', 'Importance'], ['Identity', 'TOK Replace'], ['Identity', 'Our Method'], ['Identity', 'Finetuned']]
1
[['Acc'], ['F1'], ['AUC'], ['FP'], ['FN']]
[['0.931', '0.692', '0.91', '0.011', '0.057'], ['0.933', '0.704', '0.945', '0.012', '0.055'], ['0.91', '0.528', '0.882', '0.008', '0.081'], ['0.934', '0.697', '0.949', '0.008', '0.058'], ['0.928', '0.66', '0.94', '0.007', '0.064']]
column
['Acc', 'F1', 'AUC', 'FP', 'FN']
['Our Method']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc</th> <th>F1</th> <th>AUC</th> <th>FP</th> <th>FN</th> </tr> </thead> <tbody> <tr> <td>Identity || Baseline</td> <td>0.931</td> <td>0.692</td> <td>0.91</td> <t...
Table 5
table_5
P19-1631
5
acl2019
4.3.1 Evaluation on Original Data . We first verify that the prior loss term does not adversely affect overall classifier performance on the main task using general performance metrics such as accuracy and F-1. Results are shown in Table 4. Unlike previous approaches (Park et al., 2018; Dixon et al., 2018; Madras et al...
[2, 0, 0, 0, 0, 1, 1, 1, 2, 2, 1]
['4.3.1 Evaluation on Original Data .', 'We first verify that the prior loss term does not adversely affect overall classifier performance on the main task using general performance metrics such as accuracy and F-1.', 'Results are shown in Table 4.', 'Unlike previous approaches (Park et al., 2018; Dixon et al., 2018; M...
[None, None, None, None, None, None, ['Importance', 'Baseline'], ['TOK Replace'], None, None, ['Our Method']]
1
P19-1635table_7
Results for exaggerated numeral detection.
2
[['Distortion factor', '±10%'], ['Distortion factor', '±30%'], ['Distortion factor', '±50%'], ['Distortion factor', '±70%'], ['Distortion factor', '±90%']]
1
[['Micro-F1'], ['Macro-F1']]
[['58.54%', '57.87%'], ['56.94%', '56.11%'], ['57.69%', '56.85%'], ['70.92%', '70.85%'], ['76.91%', '76.94%']]
column
['micro-f1', 'macro-f1']
['Distortion factor']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Micro-F1</th> <th>Macro-F1</th> </tr> </thead> <tbody> <tr> <td>Distortion factor || ±10%</td> <td>58.54%</td> <td>57.87%</td> </tr> <tr> <td>Distortion factor || ±30%</td> ...
Table 7
table_7
P19-1635
5
acl2019
In this experiment, we release the boundary limitation, and test the numeracy for all real numbers. For instance, the altered results of 138 with 10% distortion factor are in the same magnitude, and that with 30% distortion factor, 96.6 and 179.4, are in different magnitude. Table 7 lists the experimental results. We f...
[2, 2, 1, 1, 1]
['In this experiment, we release the boundary limitation, and test the numeracy for all real numbers.', 'For instance, the altered results of 138 with 10% distortion factor are in the same magnitude, and that with 30% distortion factor, 96.6 and 179.4, are in different magnitude.', 'Table 7 lists the experimental resul...
[None, ['±10%', '±30%'], None, ['±70%', '±90%'], ['Micro-F1', 'Macro-F1']]
1
P19-1643table_6
Results from baselines and our best multimodal method on validation and test data. ActionG indicates action representation using GloVe embedding, and ActionE indicates action representation using ELMo embedding. ContextS indicates sentence-level context, and ContextA indicates action-level context.
2
[['Input Feature', 'Action E + Inception'], ['Input Feature', 'Action E + Inception + C3D'], ['Input Feature', 'Action E + POS + Inception + C3D'], ['Input Feature', 'Action E + Context S + Inception + C3D'], ['Input Feature', 'Action E + Context A + Inception + C3D'], ['Input Feature', 'Action E + Concreteness + Incep...
2
[['Metric', 'Accuracy'], ['Metric', 'Precision'], ['Metric', 'Recall'], ['Metric', 'F1']]
[['0.722', '0.765', '0.863', '0.811'], ['0.725', '0.769', '0.869', '0.814'], ['0.731', '0.763', '0.885', '0.82'], ['0.725', '0.77', '0.859', '0.812'], ['0.729', '0.757', '0.895', '0.82'], ['0.723', '0.768', '0.86', '0.811'], ['0.737', '0.758', '0.911', '0.827']]
column
['Accuracy', 'Precision', 'Recall', 'F1']
['Action E + POS + Context S + Concreteness + Inception + C3D']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metric || Accuracy</th> <th>Metric || Precision</th> <th>Metric || Recall</th> <th>Metric || F1</th> </tr> </thead> <tbody> <tr> <td>Input Feature || Action E + Inception</td> <td>0....
Table 6
table_6
P19-1643
9
acl2019
Table 6 shows the results obtained using the multimodal model for different sets of input features. The model that uses all the input features available leads to the best results, improving significantly over the text-only and video-only methods.
[1, 1]
['Table 6 shows the results obtained using the multimodal model for different sets of input features.', 'The model that uses all the input features available leads to the best results, improving significantly over the text-only and video-only methods.']
[None, ['Action E + POS + Context S + Concreteness + Inception + C3D']]
1
P19-1645table_1
Summary of segmentation performance on phoneme version of the Brent Corpus (BR-phono).
2
[['Model', 'LSTM surprisal (Elman, 1990)'], ['Model', 'HMLSTM (Chung et al. 2017)'], ['Model', 'Unigram DP'], ['Model', 'Bigram HDP'], ['Model', 'SNLM (- memory, - length)'], ['Model', 'SNLM (+ memory, - length)'], ['Model', 'SNLM (- memory, + length)'], ['Model', 'SNLM (+ memory, + length)']]
2
[['Metric', 'P'], ['Metric', 'R'], ['Metric', 'F1']]
[['54.5', '55.5', '55'], ['8.1', '13.3', '10.1'], ['63.3', '50.4', '56.1'], ['53', '61.4', '56.9'], ['54.3', '34.9', '42.5'], ['52.4', '36.8', '43.3'], ['57.6', '43.4', '49.5'], ['81.3', '77.5', '79.3']]
column
['P', 'R', 'F1']
['SNLM (- memory, - length)', 'SNLM (- memory, + length)', 'SNLM (+ memory, - length)', 'SNLM (+ memory, + length)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metric || P</th> <th>Metric || R</th> <th>Metric || F1</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM surprisal (Elman, 1990)</td> <td>54.5</td> <td>55.5</td> <td>55</td> ...
Table 1
table_1
P19-1645
6
acl2019
Table 1 summarizes the segmentation results on the widely used BR-phono corpus, comparing it to a variety of baselines. Unigram DP, Bigram HDP, LSTM surprisal and HMLSTM refer to the benchmark models explained in §6. The ablated versions of our model show that without the lexicon (-memory), without the expected length p...
[1, 1, 1]
['Table 1 summarizes the segmentation results on the widely used BR-phono corpus, comparing it to a variety of baselines.', 'Unigram DP, Bigram HDP, LSTM surprisal and HMLSTM refer to the benchmark models explained in §6.', 'The ablated versions of our model show that without the lexicon (-memory), without the expected ...
[None, ['Unigram DP', 'LSTM surprisal (Elman, 1990)', 'HMLSTM (Chung et al. 2017)'], ['SNLM (- memory, - length)', 'SNLM (- memory, + length)', 'SNLM (+ memory, + length)', 'SNLM (+ memory, - length)']]
1
P19-1645table_2
Summary of segmentation performance on other corpora.
4
[['Corpus', 'BR-text', 'Model', 'LSTM surprisal'], ['Corpus', 'BR-text', 'Model', 'Unigram DP'], ['Corpus', 'BR-text', 'Model', 'Bigram HDP'], ['Corpus', 'BR-text', 'Model', 'SNLM'], ['Corpus', 'PTB', 'Model', 'LSTM surprisal'], ['Corpus', 'PTB', 'Model', 'Unigram DP'], ['Corpus', 'PTB', 'Model', 'Bigram HDP'], ['Corpu...
2
[['Metric', 'P'], ['Metric', 'R'], ['Metric', 'F1']]
[['36.4', '49', '41.7'], ['64.9', '55.7', '60'], ['52.5', '63.1', '57.3'], ['68.7', '78.9', '73.5'], ['27.3', '36.5', '31.2'], ['51', '49.1', '50'], ['34.8', '47.3', '40.1'], ['54.1', '60.1', '56.9'], ['41.6', '25.6', '31.7'], ['61.8', '49.6', '55'], ['67.3', '67.7', '67.5'], ['78.1', '81.5', '79.8'], ['38.1', '23', '2...
column
['P', 'R', 'F1']
['SNLM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metric || P</th> <th>Metric || R</th> <th>Metric || F1</th> </tr> </thead> <tbody> <tr> <td>Corpus || BR-text || Model || LSTM surprisal</td> <td>36.4</td> <td>49</td> <td>41.7<...
Table 2
table_2
P19-1645
7
acl2019
Table 2 summarizes results on the BR-text (orthographic Brent corpus) and Chinese corpora. As in the previous section, all the models were trained to maximize held-out likelihood. Here we observe a similar pattern, with the SNLM outperforming the baseline models, despite the tasks being quite different from each other ...
[1, 2, 1]
['Table 2 summarizes results on the BR-text (orthographic Brent corpus) and Chinese corpora.', 'As in the previous section, all the models were trained to maximize held-out likelihood.', 'Here we observe a similar pattern, with the SNLM outperforming the baseline models, despite the tasks being quite different from eac...
[None, None, ['SNLM']]
1
P19-1645table_4
Test language modeling performance (bpc).
2
[['Model', 'Unigram DP'], ['Model', 'Bigram HDP'], ['Model', 'LSTM'], ['Model', 'SNLM']]
2
[['Corpus', 'BR-text'], ['Corpus', 'BR-phono'], ['Corpus', 'PTB'], ['Corpus', 'CTB'], ['Corpus', 'PKU']]
[['2.33', '2.93', '2.25', '6.16', '6.88'], ['1.96', '2.55', '1.8', '5.4', '6.42'], ['2.03', '2.62', '1.65', '4.94', '6.2'], ['1.94', '2.54', '1.56', '4.84', '5.89']]
column
['bpc', 'bpc', 'bpc', 'bpc', 'bpc']
['SNLM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Corpus || BR-text</th> <th>Corpus || BR-phono</th> <th>Corpus || PTB</th> <th>Corpus || CTB</th> <th>Corpus || PKU</th> </tr> </thead> <tbody> <tr> <td>Model || Unigram DP</td> ...
Table 4
table_4
P19-1645
8
acl2019
Table 4 summarizes the results of the language modeling experiments. Again, we see that SNLM outperforms the Bayesian models and a character LSTM.
[1, 1]
['Table 4 summarizes the results of the language modeling experiments.', 'Again, we see that SNLM outperforms the Bayesian models and a character LSTM.']
[None, ['SNLM', 'LSTM']]
1
P19-1648table_1
Comparison of ReDAN with a discriminative decoder to state-of-the-art methods on VisDial v1.0 validation set. Higher score is better for NDCG, MRR and Recall@k, while lower score is better for mean rank. All these baselines are re-implemented with bottom-up features and incorporated with GloVe vectors for fair comparis...
2
[['Model', 'MN-G (Das et al., 2017a)'], ['Model', 'HCIAE-G (Lu et al., 2017)'], ['Model', 'CoAtt-G (Wu et al., 2018)'], ['Model', 'ReDAN-G (T=1)'], ['Model', 'ReDAN-G (T=2)'], ['Model', 'ReDAN-G (T=3)'], ['Model', 'Ensemble of 4']]
1
[['NDCG'], ['MRR'], ['R@1'], ['R@5'], ['R@10'], ['Mean']]
[['56.99', '47.83', '38.01', '57.49', '64.08', '18.76'], ['59.7', '49.07', '39.72', '58.23', '64.73', '18.43'], ['59.24', '49.64', '40.09', '59.37', '65.92', '17.86'], ['59.41', '49.6', '39.95', '59.32', '65.97', '17.79'], ['60.11', '49.96', '40.36', '59.72', '66.57', '17.53'], ['60.47', '50.02', '40.27', '59.93', '66....
column
['NDCG', 'MRR', 'R@1', 'R@5', 'R@10', 'Mean']
['ReDAN-G (T=1)', 'ReDAN-G (T=2)', 'ReDAN-G (T=3)', 'Ensemble of 4']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NDCG</th> <th>MRR</th> <th>R@1</th> <th>R@5</th> <th>R@10</th> <th>Mean</th> </tr> </thead> <tbody> <tr> <td>Model || MN-G (Das et al., 2017a)</td> <td>56.99</td> <td>...
Table 1
table_1
P19-1648
7
acl2019
Results on VisDial val v1.0. Experimental results on val v1.0 are shown in Table 1. “-D” denotes that a discriminative decoder is used. With only one reasoning step, our ReDAN model already achieves better performance than CoAtt, which is the previous best-performing model. Using two or three reasoning steps further in...
[2, 1, 1, 1, 2, 1, 1]
['Results on VisDial val v1.0.', 'Experimental results on val v1.0 are shown in Table 1. “-D” denotes that a discriminative decoder is used.', 'With only one reasoning step, our ReDAN model already achieves better performance than CoAtt, which is the previous best-performing model.', 'Using two or three reasoning steps...
[None, None, ['ReDAN-G (T=1)', 'CoAtt-G (Wu et al., 2018)'], ['ReDAN-G (T=2)', 'ReDAN-G (T=3)'], None, ['Ensemble of 4'], ['Ensemble of 4', 'NDCG', 'MRR']]
1
P19-1650table_1
Baseline model results, using either image or entity labels (2nd column). The informativeness metric Covr we is low when additional input labels are not used, and high when they are.
4
[['Baseline', 'Labels-to-captions', ' Image|Label', 'N|Y'], ['Baseline', '(Anderson et al., 2018)', 'Image|Label', ' Y|N'], ['Baseline', '(Sharma et al., 2018)', 'Image|Label', ' Y|N'], ['Baseline', '(Lu et al., 2018) w/ T', ' Image|Label', 'Y|Y']]
1
[[' CIDEr'], [' Covr we'], [' Covp obj']]
[['62.08', '21.01', '6.19'], ['51.09', '7.3', '4.95'], ['62.35', '10.52', '6.74'], ['69.46', '36.8', '6.93']]
column
['CIDEr', 'Covr we', 'Covp obj']
[' Image|Label']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CIDEr</th> <th>Covr we</th> <th>Covp obj</th> </tr> </thead> <tbody> <tr> <td>Baseline || Labels-to-captions || Image|Label || N|Y</td> <td>62.08</td> <td>21.01</td> <td>6.19</...
Table 1
table_1
P19-1650
6
acl2019
Table 1 shows the performance of these baselines. We observe that the image-only models perform poorly on Covr we because they are unable to identify them from the image pixels alone. On the other hand, the labels-only baseline and the proposal of Lu et al.(2018) has high performance across all three metrics.
[1, 1, 1]
['Table 1 shows the performance of these baselines.', 'We observe that the image-only models perform poorly on Covr we because they are unable to identify them from the image pixels alone.', 'On the other hand, the labels-only baseline and the proposal of Lu et al.(2018) has high performance across all three metrics.']
[None, [' Image|Label', ' Y|N', ' Covr we'], [' Image|Label', 'N|Y', 'Y|Y']]
1
P19-1653table_3
Human ranking results: normalised rank (micro-averaged). Bold highlights best results.
2
[['lang', 'DE'], ['lang', 'FR']]
1
[['base+att'], ['del'], ['del+obj']]
[['0.35', '0.62', '0.59'], ['0.41', '0.6', '0.67']]
column
['rank', 'rank', 'rank']
[' del+obj']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>base+att</th> <th>del</th> <th>del+obj</th> </tr> </thead> <tbody> <tr> <td>lang || DE</td> <td>0.35</td> <td>0.62</td> <td>0.59</td> </tr> <tr> <td>lang || FR</td> ...
Table 3
table_3
P19-1653
8
acl2019
Table 3 shows the human evaluation results. They are consistent with the automatic evaluation results when it comes to the preference of humans towards the deliberation-based setups, but show a more positive outlook regarding the addition of visual information (del+obj over del) for French.
[1, 1]
['Table 3 shows the human evaluation results.', 'They are consistent with the automatic evaluation results when it comes to the preference of humans towards the deliberation-based setups, but show a more positive outlook regarding the addition of visual information (del+obj over del) for French.']
[None, [' del+obj', ' del', 'FR']]
1
P19-1657table_1
Main results for findings generation on the CX-CHR (upper) and IU-Xray (lower) datasets. BLEU-n denotes the BLEU score that uses up to n-grams.
4
[['Dataset', 'CX-CHR', ' Methods', 'CNN-RNN (Vinyals et al.,2015)'], ['Dataset', 'CX-CHR', ' Methods', 'LRCN (Donahue et al., 2015)'], ['Dataset', 'CX-CHR', ' Methods', 'AdaAtt (Lu et al., 2017)'], ['Dataset', 'CX-CHR', ' Methods', 'Att2in (Rennie et al., 2017)'], ['Dataset', 'CX-CHR', ' Methods', 'CoAtt (Jing et al., ...
1
[['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['ROUGE'], ['CIDEr']]
[['0.59', '0.506', '0.45', '0.411', '0.577', '1.58'], ['0.593', '0.508', '0.452', '0.413', '0.577', '1.588'], ['0.588', '0.503', '0.446', '0.409', '0.575', '1.568'], ['0.587', '0.503', '0.446', '0.408', '0.576', '1.566'], ['0.651', '0.568', '0.521', '0.469', '0.602', '2.532'], ['0.673', '0.587', '0.53', '0.486', '0.612...
column
['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'ROUGE', 'CIDEr']
['CMASW', 'CMASNWAW', 'CMAS-IL', 'CMAS-RL', 'CMASNW AW']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>ROUGE</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Dataset || CX-CHR || Methods || CNN-RNN (Vinyals et al...
Table 1
table_1
P19-1657
7
acl2019
Ablation Study. CMASW has only one writer, which is trained on both normal and abnormal findings. Table 1 shows that CMASW can achieve competitive performances to the state-of-the-art methods. CMASNW, AW is a simple concatenation of two single agent models CMASNW and CMASAW, where CMASNW is trained only on normal findi...
[2, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1]
['Ablation Study.', 'CMASW has only one writer, which is trained on both normal and abnormal findings.', 'Table 1 shows that CMASW can achieve competitive performances to the state-of-the-art methods.', 'CMASNW, AW is a simple concatenation of two single agent models CMASNW and CMASAW, where CMASNW is trained only on n...
[None, ['CMASW'], ['CMASW'], ['CMASNWAW'], ['CMASNWAW'], ['CMASNWAW', 'CMASW', 'CX-CHR'], None, None, None, ['CMAS-IL', 'CMASNWAW'], ['CMAS-RL', 'CMAS-IL']]
1
P19-1658table_2
Human evaluation results. Five human judges on MTurk rate each story on the following six aspects, using a 5-point Likert scale (from Strongly Disagree to Strongly Agree): Focus, Structure and Coherence, Willing-to-Share (“I Would Share”), Written-by-a-Human (“This story sounds like it was written by a human.”), Visuall...
2
[['Edited', 'N/A'], ['Edited', 'TF (T)'], ['Edited', 'TF (T+I)'], ['Edited', 'LSTM (T)'], ['Edited', 'LSTM (T+I)'], ['Edited', 'Human']]
2
[['AREL', 'Focus'], ['AREL', 'Coherence'], ['AREL', 'Share'], ['AREL', 'Human'], ['AREL', 'Grounded'], ['AREL', 'Detailed'], [' GLAC', 'Focus'], [' GLAC', 'Coherence'], [' GLAC', 'Share'], [' GLAC', 'Human'], [' GLAC', 'Grounded'], [' GLAC', 'Detailed']]
[['3.487', '3.751', '3.763', '3.746', '3.602', '3.761', '3.878', '3.908', '3.93', '3.817', '3.864', '3.938'], ['3.433', '3.705', '3.641', '3.656', '3.619', '3.631', '3.717', '3.773', '3.863', '3.672', '3.765', '3.795'], ['3.542', '3.693', '3.676', '3.643', '3.548', '3.672', '3.734', '3.759', '3.786', '3.622', '3.758', ...
column
['Focus', 'Coherence', 'Share', 'Human', 'Grounded', 'Detailed', 'Focus', 'Coherence', 'Share', 'Human', 'Grounded', 'Detailed']
['LSTM (T)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AREL || Focus</th> <th>AREL || Coherence</th> <th>AREL || Share</th> <th>AREL || Human</th> <th>AREL || Grounded</th> <th>AREL || Detailed</th> <th>GLAC || Focus</th> <th>GLAC || C...
Table 2
table_2
P19-1658
4
acl2019
Human Evaluation . Following the evaluation procedure of the first VIST Challenge (Mitchell et al., 2018), for each visual story, we recruit five human judges on MTurk to rate it on six aspects (at $0.1/HIT.). We take the average of the five judgments as the final scores for the story. Table 2 shows the results. The LS...
[2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]
['Human Evaluation .', 'Following the evaluation procedure of the first VIST Challenge (Mitchell et al., 2018), for each visual story, we recruit five human judges on MTurk to rate it on six aspects (at $0.1/HIT.).', 'We take the average of the five judgments as the final scores for the story.', 'Table 2 shows the resu...
[None, None, None, None, ['LSTM (T)'], ['AREL', ' GLAC', 'LSTM (T)'], None, None, ['Human', 'AREL', ' GLAC'], ['TF (T+I)'], None]
1