Dataset schema (field: dtype, string-length or value range):
table_id_paper: string, lengths 15–15
caption: string, lengths 14–1.88k
row_header_level: int32, 1–9
row_headers: large_string, lengths 15–1.75k
column_header_level: int32, 1–6
column_headers: large_string, lengths 7–1.01k
contents: large_string, lengths 18–2.36k
metrics_loc: string, 2 classes
metrics_type: large_string, lengths 5–532
target_entity: large_string, lengths 2–330
table_html_clean: large_string, lengths 274–7.88k
table_name: string, 9 classes
table_id: string, 9 classes
paper_id: string, lengths 8–8
page_no: int32, 1–13
dir: string, 8 classes
description: large_string, lengths 103–3.8k
class_sentence: string, lengths 3–120
sentences: large_string, lengths 110–3.92k
header_mention: string, lengths 12–1.8k
valid: int32, 0–1
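Each record below flattens a paper table into parallel header and content lists. As a minimal sketch (not part of the dataset's own tooling; the helper name `rebuild_table` is hypothetical), the values taken from record D18-1283table_4 can be rebuilt into a cell lookup, joining multi-level headers with ' || ' in the same way `table_html_clean` does:

```python
# Values copied from record D18-1283table_4 below.
row_headers = [['Easy'], ['Hard']]
column_headers = [['Attenton'], ['CVAE'], ['CVAE+SV'], ['Gold']]
contents = [['0.665', '0.776', '0.818', '0.888'],
            ['0.576', '0.718', '0.788', '0.841']]

def rebuild_table(row_headers, column_headers, contents):
    # Join multi-level headers with ' || ', mirroring table_html_clean,
    # then map each row header to a dict of column header -> cell value.
    cols = [' || '.join(h) for h in column_headers]
    rows = [' || '.join(h) for h in row_headers]
    return {row: dict(zip(cols, vals)) for row, vals in zip(rows, contents)}

table = rebuild_table(row_headers, column_headers, contents)
print(table['Easy']['Gold'])  # '0.888'
```

For records with multi-level headers (e.g. D18-1280table_6, with `column_header_level` 3), the same joining rule yields keys such as 'IEMOCAP || Emotions || acc.'.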
table_id_paper: D18-1280table_6
caption: Comparison of the performance of ICON on both IEMOCAP and SEMAINE considering different modality combinations. Note: T=Text, A=Audio, V=Video
row_header_level: 2
row_headers: [['Modality', 'T'], ['Modality', 'A'], ['Modality', 'V'], ['Modality', 'A+V'], ['Modality', 'T+A'], ['Modality', 'T+V'], ['Modality', 'T+A+V']]
column_header_level: 3
column_headers: [['IEMOCAP', 'Emotions', 'acc.'], ['IEMOCAP', 'Emotions', 'F1'], ['SEMAINE', 'DV', 'r'], ['SEMAINE', 'DA', 'r'], ['SEMAINE', 'DP', 'r'], ['SEMAINE', 'DE', 'r']]
contents: [['58.3', '57.9', '.237', '.297', '.260', '.225'], ['50.7', '50.9', '.021', '.082', '.250', '.035'], ['41.2', '39.8', '.001', '.068', '.251', '.001'], ['52.0', '51.2', '.031', '.122', '.283', '.050'], ['63.8', '63.2', '.237', '.310', '.272', '.242'], ['61.4', '61.2', '.238', '.293', '.268', '.239'], ['64.0', '63.5', '....
metrics_loc: column
metrics_type: ['acc.', 'F1', 'r', 'r', 'r', 'r']
target_entity: ['T', 'A', 'V', 'A+V', 'T+A', 'T+V', 'T+A+V']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IEMOCAP || Emotions || acc.</th> <th>IEMOCAP || Emotions || F1</th> <th>SEMAINE || DV || r</th> <th>SEMAINE || DA || r</th> <th>SEMAINE || DP || r</th> <th>SEMAINE || DE || r</th> </tr> ...
table_name: Table 6
table_id: table_6
paper_id: D18-1280
page_no: 7
dir: emnlp2018
description: Multimodality:. We investigate the importance of multimodal features for our task. Table 6 presents the results for different combinations of modes used by ICON on IEMOCAP. As seen, the trimodal network provides the best performance which is preceded by the bimodal variants. Among unimodals, language modality performs ...
class_sentence: [2, 2, 1, 1, 1, 1]
sentences: ['Multimodality:.', 'We investigate the importance of multimodal features for our task.', 'Table 6 presents the results for different combinations of modes used by ICON on IEMOCAP.', 'As seen, the trimodal network provides the best performance which is preceded by the bimodal variants.', 'Among unimodals, language moda...
header_mention: [None, None, ['Modality', 'T', 'A', 'V', 'A+V', 'T+A', 'T+V', 'T+A+V'], ['Modality', 'T+A+V', 'A+V', 'T+A', 'T+V'], ['Modality', 'T'], ['Modality', 'A', 'V', 'T+A', 'T+V']]
valid: 1
table_id_paper: D18-1282table_5
caption: Sense prediction accuracy for motion (left) and non-motion verbs (right) using different image representations. + marks results taken from Gella et al. (2018). MFS is the most frequent sense heuristic.
row_header_level: 2
row_headers: [['Features', 'Random'], ['Features', 'MFS'], ['Features', 'CNN'], ['Features', 'Gella–CNN+O'], ['Features', 'Gella–CNN+C'], ['Features', 'CNN (reproduced)'], ['Features', 'ImgObjLoc']]
column_header_level: 1
column_headers: [['Motion'], ['Non-motion']]
contents: [['76.7 ± 0.86', '78.5 ± 0.39'], ['76.1', '80.0'], ['82.3', '80.0'], ['83.0', '80.0'], ['82.3', '80.3'], ['83.1', '79.8 ± 0.53'], ['84.8 ± 0.69', '80.4 ± 0.57']]
metrics_loc: column
metrics_type: ['accuracy', 'accuracy']
target_entity: ['ImgObjLoc']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Motion</th> <th>Non-motion</th> </tr> </thead> <tbody> <tr> <td>Features || Random</td> <td>76.7 ± 0.86</td> <td>78.5 ± 0.39</td> </tr> <tr> <td>Features || MFS</td> <td>7...
table_name: Table 5
table_id: table_5
paper_id: D18-1282
page_no: 9
dir: emnlp2018
description: Table 5 gives the mean accuracy obtained on the test data (of 100 runs). Our ImgObjLoc vectors outperform all comparison models on motion verbs, including CNN-based image features and the best-performing models of (Gella et al., 2018), namely Gella–CNN+O and Gella–CNN+C (CNN features concatenated with predicted object ...
class_sentence: [1, 1, 1, 2, 2]
sentences: ['Table 5 gives the mean accuracy obtained on the test data (of 100 runs).', 'Our ImgObjLoc vectors outperform all comparison models on motion verbs, including CNN-based image features and the best-performing models of (Gella et al., 2018), namely Gella–CNN+O and Gella–CNN+C (CNN features concatenated with predicted ob...
header_mention: [None, ['ImgObjLoc', 'Random', 'MFS', 'CNN', 'Gella–CNN+O', 'Gella–CNN+C', 'CNN (reproduced)', 'Motion'], ['Non-motion', 'MFS', 'CNN', 'Gella–CNN+O', 'Gella–CNN+C', 'CNN (reproduced)', 'ImgObjLoc'], ['ImgObjLoc'], None]
valid: 1
table_id_paper: D18-1283table_4
caption: Results from the human subject study on common ground.
row_header_level: 1
row_headers: [['Easy'], ['Hard']]
column_header_level: 1
column_headers: [['Attenton'], ['CVAE'], ['CVAE+SV'], ['Gold']]
contents: [['0.665', '0.776', '0.818', '0.888'], ['0.576', '0.718', '0.788', '0.841']]
metrics_loc: column
metrics_type: ['accuracy', 'accuracy', 'accuracy', 'accuracy']
target_entity: ['CVAE', 'CVAE+SV']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Attenton</th> <th>CVAE</th> <th>CVAE+SV</th> <th>Gold</th> </tr> </thead> <tbody> <tr> <td>Easy</td> <td>0.665</td> <td>0.776</td> <td>0.818</td> <td>0.888</td> </t...
table_name: Table 4
table_id: table_4
paper_id: D18-1283
page_no: 9
dir: emnlp2018
description: 7.2 Experimental Results. Table 4 shows the comparison results among various models and the upper bound where the gold commonsense evidence provided to the human. It’s not surprising that performance on common ground is worse in the hard configuration as the distracting verbs are more similar to the target action. Th...
class_sentence: [2, 1, 1, 1]
sentences: ['7.2 Experimental Results.', 'Table 4 shows the comparison results among various models and the upper bound where the gold commonsense evidence provided to the human.', 'It’s not surprising that performance on common ground is worse in the hard configuration as the distracting verbs are more similar to the target ac...
header_mention: [None, ['Attenton', 'CVAE', 'CVAE+SV', 'Gold'], ['Hard', 'Attenton', 'CVAE', 'CVAE+SV', 'Gold'], ['CVAE', 'Attenton']]
valid: 1
table_id_paper: D18-1289table_3
caption: Performance of the linear SVM regression model and the avg score at different agreements.
row_header_level: 1
row_headers: [['mean abs err 1'], ['Spearman 1'], ['mean abs err 2'], ['Spearman 2'], ['mean abs err 3'], ['Spearman 3'], ['avg mean abs err'], ['avg Spearman']]
column_header_level: 1
column_headers: [['IT-10'], ['IT-14'], ['EN-10'], ['EN-14']]
contents: [['0.77', '0.78', '0.71', '0.68'], ['0.57', '0.64', '0.68', '0.64'], ['0.79', '0.80', '0.70', '0.70'], ['0.55', '0.63', '0.67', '0.73'], ['0.85', '0.75', '0.77', '0.60'], ['0.55', '0.64', '0.61', '0.71'], ['0.80', '0.78', '0.72', '0.66'], ['0.56', '0.63', '0.65', '0.69']]
metrics_loc: row
metrics_type: ['mean abs err 1', 'Spearman 1', 'mean abs err 2', 'Spearman 2', 'mean abs err 3', 'Spearman 3', 'avg mean abs err', 'avg Spearman']
target_entity: ['IT-10', 'IT-14', 'EN-10', 'EN-14']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IT-10</th> <th>IT-14</th> <th>EN-10</th> <th>EN-14</th> </tr> </thead> <tbody> <tr> <td>mean abs err 1</td> <td>0.77</td> <td>0.78</td> <td>0.71</td> <td>0.68</td> ...
table_name: Table 3
table_id: table_3
paper_id: D18-1289
page_no: 7
dir: emnlp2018
description: 5.1 Predicting Human Complexity Judgmen. To asses the contribution of the linguistic features to predict the judgment of sentence complexity we trained a linear SVM regression model with default parameters. We performed a 3-fold cross validation over each subset of agreed sentences at agreement 10 and 14. We measured t...
class_sentence: [2, 2, 2, 2, 1, 1, 1, 2, 2, 2]
sentences: ['5.1 Predicting Human Complexity Judgmen.', 'To asses the contribution of the linguistic features to predict the judgment of sentence complexity we trained a linear SVM regression model with default parameters.', 'We performed a 3-fold cross validation over each subset of agreed sentences at agreement 10 and 14.', 'We...
header_mention: [None, None, ['IT-10', 'IT-14', 'EN-10', 'EN-14'], ['mean abs err 1', 'Spearman 1', 'mean abs err 2', 'Spearman 2', 'mean abs err 3', 'Spearman 3'], ['mean abs err 1', 'Spearman 1', 'mean abs err 2', 'Spearman 2', 'mean abs err 3', 'Spearman 3', 'avg mean abs err', 'avg Spearman'], ['Spearman 1', 'Spearman 2', 'Spearma...
valid: 1
table_id_paper: D18-1292table_1
caption: Development results for different systems using posterior inference on constituents (PIoC).
row_header_level: 2
row_headers: [['System', 'Best'], ['System', 'Best w/ PIoC'], ['System', 'All w/ PIoC'], ['System', 'All w/ PIoC w/o best']]
column_header_level: 1
column_headers: [['Rec'], ['Prec'], ['F1']]
contents: [['73.65', '55.66', '63.40'], ['73.59', '56.41', '63.87'], ['72.99', '59.21', '65.38'], ['73.00', '59.06', '65.29']]
metrics_loc: column
metrics_type: ['Rec', 'Prec', 'F1']
target_entity: ['Best', 'Best w/ PIoC', 'All w/ PIoC', 'All w/ PIoC w/o best']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rec</th> <th>Prec</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>System || Best</td> <td>73.65</td> <td>55.66</td> <td>63.40</td> </tr> <tr> <td>System || Best w/ P...
table_name: Table 1
table_id: table_1
paper_id: D18-1292
page_no: 8
dir: emnlp2018
description: Table 1 shows parsing results on the WSJ20dev dataset. The Best result is from an arbitrary sample at convergence of the oracle best run. The Best with PIoC is the same run, but with PIoC to aggregate 100 posterior samples at convergence. All with PIoC uses 100 posterior samples from all of the 10 chosen runs, and fina...
class_sentence: [1, 2, 2, 2, 1, 1, 2, 2]
sentences: ['Table 1 shows parsing results on the WSJ20dev dataset.', 'The Best result is from an arbitrary sample at convergence of the oracle best run.', 'The Best with PIoC is the same run, but with PIoC to aggregate 100 posterior samples at convergence.', 'All with PIoC uses 100 posterior samples from all of the 10 chosen run...
header_mention: [None, None, None, None, ['Best', 'Best w/ PIoC', 'Rec', 'Prec'], ['All w/ PIoC w/o best', 'Prec', 'Best w/ PIoC'], ['All w/ PIoC w/o best'], None]
valid: 1
table_id_paper: D18-1296table_2
caption: Recognition results with standard and session-based LSTM-LMs, measured by word error rates (WER).
row_header_level: 4
row_headers: [['Word encoding', 'Letter 3gram', 'Model', 'LSTM-LM'], ['Word encoding', 'Letter 3gram', 'Model', 'Session LSTM-LM'], ['Word encoding', 'Letter 3gram', 'Model', 'Session LSTM-LM 2nd iteration'], ['Word encoding', 'One-hot', 'Model', 'LSTM-LM'], ['Word encoding', 'One-hot', 'Model', 'Session LSTM-LM'], ['Word encoding'...
column_header_level: 3
column_headers: [['WER', 'dev set', '-'], ['WER', 'test', 'SWB'], ['WER', 'test', 'CH']]
contents: [['10.01', '6.88', '12.79'], ['9.67', '6.81', '12.54'], ['9.66', '6.77', '12.56'], ['9.81', '6.89', '13.02'], ['9.47', '6.81', '12.60'], ['9.50', '6.83', '12.73'], ['9.66', '6.63', '12.77'], ['9.28', '6.52', '12.34'], ['9.22', '6.45', '12.11']]
metrics_loc: column
metrics_type: ['WER', 'WER', 'WER']
target_entity: ['Session LSTM-LM', 'Session LSTM-LM 2nd iteration', 'LSTM-LM + Session LSTM-LM']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WER || devset</th> <th>WER test || SWB</th> <th>WER test || CH</th> </tr> </thead> <tbody> <tr> <td>Word encoding || Letter 3gram || Model || LSTM-LM</td> <td>10.01</td> <td>6.88</td...
table_name: Table 2
table_id: table_2
paper_id: D18-1296
page_no: 4
dir: emnlp2018
description: Table 2 presents recognition results, comparing baseline LSTM-LMs to the full session-based LSTM-LMs. Both the letter-trigram and one-word word encoding versions are reported. The different models may also be used jointly, using log-linear score combination in rescoring, shown in the third section of the table. We also...
class_sentence: [1, 1, 1, 1, 1, 2, 1, 1, 1]
sentences: ['Table 2 presents recognition results, comparing baseline LSTM-LMs to the full session-based LSTM-LMs.', 'Both the letter-trigram and one-word word encoding versions are reported.', 'The different models may also be used jointly, using log-linear score combination in rescoring, shown in the third section of the table....
header_mention: [['LSTM-LM', 'Session LSTM-LM'], ['Letter 3gram', 'One-hot'], ['Letter 3gram + One-hot'], ['Session LSTM-LM 2nd iteration'], ['Session LSTM-LM', 'Letter 3gram', 'One-hot', 'Letter 3gram + One-hot', 'WER', 'test', 'SWB', 'CH'], ['Letter 3gram + One-hot', 'Session LSTM-LM'], ['Session LSTM-LM', 'Session LSTM-LM 2nd itera...
valid: 1
table_id_paper: D18-1303table_2
caption: Multi-label classification results.
row_header_level: 2
row_headers: [['Model', 'Random Forest'], ['Model', 'CNN'], ['Model', 'RNN'], ['Model', 'CNN-RNN'], ['Model', 'CNN-RNN (bidirec + char)']]
column_header_level: 1
column_headers: [['Exact Match'], ['Hamming']]
contents: [['35.0', '70.2'], ['53.7', '80.2'], ['57.1', '81.5'], ['59.2', '82.3'], ['62.0', '82.5']]
metrics_loc: column
metrics_type: ['accuracy', 'accuracy']
target_entity: ['CNN-RNN', 'CNN-RNN (bidirec + char)']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Exact Match</th> <th>Hamming</th> </tr> </thead> <tbody> <tr> <td>Model || Random Forest</td> <td>35.0</td> <td>70.2</td> </tr> <tr> <td>Model || CNN</td> <td>53.7</td> ...
table_name: Table 2
table_id: table_2
paper_id: D18-1303
page_no: 3
dir: emnlp2018
description: See Table 2 for multi-label classification results, where the Hamming score for the multi-label CNN-RNN model is 82.5%, showing potential for real-world use as well as substantial future research scope.
class_sentence: [1]
sentences: ['See Table 2 for multi-label classification results, where the Hamming score for the multi-label CNN-RNN model is 82.5%, showing potential for real-world use as well as substantial future research scope.']
header_mention: [['CNN-RNN (bidirec + char)', 'Hamming']]
valid: 1
table_id_paper: D18-1309table_2
caption: Performance comparison of the state-ofthe-art nested NER models on the test dataset.
row_header_level: 2
row_headers: [['Model', 'Exhaustive Model'], ['Model', 'Ju et al. (2018)'], ['Model', 'Katiyar and Cardie'], ['Model', 'Muis and Lu (2017)'], ['Model', 'Lu and Roth (2015)'], ['Model', 'Finkel and Manning']]
column_header_level: 1
column_headers: [['P(%)'], ['R(%)'], ['F(%)']]
contents: [['93.2', '64.0', '77.1'], ['78.5', '71.3', '74.7'], ['76.7', '71.1', '73.8'], ['75.4', '66.8', '70.8'], ['72.5', '65.2', '68.7'], ['75.4', '65.9', '70.3']]
metrics_loc: column
metrics_type: ['P(%)', 'R(%)', 'F(%)']
target_entity: ['Exhaustive Model']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Model || Exhaustive Model</td> <td>93.2</td> <td>64.0</td> <td>77.1</td> </tr> <tr> <td>Model ||...
table_name: Table 2
table_id: table_2
paper_id: D18-1309
page_no: 4
dir: emnlp2018
description: 4.1 Nested NER. Table 2 shows the comparison of our model with several previous state-of-the nested NER models on the test dataset. Our model outperforms the state-of-the-art models in terms of F-score. Our results on Table 2 is based on bidirectional LSTM with character embeddings and the maximum region size is 10.
class_sentence: [2, 1, 1, 2]
sentences: ['4.1 Nested NER.', 'Table 2 shows the comparison of our model with several previous state-of-the nested NER models on the test dataset.', 'Our model outperforms the state-of-the-art models in terms of F-score.', 'Our results on Table 2 is based on bidirectional LSTM with character embeddings and the maximum region siz...
header_mention: [None, ['Exhaustive Model', 'Ju et al. (2018)', 'Katiyar and Cardie', 'Muis and Lu (2017)', 'Lu and Roth (2015)', 'Finkel and Manning'], ['Exhaustive Model', 'F(%)'], ['Exhaustive Model']]
valid: 1
table_id_paper: D18-1309table_3
caption: Performances of our model on different entity level on the test dataset.
row_header_level: 2
row_headers: [['Entity Level', 'Single-token'], ['Entity Level', 'Multi-token'], ['Entity Level', 'Top Level'], ['Entity Level', 'Nested'], ['Entity Level', 'All entities']]
column_header_level: 1
column_headers: [['P(%)'], ['R(%)'], ['F(%)']]
contents: [['91.6', '58.4', '69.9'], ['95.9', '65.8', '77.9'], ['92.7', '69.8', '79.3'], ['94.3', '59.3', '72.7'], ['93.2', '64.0', '77.1']]
metrics_loc: column
metrics_type: ['P(%)', 'R(%)', 'F(%)']
target_entity: ['Entity Level']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Entity Level || Single-token</td> <td>91.6</td> <td>58.4</td> <td>69.9</td> </tr> <tr> <td>Entit...
table_name: Table 3
table_id: table_3
paper_id: D18-1309
page_no: 4
dir: emnlp2018
description: Table 3 describes the performances of our model on different entity levels on the test dataset. The model performs well on multi-token and top-level entities. This is interesting because they are often considered difficult for sequential labeling models.
class_sentence: [1, 1, 2]
sentences: ['Table 3 describes the performances of our model on different entity levels on the test dataset.', 'The model performs well on multi-token and top-level entities.', 'This is interesting because they are often considered difficult for sequential labeling models.']
header_mention: [['Single-token', 'Multi-token', 'Top Level', 'Nested', 'All entities'], ['Multi-token', 'Top Level'], None]
valid: 1
table_id_paper: D18-1309table_4
caption: Categorical performances on the GENIA test dataset.
row_header_level: 2
row_headers: [['Label', 'DNA'], ['Label', 'RNA'], ['Label', 'cell line'], ['Label', 'cell type'], ['Label', 'protein']]
column_header_level: 1
column_headers: [['P(%)'], ['R(%)'], ['F(%)'], ['F&M F(%)']]
contents: [['92.6', '58.7', '71.8', '65.2'], ['98.8', '57.1', '72.4', '74.7'], ['94.6', '53.1', '67.9', '64.0'], ['88.4', '70.0', '78.1', '67.1'], ['94.1', '70.8', '80.8', '73.8']]
metrics_loc: column
metrics_type: ['P(%)', 'R(%)', 'F(%)', 'F&M F(%)']
target_entity: ['DNA', 'RNA', 'cell line', 'cell type', 'protein']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> <th>F&amp;M F(%)</th> </tr> </thead> <tbody> <tr> <td>Label || DNA</td> <td>92.6</td> <td>58.7</td> <td>71.8</td> <td>65.2</td> ...
table_name: Table 4
table_id: table_4
paper_id: D18-1309
page_no: 4
dir: emnlp2018
description: Table 4 shows the performances on the five entity types on the test dataset. We here show the performance by Finkel and Manning (2009) (F&M) for the reference. Our system performs better than their model except for the RNA type.
class_sentence: [1, 1, 1]
sentences: ['Table 4 shows the performances on the five entity types on the test dataset.', 'We here show the performance by Finkel and Manning (2009) (F&M) for the reference.', 'Our system performs better than their model except for the RNA type.']
header_mention: [['DNA', 'RNA', 'cell line', 'cell type', 'protein'], ['F&M F(%)'], ['F(%)', 'F&M F(%)', 'DNA', 'cell line', 'cell type', 'protein']]
valid: 1
table_id_paper: D18-1309table_5
caption: Performance of our model with different maximum region sizes on the development dataset. Ratio refers to the coverage ratio of entity mentions.
row_header_level: 2
row_headers: [['Region', 'size = 3'], ['Region', 'size = 6'], ['Region', 'size = 8'], ['Region', 'size = 10']]
column_header_level: 1
column_headers: [['Ratio(%)'], ['P(%)'], ['R(%)'], ['F(%)']]
contents: [['89.6', '92.9', '69.8', '79.5'], ['98.9', '93.6', '66.7', '77.5'], ['99.4', '93.7', '66.5', '77.6'], ['100', '93.5', '67.6', '78.2']]
metrics_loc: column
metrics_type: ['Ratio(%)', 'P(%)', 'R(%)', 'F(%)']
target_entity: ['Region']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Ratio(%)</th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Region || size = 3</td> <td>89.6</td> <td>92.9</td> <td>69.8</td> <td>79.5</td>...
table_name: Table 5
table_id: table_5
paper_id: D18-1309
page_no: 4
dir: emnlp2018
description: Table 5 shows the coverage ratio and the performance with different maximum region sizes. Since the average entity mention length of GENIA dataset is less than 4, the system can cover almost all the entities for the maximum sizes of 6 or more. The longer maximum region size is desirable to cover all the mentions, but i...
class_sentence: [1, 2, 2, 1]
sentences: ['Table 5 shows the coverage ratio and the performance with different maximum region sizes.', 'Since the average entity mention length of GENIA dataset is less than 4, the system can cover almost all the entities for the maximum sizes of 6 or more.', 'The longer maximum region size is desirable to cover all the mention...
header_mention: [['Ratio(%)', 'Region', 'size = 3', 'size = 6', 'size = 8', 'size = 10'], None, ['Region'], ['Region', 'size = 6', 'size = 8', 'size = 10']]
valid: 1
table_id_paper: D18-1309table_6
caption: Performance of our model with different model architectures on the development dataset. (cid:63) indicates results using character embeddings.
row_header_level: 2
row_headers: [['Setting', 'Bi-LSTM'], ['Setting', 'Bi-LSTM + Character*'], ['Setting', 'Boundary*'], ['Setting', 'Inside*'], ['Setting', 'Boundary+Inside*']]
column_header_level: 1
column_headers: [['P(%)'], ['R(%)'], ['F(%)']]
contents: [['94.1', '65.7', '77.1'], ['93.5', '67.6', '78.2'], ['94.1', '54.3', '68.5'], ['93.2', '46.4', '61.2'], ['93.5', '67.6', '78.2']]
metrics_loc: column
metrics_type: ['P(%)', 'R(%)', 'F(%)']
target_entity: ['Bi-LSTM', 'Bi-LSTM + Character*']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Setting || Bi-LSTM</td> <td>94.1</td> <td>65.7</td> <td>77.1</td> </tr> <tr> <td>Setting || Bi-L...
table_name: Table 6
table_id: table_6
paper_id: D18-1309
page_no: 4
dir: emnlp2018
description: Ablations on character embeddings in Table 6 also show the importance of character embeddings. It also shows that both the boundary information and the inside information, i.e., average of the embeddings in a region, are necessary to improve the performance.
class_sentence: [1, 1]
sentences: ['Ablations on character embeddings in Table 6 also show the importance of character embeddings.', 'It also shows that both the boundary information and the inside information, i.e., average of the embeddings in a region, are necessary to improve the performance.']
header_mention: [['Bi-LSTM', 'Bi-LSTM + Character*', 'F(%)'], ['Boundary*', 'Inside*', 'Boundary+Inside*', 'F(%)']]
valid: 1
table_id_paper: D18-1309table_7
caption: Categorical and overall performances of the JNLPBA test dataset.
row_header_level: 2
row_headers: [['Label', 'DNA'], ['Label', 'RNA'], ['Label', 'cell line'], ['Label', 'cell type'], ['Label', 'protein'], ['Label', 'overall']]
column_header_level: 1
column_headers: [['P(%)'], ['R(%)'], ['F(%)']]
contents: [['95.2', '56.8', '71.4'], ['96.1', '61.4', '75.2'], ['86.2', '44.1', '58.8'], ['96.7', '61.5', '75.3'], ['97.1', '72.2', '82.6'], ['96.4', '66.8', '78.4']]
metrics_loc: column
metrics_type: ['P(%)', 'R(%)', 'F(%)']
target_entity: ['DNA', 'RNA', 'cell line', 'cell type', 'protein']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P(%)</th> <th>R(%)</th> <th>F(%)</th> </tr> </thead> <tbody> <tr> <td>Label || DNA</td> <td>95.2</td> <td>56.8</td> <td>71.4</td> </tr> <tr> <td>Label || RNA</td> ...
table_name: Table 7
table_id: table_7
paper_id: D18-1309
page_no: 4
dir: emnlp2018
description: 4.3 Flat NER. We evaluated our model on JNLPBA as a flat dataset, where nested and discontinuous entities are removed. Table 7 shows the performances of our model on JNLPBA dataset. We compared our result with the state-of-the-art result of Gridach (2017) which achieved 75.8% in F-score, where our model obtained 78.4% ...
class_sentence: [2, 2, 1, 1]
sentences: ['4.3 Flat NER.', 'We evaluated our model on JNLPBA as a flat dataset, where nested and discontinuous entities are removed.', 'Table 7 shows the performances of our model on JNLPBA dataset.', 'We compared our result with the state-of-the-art result of Gridach (2017) which achieved 75.8% in F-score, where our model obta...
header_mention: [None, None, None, ['overall', 'F(%)']]
valid: 1
table_id_paper: D18-1315table_1
caption: We reproduce experiments in Malouf (2017) using our own implementation of the model. In contrast to Malouf (2017), who used cross-validation, we train one system for each language. Therefore, we only report standard deviation for the results in Column 2.
row_header_level: 1
row_headers: [['FINNISH NOUNS'], ['FRENCH VERBS'], ['IRISH NOUNS'], ['KHALING VERBS'], ['MALTESE VERBS'], ['P. CHINANTEC VERBS'], ['RUSSIAN NOUNS']]
column_header_level: 1
column_headers: [['Our baseline'], ['Malouf (2017)']]
contents: [['99.50', '99.27 ±0.09'], ['99.88', '99.92 ±0.02'], ['85.11', '85.69 ±1.71'], ['99.66', '99.29 ±0.08'], ['98.65', '98.93 ±0.32'], ['91.16', '91.20 ±0.97'], ['95.90', '96.34 ±0.96']]
metrics_loc: column
metrics_type: ['accuracy', 'accuracy']
target_entity: ['Our baseline']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Our baseline</th> <th>Malouf (2017)</th> </tr> </thead> <tbody> <tr> <td>FINNISH NOUNS</td> <td>99.50</td> <td>99.27 ±0.09</td> </tr> <tr> <td>FRENCH VERBS</td> <td>99.88<...
table_name: Table 1
table_id: table_1
paper_id: D18-1315
page_no: 3
dir: emnlp2018
description: In order to assure fair comparison, we perform the paradigm completion experiment described in Malouf (2017), where 90% of the word forms in the data set is used for training and the remaining 10% for testing. As the results in Table 1 show, our results very closely replicate those reported by Malouf (2017).
class_sentence: [2, 1]
sentences: ['In order to assure fair comparison, we perform the paradigm completion experiment described in Malouf (2017), where 90% of the word forms in the data set is used for training and the remaining 10% for testing.', 'As the results in Table 1 show, our results very closely replicate those reported by Malouf (2017).']
header_mention: [None, ['Our baseline', 'Malouf (2017)']]
valid: 1
table_id_paper: D18-1315table_4
caption: Overall results for filling in missing forms when the 10,000 most frequent forms are given in the inflection tables. We give the 0.99 confidence intervals as given by a one-sided t-test. Figures where one system significantly outperforms the other one are in boldface.
row_header_level: 1
row_headers: [['FINNISH NOUNS'], ['FINNISH VERBS'], ['FRENCH VERBS'], ['GERMAN NOUNS'], ['GERMAN VERBS'], ['LATVIAN NOUNS'], ['SPANISH VERBS'], ['TURKISH NOUNS']]
column_header_level: 1
column_headers: [['Our system'], ['Baseline']]
contents: [['63.64 ± 3.24', '25.63 ± 1.63'], ['24.82 ± 1.13', '16.14 ± 1.14'], ['31.34 ± 1.18', '14.34 ± 0.87'], ['18.73 ± 1.26', '67.16 ± 3.20'], ['61.21 ± 1.85', '50.18 ± 2.58'], ['76.90 ± 5.30', '57.28 ± 2.05'], ['27.27 ± 0.72', '16.61 ± 0.70'], ['33.87 ± 2.03', '25.00 ± 2.52']]
metrics_loc: column
metrics_type: ['accuracy', 'accuracy']
target_entity: ['Our system']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Our system</th> <th>Baseline</th> </tr> </thead> <tbody> <tr> <td>FINNISH NOUNS</td> <td>63.64 ± 3.24</td> <td>25.63 ± 1.63</td> </tr> <tr> <td>FINNISH VERBS</td> <td>24.8...
table_name: Table 4
table_id: table_4
paper_id: D18-1315
page_no: 4
dir: emnlp2018
description: Table 4 shows results for completing tables for common lexemes. Our system significantly outperforms the baseline on all other datasets apart from German nouns. We believe that the reason for the German outlier is the high degree of syncretism in German noun tables.
class_sentence: [1, 1, 2]
sentences: ['Table 4 shows results for completing tables for common lexemes.', 'Our system significantly outperforms the baseline on all other datasets apart from German nouns.', 'We believe that the reason for the German outlier is the high degree of syncretism in German noun tables.']
header_mention: [None, ['Our system', 'Baseline', 'FINNISH NOUNS', 'FINNISH VERBS', 'FRENCH VERBS', 'GERMAN VERBS', 'LATVIAN NOUNS', 'SPANISH VERBS', 'TURKISH NOUNS'], ['GERMAN NOUNS']]
valid: 1
table_id_paper: D18-1316table_3
caption: Comparison between the attack success rate and mean percentage of modifications required by the genetic attack and perturb baseline for the two tasks.
row_header_level: 1
row_headers: [['Perturb baseline'], ['Genetic attack']]
column_header_level: 2
column_headers: [['Sentiment Analysis', '% success'], ['Sentiment Analysis', '% modified'], ['Textual Entailment', '% success'], ['Textual Entailment', '% modified']]
contents: [['52%', '19%', '-', '-'], ['97%', '14.7%', '70%', '23%']]
metrics_loc: column
metrics_type: ['% success', '% modified', '% success', '% modified']
target_entity: ['Genetic attack']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentiment Analysis || % success</th> <th>Sentiment Analysis || % modified</th> <th>Textual Entailment || % success</th> <th>Textual Entailment || % modified</th> </tr> </thead> <tbody> <tr> ...
table_name: Table 3
table_id: table_3
paper_id: D18-1316
page_no: 4
dir: emnlp2018
description: Sample outputs produced by our attack are shown in Tables 1 and 2. Additional outputs can be found in the supplementary material. Table 3 shows the attack success rate and mean percentage of modified words on each task. We compare to the Perturb baseline, which greedily applies the Perturb subroutine, to validate the u...
class_sentence: [2, 2, 1, 1, 1, 1, 2, 2, 1, 2, 1, 1]
sentences: ['Sample outputs produced by our attack are shown in Tables 1 and 2.', 'Additional outputs can be found in the supplementary material.', 'Table 3 shows the attack success rate and mean percentage of modified words on each task.', 'We compare to the Perturb baseline, which greedily applies the Perturb subroutine, to val...
header_mention: [['Genetic attack'], None, ['Sentiment Analysis', 'Textual Entailment', '% success', '% modified'], ['Perturb baseline'], ['Genetic attack', 'Sentiment Analysis', 'Textual Entailment', '% success'], ['Genetic attack', 'Perturb baseline', '% success', '% modified'], ['Sentiment Analysis', 'Textual Entailment'], ['Geneti...
valid: 1
table_id_paper: D18-1321table_3
caption: Comparison with previous state-of-the-art models.
row_header_level: 1
row_headers: [['Google IME'], ['CoCat'], ['OMWA'], ['basic P2C'], ['Simple C+ P2C'], ['Gated C+ P2C']]
column_header_level: 2
column_headers: [['DC', 'Top-1'], ['DC', 'Top-5'], ['DC', 'Top-10'], ['DC', 'KySS'], ['PD', 'Top-1'], ['PD', 'Top-5'], ['PD', 'Top-10'], ['PD', 'KySS']]
contents: [['62.13', '72.17', '74.72', '0.6731', '70.93', '80.32', '82.23', '0.7535'], ['59.15', '71.85', '76.78', '0.7651', '61.42', '73.08', '78.33', '0.7933'], ['57.14', '72.32', '80.21', '0.7389', '64.42', '72.91', '77.93', '0.7115'], ['71.31', '89.12', '90.17', '0.8845', '70.5', '79.8', '80.1', '0.8301'], ['61.28', '71.88',...
metrics_loc: column
metrics_type: ['Top-1', 'Top-5', 'Top-10', 'KySS', 'Top-1', 'Top-5', 'Top-10', 'KySS']
target_entity: ['basic P2C', 'Simple C+ P2C', 'Gated C+ P2C']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DC || Top-1</th> <th>DC || Top-5</th> <th>DC || Top-10</th> <th>DC || KySS</th> <th>PD || Top-1</th> <th>PD || Top-5</th> <th>PD || Top-10</th> <th>PD || KySS</th> </tr> </thea...
table_name: Table 3
table_id: table_3
paper_id: D18-1321
page_no: 5
dir: emnlp2018
description: Effect of Gated Attention Mechanism. Table 3 shows the Effect of gated attention mechanism. We compared models with Gated C+ P2C and Simple C+ P2C. The MIU accuracy of the P2C model has over 10% improvement when changing the operate pattern of the extra information proves the effect of GA mechanism. The Gated C+ P2C ac...
class_sentence: [2, 1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 1]
sentences: ['Effect of Gated Attention Mechanism.', 'Table 3 shows the Effect of gated attention mechanism.', 'We compared models with Gated C+ P2C and Simple C+ P2C.', 'The MIU accuracy of the P2C model has over 10% improvement when changing the operate pattern of the extra information proves the effect of GA mechanism.', 'The G...
header_mention: [None, None, ['basic P2C', 'Simple C+ P2C', 'Gated C+ P2C'], ['basic P2C'], ['Gated C+ P2C', 'DC'], None, ['basic P2C', 'Google IME', 'CoCat', 'OMWA'], ['CoCat', 'OMWA'], ['basic P2C', 'Google IME', 'CoCat', 'OMWA', 'Top-5', 'Top-10'], ['Gated C+ P2C', 'Google IME', 'Top-5', 'Top-10', 'PD'], ['Gated C+ P2C', 'DC', 'Top...
valid: 1
table_id_paper: D18-1323table_1
caption: LM perplexity results on PTB. ∆: difference in test perplexity of the tied models with respect to the non-tied model with the same number of hidden units.
row_header_level: 6
row_headers: [['Hid', '200', 'Emb', '200', 'Model', 'non-tied'], ['Hid', '200', 'Emb', '200', 'Model', 'tied'], ['Hid', '200', 'Emb', '200', 'Model', 'tied+L'], ['Hid', '400', 'Emb', '200', 'Model', 'non-tied'], ['Hid', '400', 'Emb', '200', 'Model', 'tied+L'], ['Hid', '400', 'Emb', '400', 'Model', 'non-tied'], ['Hid', '400', 'Emb',...
column_header_level: 1
column_headers: [['Valid'], ['Test'], ['Δ']]
contents: [['95.0', '91.1', ''], ['90.8', '86.6', '-4.5'], ['89.8', '85.8', '-5.3'], ['89.4', '85.3', ''], ['83.4', '80.3', '-5.0'], ['87.2', '83.5', ''], ['82.0', '78.2', '-5.3'], ['81.9', '78.0', '-5.5'], ['85.8', '82.4', ''], ['79.0', '76.0', '-6.4'], ['84.3', '81.3', ''], ['79.7', '76.1', '-5.2'], ['78.7', '75.5', '-5.8'], [...
metrics_loc: column
metrics_type: ['perplexity', 'perplexity', 'perplexity']
target_entity: ['tied+L']
table_html_clean: <table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Valid</th> <th>Test</th> <th>Δ</th> <th>Size</th> </tr> </thead> <tbody> <tr> <td>Hid || 200 || Emb || 200 || Model || non-tied</td> <td>95.0</td> <td>91.1</td> <td></td> ...
table_name: Table 1
table_id: table_1
paper_id: D18-1323
page_no: 4
dir: emnlp2018
description: 3.3 Language modelling results. We present the LM results for the standard nontied model, the tied model as in Inan et al. (2017) and Press and Wolf (2017), and our tied model with an additional linear transformation (tied+L) in Tables 1 (PTB) and 2 (Wiki). Table 1 confirms that tying generally brings gains with respec...
class_sentence: [2, 1, 1, 1, 1, 1, 1, 1, 1]
sentences: ['3.3 Language modelling results.', 'We present the LM results for the standard nontied model, the tied model as in Inan et al. (2017) and Press and Wolf (2017), and our tied model with an additional linear transformation (tied+L) in Tables 1 (PTB) and 2 (Wiki).', 'Table 1 confirms that tying generally brings gains wit...
header_mention: [None, ['tied+L', 'Inan2017 VD tied 650', 'P&W2016 tied 1500'], None, ['tied+L', 'non-tied', 'Hid', 'Emb'], ['tied+L', 'tied'], ['tied', 'tied+L', 'Hid', 'Emb'], ['tied', 'tied+L', 'Hid', '600', 'Emb', '400', 'Test'], ['Inan2017 VD tied 650', 'Zaremba2014 1500', 'P&W2016 tied 1500'], ['non-tied', 'Hid', '600', 'Emb', '...
valid: 1
D18-1325table_3
Performance for variable context sizes k with the HAN encoder + HAN decoder.
2
[['k', '1'], ['k', '3'], ['k', '5'], ['k', '7']]
2
[['TED Talks', 'Zh-En'], ['TED Talks', 'Es-En'], ['Subtitles', 'Zh-En'], ['Subtitles', 'Es-En'], ['News', 'Es-En']]
[['17.70', '37.20', '29.35', '36.20', '22.46'], ['17.79', '37.24', '29.67', '36.23', '22.76'], ['17.49', '37.11', '29.69', '36.22', '22.54'], ['17.00', '37.22', '29.64', '36.21', '22.64']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['k']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TED Talks || Zh-En</th> <th>TED Talks || Es-En</th> <th>Subtitles || Zh-En</th> <th>Subtitles || Es-En</th> <th>News || Es-En</th> </tr> </thead> <tbody> <tr> <td>k || 1</td> <t...
Table 3
table_3
D18-1325
4
emnlp2018
Table 3 shows the performance of our best HAN model with a varying number k of previous sentences in the test-set. We can see that the best performance for TED talks and news is achieved with 3, while for subtitles it is similar between 3 and 7.
[1, 1]
['Table 3 shows the performance of our best HAN model with a varying number k of previous sentences in the test-set.', 'We can see that the best performance for TED talks and news is achieved with 3, while for subtitles it is similar between 3 and 7.']
[['k'], ['k', '3', '7', 'TED Talks', 'News', 'Subtitles']]
1
D18-1326table_1
Translation performance of our methods on Zh→En/Ja and En→De/Fr tasks. Indiv means translation model of individual pair. O2M is our baseline system. ①, ② and ③ denote our proposed three strategies of special label initialization, language-dependent positional embedding and the new parameter-sharing mechanism. ② (Dyn) an...
2
[['Methods', 'Indiv'], ['Methods', 'O2M'], ['Methods', 'O2M + ①'], ['Methods', 'O2M + ① + ② (Dyn)'], ['Methods', 'O2M + ① + ② (Fixed)'], ['Methods', 'O2M + ① + ③'], ['Methods', 'O2M + ① + ② (Dyn)+ 3']]
2
[['Zh→En', 'MT03'], ['Zh→En', 'MT04'], ['Zh→En', 'MT05'], ['Zh→En', 'MT06'], ['Zh→En', 'Ave'], ['Zh→Ja', 'test'], ['En→De', 'test'], ['En→Fr', 'test']]
[['43.59', '43.95', '45.34', '44.05', '44.23', '40.71', '27.84', '41.50'], ['43.20', '43.55', '44.68', '43.93', '43.84', '42.09', '26.42', '41.32'], ['43.91', '44.01', '45.12', '44.14', '44.30', '42.54', '26.78', '41.56'], ['44.24', '44.45', '45.43', '44.51', '44.66', '42.77', '26.98', '41.78'], ['44.13', '44.57', '45....
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['O2M + ①', 'O2M + ① + ② (Dyn)', 'O2M + ① + ② (Fixed)', 'O2M + ① + ③', 'O2M + ① + ② (Dyn)+ 3']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Zh→En || MT03</th> <th>Zh→En || MT04</th> <th>Zh→En || MT05</th> <th>Zh→En || MT06</th> <th>Zh→En || Ave</th> <th>Zh→Ja || test</th> <th>En→De || test</th> <th>En→Fr || test</th> ...
Table 1
table_1
D18-1326
4
emnlp2018
5.1 Our Strategies vs. Baseline. Table 1 reports the main translation results of Zh→En/Ja and En→De/Fr translation tasks. Our ultimate goal is to make the universal one-to-many framework as good as or better than the individually trained systems. We conduct universal one-to-many translation using Johnson et al. (2017) ...
[2, 1, 2, 2, 1, 2, 1, 1, 1, 2, 1, 1, 2]
['5.1 Our Strategies vs. Baseline.', 'Table 1 reports the main translation results of Zh→En/Ja and En→De/Fr translation tasks.', 'Our ultimate goal is to make the universal one-to-many framework as good as or better than the individually trained systems.', 'We conduct universal one-to-many translation using Johnson et ...
[None, ['Zh→En', 'Zh→Ja', 'En→De', 'En→Fr'], ['Indiv', 'O2M'], ['O2M'], ['Indiv', 'O2M'], ['O2M'], ['O2M', 'O2M + ①', 'O2M + ① + ② (Dyn)', 'O2M + ① + ② (Fixed)', 'O2M + ① + ③', 'O2M + ① + ② (Dyn)+ 3'], ['O2M + ① + ② (Dyn)+ 3', 'O2M', 'Zh→En', 'MT04'], ['O2M + ① + ② (Dyn)', 'O2M + ① + ② (Fixed)'], ['Indiv', 'O2M'], None...
1
D18-1330table_1
Comparison between RCSLS, Least Square Error, Procrustes and unsupervised approaches in the setting of Conneau et al. (2017). All the methods use the CSLS criterion for retrieval. “Refine” is the refinement step of Conneau et al. (2017). Adversarial, ICP and Wasserstein Proc. are unsupervised (Conneau et al., 2017; Hos...
2
[['Method', 'Adversarial + refine'], ['Method', 'ICP + refine'], ['Method', 'Wass. Proc. + refine'], ['Method', 'Least Square Error'], ['Method', 'Procrustes'], ['Method', 'Procrustes + refine'], ['Method', 'RCSLS + spectral'], ['Method', 'RCSLS']]
1
[['en-es'], ['es-en'], ['en-fr'], ['fr-en'], ['en-de'], ['de-en'], ['en-ru'], ['ru-en'], ['en-zh'], ['zh-en'], ['avg.']]
[['81.7', '83.3', '82.3', '82.1', '74.0', '72.2', '44.0', '59.1', '32.5', '31.4', '64.3'], ['82.2', '83.8', '82.5', '82.5', '74.8', '73.1', '46.3', '61.6', '-', '-', '-'], ['82.8', '84.1', '82.6', '82.9', '75.4', '73.3', '43.7', '59.1', '-', '-', '-'], ['78.9', '80.7', '79.3', '80.7', '71.5', '70.1', '47.2', '60.2', '4...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['RCSLS', 'RCSLS + spectral']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en-es</th> <th>es-en</th> <th>en-fr</th> <th>fr-en</th> <th>en-de</th> <th>de-en</th> <th>en-ru</th> <th>ru-en</th> <th>en-zh</th> <th>zh-en</th> <th>avg.</th> </...
Table 1
table_1
D18-1330
4
emnlp2018
4.2 The MUSE benchmark. Table 1 reports the comparison of RCSLS with standard supervised and unsupervised approaches on 5 language pairs (in both directions) of the MUSE benchmark (Conneau et al., 2017). Every approach uses the Wikipedia fastText vectors and supervision comes in the form of a lexicon composed of 5k w...
[1, 2, 1, 2, 2, 1, 2, 1, 2]
['4.2 The MUSE benchmark.\nTable 1 reports the comparison of RCSLS with standard supervised and unsupervised approaches on 5 language pairs (in both directions) of the MUSE benchmark (Conneau et al., 2017).', 'Every approach uses the Wikipedia fastText vectors and supervision comes in the form of a lexicon composed o...
[None, None, ['RCSLS', 'Adversarial + refine'], ['RCSLS'], None, ['RCSLS', 'RCSLS + spectral', 'avg.'], ['RCSLS + spectral', 'RCSLS'], ['Least Square Error', 'Procrustes', 'avg.'], None]
1
D18-1330table_3
Accuracy on English and Italian with the setting of Dinu et al. (2014). “Adversarial” is an unsupervised technique. The adversarial and Procrustes results are from Conneau et al. (2017). We use a CSLS criterion for retrieval.
1
[['Adversarial + refine + CSLS'], ['Mikolov et al. (2013b)'], ['Dinu et al. (2014)'], ['Artetxe et al. (2016)'], ['Smith et al. (2017)'], ['Procrustes + CSLS'], ['RCSLS']]
1
[['en-it'], ['it-en']]
[['45.1', '38.3'], ['33.8', '24.9'], ['38.5', '24.6'], ['39.7', '33.8'], ['43.1', '38.0'], ['44.9', '38.5'], ['45.5', '38.0']]
column
['accuracy', 'accuracy']
['RCSLS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>en-it</th> <th>it-en</th> </tr> </thead> <tbody> <tr> <td>Adversarial + refine + CSLS</td> <td>45.1</td> <td>38.3</td> </tr> <tr> <td>Mikolov et al. (2013b)</td> <td>33.8<...
Table 3
table_3
D18-1330
4
emnlp2018
4.3 The WaCky dataset. Dinu et al. (2014) introduce a setting where word vectors are learned on the WaCky datasets (Baroni et al., 2009) and aligned with a noisy bilingual lexicon. We select the number of epochs within {1, 2, 5, 10} on a validation set. Table 3 shows that RCSLS is on par with the state of the art. RCSL...
[2, 2, 2, 1, 2]
['4.3 The WaCky dataset.', 'Dinu et al. (2014) introduce a setting where word vectors are learned on the WaCky datasets (Baroni et al., 2009) and aligned with a noisy bilingual lexicon.', 'We select the number of epochs within {1, 2, 5, 10} on a validation set.', 'Table 3 shows that RCSLS is on par with the state of th...
[None, None, None, ['Adversarial + refine + CSLS', 'RCSLS', 'Procrustes + CSLS'], ['RCSLS']]
1
D18-1341table_1
TER and BLEU scores of our model (MT+AG+LM) vs. the rest on various data conditions for the EN-DE post-editing task. bold: Best results within a data condition; ⋆: Best results across data conditions
3
[['-', 'Model', 'Original MT'], ['12K', 'Model', 'TGT → PE'], ['12K', 'Model', 'SRC+TGT → PE'], ['12K', 'Model', 'MT+AG'], ['12K', 'Model', 'MT+AG+LM'], ['500K+12K', 'Model', 'TGT → PE'], ['500K+12K', 'Model', 'SRC+TGT → PE'], ['500K+12K', 'Model', 'MT+AG'], ['500K+12K', 'Model', 'MT+AG+LM'], ['23K', 'Model', 'TGT → PE...
2
[['dev', 'TER'], ['dev', 'BLEU'], ['test2016', 'TER'], ['test2016', 'BLEU'], ['test2017', 'TER'], ['test2017', 'BLEU']]
[['24.81', '62.92', '24.76', '62.11', '24.48', '62.49'], ['63.76', '21.32', '60.96', '22.11', '65.13', '18.13'], ['51.41', '34.04', '48.27', '35.24', '50.98', '31.52'], ['23.74', '65.95', '23.53', '65.22', '23.77', '64.34'], ['23.36†', '66.24', '23.24†', '65.53†', '23.45†', '64.65†'], ['50.91', '30.88', '48.62', '32.55...
column
['TER', 'BLEU', 'TER', 'BLEU', 'TER', 'BLEU']
['MT+AG+LM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>dev || TER</th> <th>dev || BLEU</th> <th>test2016 || TER</th> <th>test2016 || BLEU</th> <th>test2017 || TER</th> <th>test2017 || BLEU</th> </tr> </thead> <tbody> <tr> <td>- || M...
Table 1
table_1
D18-1341
4
emnlp2018
4.1 Results. Table 1 shows the results on different training datasets to compare our model against the baselines. Original MT is the strong standard do-nothing baseline, copying the MT translation as the PE output. In all settings, our MT+AG+LM models outperform the MT+AG and monolingual/multi-source SEQ2SEQ models. Spe...
[2, 1, 2, 1, 1, 1, 1, 2]
['4.1 Results.', 'Table 1 shows the results on different training datasets to compare our model against the baselines.', 'Original MT is the strong standard do-nothing baseline, copying the MT translation as the PE output.', 'In all settings, our MT+AG+LM models outperform the MT+AG and monolingual/multi-source SEQ2SEQ ...
[None, ['Original MT', 'TGT → PE', 'SRC+TGT → PE', 'MT+AG', 'MT+AG+LM', 'dev', 'test2016', 'test2017'], ['Original MT'], ['MT+AG+LM', 'TGT → PE', 'SRC+TGT → PE', 'MT+AG'], ['MT+AG+LM', 'MT+AG', '500K+12K', 'test2017', 'BLEU'], ['12K', '500K+12K', '23K', '500K+23K'], ['MT+AG', 'MT+AG+LM', '23K', '500K+12K', 'TER', 'BLEU...
1
D18-1345table_2
Token level identification F1 scores. Averages are computed over all languages other than English. Two baselines are also compared here: Capitalization tags a token in test as entity if it is capitalized; and Exact Match keeps track of entities seen in training, tagging tokens in Test that exactly match some entity in ...
2
[['Model', 'Exact Match'], ['Model', 'Capitalization'], ['Model', 'SRILM'], ['Model', 'Skip-gram'], ['Model', 'CBOW'], ['Model', 'Log-Bilinear'], ['Model', 'CogCompNER (ceiling)'], ['Model', 'Lample et al. (2016) (ceiling)']]
1
[['eng'], ['amh'], ['ara'], ['ben'], ['fas'], ['hin'], ['som'], ['tgl'], ['avg']]
[['43.4', '54.4', '29.3', '47.7', '30.5', '30.9', '46.0', '23.7', '37.5'], ['79.5', '-', '-', '-', '-', '-', '69.5', '77.6', '-'], ['92.8', '69.9', '54.7', '79.4', '60.8', '63.8', '84.1', '80.5', '70.5'], ['76.0', '53.0', '29.7', '41.4', '30.8', '29.0', '51.1', '61.5', '42.4'], ['73.7', '50.0', '28.1', '40.6', '32.6', ...
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['SRILM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>eng</th> <th>amh</th> <th>ara</th> <th>ben</th> <th>fas</th> <th>hin</th> <th>som</th> <th>tgl</th> <th>avg</th> </tr> </thead> <tbody> <tr> <td>Model || Exact Ma...
Table 2
table_2
D18-1345
3
emnlp2018
We compare the CLM’s Entity Identification against two state-of-the-art NER systems: CogCompNER (Khashabi et al., 2018) and LSTMCRF (Lample et al., 2016). We train the NER systems as usual, but at test time we convert all predictions into binary token-level annotations to get the final score. As Table 2 shows, the re...
[1, 2, 1]
['We compare the CLM’s Entity Identification against two state-of-the-art NER systems: CogCompNER (Khashabi et al., 2018) and LSTMCRF (Lample et al., 2016).', 'We train the NER systems as usual, but at test time we convert all predictions into binary token-level annotations to get the final score.', 'As Table 2 show...
[['SRILM', 'CogCompNER (ceiling)', 'Lample et al. (2016) (ceiling)'], None, ['SRILM']]
1
D18-1345table_3
NER results on 8 languages show that even a simplistic addition of CLM features to a standard NER model boosts performance. CogCompNER is run with standard features, including Brown clusters; (Lample et al., 2016) is run with default parameters and pre-trained embeddings. Unseen refers to performance on named entities ...
3
[['Model', 'Lample et al. (2016)', 'Full'], ['Model', 'Lample et al. (2016)', 'Unseen'], ['Model', 'CogCompNER', 'Full'], ['Model', 'CogCompNER', 'Unseen'], ['Model', 'CogCompNER+LM', 'Full'], ['Model', 'CogCompNER+LM', 'Unseen']]
1
[['eng'], ['amh'], ['ara'], ['ben'], ['fas'], ['hin'], ['som'], ['tgl'], ['avg']]
[['90.94', '73.2', '57.2', '77.7', '61.2', '77.7', '81.3', '83.2', '73.1'], ['86.11', '51.9', '30.2', '57.9', '41.4', '62.2', '66.5', '72.8', '54.7'], ['90.88', '67.5', '54.8', '74.5', '57.8', '73.5', '82.0', '80.9', '70.1'], ['84.40', '42.7', '25.0', '51.9', '31.5', '53.9', '67.2', '68.3', '48.6'], ['91.21', '71.3', '...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['CogCompNER+LM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>eng</th> <th>amh</th> <th>ara</th> <th>ben</th> <th>fas</th> <th>hin</th> <th>som</th> <th>tgl</th> <th>avg</th> </tr> </thead> <tbody> <tr> <td>Model || Lample e...
Table 3
table_3
D18-1345
4
emnlp2018
The results in Table 3 show that for six of the eight languages we studied, the baseline NER can be significantly improved by adding simple CLM features; for English and Arabic, it performs even better than the neural NER model of Lample et al. (2016). For Tagalog, however, adding CLM features actually impairs system ...
[1, 1, 1, 2, 1, 1]
['The results in Table 3 show that for six of the eight languages we studied, the baseline NER can be significantly improved by adding simple CLM features; for English and Arabic, it performs even better than the neural NER model of Lample et al. (2016).', 'For Tagalog, however, adding CLM features actually impairs sy...
[['CogCompNER+LM', 'Lample et al. (2016)', 'eng', 'ara'], ['CogCompNER', 'tgl'], ['Unseen'], ['Unseen'], ['CogCompNER+LM', 'Unseen', 'fas', 'tgl'], ['CogCompNER+LM', 'Unseen', 'eng', 'amh', 'ara', 'ben', 'hin', 'som']]
1
D18-1349table_4
Comparison of F1 scores (weighted average by support (the number of true instances for each label)) between our model and the best published methods. The presented results of our model are evaluated on the test set of the run with the highest F1 score on the validation set.
3
[['Model', 'Best Published', 'Marco Lui (Lui 2012)'], ['Model', 'Best Published', 'bi-ANN (Dernoncourt et al. 2016)'], ['Model', 'Our Models', 'HSLN-CNN'], ['Model', 'Our Models', 'HSLN-RNN']]
2
[['PubMed', '20k'], ['PubMed', '200k'], ['NICTA', '-']]
[['-', '-', '82.0'], ['90.0', '91.6', '82.7'], ['92.2', '92.8', '84.7'], ['92.6', '93.9', '84.3']]
column
['F1', 'F1', 'F1']
['HSLN-CNN', 'HSLN-RNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PubMed || 20k</th> <th>PubMed || 200k</th> <th>NICTA || -</th> </tr> </thead> <tbody> <tr> <td>Model || Best Published || Marco Lui (Lui 2012)</td> <td>-</td> <td>-</td> <td>82....
Table 4
table_4
D18-1349
6
emnlp2018
5 Results and Discussion. Table 4 compares our model against the best performing models in the literature (Dernoncourt et al. 2016; Liu et al. 2013). There are two variants of our model in terms of different implementations of the sentence encoding layer: the model that uses bi-RNN to encode the sentence is called HSLN...
[2, 1, 2, 1, 1, 1, 2, 2, 2]
['5 Results and Discussion.', 'Table 4 compares our model against the best performing models in the literature (Dernoncourt et al. 2016; Liu et al. 2013).', 'There are two variants of our model in terms of different implementations of the sentence encoding layer: the model that uses bi-RNN to encode the sentence is cal...
[None, ['Best Published', 'Our Models'], ['HSLN-CNN', 'HSLN-RNN'], ['HSLN-CNN', 'HSLN-RNN', 'PubMed', 'NICTA'], ['Marco Lui (Lui 2012)', 'bi-ANN (Dernoncourt et al. 2016)', 'HSLN-CNN', 'HSLN-RNN'], ['HSLN-CNN', 'HSLN-RNN', 'PubMed', '20k', '200k', 'NICTA'], ['HSLN-CNN', 'NICTA'], ['HSLN-RNN'], ['HSLN-CNN', 'HSLN-RNN', ...
1
D18-1349table_9
Comparison of performance with different choices of word embeddings for our HSLN-RNN model trained on the PubMed 20k dataset (reported on F1-scores on the test set). “P.M.” means PubMed.
2
[['Embedding', 'Glove-wiki'], ['Embedding', 'FastText-wiki'], ['Embedding', 'FastText-P.M.+MIMIC'], ['Embedding', 'Word2vec-News'], ['Embedding', 'Word2vec-wiki'], ['Embedding', 'Word2vec-wiki+P.M.']]
1
[['Dimension'], ['P.M. 20k']]
[['200', '92.0'], ['300', '92.2'], ['300', '92.0'], ['300', '92.2'], ['200', '92.1'], ['200', '92.6']]
column
['Dimension', 'F1-scores']
['Glove-wiki', 'FastText-wiki', 'FastText-P.M.+MIMIC', 'Word2vec-News', 'Word2vec-wiki', 'Word2vec-wiki+P.M.']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dimension</th> <th>P.M. 20k</th> </tr> </thead> <tbody> <tr> <td>Embedding || Glove-wiki</td> <td>200</td> <td>92.0</td> </tr> <tr> <td>Embedding || FastText-wiki</td> <td...
Table 9
table_9
D18-1349
8
emnlp2018
In order to test the importance of pretrained word embeddings, we performed experiments with different sets of publicly available word embeddings, as well as our locally curated word embeddings, to initialize our model. Table 9 gives the performance of six different word embeddings for our HSLN-RNN model trained on the...
[2, 1, 1, 1, 1]
['In order to test the importance of pretrained word embeddings, we performed experiments with different sets of publicly available word embeddings, as well as our locally curated word embeddings, to initialize our model.', 'Table 9 gives the performance of six different word embeddings for our HSLN-RNN model trained o...
[None, ['Glove-wiki', 'FastText-wiki', 'FastText-P.M.+MIMIC', 'Word2vec-News', 'Word2vec-wiki', 'Word2vec-wiki+P.M.', 'P.M. 20k'], ['Glove-wiki', 'FastText-wiki', 'FastText-P.M.+MIMIC', 'Word2vec-News', 'Word2vec-wiki', 'Word2vec-wiki+P.M.', 'P.M. 20k'], ['Word2vec-News', 'Word2vec-wiki', 'Word2vec-wiki+P.M.'], ['FastT...
1
D18-1352table_2
MIMIC II results across frequent (S), few-shot (F), and zero-shot (Z) groups. We mark prior methods for MIMIC datasets that we implemented with a *.
1
[['Random'], ['Logistic (Vani et al. 2017) *'], ['CNN (Baumel et al. 2018) *'], ['ACNN (Mullenbach et al. 2018) *'], ['Match-CNN (Rios and Kavuluru, 2018)'], ['ESZSL + W2V'], ['ESZSL + W2V 2'], ['ESZSL + GRALS'], ['ZACNN'], ['ZAGCNN']]
2
[['S', 'R@5'], ['S', 'R@10'], ['F', 'R@5'], ['F', 'R@10'], ['Z', 'R@5'], ['Z', 'R@10'], ['Harmonic Average', 'R@5'], ['Harmonic Average', 'R@10']]
[['0.000', '0.000', '0.000', '0.000', '0.011', '0.032', '0.000', '0.000'], ['0.137', '0.247', '0.001', '0.003', '-', '-', '-', '-'], ['0.138', '0.250', '0.050', '0.082', '-', '-', '-', '-'], ['0.138', '0.255', '0.046', '0.081', '-', '-', '-', '-'], ['0.137', '0.247', '0.031', '0.042', '-', '-', '-', '-'], ['0.074', '0....
column
['R@5', 'R@10', 'R@5', 'R@10', 'R@5', 'R@10', 'R@5', 'R@10']
['ZAGCNN', 'ZACNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S || R@5</th> <th>S || R@10</th> <th>F || R@5</th> <th>F || R@10</th> <th>Z || R@5</th> <th>Z || R@10</th> <th>Harmonic Average || R@5</th> <th>Harmonic Average || R@10</th> </t...
Table 2
table_2
D18-1352
7
emnlp2018
Results. Table 2 shows the results for MIMIC II. Because the label set for each medical record is augmented using the ICD-9 hierarchy, we expect methods that use the hierarchy to have an advantage. Table 2 results do not rely on thresholding because we evaluate using the relative ranking of groups with similar frequenc...
[2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 2, 1]
['Results.', 'Table 2 shows the results for MIMIC II.', 'Because the label set for each medical record is augmented using the ICD-9 hierarchy, we expect methods that use the hierarchy to have an advantage.', 'Table 2 results do not rely on thresholding because we evaluate using the relative ranking of groups with simil...
[None, None, None, None, ['ACNN (Mullenbach et al. 2018) *', 'S'], ['ZAGCNN', 'ACNN (Mullenbach et al. 2018) *', 'F', 'R@5', 'R@10', 'S'], ['ESZSL + W2V', 'ESZSL + W2V 2', 'Z'], ['ESZSL + GRALS', 'F', 'S'], ['Z', 'ZAGCNN', 'ESZSL + W2V 2', 'R@5', 'R@10'], ['ZAGCNN', 'F', 'Z'], ['Harmonic Average', 'R@5', 'R@10'], ['Har...
1
D18-1353table_3
Human evaluation results. Diacritic ** (p < 0.01) indicates MRL significantly outperforms baselines; ++ (p < 0.01) indicates GT is significantly better than all models.
2
[['Models', 'Base'], ['Models', 'Mem'], ['Models', 'MRL'], ['Models', 'GT']]
1
[['Fluency'], ['Coherence'], ['Meaning'], ['Overall Quality']]
[['3.28', '2.77', '2.63', '2.58'], ['3.23', '2.88', '2.68', '2.68'], ['4.05', '3.81', '3.68', '3.60'], ['4.14', '4.11', '4.16', '3.97']]
column
['Fluency', 'Coherence', 'Meaning', 'Overall Quality']
['MRL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Coherence</th> <th>Meaning</th> <th>Overall Quality</th> </tr> </thead> <tbody> <tr> <td>Models || Base</td> <td>3.28</td> <td>2.77</td> <td>2.63</td> ...
Table 3
table_3
D18-1353
7
emnlp2018
Table 3 gives human evaluation results. MRL achieves better results than the other two models. Since fluency is quite easy to optimize, our method gets close to human-authored poems on Fluency. The biggest gap between MRL and GT lies in Meaning. It’s a complex criterion involving the use of words, topic, emotion ...
[1, 1, 2, 1, 2, 2]
['Table 3 gives human evaluation results.', 'MRL achieves better results than the other two models.', 'Since fluency is quite easy to optimize, our method gets close to human-authored poems on Fluency.', 'The biggest gap between MRL and GT lies in Meaning.', 'It’s a complex criterion involving the use of words, t...
[None, ['MRL', 'Base', 'Mem'], ['MRL'], ['MRL', 'GT'], None, None]
1
D18-1358table_6
Triple Classification Results. The results of baselines on WN11 and FB13 are directly taken from the original paper except DistMult. We obtain other results by ourselves.
2
[['Model', 'CTransR'], ['Model', 'TransD'], ['Model', 'TransG'], ['Model', 'TransE'], ['Model', 'TransH'], ['Model', 'DistMult'], ['Model', 'TransE-HRS'], ['Model', 'TransH-HRS'], ['Model', 'DistMult-HRS']]
1
[['WN11'], ['FB13'], ['FB15k'], ['Avg']]
[['85.7', '-', '84.4', '-'], ['86.4', '89.1', '88.2', '87.9'], ['87.4', '87.3', '88.5', '87.7'], ['75.9', '81.5', '78.7', '78.7'], ['78.8', '83.3', '81.1', '81.1'], ['87.1', '86.2', '86.3', '86.5'], ['86.8', '88.4', '87.6', '87.6'], ['87.6', '88.9', '88.7', '88.4'], ['88.9', '89.0', '89.1', '89.0']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['TransE-HRS', 'TransH-HRS', 'DistMult-HRS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WN11</th> <th>FB13</th> <th>FB15k</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>Model || CTransR</td> <td>85.7</td> <td>-</td> <td>84.4</td> <td>-</td> </tr> ...
Table 6
table_6
D18-1358
8
emnlp2018
4.4.2 Experimental Results. Finally, the evaluation results in Table 6 lead to the following findings: (1) Our models outperform other baselines on WN11 and FB15k, and obtain comparable results with baselines on FB13, which validates the effectiveness of our models; (2) The extended models TransE-HRS, TransH-HRS and D...
[2, 1, 1, 1, 1, 2]
['4.4.2 Experimental Results.', 'Finally, the evaluation results in Table 6 lead to the following findings:', '(1) Our models outperform other baselines on WN11 and FB15k, and obtain comparable results with baselines on FB13, which validates the effectiveness of our models;', '(2) The extended models TransE-HRS, Trans...
[None, None, ['TransE-HRS', 'TransH-HRS', 'DistMult-HRS', 'WN11', 'FB13', 'FB15k'], ['TransE-HRS', 'TransH-HRS', 'DistMult-HRS', 'TransE', 'TransH', 'DistMult'], ['WN11', 'TransE-HRS', 'TransE'], None]
1
D18-1359table_4
Per-Relation Breakdown showing performance of each model on different relations.
2
[['Relation', 'isAffiliatedTo'], ['Relation', 'playsFor'], ['Relation', 'hasGender'], ['Relation', 'isConnectedTo'], ['Relation', 'isMarriedTo']]
2
[['Links Only', 'MRR'], ['Links Only', 'Hits@1'], ['+Numbers', 'MRR'], ['+Numbers', 'Hits@1'], ['+Description', 'MRR'], ['+Description', 'Hits@1'], ['+Images', 'MRR'], ['+Images', 'Hits@1']]
[['0.524', '0.401', '0.551', '0.467', '0.572', '0.481', '0.569', '0.478'], ['0.528', '0.413', '0.554', '0.471', '0.574', '0.486', '0.566', '0.476'], ['0.798', '0.596', '0.799', '0.599', '0.813', '0.627', '0.842', '0.683'], ['0.482', '0.367', '0.497', '0.379', '0.492', '0.384', '0.484', '0.372'], ['0.365', '0.207', '0.3...
column
['MRR', 'Hits@1', 'MRR', 'Hits@1', 'MRR', 'Hits@1', 'MRR', 'Hits@1']
['isAffiliatedTo', 'playsFor', 'hasGender', 'isConnectedTo', 'isMarriedTo']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Links Only || MRR</th> <th>Links Only || Hits@1</th> <th>+Numbers || MRR</th> <th>+Numbers || Hits@1</th> <th>+Description || MRR</th> <th>+Description || Hits@1</th> <th>+Images || MRR...
Table 4
table_4
D18-1359
8
emnlp2018
Relation Breakdown. We perform additional analysis on the YAGO dataset to gain a deeper understanding of the performance of our model using the ConvE method. Table 4 compares our models on some of the most frequent relations. As shown, the model that includes textual description significantly benefits isAffiliatedTo, and p...
[2, 2, 1, 1, 1]
['Relation Breakdown.', 'We perform additional analysis on the YAGO dataset to gain a deeper understanding of the performance of our model using the ConvE method.', 'Table 4 compares our models on some of the most frequent relations.', 'As shown, the model that includes textual description significantly benefits isAffiliat...
[None, None, ['isAffiliatedTo', 'playsFor', 'hasGender', 'isConnectedTo', 'isMarriedTo'], ['isAffiliatedTo', 'playsFor', '+Description'], ['hasGender', 'isMarriedTo', 'isConnectedTo', '+Images', '+Numbers']]
1
D18-1360table_4
Results for scientific keyphrase extraction and relation extraction on SemEval 2017 Task 10, comparing with previous best systems.
2
[['Model', '(Luan 2017)'], ['Model', 'Best SemEval'], ['Model', 'SCIIE']]
2
[['Span Indentification', 'P'], ['Span Indentification', 'R'], ['Span Indentification', 'F1'], ['Keyphrase Extraction', 'P'], ['Keyphrase Extraction', 'R'], ['Keyphrase Extraction', 'F1'], ['Relation Extraction', 'P'], ['Relation Extraction', 'R'], ['Relation Extraction', 'F1'], [' Overall', 'P'], ['Overall', 'R'], ['O...
[['-', '-', '56.9', '-', '-', '45.3', '-', '-', '-', '-', '-', '-'], ['55', '54', '55', '44', '43', '44', '36', '23', '28', '44', '41', '43'], ['62.2', '55.4', '58.6', '48.5', '43.8', '46.0', '40.4', '21.2', '27.8', '48.1', '41.8', '44.7']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['SCIIE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Span Indentification || P</th> <th>Span Indentification || R</th> <th>Span Indentification || F1</th> <th>Keyphrase Extraction || P</th> <th>Keyphrase Extraction || R</th> <th>Keyphrase Extr...
Table 4
table_4
D18-1360
9
emnlp2018
Results on SemEval 17. Table 4 compares the results of our model with the state of the art on the SemEval 17 dataset for tasks of span identification, keyphrase extraction and relation extraction as well as the overall score. Span identification aims at identifying spans of entities. Keyphrase classification and relati...
[2, 1, 2, 2, 1, 1, 2, 1, 2]
['Results on SemEval 17.', 'Table 4 compares the results of our model with the state of the art on the SemEval 17 dataset for tasks of span identification, keyphrase extraction and relation extraction as well as the overall score.', 'Span identification aims at identifying spans of entities.', 'Keyphrase classification...
[None, ['Span Indentification', 'Keyphrase Extraction', 'Relation Extraction', ' Overall'], ['Span Indentification'], ['Keyphrase Extraction', 'Relation Extraction'], ['SCIIE'], ['SCIIE', 'Span Indentification', 'Keyphrase Extraction'], ['SCIIE', 'Span Indentification'], ['SCIIE', 'Best SemEval', 'Relation Extraction']...
1
D18-1362table_2
Query answering performance compared to state-of-the-art embedding based approaches (top part) and multi-hop reasoning approaches (bottom part). The @1, @10 and MRR metrics were multiplied by 100. We highlight the best approach in each category.
2
[['Model', 'DistMult (Yang et al. 2014)'], ['Model', 'ComplEx (Trouillon et al. 2016)'], ['Model', 'ConvE (Dettmers et al. 2018)'], ['Model', 'NeuralLP (Yang et al. 2017)'], ['Model', 'NTP-λ (Rocktaschel et. al. 2017)'], ['Model', 'MINERVA (Das et al. 2018)'], ['Model', 'Ours(ComplEx)'], ['Model', 'Ours(ConvE)']]
2
[['UMLS', '@1'], ['UMLS', '@10'], ['UMLS', 'MRR'], ['Kinship', '@1'], ['Kinship', '@10'], ['Kinship', 'MRR'], ['FB15k-237', '@1'], ['FB15k-237', '@10'], ['FB15k-237', 'MRR'], ['WN18RR', '@1'], ['WN18RR', '@10'], ['WN18RR', 'MRR'], ['NELL-995', '@1'], ['NELL-995', '@10'], ['NELL-995', 'MRR']]
[['82.1', '96.7', '86.8', '48.7', '90.4', '61.4', '32.4', '60.0', '41.7', '43.1', '52.4', '46.2', '55.2', '78.3', '64.1'], ['89.0', '99.2', '93.4', '81.8', '98.1', '88.4', '32.8', '61.6', '42.5', '41.8', '48.0', '43.7', '64.3', '86.0', '72.6'], ['93.2', '99.4', '95.7', '79.7', '98.1', '87.1', '34.1', '62.2', '43.5', '4...
column
['@1', '@10', 'MRR', '@1', '@10', 'MRR', '@1', '@10', 'MRR', '@1', '@10', 'MRR', '@1', '@10', 'MRR']
['Ours(ComplEx)', 'Ours(ConvE)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UMLS || @1</th> <th>UMLS || @10</th> <th>UMLS || MRR</th> <th>Kinship || @1</th> <th>Kinship || @10</th> <th>Kinship || MRR</th> <th>FB15k-237 || @1</th> <th>FB15k-237 || @10</th> ...
Table 2
table_2
D18-1362
6
emnlp2018
5 Results. 5.1 Model Comparison. Table 2 shows the evaluation results of our proposed approach and the baselines. The top part presents embedding-based approaches and the bottom part presents multi-hop reasoning approaches. We find embedding-based models perform strongly on several datasets, achieving overall best eval...
[2, 1, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 2, 1, 2]
['5 Results. 5.1 Model Comparison.', 'Table 2 shows the evaluation results of our proposed approach and the baselines.', 'The top part presents embedding-based approaches and the bottom part presents multi-hop reasoning approaches.', 'We find embedding-based models perform strongly on several datasets, achieving overal...
[None, None, ['DistMult (Yang et al. 2014)', 'ComplEx (Trouillon et al. 2016)', 'ConvE (Dettmers et al. 2018)', 'NeuralLP (Yang et al. 2017)', 'NTP-λ (Rocktaschel et. al. 2017)', 'MINERVA (Das et al. 2018)', 'Ours(ComplEx)', 'Ours(ConvE)'], ['DistMult (Yang et al. 2014)', 'ComplEx (Trouillon et al. 2016)', 'ConvE (Dett...
1
D18-1362table_5
MRR evaluation of seen queries vs. unseen queries on five datasets. The % columns show the percentage of examples of seen/unseen queries found in the development split of the corresponding dataset.
2
[['Dataset', 'UMLS'], ['Dataset', 'Kinship'], ['Dataset', 'FB15k-237'], ['Dataset', 'WN18RR'], ['Dataset', 'NELL-995']]
2
[['Seen Queries', '%'], ['Seen Queries', 'Ours(ConvE)'], ['Seen Queries', '-RS'], ['Seen Queries', '-AD'], ['Unseen Queries', '%'], ['Unseen Queries', 'Ours(ConvE)'], ['Unseen Queries', '-RS'], ['Unseen Queries', '-AD']]
[['97.2', '73.1', '67.9 (-7%)', '61.4 (-16%)', '2.8', '68.5', '61.5 (-10%)', '58.7 (-14%)'], ['96.8', '75.1', '66.5 (-11%)', '65.8 (-12%)', '3.2', '73.6', '64.3 (-13%)', '53.3 (-27%)'], ['76.1', '28.3', '24.3 (-14%)', '20.6 (-27%)', '23.9', '70.9', '69.1 (-2%)', '63.9 (-10%)'], ['41.8', '60.8', '62.0 (+2%)', '53.4 (-12...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours(ConvE)', '-RS', '-AD']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Seen Queries || %</th> <th>Seen Queries || Ours(ConvE)</th> <th>Seen Queries || -RS</th> <th>Seen Queries || -AD</th> <th>Unseen Queries || %</th> <th>Unseen Queries || Ours(ConvE)</th> ...
Table 5
table_5
D18-1362
9
emnlp2018
Table 5 shows the percentage of examples associated with seen and unseen queries on each dev dataset and the corresponding MRR evaluation metrics of previously studied models. On most datasets, the ratio of seen vs. unseen queries is similar to that of to-many vs. to-one relations (Table 4) as a result of random data s...
[1, 1, 1, 2, 1, 1]
['Table 5 shows the percentage of examples associated with seen and unseen queries on each dev dataset and the corresponding MRR evaluation metrics of previously studied models.', 'On most datasets, the ratio of seen vs. unseen queries is similar to that of to-many vs. to-one relations (Table 4) as a result of random d...
[['%', 'Seen Queries', 'Unseen Queries'], ['UMLS', 'Kinship', 'FB15k-237', 'NELL-995', '%', 'Seen Queries', 'Unseen Queries'], ['Seen Queries', 'UMLS', 'Kinship', 'WN18RR'], None, ['NELL-995', '-RS', '-AD', 'Seen Queries'], ['Unseen Queries', '-RS', '-AD']]
1
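The parenthesized deltas in the contents above (e.g. '-7%', '-16%') are MRR changes of each ablation relative to the full Ours(ConvE) model. A minimal Python sketch (the function name is ours, not from the paper) reproduces that formatting from the raw scores:

```python
def rel_change(ablated: float, full: float) -> str:
    """Percentage change of an ablated score relative to the full model,
    formatted the way the table cells quote it, e.g. '(-7%)'."""
    return f"({(ablated - full) / full * 100:+.0f}%)"

# UMLS seen queries above: Ours(ConvE)=73.1, -RS=67.9, -AD=61.4
print(rel_change(67.9, 73.1))  # (-7%)
print(rel_change(61.4, 73.1))  # (-16%)
```

The `:+.0f` format keeps the explicit sign, matching cells like '(+2%)' for WN18RR.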
D18-1363table_1
Accuracy on PC for SIG17+SHIP (the shared task baseline SIG17 with SHIP), MED+PT (MED with paradigm transduction), MED+PT+SHIP (MED with paradigm transduction and SHIP), as well as all baselines (BL). Results are averaged over all languages, and best results are in bold; detailed accuracies for all languages can be fou...
1
[['BL: COPY'], ['BL: MED'], ['BL: PT'], ['BL: SIG17'], ['SIG17+SHIP'], ['MED+PT'], ['MED+PT+SHIP']]
1
[['SET1'], ['SET2'], ['SET3']]
[['.0810', '.0810', '.0810'], ['.0004', '.0432', '.4211'], ['.0833', '.0833', '.0775'], ['.5012', '.6576', '.7707'], ['.5971', '.7355', '.8008'], ['.5808', '.7486', '.8454'], ['.5793', '.7547', '.8483']]
column
['accuracy', 'accuracy', 'accuracy']
['SIG17+SHIP', 'MED+PT', 'MED+PT+SHIP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SET1</th> <th>SET2</th> <th>SET3</th> </tr> </thead> <tbody> <tr> <td>BL: COPY</td> <td>.0810</td> <td>.0810</td> <td>.0810</td> </tr> <tr> <td>BL: MED</td> <td>...
Table 1
table_1
D18-1363
6
emnlp2018
4.4 Results. Our results are shown in Table 1. For SET1, SIG17+SHIP obtains the highest accuracy, while, for SET2 and SET3, MED+PT+SHIP performs best. This difference can be easily explained by the fact that the performance of neural networks decreases rapidly for smaller training sets, and, while paradigm transduction...
[2, 1, 1, 2, 1, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2]
['4.4 Results.', 'Our results are shown in Table 1.', 'For SET1, SIG17+SHIP obtains the highest accuracy, while, for SET2 and SET3, MED+PT+SHIP performs best.', 'This difference can be easily explained by the fact that the performance of neural networks decreases rapidly for smaller training sets, and, while paradigm t...
[None, None, ['SET1', 'SET2', 'SET3', 'SIG17+SHIP', 'MED+PT+SHIP'], None, ['SIG17+SHIP', 'MED+PT', 'MED+PT+SHIP'], None, ['MED+PT', 'BL: SIG17', 'SET1', 'SET2', 'SET3'], ['MED+PT'], None, None, None, None, ['MED+PT', 'SET1'], None, None, ['SIG17+SHIP', 'BL: SIG17', 'SET1', 'SET2', 'SET3'], ['SIG17+SHIP'], None, ['MED+P...
1
D18-1366table_3
NER results for monolingual experiments. Metric: F1 (out of 100%).
4
[['Model', 'Ours', 'subword units', 'Char-ngrams + Lemma + Morph'], ['Model', 'Ours', 'subword units', 'Char-ngrams + Lemma'], ['Model', 'Ours', 'subword units', 'Char-ngrams + Morph'], ['Model', 'prop2vec', 'subword units', 'Word + Lemma'], ['Model', 'prop2vec', 'subword units', 'Word + Morph'], ['Model', 'prop2vec', ...
1
[['Turkish'], ['Uyghur'], ['Hindi'], ['Bengali']]
[['68.06', '52.50', '73.15', '52.77'], ['68.61', '52.40', '73.37', '52.09'], ['67.97', '47.80', '73.46', '52.06'], ['66.52', '46.00', '71.82', '50.03'], ['64.45', '46.00', '71.52', '49.27'], ['68.46', '47.70', '70.51', '48.16'], ['66.81', '50.80', '72.67', '52.10'], ['62.85', '46.80', '72.04', '49.83'], ['58.94', '31.3...
column
['F1', 'F1', 'F1', 'F1']
['Char-ngrams + Lemma + Morph', 'Char-ngrams + Lemma', 'Char-ngrams + Morph']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Turkish</th> <th>Uyghur</th> <th>Hindi</th> <th>Bengali</th> </tr> </thead> <tbody> <tr> <td>Model || Ours || subword units || Char-ngrams + Lemma + Morph</td> <td>68.06</td> <t...
Table 3
table_3
D18-1366
7
emnlp2018
For example lemma + morph means lemma and morph embeddings are first pre-trained on the resoucerich language and then used to initialize the respective lemma and morph representations for the low resource language. Monolingual Experiments:. Table 3 shows our results on all languages. We get +5.8 F1 points for Turkish, ...
[2, 2, 1, 1, 1, 1, 2]
['For example lemma + morph means lemma and morph embeddings are first pre-trained on the resoucerich language and then used to initialize the respective lemma and morph representations for the low resource language.', 'Monolingual Experiments:.', 'Table 3 shows our results on all languages.', 'We get +5.8 F1 points fo...
[None, None, ['Turkish', 'Uyghur', 'Hindi', 'Bengali'], ['Char-ngrams + Lemma + Morph', 'Char-ngrams + Lemma', 'Char-ngrams + Morph', 'Turkish', 'Uyghur', 'Hindi', 'Bengali'], ['Char-ngrams + Lemma + Morph', 'Turkish', 'Uyghur', 'Bengali'], ['Char-ngrams + Morph', 'Turkish', 'Hindi'], None]
1
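Multi-level row headers in these records (row_header_level > 1) are flattened into single cell labels with a ' || ' separator, as visible in the table_html_clean field. A small sketch of that join (helper name is ours):

```python
def flatten_header(levels):
    """Join one multi-level header path into the ' || ' form used by
    the table_html_clean field of these records."""
    return " || ".join(levels)

# Row header path from the NER record above
row = ["Model", "Ours", "subword units", "Char-ngrams + Lemma + Morph"]
print(flatten_header(row))
# Model || Ours || subword units || Char-ngrams + Lemma + Morph
```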
D18-1367table_1
Mean accuracy of 10-fold cross-validation in the Hype-Par setting. Column QQ shows results for our handcrafted quality and quantity features; the last two columns are concatenations of QQ with the Skip-gram and GloVe baseline features.
1
[['LR'], ['KNN'], ['NB'], ['DT'], ['SVM'], ['LDA']]
1
[['Baseline1'], ['QQ'], ['Skip-gram'], ['GloVe'], ['Skip-gram+QQ'], ['GloVe+QQ']]
[['.50', '.64', '.68', '.66', '.72', '.69'], ['.50', '.63', '.47', '.43', '.52', '.48'], ['.50', '.66', '.69', '.66', '.69', '.68'], ['.50', '.60', '.54', '.53', '.55', '.54'], ['.50', '.64', '.15', '.62', '.63', '.64'], ['.50', '.61', '.67', '.65', '.68', '.67']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['QQ', 'Skip-gram+QQ', 'GloVe+QQ']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Baseline1</th> <th>QQ</th> <th>Skip-gram</th> <th>GloVe</th> <th>Skip-gram+QQ</th> <th>GloVe+QQ</th> </tr> </thead> <tbody> <tr> <td>LR</td> <td>.50</td> <td>.64</td> ...
Table 1
table_1
D18-1367
7
emnlp2018
Cross-validation is conducted in two settings. In one (Hype-Par), the non-hyperbolic sentences are paraphrases, in the other (Hype-Min), literal data come from the Minimal Units Corpus. The results are shown in Table 1 and Table 2. While in the Hype-Min setting the performance is not satisfying, estimators achieve abov...
[2, 2, 1, 2, 1, 2, 2, 2, 1, 2, 2, 1, 2, 1]
['Cross-validation is conducted in two settings.', 'In one (Hype-Par), the non-hyperbolic sentences are paraphrases, in the other (Hype-Min), literal data come from the Minimal Units Corpus.', 'The results are shown in Table 1 and Table 2.', 'While in the Hype-Min setting the performance is not satisfying, estimators a...
[None, None, None, None, ['QQ'], None, ['LR'], ['SVM', 'LDA'], ['LR', 'SVM', 'LDA'], None, ['Skip-gram+QQ', 'GloVe+QQ'], ['QQ', 'Skip-gram', 'GloVe', 'Skip-gram+QQ', 'GloVe+QQ'], ['Skip-gram+QQ', 'GloVe+QQ'], ['Skip-gram+QQ', 'LR']]
1
D18-1368table_4
Cross-genre Classification Results on the Training Set of MASC+Wiki. We report accuracy (Acc), macro-average F1-score (Macro) and class-wise F1 scores.
2
[['Model', 'CRF (Friedrich et al. 2016)'], ['Model', 'Clause-level Bi-LSTM'], ['Model', 'Paragraph-level Model'], ['Model', 'Paragraph-level Model+CRF']]
1
[['Macro'], ['Acc'], ['F1 STA'], ['F1 EVE'], ['F1 REP'], ['F1 GENI'], ['F1 GENA'], ['F1 QUE'], ['F1 IMP']]
[['66.6', '71.8', '78.2', '77.0', '76.8', '44.8', '27.4', '81.8', '70.8'], ['69.3', '73.3', '79.5', '78.7', '82.8', '47.6', '31.9', '86.9', '77.7'], ['73.2', '77.2', '81.5', '80.1', '83.2', '64.7', '37.2', '88.1', '77.8'], ['73.5', '77.4', '81.5', '80.3', '83.7', '66.5', '37.4', '88.5', '76.7']]
column
['Macro', 'Acc', 'F1 STA', 'F1 EVE', 'F1 REP', 'F1 GENI', 'F1 GENA', 'F1 QUE', 'F1 IMP']
['Paragraph-level Model', 'Paragraph-level Model+CRF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Macro</th> <th>Acc</th> <th>STA</th> <th>EVE</th> <th>REP</th> <th>GENI</th> <th>GENA</th> <th>QUE</th> <th>IMP</th> </tr> </thead> <tbody> <tr> <td>Model || CRF ...
Table 4
table_4
D18-1368
7
emnlp2018
Table 4 shows cross-genre experimental results of our neural network models on the training set of MASC+Wiki by treating each genre as one crossvalidation fold. As we expected, both the macroaverage F1-score and class-wise F1 scores are lower compared with the results in Table 2 where in-genre data were used for model ...
[1, 1, 1]
['Table 4 shows cross-genre experimental results of our neural network models on the training set of MASC+Wiki by treating each genre as one crossvalidation fold.', 'As we expected, both the macroaverage F1-score and class-wise F1 scores are lower compared with the results in Table 2 where in-genre data were used for m...
[None, ['Paragraph-level Model', 'Macro', 'F1 STA', 'F1 EVE', 'F1 REP', 'F1 GENI', 'F1 GENA', 'F1 QUE', 'F1 IMP'], ['Paragraph-level Model', 'CRF (Friedrich et al. 2016)', 'Clause-level Bi-LSTM']]
1
D18-1369table_4
Reply structure recovery results in Wikipedia conversation dataset.
1
[['Naive Baseline'], ['HDHP'], ['HD-GMHP']]
1
[['Pnode'], ['Rnode'], ['F1node']]
[['0.3223', '0.6501', '0.4310'], ['0.5598', '0.5834', '0.5714'], ['0.6433', '0.5468', '0.5911']]
column
['Pnode', 'Rnode', 'F1node']
['HD-GMHP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pnode</th> <th>Rnode</th> <th>F1node</th> </tr> </thead> <tbody> <tr> <td>Naive Baseline</td> <td>0.3223</td> <td>0.6501</td> <td>0.4310</td> </tr> <tr> <td>HDHP</td>...
Table 4
table_4
D18-1369
9
emnlp2018
Table 4 shows the thread reconstruction results of our model and the baseline models in the Wikipedia conversation dataset. Since the HDHP model does not infer the parent event, we reconstruct threads in the form of chronologically ordered linked list of posts in each local cluster that inferred from HDHP. From the F1n...
[1, 2, 1]
['Table 4 shows the thread reconstruction results of our model and the baseline models in the Wikipedia conversation dataset.', 'Since the HDHP model does not infer the parent event, we reconstruct threads in the form of chronologically ordered linked list of posts in each local cluster that inferred from HDHP.', 'From...
[None, ['HDHP'], ['HD-GMHP']]
1
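The F1node column in the reply-structure table above is the harmonic mean of Pnode and Rnode; a quick sketch (helper name is ours) checks the reported rows:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# HD-GMHP row: Pnode=0.6433, Rnode=0.5468 -> F1node 0.5911
print(round(f1(0.6433, 0.5468), 4))  # 0.5911
# HDHP row: Pnode=0.5598, Rnode=0.5834 -> F1node 0.5714
print(round(f1(0.5598, 0.5834), 4))  # 0.5714
```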
D18-1371table_5
timex/event labels generated by stage 1 (bottom). Best performances are in bold.
3
[['temporal relation parsing with gold spans', 'model', 'Baseline-simple'], ['temporal relation parsing with gold spans', 'model', 'Baseline-logistic'], ['temporal relation parsing with gold spans', 'model', 'Neural-basic'], ['temporal relation parsing with gold spans', 'model', 'Neural-enriched'], ['temporal relation ...
3
[['news', 'unlabeled f', 'dev'], ['news', 'unlabeled f', 'test'], ['news', 'labeled f', 'dev'], ['news', 'labeled f', 'test'], ['grimm', 'unlabeled f', 'dev'], ['grimm', 'unlabeled f', 'test'], ['grimm', 'labeled f', 'dev'], ['grimm', 'labeled f', 'test']]
[['.64', '.68', '.47', '.43', '.78', '.79', '.39', '.39'], ['.81', '.79', '.63', '.54', '.74', '.74', '.60', '.63'], ['.78', '.75', '.67', '.57', '.72', '.74', '.60', '.63'], ['.80', '.78', '.67', '.59', '.76', '.77', '.63', '.65'], ['.83', '.81', '.76', '.70', '.79', '.79', '.66', '.68'], ['.39', '.40', '.26', '.25', ...
column
['f1', 'f1', 'f1', 'f1', 'f1', 'f1', 'f1', 'f1']
['Neural-basic', 'Neural-enriched', 'Neural-attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>news || unlabeled f || dev</th> <th>news || unlabeled f || test</th> <th>news || labeled f || dev</th> <th>news || labeled f || test</th> <th>grimm || unlabeled f || dev</th> <th>grimm || un...
Table 5
table_5
D18-1371
7
emnlp2018
Bottom rows in Table 5 report the end-to-end performance of our five systems on both domains. On both labeled and unlabeled parsing, our basic neural model with only lexical input performs comparable to the logistic regression model. And our enriched neural model with only three simple linguistic features outperforms b...
[1, 1, 1, 1, 2, 1, 2, 1, 1, 2, 1, 2, 2]
['Bottom rows in Table 5 report the end-to-end performance of our five systems on both domains.', 'On both labeled and unlabeled parsing, our basic neural model with only lexical input performs comparable to the logistic regression model.', 'And our enriched neural model with only three simple linguistic features outpe...
[None, ['Neural-basic', 'Baseline-logistic', 'unlabeled f', 'labeled f', 'end-to-end systems with automatic spans'], ['Baseline-logistic', 'Neural-basic', 'Neural-enriched', 'news', 'end-to-end systems with automatic spans'], ['grimm', 'unlabeled f', 'Baseline-simple', 'Neural-basic', 'Neural-enriched', 'Neural-attenti...
1
D18-1373table_2
Comparison on datasets with the baselines. ‘+F’: tested with all modalities (U, O, M, V); ‘-X’: dropping one modality; ‘-U’ and ‘-O’: user and item cold-start scenarios.
2
[['Dataset Models', 'Offset'], ['Dataset Models', 'NMF'], ['Dataset Models', 'SVD++'], ['Dataset Models', 'URP'], ['Dataset Models', 'RMR'], ['Dataset Models', 'HFT'], ['Dataset Models', 'DeepCoNN'], ['Dataset Models', 'NRT'], ['Dataset Models', 'LRMM(+F)'], ['Dataset Models', 'LRMM(-U)'], ['Dataset Models', 'LRMM(-O)'...
2
[['S&O', 'RMSE'], ['S&O', 'MAE'], ['H&P', 'RMSE'], ['H&P', 'MAE'], ['Movie', 'RMSE'], ['Movie', 'MAE'], ['Electronics', 'RMSE'], ['Electronics', 'MAE']]
[['0.979', '0.769', '1.247', '0.882', '1.389', '0.933', '1.401', '0.928'], ['0.948', '0.671', '1.059', '0.761', '1.135', '0.794', '1.297', '0.904'], ['0.922', '0.669', '1.026', '0.760', '1.049', '0.745', '1.194', '0.847'], ['-', '-', '-', '-', '1.006', '0.764', '1.126', '0.860'], ['-', '-', '-', '-', '1.005', '0.741', ...
column
['RMSE', 'MAE', 'RMSE', 'MAE', 'RMSE', 'MAE', 'RMSE', 'MAE']
['LRMM(+F)', 'LRMM(-U)', 'LRMM(-O)', 'LRMM(-M)', 'LRMM(-V)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S&amp;O || RMSE</th> <th>S&amp;O || MAE</th> <th>H&amp;P || RMSE</th> <th>H&amp;P || MAE</th> <th>Movie || RMSE</th> <th>Movie || MAE</th> <th>Electronics || RMSE</th> <th>Electron...
Table 2
table_2
D18-1373
6
emnlp2018
3.4 Compare with State-of-the-art. First, we compare LRMM with state-of-the-art methods listed in Sec. 3.2. In this setting, LRMM is trained with all data modalities and tested with different missing modality regimes. Table 2 lists the results on the four datasets. By leveraging multimodal correlations, LRMM significan...
[2, 2, 2, 1, 1, 1, 2, 1, 1, 2, 1, 2]
['3.4 Compare with State-of-the-art.', 'First, we compare LRMM with state-of-the-art methods listed in Sec. 3.2.', 'In this setting, LRMM is trained with all data modalities and tested with different missing modality regimes.', 'Table 2 lists the results on the four datasets.', 'By leveraging multimodal correlations, L...
[None, None, None, None, ['LRMM(+F)', 'NMF', 'SVD++', 'URP', 'RMR', 'HFT'], ['LRMM(+F)', 'HFT', 'DeepCoNN'], ['LRMM(+F)'], ['LRMM(-U)', 'LRMM(-O)'], ['LRMM(-O)', 'NRT', 'DeepCoNN', 'Electronics', 'S&O', 'RMSE', 'MAE'], None, None, ['LRMM(-U)', 'LRMM(-O)']]
1
D18-1373table_3
The performance of training with missing modality imputation.
2
[['Dataset Models', 'LRMM(+F)'], ['Dataset Models', 'LRMM(-U)'], ['Dataset Models', 'LRMM(-O)'], ['Dataset Models', 'LRMM(-M)'], ['Dataset Models', 'LRMM(-V)']]
2
[['S&O', 'RMSE'], ['S&O', 'MAE'], ['H&P', 'RMSE'], ['H&P', 'MAE']]
[['0.997', '0.790', '1.131', '0.912'], ['0.998', '0.795', '1.132', '0.914'], ['0.999', '0.796', '1.133', '0.917'], ['0.998', '0.797', '1.133', '0.913'], ['0.997', '0.791', '1.132', '0.913']]
column
['RMSE', 'MAE', 'RMSE', 'MAE']
['LRMM(+F)', 'LRMM(-U)', 'LRMM(-O)', 'LRMM(-M)', 'LRMM(-V)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>S&amp;O || RMSE</th> <th>S&amp;O || MAE</th> <th>H&amp;P || RMSE</th> <th>H&amp;P || MAE</th> </tr> </thead> <tbody> <tr> <td>Dataset Models || LRMM(+F)</td> <td>0.997</td> <td>...
Table 3
table_3
D18-1373
7
emnlp2018
3.6 Missing Modality Imputation. The proposed m-drop and m-auto methods allow LRMM to be more robust to missing data modalities. Table 3 lists the results of training LRMM with missing data modalities for the modality dropout ratio pm = 0.5 on the S&O and H&P datasets, respectively. Both RMSE and MAE of LRMM deteriorat...
[2, 2, 1, 1]
['3.6 Missing Modality Imputation.', 'The proposed m-drop and m-auto methods allow LRMM to be more robust to missing data modalities.', 'Table 3 lists the results of training LRMM with missing data modalities for the modality dropout ratio pm = 0.5 on the S&O and H&P datasets, respectively.', 'Both RMSE and MAE of LRMM...
[None, None, ['LRMM(+F)', 'S&O', 'H&P'], ['LRMM(+F)', 'RMSE', 'MAE']]
1
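RMSE and MAE, the two metrics reported in the D18-1373 rating tables above, can be sketched as follows; the ratings below are toy values, not drawn from the S&O or H&P datasets:

```python
import math

def rmse(preds, truths):
    """Root-mean-square error over paired predictions and ground truths."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, truths)) / len(preds))

def mae(preds, truths):
    """Mean absolute error over the same pairs."""
    return sum(abs(p - t) for p, t in zip(preds, truths)) / len(preds)

# Illustrative ratings only
preds, truths = [3.0, 4.0, 5.0, 2.0], [4.0, 2.0, 5.0, 1.0]
print(round(rmse(preds, truths), 4), round(mae(preds, truths), 2))  # 1.2247 1.0
```

RMSE penalizes large errors more heavily than MAE, which is why the two columns can rank models differently.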
D18-1374table_1
Experimental results for NCG-IDF@k and prec@k scores for different methods.
2
[['p(F)', '-'], ['N(C F)', 'base'], ['N(C F)', 'SVD'], ['P(F|C)', 'base'], ['P(F|C)', 'SVD'], ['PPMI(C F)', 'base'], ['PPMI(C F)', 'SVD'], ['Youtube', 'base'], ['Youtube', 'IDF'], ['XML-CNN', 'base'], ['XML-CNN', 'IDF'], ['FastXML', '-'], ['PFastreXML', '-']]
1
[['NCG-IDF @1'], ['NCG-IDF @3'], ['NCG-IDF @5'], ['NCG-IDF @7'], ['prec @1'], ['prec @3'], ['prec @5'], ['prec @7'], ['model size']]
[['14.86', '14.50', '14.61', '14.56', '36.76', '32.69', '31.12', '30.23', '28B'], ['20.12', '19.95', '19.43', '18.91', '46.92', '42.89', '40.29', '38.15', '4.7GB'], ['19.99', '19.79', '19.20', '18.72', '46.68', '42.60', '39.88', '37.81', '0.1GB'], ['22.92', '22.77', '22.17', '21.86', '51.50', '47.81', '44.97', '43.19',...
column
['NCG-IDF @1', 'NCG-IDF @3', 'NCG-IDF @5', 'NCG-IDF @7', 'prec @1', 'prec @3', 'prec @5', 'prec @7', 'model size']
['FastXML', 'PFastreXML']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NCG-IDF @1</th> <th>NCG-IDF @3</th> <th>NCG-IDF @5</th> <th>NCG-IDF @7</th> <th>prec @1</th> <th>prec @3</th> <th>prec @5</th> <th>prec @7</th> <th>model size</th> </tr> <...
Table 1
table_1
D18-1374
7
emnlp2018
6 Experiments. In Table 1 we report results from the experiments on the 10M web documents dataset for prec and NDCG-IDF metrics for k = 1, 3, 5, 7, limiting k to small values as is common in the recommendation problems from large sets of items (Jain et al., 2016). The p(F ) baseline always predicts entities according t...
[2, 1, 2, 2, 2, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1, 2, 1, 1, 2, 1]
['6 Experiments.', 'In Table 1 we report results from the experiments on the 10M web documents dataset for prec and NDCG-IDF metrics for k = 1, 3, 5, 7, limiting k to small values as is common in the recommendation problems from large sets of items (Jain et al., 2016).', 'The p(F ) baseline always predicts entities acc...
[None, ['NCG-IDF @1', 'NCG-IDF @3', 'NCG-IDF @5', 'NCG-IDF @7', 'prec @1', 'prec @3', 'prec @5', 'prec @7'], ['p(F)'], ['p(F)'], ['p(F)'], ['p(F)', 'prec @1'], None, ['P(F|C)'], ['PPMI(C F)', 'NCG-IDF @1', 'NCG-IDF @3', 'NCG-IDF @5', 'NCG-IDF @7'], ['SVD'], None, ['SVD'], ['Youtube', 'XML-CNN', 'base', 'NCG-IDF @1', 'N...
1
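prec@k, reported in the table above for k = 1, 3, 5, 7, is the fraction of the top-k ranked items that are relevant. A minimal sketch with an illustrative entity ranking (identifiers are made up):

```python
def prec_at_k(ranked, relevant, k):
    """Precision at cutoff k: share of the top-k ranked items found
    in the relevant set."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

ranked = ["e1", "e2", "e3", "e4"]   # hypothetical ranked entity list
relevant = {"e1", "e3"}
print(prec_at_k(ranked, relevant, 1))           # 1.0
print(round(prec_at_k(ranked, relevant, 3), 3))  # 0.667
```

Small k is used above because, as the description notes, recommendation from large item sets is usually evaluated on short top lists.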
D18-1378table_8
SEU sentiment classification accuracy.
1
[['tSEU'], ['tSEU(D)']]
2
[['Accuracy', 'LA'], ['Accuracy', 'LAD'], ['Accuracy', 'LAW'], ['Accuracy', 'L']]
[['0.702', '0.723', '0.733', '0.750'], ['0.692', '0.715', '0.716', '0.735']]
column
['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy']
['LA', 'LAD', 'LAW']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || LA</th> <th>Accuracy || LAD</th> <th>Accuracy || LAW</th> <th>Accuracy || L</th> </tr> </thead> <tbody> <tr> <td>tSEU</td> <td>0.702</td> <td>0.723</td> <td>0.733...
Table 8
table_8
D18-1378
8
emnlp2018
To understand the contributions of incorporating authors, discourse relations, and word embeddings, we evaluate variants of Limbic for SEUlevel sentiment classification on two datasets: tSEU and tSEU(D). We create tSEU by randomly selecting 200 hotel reviews by seven authors. We manually annotate the sentiments of each...
[2, 2, 2, 2, 2, 1, 1, 1]
['To understand the contributions of incorporating authors, discourse relations, and word embeddings, we evaluate variants of Limbic for SEUlevel sentiment classification on two datasets: tSEU and tSEU(D).', 'We create tSEU by randomly selecting 200 hotel reviews by seven authors.', 'We manually annotate the sentiments...
[['tSEU', 'tSEU(D)'], ['tSEU'], None, ['tSEU(D)'], ['LA', 'LAD', 'LAW'], ['LA', 'LAD', 'LAW'], ['tSEU', 'tSEU(D)', 'LAD'], ['LAD', 'LAW']]
1
D18-1380table_2
The performance comparisons of different methods on the three datasets, where the results of baseline methods are retrieved from published papers. The best performances are marked in bold.
2
[['Method', 'Majority'], ['Method', 'Feature-SVM'], ['Method', 'ATAE-LSTM'], ['Method', 'TD-LSTM'], ['Method', 'IAN'], ['Method', 'MemNet'], ['Method', 'BILSTM-ATT-G'], ['Method', 'RAM'], ['Method', 'MGAN']]
2
[['Laptop', 'Acc'], ['Laptop', 'Macro-F1'], ['Restaurant', 'Acc'], ['Restaurant', 'Macro-F1'], ['Twitter', 'Acc'], ['Twitter', 'Macro-F1']]
[['0.5350', '0.3333', '0.6500', '0.3333', '0.5000', '0.3333'], ['0.7049', '-', '0.8016', '-', '0.6340', '0.6330'], ['0.6870', '-', '0.7720', '-', '-', '-'], ['0.7183', '0.6843', '0.7800', '0.6673', '0.6662', '0.6401'], ['0.7210', '-', '0.7860', '-', '-', '-'], ['0.7237', '-', '0.8032', '-', '0.6850', '0.6691'], ['0.731...
column
['Acc', 'Macro-F1', 'Acc', 'Macro-F1', 'Acc', 'Macro-F1']
['MGAN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Laptop || Acc</th> <th>Laptop || Macro-F1</th> <th>Restaurant || Acc</th> <th>Restaurant || Macro-F1</th> <th>Twitter || Acc</th> <th>Twitter || Macro-F1</th> </tr> </thead> <tbody> ...
Table 2
table_2
D18-1380
8
emnlp2018
4.3 Overall Performance Comparison. Table 2 shows the performance comparison results of MGAN with other baseline methods. We can have the following observations. (1) Majority performs worst since it only utilizes the data distribution information. Feature+SVM can achieve much better performance on all the datasets, wit...
[2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1, 2, 2, 2]
['4.3 Overall Performance Comparison.', 'Table 2 shows the performance comparison results of MGAN with other baseline methods.', 'We can have the following observations.', '(1) Majority performs worst since it only utilizes the data distribution information.', 'Feature+SVM can achieve much better performance on all the...
[None, ['MGAN'], None, ['Majority'], ['Feature-SVM'], ['MGAN', 'Majority', 'Feature-SVM'], ['ATAE-LSTM'], ['TD-LSTM', 'ATAE-LSTM'], ['MGAN', 'TD-LSTM'], ['ATAE-LSTM', 'TD-LSTM', 'IAN'], ['MGAN', 'IAN'], ['MemNet'], ['MemNet', 'BILSTM-ATT-G'], ['RAM'], ['RAM'], ['RAM', 'MemNet'], ['MGAN', 'MemNet', 'BILSTM-ATT-G', 'RAM'...
1
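The Macro-F1 column above is the unweighted mean of per-class F1 over the three sentiment classes. A self-contained sketch on toy labels (not drawn from the Laptop/Restaurant/Twitter data):

```python
def macro_f1(gold, pred, labels):
    """Unweighted mean of per-class F1 scores (the Macro-F1 metric)."""
    f1s = []
    for c in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy 3-way labels, e.g. 0=negative, 1=neutral, 2=positive
gold = [0, 0, 1, 1, 2, 2]
pred = [0, 1, 1, 1, 2, 0]
print(round(macro_f1(gold, pred, [0, 1, 2]), 4))  # 0.6556
```

Macro averaging weights each class equally, which is why Macro-F1 sits below accuracy on the imbalanced datasets above.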
D18-1380table_3
The performance comparisons of MGAN variants. ∗ means MGAN-CF and MGAN can be regarded as the same method on the Twitter dataset.
2
[['Method', 'MGAN-C'], ['Method', 'MGAN-F'], ['Method', 'MGAN-CF'], ['Method', 'MGAN']]
2
[['Laptop', 'Acc'], ['Laptop', 'Macro-F1'], ['Restaurant', 'Acc'], ['Restaurant', 'Macro-F1'], ['Twitter', 'Acc'], ['Twitter', 'Macro-F1']]
[['0.7273', '0.6933', '0.8054', '0.7099', '0.7153', '0.6952'], ['0.7398', '0.7082', '0.8000', '0.7092', '0.7110', '0.6918'], ['0.7445', '0.7121', '0.8089', '0.7135', '0.7254*', '0.7081*'], ['0.7539', '0.7247', '0.8125', '0.7194', '0.7254', '0.7081']]
column
['Acc', 'Macro-F1', 'Acc', 'Macro-F1', 'Acc', 'Macro-F1']
['MGAN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Laptop || Acc</th> <th>Laptop || Macro-F1</th> <th>Restaurant || Acc</th> <th>Restaurant || Macro-F1</th> <th>Twitter || Acc</th> <th>Twitter || Macro-F1</th> </tr> </thead> <tbody> ...
Table 3
table_3
D18-1380
8
emnlp2018
4.4 Analysis of MGAN model. Table 3 shows the performance comparison among the variants of MGAN model. We can have the following observations. (1) the proposed fine-grained attention mechanism MGAN-F, which is responsible for linking and fusing the information between the context and aspect word, achieves competitive p...
[2, 1, 1, 1, 2, 2, 1, 1, 1]
['4.4 Analysis of MGAN model.', 'Table 3 shows the performance comparison among the variants of MGAN model.', 'We can have the following observations.', '(1) the proposed fine-grained attention mechanism MGAN-F, which is responsible for linking and fusing the information between the context and aspect word, achieves co...
[None, ['MGAN'], None, ['MGAN-F', 'MGAN-C', 'Laptop'], None, ['Laptop'], ['MGAN-F'], ['MGAN-CF', 'MGAN-C', 'MGAN-F'], ['MGAN-CF', 'MGAN']]
1
D18-1381table_1
Experimental results on test datasets SemEval2013 and SemEval2014.
1
[['NBOW-MLP'], ['CNN'], ['BiLSTM'], ['AT-BiLSTM'], ['Lexicon RNN'], ['AGLR']]
3
[['SemEval13', '3-way', 'Acc'], ['SemEval13', '3-way', 'F1'], ['SemEval13', 'Binary', 'Acc'], ['SemEval13', 'Binary', 'F1'], ['SemEval14', '3-way', 'Acc'], ['SemEval14', '3-way', 'F1'], ['SemEval14', 'Binary', 'Acc'], ['SemEval14', 'Binary', 'F1'], ['-', 'AVG', 'Acc'], ['-', 'AVG', 'F1']]
[['65.18', '60.94', '85.44', '82.30', '65.68', '60.35', '89.44', '81.60', '76.44', '71.30'], ['71.41', '68.23', '85.74', '82.60', '70.05', '66.22', '89.86', '82.09', '79.27', '74.79'], ['72.06', '70.00', '85.89', '82.79', '71.62', '68.34', '90.20', '83.09', '79.94', '76.06'], ['72.21', '69.89', '86.13', '83.22', '71.83...
column
['Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1', 'Acc', 'F1']
['AGLR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SemEval13 || 3-way || Acc</th> <th>SemEval13 || 3-way || F1</th> <th>SemEval13 || Binary || Acc</th> <th>SemEval13 || Binary || F1</th> <th>SemEval14 || 3-way || Acc</th> <th>SemEval14 || 3-...
Table 1
table_1
D18-1381
7
emnlp2018
4.2 Experimental Results. Table 1 and Table 2 report the results of our experiments. The results on TRAIN-ALL are higher than TRAIN for SemEval16 in lieu of the larger dataset. Firstly, we observe that our proposed AGLR outperforms all neural baselines on 3-way classification. The overall performance of AGLR achieves s...
[2, 1, 0, 1, 1, 1, 1, 2, 1, 1, 2, 2, 1]
['4.2 Experimental Results.', 'Table 1 and Table 2 report the results of our experiments.', 'The results on TRAIN-ALL are higher than TRAIN for SemEval16 in lieu of the larger dataset.', 'Firstly, we observe that our proposed AGLR outperforms all neural baselines on 3-way classification.', 'The overall performance of A...
[None, None, None, ['AGLR', '3-way'], ['AGLR'], ['AGLR', 'AT-BiLSTM', 'Lexicon RNN', 'AVG', 'F1'], ['AGLR', 'AT-BiLSTM'], None, ['Lexicon RNN', '3-way'], ['Lexicon RNN', 'Binary'], None, ['AGLR'], ['BiLSTM', 'AT-BiLSTM', 'Lexicon RNN', 'Binary', 'AVG']]
1
D18-1381table_3
Comparisons against top SemEval systems. Results reported are the FPN metric scores used in the SemEval tasks.
2
[['SemEval13', 'Tweets'], ['SemEval14', 'Tweets'], ['SemEval14', 'Sarcasm'], ['SemEval14', 'LiveJournal'], ['SemEval16', 'Tweets'], ['SemEval16', 'Tweets (Acc)']]
1
[['Top System'], ['Ours']]
[['69.02', '70.10'], ['70.96', '71.11'], ['56.50', '58.87'], ['69.44', '72.52'], ['63.30', '61.90'], ['64.60', '66.60']]
column
['FPN', 'FPN']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Top System</th> <th>Ours</th> </tr> </thead> <tbody> <tr> <td>SemEval13 || Tweets</td> <td>69.02</td> <td>70.10</td> </tr> <tr> <td>SemEval14 || Tweets</td> <td>70.96</td>...
Table 3
table_3
D18-1381
8
emnlp2018
Comparisons against Top SemEval Systems. Table 3 reports the results of our proposed approach against the top team of each SemEval run, i.e., NRC-Canada (Mohammad et al., 2013) for 2013 Task 2, Team-X (Miura et al., 2014) for 2014 Task 9, SwissCheese (Deriu et al., 2016) for 2016 Task 4. We follow the exact training da...
[2, 1, 2, 1, 1, 2, 0, 2, 1, 2, 2, 1, 1, 1, 2]
['Comparisons against Top SemEval Systems.', 'Table 3 reports the results of our proposed approach against the top team of each SemEval run, i.e., NRC-Canada (Mohammad et al., 2013) for 2013 Task 2, Team-X (Miura et al., 2014) for 2014 Task 9, SwissCheese (Deriu et al., 2016) for 2016 Task 4.', 'We follow the exact tra...
[None, ['Ours', 'Top System'], None, ['SemEval16'], ['Ours', 'Top System', 'SemEval13', 'SemEval14', 'SemEval16'], None, None, ['SemEval16', 'Top System'], ['Ours'], ['SemEval16', 'Top System'], ['Ours'], ['Ours', 'SemEval16', 'Top System'], ['Ours', 'SemEval16', 'Top System', 'Tweets (Acc)'], ['SemEval14', 'Top System...
1
D18-1385table_2
Performance in the task and event settings.
2
[['Method', 'UoS-ITI'], ['Method', 'MCG-ICT'], ['Method', 'CERTH-UNITN'], ['Method', 'TFG'], ['Method', 'BFG'], ['Method', 'Combo']]
1
[['F1-Task'], ['F1-Event']]
[['0.830', '0.224'], ['0.942', '0.756'], ['0.911', '0.693'], ['0.908', '0.822'], ['0.810', '0.739'], ['0.899', '0.816']]
column
['F1-Task', 'F1-Event']
['TFG', 'BFG', 'Combo']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1-Task</th> <th>F1-Event</th> </tr> </thead> <tbody> <tr> <td>Method || UoS-ITI</td> <td>0.830</td> <td>0.224</td> </tr> <tr> <td>Method || MCG-ICT</td> <td>0.942</td> ...
Table 2
table_2
D18-1385
6
emnlp2018
6.2 Results. We describe the task setting results in Table 2, and detailed per-event results in Table 3. Although TFG does not achieve the highest F1-score in the task setting, it is mainly due to the split of the dataset. More than half of the tweets in the test set do not have images. Thus we can only leverage cross-...
[2, 1, 1, 2, 2, 1, 2, 2, 1, 2, 0, 0, 0, 0, 2, 2, 1, 1, 1, 2]
['6.2 Results.', 'We describe the task setting results in Table 2, and detailed per-event results in Table 3.', 'Although TFG does not achieve the highest F1-score in the task setting, it is mainly due to the split of the dataset.', 'More than half of the tweets in the test set do not have images.', 'Thus we can only l...
[None, None, ['TFG', 'F1-Task'], None, None, ['TFG', 'F1-Event'], None, None, None, ['TFG'], None, None, None, None, None, ['Combo'], ['Combo'], ['Combo', 'TFG'], ['Combo'], None]
1
D18-1385table_5
Rumor verification performance on the CCMR Baidu, where '-' indicates there is no webpage in that event.
4
[['ID', '01', 'Event', 'Hurricane Sandy'], ['ID', '02', 'Event', 'Boston Marathon bombing'], ['ID', '03', 'Event', 'Sochi Olympics'], ['ID', '04', 'Event', 'MH flight 370'], ['ID', '05', 'Event', 'Bring Back Our Girls'], ['ID', '06', 'Event', 'Columbian Chemicals'], ['ID', '07', 'Event', 'Passport hoax'], ['ID', '08', ...
1
[['Random'], ['Transfer']]
[['0.247', '0.287'], ['0.230', '0.284'], ['0.555', '0.752'], ['0.407', '0.536'], ['0.500', '0.923'], ['0.000', '0.100'], ['0.000', '0.000'], ['0.000', '0.500'], ['0.577', '0.972'], ['-', '-'], ['0.375', '1.00'], ['0.571', '0.889'], ['0.559', '0.925'], ['0.227', '0.211'], ['0.125', '0.059'], ['-', '-'], ['-', '-'], ['0....
column
['accuracy', 'accuracy']
['Transfer']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Random</th> <th>Transfer</th> </tr> </thead> <tbody> <tr> <td>ID || 01 || Event || Hurricane Sandy</td> <td>0.247</td> <td>0.287</td> </tr> <tr> <td>ID || 02 || Event || Boston...
Table 5
table_5
D18-1385
8
emnlp2018
8.1 Results. Table 5 lists the detailed results of our transfer learning experiment. We achieved much better performance compared to the baseline with statistical significance (p<0.001), which indicates that our cross-lingual cross-platform feature set can be generalized to rumors in different languages. It enables the...
[2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 1, 2]
['8.1 Results.', 'Table 5 lists the detailed results of our transfer learning experiment.', 'We achieved much better performance compared to the baseline with statistical significance (p<0.001), which indicates that our cross-lingual cross-platform feature set can be generalized to rumors in different languages.', 'It ...
[None, ['Transfer'], ['Transfer', 'Random'], None, None, ['Random', 'Transfer', 'ID', '11', 'Pig fish'], None, ['Pig fish'], None, None, ['Transfer', 'ID', '07', 'Passport hoax'], ['Passport hoax'], ['Passport hoax'], ['Passport hoax'], ['Transfer', 'Passport hoax'], None]
1
D18-1386table_1
Rationale performance relative to human annotations. Prediction accuracy is based on a binary threshold of 0.5. Performance of both Lei2016 model variants is significantly different from the baseline model (McNemar’s test, p < 0.05)
2
[['Model', 'Sigmoid predictor'], ['Model', 'RNN predictor'], ['Model', 'Mean human performance'], ['Model', 'Sigmoid predictor + feature importance'], ['Model', 'RNN predictor + sigmoid generator'], ['Model', 'RNN predictor + LIME'], ['Model', 'Lei2016'], ['Model', 'Lei2016 + bias'], ['Model', 'Lei2016 + bias + inverse...
3
[['Rationale', 'Tokenwise', 'F1'], ['Rationale', 'Tokenwise', 'Pr.'], ['Rationale', 'Tokenwise', 'Rec.'], ['Rationale', 'Phrasewise', 'F1'], ['Rationale', 'Phrasewise', 'Pr.'], ['Rationale', 'Phrasewise', 'Rec.'], ['Prediction', '-', 'MSE'], ['Prediction', '-', 'Acc.'], ['Prediction', '-', 'F1']]
[['-', '-', '-', '-', '-', '-', '0.029', '0.94', '0.74'], ['-', '-', '-', '-', '-', '-', '0.018', '0.95', '0.78'], ['0.55', '0.62', '0.57', '0.72', '0.78', '0.69', '-', '--', '-'], ['0.20', '0.62', '0.12', '0.64', '0.59', '0.70', '0.029', '0.94', '0.74'], ['0.29', '0.22', '0.45', '0.31', '0.19', '0.92', '0.038', '0.91'...
column
['F1', 'Pr.', 'Rec.', 'F1', 'Pr.', 'Rec.', 'MSE', 'Acc.', 'F1']
['Lei2016', 'Lei2016 + bias', 'Lei2016 + bias + inverse (EAN)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rationale || Tokenwise || F1</th> <th>Rationale || Tokenwise || Pr.</th> <th>Rationale || Tokenwise || Rec.</th> <th>Rationale || Phrasewise || F1</th> <th>Rationale || Phrasewise || Pr.</th> ...
Table 1
table_1
D18-1386
7
emnlp2018
Table 1 displays the results. The difference in performance between the three baselines that don't use a RNN generator and the three model variants that do demonstrates the importance of context in recognizing personal attacks within text. The relative performance of the three variants of the Lei et al. model show that...
[1, 1, 1, 1, 1, 2, 1, 2]
['Table 1 displays the results.', "The difference in performance between the three baselines that don't use a RNN generator and the three model variants that do demonstrates the importance of context in recognizing personal attacks within text.", 'The relative performance of the three variants of the Lei et al. model s...
[None, ['RNN predictor', 'RNN predictor + sigmoid generator', 'RNN predictor + LIME'], ['Lei2016', 'Lei2016 + bias', 'Lei2016 + bias + inverse (EAN)'], ['Mean human performance'], ['Phrasewise'], ['Phrasewise'], ['Lei2016 + bias + inverse (EAN)', 'Phrasewise', 'Rec.'], None]
1
D18-1387table_7
Results on classifying vague sentences.
2
[['System', 'Baseline (Majority)'], ['System', 'LSTM'], ['System', 'CNN'], ['System', 'AC-GAN (Full Model)'], ['System', 'AC-GAN (Vagueness Only)']]
2
[['Sentence-Level', 'P (%)'], ['Sentence-Level', 'R (%)'], ['Sentence-Level', 'F (%)']]
[['25.77', '50.77', '34.19'], ['47.79', '50.06', '47.88'], ['49.66', '52.51', '50.18'], ['51.00', '53.50', '50.42'], ['52.90', '54.64', '52.34']]
column
['P (%)', 'R (%)', 'F (%)']
['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sentence-Level || P (%)</th> <th>Sentence-Level || R (%)</th> <th>Sentence-Level || F (%)</th> </tr> </thead> <tbody> <tr> <td>System || Baseline (Majority)</td> <td>25.77</td> <td>5...
Table 7
table_7
D18-1387
8
emnlp2018
6.3 Predicting Vague Sentences. In Table 7 we present results on classifying privacy policy sentences into four categories: clear, somewhat clear, vague, and extremely vague. We compare AC-GAN with three baselines: CNN and LSTM trained on human-annotated sentences, and a majority baseline that assigns the most frequent...
[2, 1, 1, 1, 1, 2, 2, 1, 2]
['6.3 Predicting Vague Sentences.', 'In Table 7 we present results on classifying privacy policy sentences into four categories: clear, somewhat clear, vague, and extremely vague.', 'We compare AC-GAN with three baselines: CNN and LSTM trained on human-annotated sentences, and a majority baseline that assigns the most ...
[None, None, ['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)', 'Baseline (Majority)', 'LSTM', 'CNN'], ['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)'], ['CNN'], ['CNN'], ['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)'], ['AC-GAN (Full Model)', 'AC-GAN (Vagueness Only)'], None]
1
D18-1388table_1
Precision, Recall, and F1 scores of our model MVDAM on the test set compared with several baselines. All flavors of our model significantly outperform baselines and yield state of the art performance.
4
[['Model', 'CHANCE', 'Views', '-'], ['Model', 'LR', 'Views', 'Title'], ['Model', 'CNN', 'Views', 'Title'], ['Model', 'FNN', 'Views', 'Network'], ['Model', 'HDAM', 'Views', 'Content'], ['Model', 'MVDAM', 'Views', 'Title Network'], ['Model', 'MVDAM', 'Views', 'Title Content'], ['Model', 'MVDAM', 'Views', 'Title Network C...
1
[['P'], ['R'], ['F1']]
[['34.53', '34.59', '34.53'], ['59.53', '59.42', '59.12'], ['59.26', '59.40', '59.24'], ['68.28', '56.54', '55.10'], ['69.85', '68.72', '68.92'], ['69.87', '69.71', '69.66'], ['70.84', '70.19', '69.54'], ['80.10', '79.56', '79.67']]
column
['P', 'R', 'F1']
['MVDAM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || CHANCE || Views || -</td> <td>34.53</td> <td>34.59</td> <td>34.53</td> </tr> <tr> <td>Model || ...
Table 1
table_1
D18-1388
6
emnlp2018
5.1 Results and Analysis. Quantitative Results. Table 1 shows the results of the evaluation. First note that the logistic regression classifier and the CNN model using the Title outperform the CHANCE classifier significantly (F1: 59.12, 59.24 vs 34.53). Second, only modeling the network structure yields an F1 of 55.10 b...
[2, 2, 1, 1, 1, 2, 1, 2, 1, 2, 1, 1, 2, 2]
['5.1 Results and Analysis.', 'Quantitative Results.', 'Table 1 shows the results of the evaluation.', 'First note that the logistic regression classifier and the CNN model using the Title outperform the CHANCE classifier significantly (F1: 59.12, 59.24 vs 34.53).', 'Second, only modeling the network structure yields a...
[None, None, None, ['LR', 'CNN', 'CHANCE', 'F1'], ['FNN', 'CHANCE', 'F1'], ['FNN'], ['HDAM', 'F1'], ['HDAM'], ['MVDAM', 'Title Network', 'Title Content', 'Title Network Content'], ['Title Network Content'], ['MVDAM', 'Title Content', 'HDAM', 'F1', 'Network'], ['MVDAM', 'Title Network Content', 'HDAM', 'F1'], ['MVDAM'],...
1
D18-1389table_4
Results for factuality and bias prediction. Bold values indicate the best-performing feature type in its family of features, while underlined values indicate the best-performing feature type overall.
5
[['Source', 'Majority Baseline', '-', 'Dim.', '-'], ['Source', 'Traffic', 'Alexa rank', 'Dim.', '1'], ['Source', 'URL', 'URL structure', 'Dim.', '12'], ['Source', 'Twitter', 'created at.', 'Dim.', '1'], ['Source', 'Twitter', 'has account', 'Dim.', '1'], ['Source', 'Twitter', 'verified', 'Dim.', '1'], ['Source', 'Tw...
2
[['Factuality', 'Macro-F1'], ['Factuality', 'Acc.'], ['Factuality', 'MAE'], ['Factuality', 'MAEM'], ['Bias', 'Macro-F1'], ['Bias', 'Acc.'], ['Bias', 'MAE'], ['Bias', 'MAEM']]
[['22.47', '50.84', '0.73', '1.00', '5.65', '24.67', '1.39', '1.71'], ['22.46', '50.75', '0.73', '1.00', '7.76', '25.70', '1.38', '1.71'], ['39.30', '53.28', '0.68', '0.81', '13.50', '23.64', '1.65', '2.06'], ['30.72', '52.91', '0.69', '0.92', '5.65', '24.67', '1.39', '1.71'], ['30.72', '52.91', '0.69', '0.92', '5.65',...
column
['Macro-F1', 'Acc.', 'MAE', 'MAEM', 'Macro-F1', 'Acc.', 'MAE', 'MAEM']
['Traffic', 'URL', 'Twitter', 'Wikipedia', 'Articles']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Factuality || Macro-F1</th> <th>Factuality || Acc.</th> <th>Factuality || MAE</th> <th>Factuality || MAEM</th> <th>Bias || Macro-F1</th> <th>Bias || Acc.</th> <th>Bias || MAE</th> ...
Table 4
table_4
D18-1389
7
emnlp2018
4.3 Results and Discussion. We present in Table 4 the results of using features from the different sources proposed in Section 3. We start by describing the contribution of each feature type towards factuality and bias. We can see that the textual features extracted from the ARTICLES yielded the best performance on fac...
[2, 1, 2, 1, 1, 2, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1]
['4.3 Results and Discussion.', 'We present in Table 4 the results of using features from the different sources proposed in Section 3.', 'We start by describing the contribution of each feature type towards factuality and bias.', 'We can see that the textual features extracted from the ARTICLES yielded the best perform...
[None, ['Traffic', 'URL', 'Twitter', 'Wikipedia', 'Articles'], ['Factuality', 'Bias'], ['Articles', 'Factuality'], ['Articles', 'Majority Baseline', 'Bias', 'MAE'], None, ['Articles', 'title', 'body'], ['Wikipedia', 'Factuality', 'Bias'], ['Wikipedia', 'content'], ['Wikipedia', 'has page', 'Factuality', 'Majority Basel...
1
D18-1392table_1
R2 (variance explained) of residualized factor adaptation (RFA) versus baseline models. Results are shown for 3 hand-picked factors (age, race, education) as well as all factors. RC is residualized control and FA is factor adaptation. Each row is color-coded separately, from red (lowest value) to green (highest values)...
3
[['Domain', 'Health', 'HD'], ['Domain', 'Health', 'FP'], ['Domain', 'Psych.', 'LS'], ['Domain', 'Econ.', 'IP'], ['Domain', 'Econ.', 'FC'], ['Domain', '-', 'Avg.']]
2
[['Lang.', '-'], ['3 Socio-Demographic Factors', 'Controls Only'], ['3 Socio-Demographic Factors', 'Added-Controls'], ['3 Socio-Demographic Factors', 'RC'], ['3 Socio-Demographic Factors', 'FA'], ['3 Socio-Demographic Factors', 'RFA'], ['All Factors', 'Controls Only'], ['All Factors', 'Added-Controls'], ['All Factors',...
[['0.585', '0.423', '0.590', '0.620', '0.628', '0.638', '0.515', '0.597', '0.630', '0.636', '0.657*'], ['0.602', '0.434', '0.606', '0.619', '0.647', '0.647', '0.609', '0.632', '0.657', '0.685', '0.680'], ['0.214', '0.148', '0.219', '0.292', '0.308', '0.338', '0.326', '0.352', '0.376', '0.353', '0.396*'], ['0.245', '0.0...
column
['R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2', 'R2']
['RC', 'FA', 'RFA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Lang. || -</th> <th>3 Socio-Demographic Factors || Controls Only</th> <th>3 Socio-Demographic Factors || Added-Controls</th> <th>3 Socio-Demographic Factors || RC</th> <th>3 Socio-Demographic Fac...
Table 1
table_1
D18-1392
6
emnlp2018
Table 1 compares results in terms of variance explained, when using the three hand-picked factors vs. using all 11 extra-linguistic factors (Since past work has also used the Pearson-r metric, Table 2 shows the same results for all factors in terms of Pearson-r). As the table shows, FA outperforms controls only, added-...
[1, 1, 1, 2, 1, 2, 1, 2, 2, 2]
['Table 1 compares results in terms of variance explained, when using the three hand-picked factors vs. using all 11 extra-linguistic factors (Since past work has also used the Pearson-r metric, Table 2 shows the same results for all factors in terms of Pearson-r).', 'As the table shows, FA outperforms controls only, a...
[None, ['Controls Only', 'Added-Controls', 'RC', 'FA'], ['RFA', 'FA', '3 Socio-Demographic Factors', 'All Factors'], None, ['Lang.', 'Controls Only', 'Added-Controls'], ['FA', 'RFA'], ['RFA'], ['RC', 'FA', 'RFA'], ['Added-Controls', 'RC', 'FA', 'RFA'], ['RFA']]
1
D18-1395table_2
Results: content features
1
[['In-domain'], ['Out-of-domain']]
1
[['Binary'], ['Families'], ['NLI']]
[['91.07', '83.51', '70.26'], ['81.49', '65.37', '35.99']]
column
['accuracy', 'accuracy', 'accuracy']
['In-domain', 'Out-of-domain']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Binary</th> <th>Families</th> <th>NLI</th> </tr> </thead> <tbody> <tr> <td>In-domain</td> <td>91.07</td> <td>83.51</td> <td>70.26</td> </tr> <tr> <td>Out-of-domain</t...
Table 2
table_2
D18-1395
6
emnlp2018
Table 2 depicts the results obtained by combining character trigrams, tokens, and spelling features (Sections 3.5.1, 3.5.2). As expected, these content features yield excellent results in-domain, but the accuracy deteriorates out-of-domain, especially in the most challenging task of NLI.
[1, 1]
['Table 2 depicts the results obtained by combining character trigrams, tokens, and spelling features (Sections 3.5.1, 3.5.2).', 'As expected, these content features yield excellent results in-domain, but the accuracy deteriorates out-of-domain, especially in the most challenging task of NLI.']
[None, ['In-domain', 'Out-of-domain', 'NLI']]
1
D18-1395table_4
Results: grammar and spelling features
1
[['In-domain'], ['Out-of-domain']]
1
[['Binary'], ['Families'], ['NLI']]
[['72.93', '55.59', '26.74'], ['70.24', '47.23', '14.15']]
column
['accuracy', 'accuracy', 'accuracy']
['In-domain', 'Out-of-domain']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Binary</th> <th>Families</th> <th>NLI</th> </tr> </thead> <tbody> <tr> <td>In-domain</td> <td>72.93</td> <td>55.59</td> <td>26.74</td> </tr> <tr> <td>Out-of-domain</t...
Table 4
table_4
D18-1395
7
emnlp2018
Table 4 shows the results obtained by combining the spelling features with the grammar features (Section 3.5.2). Clearly, these two feature types reflect somewhat different phenomena, as the results are better than using any of the two alone.
[1, 1]
['Table 4 shows the results obtained by combining the spelling features with the grammar features (Section 3.5.2).', 'Clearly, these two feature types reflect somewhat different phenomena, as the results are better than using any of the two alone.']
[None, None]
1
D18-1395table_5
Results: centrality features
1
[['In-domain'], ['Out-of-domain']]
1
[['Binary'], ['Families'], ['NLI']]
[['57.92', '32.39', '5.75'], ['56.29', '30.70', '5.60']]
column
['accuracy', 'accuracy', 'accuracy']
['In-domain', 'Out-of-domain']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Binary</th> <th>Families</th> <th>NLI</th> </tr> </thead> <tbody> <tr> <td>In-domain</td> <td>57.92</td> <td>32.39</td> <td>5.75</td> </tr> <tr> <td>Out-of-domain</td...
Table 5
table_5
D18-1395
7
emnlp2018
Table 5 shows the accuracy obtained by all the centrality features (Section 3.5.4), excluding the most popular subreddits. As expected, the contribution of these features is small, and is most evident on the binary task. The signal of the native language reflected by these features is very subtle, but is nonetheless pr...
[1, 1, 1]
['Table 5 shows the accuracy obtained by all the centrality features (Section 3.5.4), excluding the most popular subreddits.', 'As expected, the contribution of these features is small, and is most evident on the binary task.', 'The signal of the native language reflected by these features is very subtle, but is noneth...
[None, ['Binary'], None]
1
D18-1396table_3
BLEU scores. "0" represents the translation results without teacher forcing during inference, and "1" represents the translation results with teacher forcing during inference. Δ represents the BLEU score improvement of teacher forcing over normal translation.
2
[['De-En', 'Left'], ['De-En', 'Right'], ['En-De', 'Left'], ['En-De', 'Right'], ['En-Zh', 'Left'], ['En-Zh', 'Right']]
2
[['left-to-right', '0'], ['left-to-right', '1'], ['left-to-right', 'Δ'], ['right-to-left', '0'], ['right-to-left', '1'], ['right-to-left', 'Δ']]
[['10.17', '10.71', '0.54', '9.41', '10.41', '1.00'], ['8.39', '9.25', '0.86', '7.83', '8.45', '0.62'], ['7.90', '9.43', '1.53', '7.11', '10.71', '3.60'], ['6.60', '8.36', '1.76', '6.45', '8.37', '1.92'], ['7.41', '9.11', '1.70', '7.01', '9.83', '2.82'], ['5.91', '8.55', '2.64', '5.77', '7.54', '1.77']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['1']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>left-to-right || 0</th> <th>left-to-right || 1</th> <th>left-to-right || Δ</th> <th>right-to-left || 0</th> <th>right-to-left || 1</th> <th>right-to-left || Δ</th> </tr> </thead> <tbody...
Table 3
table_3
D18-1396
4
emnlp2018
Same as last section, we evaluate the quality of the left and right half of the translation results generated by both the left-to-right and right-to-left models. The results are summarized in Table 3. For comparison, we also include the BLEU scores of normal translation (without teacher forcing). We have several finding...
[2, 1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 2, 1, 1, 2]
['Same as last section, we evaluate the quality of the left and right half of the translation results generated by both the left-to-right and right-to-left models.', 'The results are summarized in Table 3.', 'For comparison, we also include the BLEU scores of normal translation (without teacher forcing).', 'We have seve...
[['left-to-right', 'right-to-left'], None, ['0'], None, None, ['Left', 'Right', '0', '1'], None, None, None, ['En-Zh', 'Left', 'Right', 'left-to-right', 'Δ'], ['En-Zh', 'Left', 'Right', 'right-to-left', 'Δ'], None, ['En-De', 'Left', 'Right', 'left-to-right', '1'], None, None]
1
D18-1397table_2
Results of variance reduction of gradient estimation.
2
[['Training Strategy', 'RL'], ['Training Strategy', 'RL (baseline function)']]
1
[['En-De'], ['En-Zh'], ['Zh-En']]
[['27.23', '34.47', '24.72'], ['27.25', '34.43', '24.73']]
column
['BLEU', 'BLEU', 'BLEU']
['RL (baseline function)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>En-De</th> <th>En-Zh</th> <th>Zh-En</th> </tr> </thead> <tbody> <tr> <td>Training Strategy || RL</td> <td>27.23</td> <td>34.47</td> <td>24.72</td> </tr> <tr> <td>Trai...
Table 2
table_2
D18-1397
6
emnlp2018
Table 2 shows that the learning of baseline reward does not help RL training. This contradicts with previous observations (Ranzato et al., 2016), and seems to suggest that the variance of gradient estimation in NMT is not as large as we expected. The reason might be that the probability mass on the target-side language...
[1, 2, 2, 2]
['Table 2 shows that the learning of baseline reward does not help RL training.', 'This contradicts with previous observations (Ranzato et al., 2016), and seems to suggest that the variance of gradient estimation in NMT is not as large as we expected.', 'The reason might be that the probability mass on the target-side ...
[['RL (baseline function)'], None, None, ['RL (baseline function)']]
1
D18-1397table_3
Results with source monolingual data. “B” denotes bilingual data, “Ms” denotes source-side monolingual data, “&” denotes data combination.
2
[['[Data] (Objective)', '[B] (MLE)'], ['[Data] (Objective)', '[B] (MLE) + [B] (RL)'], ['[Data] (Objective)', '[B] (MLE) + [Ms] (RL)'], ['[Data] (Objective)', '[B & Ms] (MLE)'], ['[Data] (Objective)', '[B & Ms] (MLE) + [B & Ms] (RL)']]
1
[['Valid'], ['Test']]
[['22.32', '24.29'], ['22.87', '25.04'], ['23.03', '25.22'], ['24.31', '25.31'], ['24.58', '25.60']]
column
['BLEU', 'BLEU']
['[B] (MLE)', '[B] (MLE) + [B] (RL)', '[B] (MLE) + [Ms] (RL)', '[B & Ms] (MLE)', '[B & Ms] (MLE) + [B & Ms] (RL)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Valid</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>[Data] (Objective) || [B] (MLE)</td> <td>22.32</td> <td>24.29</td> </tr> <tr> <td>[Data] (Objective) || [B] (MLE) + [B]...
Table 3
table_3
D18-1397
7
emnlp2018
From Table 3 and 4, we have several observations. First, monolingual data helps RL training, improving BLEU score from 25.04 to 25.22 (p < 0.05) in Table 3. Second, when we only add monolingual data for RL training, the model achieves similar performance compared to MLE training with bilingual and monolingual data (e.g...
[1, 1, 1]
['From Table 3 and 4, we have several observations.', 'First, monolingual data helps RL training, improving BLEU score from 25.04 to 25.22 (p < 0.05) in Table 3.', 'Second, when we only add monolingual data for RL training, the model achieves similar performance compared to MLE training with bilingual and monolingual d...
[None, ['[B] (MLE) + [Ms] (RL)', '[B] (MLE) + [B] (RL)', 'Test'], ['[B] (MLE) + [Ms] (RL)', '[B & Ms] (MLE)']]
1
D18-1397table_5
Results of sequential approach for monolingual data. “B” denotes bilingual data, “Ms” denotes source-side monolingual data and “Mt” denotes targetside monolingual data, “&” denotes data combination.
2
[['[Data] (Objective)', '[B & Ms] (MLE)'], ['[Data] (Objective)', '[B & Ms] (MLE) + [B & Ms] (RL)'], ['[Data] (Objective)', '[B & Ms] (MLE) + [Mt] (RL)'], ['[Data] (Objective)', '[B & Mt] (MLE)'], ['[Data] (Objective)', '[B & Mt] (MLE) + [B & Mt] (RL)'], ['[Data] (Objective)', '[B & Mt] (MLE) + [Ms] (RL)']]
1
[['Valid'], ['Test']]
[['24.31', '25.31'], ['24.58', '25.60'], ['24.61', '25.72'], ['24.14', '25.24'], ['24.41', '25.58'], ['24.75', '25.92']]
column
['BLEU', 'BLEU']
['[B & Mt] (MLE)', '[B & Mt] (MLE) + [Ms] (RL)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Valid</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>[Data] (Objective) || [B &amp; Ms] (MLE)</td> <td>24.31</td> <td>25.31</td> </tr> <tr> <td>[Data] (Objective) || [B &am...
Table 5
table_5
D18-1397
7
emnlp2018
With both Source-Side and Target-Side Monolingual Data. We have two approaches to use both source-side and target-side monolingual data, as described in subsection 4.3. The results are reported in Table 5. From Table 5, we can observe that the sequential training of monolingual data can benefit the model performance. T...
[2, 2, 1, 1, 1]
['With both Source-Side and Target-Side Monolingual Data.', 'We have two approaches to use both source-side and target-side monolingual data, as described in subsection 4.3.', 'The results are reported in Table 5.', 'From Table 5, we can observe that the sequential training of monolingual data can benefit the model per...
[None, None, None, None, ['[B & Mt] (MLE)', '[B & Mt] (MLE) + [Ms] (RL)', 'Test']]
1
D18-1399table_3
Results of the proposed method in comparison to supervised systems (BLEU). Transformer results reported by Vaswani et al. (2017). SMT variants are incremental (e.g. 2nd includes 1st). Refer to the text for more details.
2
[['Supervised', 'NMT (transformer)'], ['Supervised', 'WMT best'], ['Supervised', 'Supervised SMT (europarl)'], ['Supervised', '+ w/o lexical reord.'], ['Supervised', '+ constrained vocab.'], ['Supervised', '+ unsup. tuning'], ['Unsup.', 'Proposed system']]
2
[['WMT-14', 'FR-EN'], ['WMT-14', 'EN-FR'], ['WMT-14', 'DE-EN'], ['WMT-14', 'EN-DE'], ['WMT-16', 'DE-EN'], ['WMT-16', 'EN-DE']]
[['-', '41.8', '-', '28.4', '-', '-'], ['35.0', '35.8', '29.0', '20.6', '40.2', '34.2'], ['30.61', '30.82', '20.83', '16.60', '26.38', '22.12'], ['30.54', '30.33', '20.37', '16.34', '25.99', '22.20'], ['30.04', '30.10', '19.91', '16.32', '25.66', '21.53'], ['29.32', '29.46', '17.75', '15.45', '23.35', '19.86'], ['25.87...
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['Proposed system']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WMT-14 || FR-EN</th> <th>WMT-14 || EN-FR</th> <th>WMT-14 || DE-EN</th> <th>WMT-14 || EN-DE</th> <th>WMT-16 || DE-EN</th> <th>WMT-16 || EN-DE</th> </tr> </thead> <tbody> <tr> <td...
Table 3
table_3
D18-1399
7
emnlp2018
6.3 Comparison with supervised systems. So as to put our results into perspective, Table 3 comprises the results of different supervised methods in the same test sets. More concretely, we report the results of the Transformer (Vaswani et al., 2017), an NMT system based on self-attention that is the current state-of-the...
[2, 1, 2, 2, 1, 2, 2, 2, 1, 1, 1, 2]
['6.3 Comparison with supervised systems.', 'So as to put our results into perspective, Table 3 comprises the results of different supervised methods in the same test sets.', 'More concretely, we report the results of the Transformer (Vaswani et al., 2017), an NMT system based on self-attention that is the current stat...
[None, None, ['NMT (transformer)'], ['+ w/o lexical reord.', '+ constrained vocab.', '+ unsup. tuning'], ['Proposed system', 'Supervised SMT (europarl)'], ['Proposed system', 'Supervised SMT (europarl)'], None, ['Proposed system'], ['Supervised SMT (europarl)', '+ w/o lexical reord.', '+ constrained vocab.', '+ unsup. ...
1
D18-1403table_4
Experimental results for the identification of aspect segments (top) and the retrieval of salient segments (bottom) on OPOSUM’s six product domains and overall (AVG).
2
[['Aspect Extraction (F1)', 'Majority'], ['Aspect Extraction (F1)', 'ABAE'], ['Aspect Extraction (F1)', 'ABAEinit'], ['Aspect Extraction (F1)', 'MATE'], ['Aspect Extraction (F1)', 'MATE+MT'], ['Salience (MAP/P@5)', 'MILNET'], ['Salience (MAP/P@5)', 'ABAEinit'], ['Salience (MAP/P@5)', 'MATE'], ['Salience (MAP/P@5)', 'MA...
1
[['L. Bags'], ['B/T H/S'], ['Boots'], ['Keyb/s'], ['TVs'], ['Vac/s'], ['AVG']]
[['37.9', '39.8', '37.1', '43.2', '41.7', '41.6', '40.2'], ['38.1', '37.6', '35.2', '38.6', '39.5', '38.1', '37.9'], ['41.6', '48.5', '41.2', '41.3', '45.7', '40.6', '43.2'], ['46.2', '52.2', '45.6', '43.5', '48.8', '42.3', '46.4'], ['48.6', '54.5', '46.4', '45.3', '51.8', '47.7', '49.1'], ['21.8 / 40.0', '19.8 / 36.7'...
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['MATE', 'MATE+MT', 'MILNET+ABAEinit', 'MILNET+MATE', 'MILNET+MATE+MT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>L. Bags</th> <th>B/T H/S</th> <th>Boots</th> <th>Keyb/s</th> <th>TVs</th> <th>Vac/s</th> <th>AVG</th> </tr> </thead> <tbody> <tr> <td>Aspect Extraction (F1) || Majority</td...
Table 4
table_4
D18-1403
7
emnlp2018
Table 4 (top) reports the results using micro-averaged F1. Our models outperform both variants of ABAE across domains. ABAEinit improves upon the vanilla model, affirming that informed aspect initialization can facilitate the task. The richer multi-seed representation of MATE, however, helps our model achieve a 3.2% in...
[1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 2]
['Table 4 (top) reports the results using micro-averaged F1.', 'Our models outperform both variants of ABAE across domains.', 'ABAEinit improves upon the vanilla model, affirming that informed aspect initialization can facilitate the task.', 'The richer multi-seed representation of MATE, however, helps our model achiev...
[['Aspect Extraction (F1)'], ['ABAE', 'ABAEinit', 'MATE'], ['ABAE', 'ABAEinit'], ['ABAEinit', 'MATE', 'AVG'], ['MATE', 'MATE+MT', 'AVG'], ['Salience (MAP/P@5)'], ['MILNET+ABAEinit', 'MILNET+MATE', 'MILNET+MATE+MT'], None, ['MILNET+MATE+MT', 'MILNET+ABAEinit', 'MILNET+MATE', 'AVG'], ['MILNET'], ['MILNET+MATE+MT']]
1
D18-1403table_5
Summarization results on OPOSUM.
2
[['Summarization', 'Random'], ['Summarization', 'Lead'], ['Summarization', 'SumBasic'], ['Summarization', 'LexRank'], ['Summarization', 'Opinosis'], ['Summarization', 'Opinosis+MATE+MT'], ['Summarization', 'MILNET+MATE+MT'], ['Summarization', 'MILNET+MATE+MT+RD'], ['Summarization', 'Inter-annotator Agreement']]
1
[['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L']]
[['35.1', '11.3', '34.3'], ['35.5', '15.2', '34.8'], ['34.0', '11.2', '32.6'], ['37.7', '14.1', '36.6'], ['36.8', '14.3', '35.7'], ['38.7', '15.8', '37.4'], ['43.5', '21.7', '42.8'], ['44.1', '21.8', '43.3'], ['54.7', '36.6', '53.9']]
column
['ROUGE-1', 'ROUGE-2', 'ROUGE-L']
['MILNET+MATE+MT', 'MILNET+MATE+MT+RD']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Summarization || Random</td> <td>35.1</td> <td>11.3</td> <td>34.3</td> </tr> <tr> <td>S...
Table 5
table_5
D18-1403
8
emnlp2018
Table 5 presents ROUGE-1, ROUGE-2 and ROUGE-L F1 scores, averaged across domains. Our model (MILNET+MATE+MT) significantly outperforms all comparison systems (p < 0.05; paired bootstrap resampling; Koehn 2004), whilst using a redundancy filter slightly improves performance. Assisting Opinosis with aspect predictions is...
[1, 1, 1]
['Table 5 presents ROUGE-1, ROUGE-2 and ROUGE-L F1 scores, averaged across domains.', 'Our model (MILNET+MATE+MT) significantly outperforms all comparison systems (p < 0.05; paired bootstrap resampling; Koehn 2004), whilst using a redundancy filter slightly improves performance.', 'Assisting Opinosis with aspect predic...
[['ROUGE-1', 'ROUGE-2', 'ROUGE-L'], ['MILNET+MATE+MT'], ['MILNET+MATE+MT', 'Opinosis+MATE+MT']]
1
D18-1410table_2
Substitution Ranking evaluation on English Lexical Simplification shared-task of SemEval 2012. P@1 and Pearson correlation of our neural readability ranking (NRR) model compared to the state-of-the-art neural model (Paetzold and Specia, 2017) and other methods. ∗ indicates statistical significance (p < 0.05) compared t...
1
[['Biran et al. (2011)'], ['Jauhar & Specia (2012)'], ['Kajiwara et al. (2013)'], ['Horn et al. (2014)'], ['Glavaš & Štajner (2015)'], ['Boundary Ranker'], ['Paetzold & Specia (2017)'], ['NRRall'], ['NRRall+binning'], ['NRRall+binning+WC']]
1
[['P@1'], ['Pearson']]
[['51.3', '0.505'], ['60.2', '0.575'], ['60.4', '0.649'], ['63.9', '0.673'], ['63.2', '0.644'], ['65.3', '0.677'], ['65.6', '0.679'], ['65.4', '0.682'], ['66.6', '0.702*'], ['67.3*', '0.714*']]
column
['P@1', 'Pearson']
['NRRall', 'NRRall+binning', 'NRRall+binning+WC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P@1</th> <th>Pearson</th> </tr> </thead> <tbody> <tr> <td>Biran et al. (2011)</td> <td>51.3</td> <td>0.505</td> </tr> <tr> <td>Jauhar &amp; Specia (2012)</td> <td>60.2</td...
Table 2
table_2
D18-1410
5
emnlp2018
Results. Table 2 compares the performances of our NRR model to the state-of-the-art results reported by Paetzold and Specia (2017). We use precision of the simplest candidate (P@1) and Pearson correlation to measure performance. P@1 is equivalent to TRank (Specia et al., 2012), the official metric for the SemEval 2012 ...
[2, 1, 2, 2, 2, 2, 1, 2, 1]
['Results.', 'Table 2 compares the performances of our NRR model to the state-of-the-art results reported by Paetzold and Specia (2017).', 'We use precision of the simplest candidate (P@1) and Pearson correlation to measure performance.', 'P@1 is equivalent to TRank (Specia et al., 2012), the official metric for the Se...
[None, None, ['P@1', 'Pearson'], ['P@1'], ['Pearson'], ['NRRall', 'NRRall+binning', 'NRRall+binning+WC'], ['NRRall+binning+WC', 'P@1', 'Pearson'], None, ['NRRall+binning', 'NRRall+binning+WC']]
1
D18-1410table_4
Cross-validation accuracy and precision of our neural readability ranking (NRR) model used to create SimplePPDB++, in comparison to the SimplePPDB and other baselines. P+1 stands for the precision of ‘simplifying’ paraphrase rules and P−1 for the precision of ‘complicating’ rules. * indicates statistical significance (...
1
[['Google Ngram Frequency'], ['Number of Syllables'], ['Character & Word Length'], ['W2V'], ['SimplePPDB'], ['NRRall'], ['NRRall+binning'], ['NRRall+binning+WC']]
1
[['Acc.'], ['P+1'], ['P-1']]
[['49.4', '53.7', '54.0'], ['50.1', '53.8', '53.3'], ['56.2', '55.7', '56.1'], ['60.4', '54.9', '53.1'], ['62.1', '57.6', '57.8'], ['59.4', '61.8', '57.7'], ['64.1', '62.1', '59.8'], ['65.3*', '65.0*', '61.8*']]
column
['Acc.', 'P+1', 'P-1']
['NRRall', 'NRRall+binning', 'NRRall+binning+WC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>P+1</th> <th>P-1</th> </tr> </thead> <tbody> <tr> <td>Google Ngram Frequency</td> <td>49.4</td> <td>53.7</td> <td>54.0</td> </tr> <tr> <td>Number of Syl...
Table 4
table_4
D18-1410
6
emnlp2018
Results. Following the evaluation setup in previous work (Pavlick and Callison-Burch, 2016), we compare accuracy and precision by 10-fold cross-validation. Folds are constructed in such a way that the training and test vocabularies are disjoint. Table 4 shows the performance of our model compared to SimplePPDB and othe...
[2, 2, 2, 1, 2, 2, 1, 1, 1]
['Results.', 'Following the evaluation setup in previous work (Pavlick and Callison-Burch, 2016), we compare accuracy and precision by 10-fold cross-validation.', 'Folds are constructed in such a way that the training and test vocabularies are disjoint.', 'Table 4 shows the performance of our model compared to SimplePP...
[None, ['Acc.', 'P+1', 'P-1'], None, None, ['NRRall'], ['SimplePPDB'], ['NRRall', 'NRRall+binning', 'Acc.', 'P+1', 'P-1'], ['NRRall+binning+WC', 'SimplePPDB'], ['NRRall+binning+WC', 'SimplePPDB', 'Acc.', 'P+1', 'P-1']]
1
D18-1410table_5
Substitution Generation evaluation with Mean Average Precision, Precision@1 and the average number of paraphrases generated per target for each method. n is the number of target complex words/phrases for which the model generated > 0 candidates. Kauchak† has an advantage on MAP because it generates the least number of ...
1
[['Glavas'], ['WordNet'], ['Kauchak'], ['SimplePPDB'], ['SimplePPDB++']]
1
[['#PPs'], ['MAP'], ['P@1']]
[['-', '22.8', '13.5'], ['6.63', '62.2', '50.6'], ['4.39', '76.4†', '68.9'], ['8.77', '67.8', '78.0'], ['9.52', '69.1', '80.2']]
column
['#PPs', 'MAP', 'P@1']
['SimplePPDB++']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>#PPs</th> <th>MAP</th> <th>P@1</th> </tr> </thead> <tbody> <tr> <td>Glavas</td> <td>-</td> <td>22.8</td> <td>13.5</td> </tr> <tr> <td>WordNet</td> <td>6.63</td> ...
Table 5
table_5
D18-1410
7
emnlp2018
Results. Table 5 shows the comparison of SimplePPDB and SimplePPDB++ on the number of substitutions generated for each target, the mean average precision and precision@1 for the final ranked list of candidate substitutions. This is a fair and direct comparison between SimplePPDB++ and SimplePPDB, as both methods have a...
[2, 1, 1, 1, 2]
['Results.', 'Table 5 shows the comparison of SimplePPDB and SimplePPDB++ on the number of substitutions generated for each target, the mean average precision and precision@1 for the final ranked list of candidate substitutions.', 'This is a fair and direct comparison between SimplePPDB++ and SimplePPDB, as both method...
[None, ['SimplePPDB', 'SimplePPDB++', '#PPs', 'MAP', 'P@1'], ['SimplePPDB++', 'SimplePPDB'], ['SimplePPDB++'], None]
1
D18-1410table_6
Evaluation on two datasets for English complex word identification. Our approaches that utilize the word-complexity lexicon (WC) improve upon the nearest centroid (Yimam et al., 2017) and SV000gg (Paetzold and Specia, 2016b) systems. The best performance figure of each column is denoted in bold typeface and the second...
1
[['Length'], ['Senses'], ['SimpleWiki'], ['NearestCentroid'], ['SV000gg'], ['WC-only'], ['NearestCentroid+WC'], ['SV000gg+WC']]
2
[['CWI SemEval 2016', 'G-score'], ['CWI SemEval 2016', 'F-score'], ['CWI SemEval 2016', 'Accuracy'], ['CWIG3G2 2018', 'G-score'], ['CWIG3G2 2018', 'F-score'], ['CWIG3G2 2018', 'Accuracy']]
[['47.8', '10.7', '33.2', '70.8', '65.9', '67.7'], ['57.9', '12.5', '43.6', '67.7', '62.3', '54.1'], ['69.7', '16.2', '58.3', '73.1', '66.3', '61.6'], ['66.1', '14.8', '53.6', '75.1', '66.6', '76.7'], ['77.3', '24.3', '77.6', '74.9', '73.8', '78.7'], ['68.5', '30.5', '87.7', '71.1', '67.5', '69.8'], ['70.2', '16.6', '6...
column
['G-score', 'F-score', 'Accuracy', 'G-score', 'F-score', 'Accuracy']
['WC-only', 'NearestCentroid+WC', 'SV000gg+WC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CWI SemEval 2016 || G-score</th> <th>CWI SemEval 2016 || F-score</th> <th>CWI SemEval 2016 || Accuracy</th> <th>CWIG3G2 2018 || G-score</th> <th>CWIG3G2 2018 || F-score</th> <th>CWIG3G2 2018...
Table 6
table_6
D18-1410
8
emnlp2018
Results. We compare our enhanced approaches (SV000gg+WC and NC+WC) and lexicon only approach (WC-only), with the state-of-the-art and baseline threshold-based methods. For measuring performance, we use F-score and accuracy as well as G-score, the harmonic mean of accuracy and recall. G-score is the official metric o...
[2, 1, 2, 2, 1, 2, 1, 1]
['Results.', 'We compare our enhanced approaches (SV000gg+WC and NC+WC) and lexicon only approach (WC-only), with the state-of-the-art and baseline threshold-based methods.', 'For measuring performance, we use F-score and accuracy as well as G-score, the harmonic mean of accuracy and recall.', 'G-score is the offici...
[None, ['WC-only', 'NearestCentroid+WC', 'SV000gg+WC'], ['G-score', 'F-score', 'Accuracy'], ['G-score'], ['NearestCentroid+WC', 'SV000gg+WC'], None, ['WC-only', 'CWIG3G2 2018'], ['WC-only', 'CWI SemEval 2016', 'F-score', 'Accuracy']]
1
D18-1412table_2
PropBank sSRL results, using gold predicates, on CoNLL 2012 test. For fair comparison, we show only non-ensembled models.
2
[['Model', 'Zhou and Xu (2015)'], ['Model', 'He et al. (2017)'], ['Model', 'He et al. (2018a)'], ['Model', 'Tan et al. (2018)'], ['Model', 'Semi-CRF baseline'], ['Model', '+ common nonterminals']]
1
[['Prec.'], ['Rec.'], ['F1']]
[['-', '-', '81.3'], ['81.7', '81.6', '81.7'], ['83.9', '73.7', '82.1'], ['81.9', '83.6', '82.7'], ['84.8', '81.2', '83.0'], ['85.1', '82.6', '83.8']]
column
['Prec. ', 'Rec.', 'F1']
['Semi-CRF baseline', '+ common nonterminals']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Rec.</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Zhou and Xu (2015)</td> <td>-</td> <td>-</td> <td>81.3</td> </tr> <tr> <td>Model || He e...
Table 2
table_2
D18-1412
7
emnlp2018
PropBank SRL. We use the OntoNotes data from the CoNLL shared task in 2012 (Pradhan et al., 2013) for PropBank SRL. Table 2 reports results using gold predicates. Recent competitive systems for PropBank SRL follow the approach of Zhou and Xu (2015), employing deep architectures, and forgoing the use of any syntax. He e...
[2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2]
['PropBank SRL.', 'We use the OntoNotes data from the CoNLL shared task in 2012 (Pradhan et al., 2013) for Propbank SRL.', 'Table 2 reports results using gold predicates.', 'Recent competitive systems for PropBank SRL follow the approach of Zhou and Xu (2015), employing deep architectures, and forgoing the use of any s...
[None, None, None, ['Zhou and Xu (2015)'], ['He et al. (2017)'], ['Tan et al. (2018)'], ['He et al. (2018a)'], ['Semi-CRF baseline'], ['Semi-CRF baseline', '+ common nonterminals', 'F1'], ['+ common nonterminals'], ['He et al. (2018a)'], None]
1
D18-1414table_9
F-scores of the baseline and the both-retrained models relative to role types on the two data sets. We only list results of the PCFGLA-parser-based system.
2
[['L2', 'Baseline'], ['L2', 'Both retrained'], ['L1', 'Baseline'], ['L1', 'Both retrained']]
1
[['A0'], ['A1'], ['A2'], ['AM']]
[['67.95', '71.21', '51.43', '70.20'], ['70.62', '74.75', '64.29', '72.22'], ['69.49', '79.78', '61.84', '71.74'], ['73.15', '80.90', '63.35', '73.02']]
column
['F-scores', 'F-scores', 'F-scores', 'F-scores']
['Both retrained']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>A0</th> <th>A1</th> <th>A2</th> <th>AM</th> </tr> </thead> <tbody> <tr> <td>L2 || Baseline</td> <td>67.95</td> <td>71.21</td> <td>51.43</td> <td>70.20</td> </tr> ...
Table 9
table_9
D18-1414
9
emnlp2018
Table 9 further shows F-scores for the baseline and the both-retrained model relative to each role type in detail. Given that the F-scores for both models are equal to 0 on A3 and A4, we just omit this part. From the figure we can observe that all the semantic roles achieve significant improvements in performance.
[1, 2, 1]
['Table 9 further shows F-scores for the baseline and the both-retrained model relative to each role type in detail.', 'Given that the F-scores for both models are equal to 0 on A3 and A4, we just omit this part.', 'From the figure we can observe that, all the semantic roles achieve significant improvements in performa...
[['Baseline', 'Both retrained'], ['Baseline', 'Both retrained'], ['Both retrained']]
1
D18-1415table_1
Performance of the original system when interacting with different user simulators. LU error means simulating slot errors and intent errors in different rates. Succ.: success rate, Turn: average turns, Reward: average reward.
2
[['LU Error Rate', '0.00'], ['LU Error Rate', '0.05'], ['LU Error Rate', '0.10'], ['LU Error Rate', '0.20']]
2
[['Sim1', 'Succ.'], ['Sim1', 'Turn'], ['Sim1', 'Reward'], ['Sim1', 'Satis.'], ['Sim2', 'Succ.'], ['Sim2', 'Turn'], ['Sim2', 'Reward'], ['Sim2', 'Satis.']]
[['0.962', '13.6', '3.94', '-', '0.901', '13.2', '2.95', '0.57'], ['0.937', '13.7', '3.41', '-', '0.877', '14.4', '2.41', '0.48'], ['0.910', '14.3', '2.65', '-', '0.841', '13.9', '1.41', '0.47'], ['0.845', '15.2', '0.58', '-', '0.784', '14.7', '0.01', '0.47']]
column
['Succ.', 'Turn', 'Reward', 'Satis.', 'Succ.', 'Turn', 'Reward', 'Satis.']
['Sim1', 'Sim2']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sim1 || Succ.</th> <th>Sim1 || Turn</th> <th>Sim1 || Reward</th> <th>Sim1 || Satis.</th> <th>Sim2 || Succ.</th> <th>Sim2 || Turn</th> <th>Sim2 || Reward</th> <th>Sim2 || Satis.</th...
Table 1
table_1
D18-1415
6
emnlp2018
After obtaining the original system S1, we deploy it to interact with Sim1 and Sim2 respectively, under different LU error rates (Li et al., 2017a). In each condition, we simulate 3200 episodes to obtain the performance. Table 1 shows the details of the test performance. Table 2 shows the statistics of turns when S1 in...
[2, 2, 1, 0, 1]
['After obtaining the original system S1, we deploy it to interact with Sim1 and Sim2 respectively, under different LU error rates (Li et al., 2017a).', 'In each condition, we simulate 3200 episodes to obtain the performance.', 'Table 1 shows the details of the test performance.', 'Table 2 shows the statistics of turns...
[['Sim1', 'Sim2'], None, None, None, ['Succ.', 'Reward', 'Sim1']]
1
D18-1417table_1
Results of independent training for slot filling in terms of F1-score.
2
[['Methods', 'CRF (Mesnil et al. 2013)'], ['Methods', 'simple RNN (Yao et al. 2013)'], ['Methods', 'CNN-CRF (Xu and Sarikaya, 2013)'], ['Methods', 'LSTM (Yao et al. 2013)'], ['Methods', 'RNN-SOP (Liu and Lane 2015)'], ['Methods', 'Deep LSTM (Yao et al. 2013)'], ['Methods', 'RNN-EM (Peng et al. 2015)'], ['Methods', 'Bi-...
1
[['F1-score']]
[['92.94'], ['94.11'], ['94.35'], ['94.85'], ['84.89'], ['95.08'], ['95.25'], ['95.47'], ['95.66'], ['95.75'], ['95.79'], ['96.35']]
column
['F1-score']
['Our Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1-score</th> </tr> </thead> <tbody> <tr> <td>Methods || CRF (Mesnil et al. 2013)</td> <td>92.94</td> </tr> <tr> <td>Methods || simple RNN (Yao et al. 2013)</td> <td>94.11</td> <...
Table 1
table_1
D18-1417
7
emnlp2018
4.4 Independent Learning. The results of separate training for slot filling and intent detection are reported in Table 1 and Table 2 respectively. On the independent slot filling task, we fixed the intent information as the ground truth labels in the dataset. But on the independent intent detection task, there is no in...
[2, 1, 0, 2, 1, 1, 2, 2, 2, 2]
['4.4 Independent Learning.', 'The results of separate training for slot filling and intent detection are reported in Table 1 and Table 2 respectively.', 'On the independent slot filling task, we fixed the intent information as the ground truth labels in the dataset.', 'But on the independent intent detection task, the...
[None, None, None, None, ['Our Model', 'F1-score'], ['Our Model', 'F1-score'], ['Attention BiRNN (Liu and Lane 2016a)'], ['Our Model'], ['Our Model'], ['Our Model']]
1
D18-1417table_4
Feature ablation comparison of our proposed model on ATIS. Slot filling and intent detection results are shown in each row after we exclude each feature from the full architecture
2
[['Methods', 'W/O char-embedding'], ['Methods', 'W/O self-attention'], ['Methods', 'W/O attention-gating'], ['Methods', 'Full Model']]
1
[['F1-Score'], ['Error(%)']]
[['96.30', '1.23'], ['96.26', '1.34'], ['96.25', '1.46'], ['96.52', '1.23']]
column
['F1-Score', 'Error(%)']
['W/O char-embedding', 'W/O self-attention', 'W/O attention-gating', 'Full Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1-Score</th> <th>Error(%)</th> </tr> </thead> <tbody> <tr> <td>Methods || W/O char-embedding</td> <td>96.30</td> <td>1.23</td> </tr> <tr> <td>Methods || W/O self-attention</td...
Table 4
table_4
D18-1417
8
emnlp2018
Table 4 shows the joint learning performance of our model on ATIS data set by removing one module at a time. We find that all variants of our model perform well based on our gate mechanism. As listed in the table, all features contribute to both slot filling and intent classification tasks. If we remove the self-attenti...
[1, 1, 1, 1, 2, 2, 1, 2]
['Table 4 shows the joint learning performance of our model on ATIS data set by removing one module at a time.', 'We find that all variants of our model perform well based on our gate mechanism.', 'As listed in the table, all features contribute to both slot filling and intent classification task.', 'If we remove the s...
[None, ['W/O char-embedding', 'W/O self-attention', 'W/O attention-gating', 'Full Model'], None, ['W/O self-attention'], ['W/O self-attention'], ['W/O self-attention'], ['Full Model', 'W/O char-embedding', 'F1-Score'], ['W/O char-embedding']]
1
D18-1418table_2
Test results for various models on permuted-bAbI dialog task. Results (accuracy %) are given in the standard setup and OOV setup; and both with and without match-type features.
2
[['Model', 'memN2N'], ['Model', 'memN2N + all-answers'], ['Model', 'Mask-memN2N'], ['Model', 'OOV: memN2N'], ['Model', 'OOV: memN2N + all-answers'], ['Model', 'OOV: Mask-memN2N']]
2
[['no match-type', 'Per-turn'], ['no match-type', 'Per-dialog'], ['+ match-type', 'Per-turn'], ['+ match-type', 'Per-dialog']]
[['91.8', '22', '93.3', '30.3'], ['88.5', '14.9', '92.5', '26.4'], ['93.4', '32', '95.2', '47.3'], ['63.4', '0.5', '78.1', '0.6'], ['60.8', '0.5', '74.9', '0.6'], ['63.0', '0.5', '80.1', '1']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Mask-memN2N', 'OOV: Mask-memN2N']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>no match-type || Per-turn</th> <th>no match-type || Per-dialog</th> <th>+ match-type || Per-turn</th> <th>+ match-type || Per-dialog</th> </tr> </thead> <tbody> <tr> <td>Model || memN2N</...
Table 2
table_2
D18-1418
7
emnlp2018
6.3 Model comparison. Our results for our proposed model and comparison with other models for permuted-bAbI dialog task are given in Table 2. Table 2 follows the same format as Table 1, except we show results for different models on permuted-bAbI dialog task. We show results for three models memN2N, memN2N + all-answer...
[2, 1, 2, 1, 2, 2, 2, 1, 2, 1, 1, 1, 1, 1, 1, 2, 2]
['6.3 Model comparison.', 'Our results for our proposed model and comparison with other models for permuted-bAbI dialog task are given in Table 2.', 'Table 2 follows the same format as Table 1, except we show results for different models on permuted-bAbI dialog task.', 'We show results for three models memN2N, memN2N +...
[None, ['Mask-memN2N'], None, ['memN2N', 'memN2N + all-answers', 'Mask-memN2N'], ['memN2N + all-answers'], ['memN2N + all-answers'], ['memN2N + all-answers'], ['memN2N + all-answers', 'memN2N'], None, ['Mask-memN2N', 'memN2N', 'memN2N + all-answers'], ['Mask-memN2N', 'memN2N', 'Per-dialog', 'no match-type'], ['Mask-mem...
1
D18-1418table_3
Ablation study of our proposed model on permuted-bAbI dialog task. Results (accuracy %) are given in the standard setup, without match-type features.
2
[['Model', 'Mask-memN2N'], ['Model', 'Mask-memN2N (w/o entropy)'], ['Model', 'Mask-memN2N (w/o L2 mask pre-training)'], ['Model', 'Mask-memN2N (Reinforcement learning phase only)']]
1
[['Per-turn'], ['Per-dialog']]
[['93.4', '32'], ['92.1', '24.6'], ['85.8', '2.2'], ['16.0', '0']]
column
['accuracy', 'accuracy']
['Mask-memN2N', 'Mask-memN2N (w/o entropy)', 'Mask-memN2N (w/o L2 mask pre-training)', 'Mask-memN2N (Reinforcement learning phase only)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Per-turn</th> <th>Per-dialog</th> </tr> </thead> <tbody> <tr> <td>Model || Mask-memN2N</td> <td>93.4</td> <td>32</td> </tr> <tr> <td>Model || Mask-memN2N (w/o entropy)</td> ...
Table 3
table_3
D18-1418
8
emnlp2018
6.4 Ablation study. Here, we study the different parts of our model for better understanding of how the different parts influence the overall model performance. Our results for ablation study are given in Table 3. We show results for Mask-memN2N in various settings - a) without entropy, b) without pre-training mask c) ...
[2, 2, 1, 1, 2, 1, 2, 2, 2, 1, 2, 2, 2]
['6.4 Ablation study.', 'Here, we study the different parts of our model for better understanding of how the different parts influence the overall model performance.', 'Our results for ablation study are given in Table 3.', 'We show results for Mask-memN2N in various settings - a) without entropy, b) without pre-traini...
[None, None, None, ['Mask-memN2N (w/o entropy)', 'Mask-memN2N (w/o L2 mask pre-training)', 'Mask-memN2N (Reinforcement learning phase only)'], None, ['Mask-memN2N (w/o L2 mask pre-training)'], None, None, None, ['Mask-memN2N', 'Mask-memN2N (Reinforcement learning phase only)'], ['Mask-memN2N (Reinforcement learning pha...
1
D18-1421table_2
Performances on Quora datasets.
2
[['Models', 'Seq2Seq'], ['Models', 'Residual LSTM'], ['Models', 'VAE-SVG-eq'], ['Models', 'Pointer-generator'], ['Models', 'RL-ROUGE'], ['Models', 'RbM-SL (ours)'], ['Models', 'RbM-IRL (ours)']]
2
[['Quora-I', 'ROUGE-1'], ['Quora-I', 'ROUGE-2'], ['Quora-I', 'BLEU'], ['Quora-I', 'METEOR'], ['Quora-II', 'ROUGE-1'], ['Quora-II', 'ROUGE-2'], ['Quora-II', 'BLEU'], ['Quora-II', 'METEOR']]
[['58.77', '31.47', '36.55', '26.28', '47.22', '20.72', '26.06', '20.35'], ['59.21', '32.43', '37.38', '28.17', '48.55', '22.48', '27.32', '22.37'], ['-', '-', '-', '25.50', '-', '-', '-', '22.20'], ['61.96', '36.07', '40.55', '30.21', '51.98', '25.16', '30.01', '24.31'], ['63.35', '37.33', '41.83', '30.96', '54.50', '...
column
['ROUGE-1', 'ROUGE-2', 'BLEU', 'METEOR', 'ROUGE-1', 'ROUGE-2', 'BLEU', 'METEOR']
['RbM-SL (ours)', 'RbM-IRL (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Quora-I || ROUGE-1</th> <th>Quora-I || ROUGE-2</th> <th>Quora-I || BLEU</th> <th>Quora-I || METEOR</th> <th>Quora-II || ROUGE-1</th> <th>Quora-II || ROUGE-2</th> ...
Table 2
table_2
D18-1421
7
emnlp2018
Automatic evaluation. Table 2 shows the performances of the models on Quora datasets. In both settings, we find that the proposed RbM-SL and RbM-IRL models outperform the baseline models in terms of all the evaluation measures. Particularly in Quora-II, RbM-SL and RbM-IRL make significant improvements over the baseline...
[2, 1, 1, 1, 1, 2, 2, 2]
['Automatic evaluation.', 'Table 2 shows the performances of the models on Quora datasets.', 'In both settings, we find that the proposed RbM-SL and RbM-IRL models outperform the baseline models in terms of all the evaluation measures.', 'Particularly in Quora-II, RbM-SL and RbM-IRL make significant improvements over t...
[None, None, ['RbM-SL (ours)', 'RbM-IRL (ours)'], ['Quora-II', 'RbM-SL (ours)', 'RbM-IRL (ours)'], ['RbM-SL (ours)', 'RbM-IRL (ours)'], None, None, ['RbM-SL (ours)']]
1
D18-1421table_4
Human evaluation on Quora datasets.
2
[['Models', 'Pointer-generator'], ['Models', 'RL-ROUGE'], ['Models', 'RbM-SL (ours)'], ['Models', 'RbM-IRL (ours)'], ['Models', 'Reference']]
2
[['Quora-I', 'Relevance'], ['Quora-I', 'Fluency'], [' Quora-II', 'Relevance'], [' Quora-II', 'Fluency']]
[['3.23', '4.55', '2.34', '2.96'], ['3.56', '4.61', '2.58', '3.14'], ['4.08', '4.67', '3.20', '3.48'], ['4.07', '4.69', '2.80', '3.53'], ['4.69', '4.95', '4.68', '4.90']]
column
['Relevance', 'Fluency', 'Relevance', 'Fluency']
['RbM-SL (ours)', 'RbM-IRL (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Quora-I || Relevance</th> <th>Quora-I || Fluency</th> <th>Quora-II || Relevance</th> <th>Quora-II || Fluency</th> </tr> </thead> <tbody> <tr> <td>Models || Pointer-generator</td> <td...
Table 4
table_4
D18-1421
7
emnlp2018
Table 4 demonstrates the average ratings for each model, including the ground-truth references. Our models of RbM-SL and RbM-IRL get better scores in terms of relevance and fluency than the baseline models, and their differences are statistically significant (paired t-test, p-value < 0.01). We note that in human evalua...
[1, 1, 1]
['Table 4 demonstrates the average ratings for each model, including the ground-truth references.', 'Our models of RbM-SL and RbM-IRL get better scores in terms of relevance and fluency than the baseline models, and their differences are statistically significant (paired t-test, p-value < 0.01).', 'We note that in huma...
[None, ['RbM-SL (ours)', 'RbM-IRL (ours)', 'Relevance', 'Fluency'], ['RbM-SL (ours)', 'RbM-IRL (ours)', 'Relevance', 'Fluency']]
1
D18-1424table_1
Performance of our models on Split1 with both sentence-level input and paragraph-level input. Sen. means sentence, while Par. means paragraph.
2
[['Model', 's2s'], ['Model', 's2s-a'], ['Model', 's2s-a-at'], ['Model', 's2s-a-at-cp'], ['Model', 's2s-a-at-mcp'], ['Model', 's2s-a-at-mcp-gsa']]
2
[['BLEU 1', 'Sen.'], ['BLEU 1', 'Par.'], ['BLEU 2', 'Sen.'], ['BLEU 2', 'Par.'], ['BLEU 3', 'Sen.'], ['BLEU 3', 'Par.'], ['BLEU 4', 'Sen.'], ['BLEU 4', 'Par.'], ['METEOR', 'Sen.'], ['METEOR', 'Par.'], ['ROUGE-L', 'Sen.'], ['ROUGE-L', 'Par.']]
[['30.41', '28.49', '12.68', '10.43', '6.33', '4.70', '3.44', '2.38', '11.98', '10.69', '29.93', '27.32'], ['34.46', '31.26', '18.07', '14.37', '11.20', '8.02', '7.42', '4.80', '14.95', '12.52', '34.69', '30.11'], ['40.57', '40.56', '24.30', '24.23', '16.40', '16.33', '11.54', '11.46', '18.35', '18.42', '40.76', '40.40...
column
['BLEU 1', 'BLEU 1', 'BLEU 2', 'BLEU 2', 'BLEU 3', 'BLEU 3', 'BLEU 4', 'BLEU 4', 'METEOR', 'METEOR', 'ROUGE-L', 'ROUGE-L']
['s2s-a', 's2s-a-at', 's2s-a-at-cp', 's2s-a-at-mcp', 's2s-a-at-mcp-gsa']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU 1 || Sen.</th> <th>BLEU 1 || Par.</th> <th>BLEU 2 || Sen.</th> <th>BLEU 2 || Par.</th> <th>BLEU 3 || Sen.</th> <th>BLEU 3 || Par.</th> <th>BLEU 4 || Sen.</th> <th>BLEU 4 || Pa...
Table 1
table_1
D18-1424
5
emnlp2018
3.3 Evaluation We conduct automatic evaluation with metrics: BLEU 1, BLEU 2, BLEU 3, BLEU 4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and ROUGE-L (Lin, 2004), and use evaluation package released by (Sharma et al., 2017) to compute them. 4 Results and Analysis. 4.1 Comparison of Techniques. Table 1 sho...
[2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 1, 2, 1, 2, 2, 0, 0, 0]
['3.3 Evaluation We conduct automatic evaluation with metrics: BLEU 1, BLEU 2, BLEU 3, BLEU 4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and ROUGE-L (Lin, 2004), and use evaluation package released by (Sharma et al., 2017) to compute them.', '4 Results and Analysis.', '4.1 Comparison of Techniques.', '...
[None, None, None, None, None, None, None, ['s2s'], ['s2s-a'], ['s2s-a-at'], ['s2s-a-at-cp'], ['s2s-a-at-mcp'], ['s2s-a-at-mcp-gsa'], None, ['s2s', 's2s-a', 'Sen.', 'Par.'], ['Par.'], None, ['s2s-a', 's2s-a-at'], ['Sen.', 'Par.', 's2s-a-at'], None, ['s2s-a-at', 's2s-a-at-cp'], None, ['s2s-a-at-cp'], ['s2s-a-at-cp', 'Pa...
1
D18-1429table_8
Performance obtained by training on different types of noisy questions (WikiMovies).
2
[['Type of Noise', 'None'], ['Type of Noise', 'Stop Words'], ['Type of Noise', 'Question Type'], ['Type of Noise', 'Content Words'], ['Type of Noise', 'Named Entity']]
1
[['BLEU'], ['QBLEU'], ['Hit 1']]
[['100', '100', '76.5'], ['25.4', '84.0', '75.6'], ['74.0', '79.3', '73.5'], ['29.4', '64.3', '54.7'], ['41.9', '48.5', '17.97']]
column
['BLEU', 'QBLEU', 'Hit 1']
['Question Type', 'Stop Words', 'Content Words', 'Named Entity']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>QBLEU</th> <th>Hit 1</th> </tr> </thead> <tbody> <tr> <td>Type of Noise || None</td> <td>100</td> <td>100</td> <td>76.5</td> </tr> <tr> <td>Type of Nois...
Table 8
table_8
D18-1429
8
emnlp2018
The results of our experiments are summarized in Tables 8–10. The first column for each table shows the manner in which the noisy training data was created. The second column shows the BLEU4 score of the noisy questions when compared to the original reference questions (thus it tells us the perceived quality of these ...
[1, 1, 1, 2, 1, 2, 1, 2, 2]
['The results of our experiments are summarized in Table 8 - 10.', 'The first column for each table shows the manner in which the noisy training data was created.', 'The second column shows the BLEU4 score of the noisy questions when compared to the original reference questions (thus it tells us the perceived quality o...
[None, ['None', 'Stop Words', 'Question Type', 'Content Words', 'Named Entity'], ['BLEU'], ['BLEU'], ['QBLEU'], None, ['BLEU', 'QBLEU'], None, None]
1
D18-1434table_3
State-of-the-Art (SOTA) comparison on VQGCOCO Dataset. The first block consists of the SOTA results, second block refers to the baselines mentioned in section 5.2, third block shows the results for the best method for different ablations mentioned in table 1.
2
[['Context', 'Natural 2016'], ['Context', 'Creative 2017'], ['Context', 'Image Only'], ['Context', 'Caption Only'], ['Context', 'Tag-Hadamard'], ['Context', 'Place CNN-Joint'], ['Context', 'Diff.Image-Joint'], ['Context', 'MDN-Joint (Ours)'], ['Context', 'Humans 2016']]
1
[['BLEU1'], ['METEOR'], ['ROUGE'], ['CIDEr']]
[['19.2', '19.7', '-', '-'], ['35.6', '19.9', '-', '-'], ['20.8', '8.6', '22.6', '18.8'], ['21.1', '8.5', '25.9', '22.3'], ['24.4', '10.8', '24.3', '55.0'], ['25.7', '10.8', '24.5', '56.1'], ['30.4', '11.7', '26.3', '38.8'], ['36.0', '23.4', '41.8', '50.7'], ['86.0', '60.8', '-', '-']]
column
['BLEU1', 'METEOR', 'ROUGE', 'CIDEr']
['MDN-Joint (Ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU1</th> <th>METEOR</th> <th>ROUGE</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Context || Natural 2016</td> <td>19.2</td> <td>19.7</td> <td>-</td> <td>-</td>...
Table 3
table_3
D18-1434
8
emnlp2018
5.2 Baseline and State-of-the-Art. The comparison of our method with various baselines and state-of-the-art methods is provided in table 2 for VQA 1.0 and table 3 for VQG-COCO dataset. The comparable baselines for our method are the image based and caption based models in which we use either only the image or the capti...
[2, 1, 2, 1, 1, 0, 1]
['5.2 Baseline and State-of-the-Art.', 'The comparison of our method with various baselines and state-of-the-art methods is provided in table 2 for VQA 1.0 and table 3 for VQG-COCO dataset.', 'The comparable baselines for our method are the image based and caption based models in which we use either only the image or t...
[None, ['Natural 2016', 'Creative 2017', 'Image Only', 'Caption Only', 'Tag-Hadamard', 'Place CNN-Joint', 'Diff.Image-Joint', 'MDN-Joint (Ours)', 'Humans 2016'], ['Image Only', 'Caption Only'], ['Natural 2016', 'Creative 2017', 'Image Only', 'Caption Only'], ['MDN-Joint (Ours)', 'Image Only', 'Caption Only', 'BLEU1', '...
1
D18-1435table_2
Comparison of Template Generator with coarse/fine-grained entity type. Coarse Template is generalized by coarse-grained entity type (name tagger) and Fine Template is generalized by fine-grained entity type (EDL).
2
[['Approach', 'Raw-caption'], ['Approach', 'Coarse Template'], ['Approach', 'Fine Template']]
1
[['Vocabulary Size'], ['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['METEOR'], ['ROUGE'], ['CIDEr']]
[['10979', '15.1', '11.7', '9.9', '8.8', '8.8', '24.2', '34.7'], ['3533', '46.7', '36.1', '29.8', '25.7', '22.4', '43.5', '161.6'], ['3642', '43.0', '33.4', '27.8', '24.3', '20.3', '39.8', '165.3']]
column
['Vocabulary Size', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'METEOR', 'ROUGE', 'CIDEr']
['Coarse Template', 'Fine Template']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Vocabulary Size</th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>METEOR</th> <th>ROUGE</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Appr...
Table 2
table_2
D18-1435
8
emnlp2018
Table 2 shows the performances of template generator based on coarse-grained and fine-grained type respectively, and Figure 5 shows an example of the template generated. Coarse templates are the ones after we replace names with these coarse-grained types. Entity Linking classifies names into more fine-grained types, so ...
[1, 2, 2, 2, 2, 1, 2]
['Table 2 shows the performances of template generator based on coarse-grained and fine-grained type respectively, and Figure 5 shows an example of the template generated.', 'Coarse templates are the ones after we replace names with these coarse-grained ty...
[None, ['Coarse Template'], ['Fine Template'], ['Vocabulary Size'], None, ['Coarse Template', 'Fine Template'], None]
1
D18-1438table_4
Comparison results using Rouge recall at 75 bytes without OOV replacement. HNNattTI-3-OOV is the version of HNNattTI-3 without the OOV replacement mechanism.
2
[['Method', 'HNNattTI-3-OOV'], ['Method', 'HNNattTC-3-OOV'], ['Method', 'HNNTattTIC-3-OOV'], ['Method', 'HNNattT-3-OOV']]
1
[['Rouge-1'], ['Rouge-2'], ['Rouge-L']]
[['24.03', '8.2', '16.52'], ['18.18', '6.53', '12.87'], ['20.50', '7.67', '14.36'], ['21.60', '7.82', '15.05']]
column
['Rouge-1', 'Rouge-2', 'Rouge-L']
['HNNattTI-3-OOV', 'HNNattTC-3-OOV', 'HNNTattTIC-3-OOV', 'HNNattT-3-OOV']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Rouge-1</th> <th>Rouge-2</th> <th>Rouge-L</th> </tr> </thead> <tbody> <tr> <td>Method || HNNattTI-3-OOV</td> <td>24.03</td> <td>8.2</td> <td>16.52</td> </tr> <tr> <td...
Table 4
table_4
D18-1438
8
emnlp2018
We use the 1-image and 2-image random selected image summaries as the baselines which we compare our models with. The top 1 or 2 images ranked by our model are selected out to form the summaries. Results in Table 4 show that HNNattTI outperforms the random baseline, while HNNattTC and HNNattTIC perform worse. This impl...
[2, 2, 1, 2, 2, 1, 1, 2]
['We use the 1-image and 2-image random selected image summaries as the baselines which we compare our models with.', 'The top 1 or 2 images ranked by our model are selected out to form the summaries.', 'Results in Table 4 show that HNNattTI outperforms the random baseline, while HNNattTC and HNNattTIC perform worse.',...
[None, None, ['HNNattTI-3-OOV', 'HNNattTC-3-OOV', 'HNNTattTIC-3-OOV'], None, None, ['HNNattTI-3-OOV', 'HNNattTC-3-OOV', 'HNNTattTIC-3-OOV', 'HNNattT-3-OOV'], ['HNNattTI-3-OOV', 'HNNattTC-3-OOV', 'HNNTattTIC-3-OOV', 'HNNattT-3-OOV'], None]
1
D18-1442table_1
Comparison with other baselines on DailyMail test dataset using Rouge recall score with respect to the abstractive ground truth at 75 bytes and at 275 bytes.
2
[['DailyMail', 'Lead-3'], ['DailyMail', 'LReg(500)'], ['DailyMail', 'Cheng et.al 16'], ['DailyMail', 'SummaRuNNer'], ['DailyMail', 'REFRESH'], ['DailyMail', 'Hybrid MemNet'], ['DailyMail', 'ITS']]
2
[['b75', 'Rouge-1'], ['b75', 'Rouge-2'], ['b75', 'Rouge-L'], ['b275', 'Rouge-1'], ['b275', 'Rouge-2'], ['b275', 'Rouge-L']]
[['21.9', '7.2', '11.6', '40.5', '14.9', '32.6'], ['18.5', '6.9', '10.2', '-', '-', '-'], ['22.7', '8.5', '12.5', '42.2', '17.3', '34.8'], ['26.2', '10.8', '14.4', '42', '16.9', '34.1'], ['24.1', '11.5', '12.5', '40.3', '15.1', '32.9'], ['26.3', '11.2', '15.5', '41.4', '16.7', '33.2'], ['27.4', '11.9', '16.1', '42.4', ...
column
['Rouge-1', 'Rouge-2', 'Rouge-L', 'Rouge-1', 'Rouge-2', 'Rouge-L']
['ITS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>b75 || Rouge-1</th> <th>b75 || Rouge-2</th> <th>b75 || Rouge-L</th> <th>b275 || Rouge-1</th> <th>b275 || Rouge-2</th> <th>b275 || Rouge-L</th> </tr> </thead> <tbody> <tr> <td>Da...
Table 1
table_1
D18-1442
7
emnlp2018
6 Experiment analysis. Table 1 shows the performance comparison of our model with other baselines on the DailyMail dataset with respect to Rouge score at 75 bytes and 275 bytes of summary length. Our model performs consistently and significantly better than other models on 75 bytes, while on 275 bytes, the improvement ...
[2, 1, 1, 2, 2]
['6 Experiment analysis.', 'Table 1 shows the performance comparison of our model with other baselines on the DailyMail dataset with respect to Rouge score at 75 bytes and 275 bytes of summary length.', 'Our model performs consistently and significantly better than other models on 75 bytes, while on 275 bytes, the impr...
[None, ['Lead-3', 'LReg(500)', 'Cheng et.al 16', 'SummaRuNNer', 'REFRESH', 'Hybrid MemNet', 'ITS', 'b75', 'b275'], ['ITS', 'b75', 'b275'], ['ITS'], ['ITS']]
1
D18-1442table_5
System ranking comparison with other baselines on DailyMail corpus. Rank 1 is the best and Rank 4 is the worst. Each score represents the percentage of the summary under this rank.
2
[['Models', 'Lead-3'], ['Models', 'Hybrid MemNet'], ['Models', 'ITS'], ['Models', 'Gold']]
1
[['1st'], ['2nd'], ['3rd'], ['4th']]
[['0.12', '0.11', '0.25', '0.52'], ['0.24', '0.25', '0.28', '0.23'], ['0.31', '0.34', '0.23', '0.12'], ['0.33', '0.30', '0.24', '0.13']]
column
['percentage', 'percentage', 'percentage', 'percentage']
['ITS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>1st</th> <th>2nd</th> <th>3rd</th> <th>4th</th> </tr> </thead> <tbody> <tr> <td>Models || Lead-3</td> <td>0.12</td> <td>0.11</td> <td>0.25</td> <td>0.52</td> </tr> ...
Table 5
table_5
D18-1442
8
emnlp2018
Human Evaluation:. We gave human evaluators three system-generated summaries, generated by Lead-3, Hybrid MemNet, ITS, as well as the human-written gold standard summary, and asked them to rank these summaries based on summary informativeness and coherence. Table 5 shows the percentages of summaries of different models...
[2, 2, 1, 1, 1, 2]
['Human Evaluation:.', 'We gave human evaluators three system-generated summaries, generated by Lead-3, Hybrid MemNet, ITS, as well as the human-written gold standard summary, and asked them to rank these summaries based on summary informativeness and coherence.', 'Table 5 shows the percentages of summaries of differen...
[None, ['Lead-3', 'Hybrid MemNet', 'ITS', 'Gold'], ['Lead-3', 'Hybrid MemNet', 'ITS', 'Gold'], ['Gold', '1st'], ['ITS', '2nd', 'Lead-3', 'Hybrid MemNet', '3rd', '4th'], ['Hybrid MemNet', 'ITS']]
1
D18-1443table_2
Results on the NYT corpus, where we compare to RL trained models. * marks models and results by Paulus et al. (2017), and † results by Celikyilmaz et al. (2018).
2
[['Method', 'ML*'], ['Method', 'ML+RL*'], ['Method', 'DCA†'], ['Method', 'Point.Gen. + Coverage Pen.'], ['Method', 'Bottom-Up Summarization']]
1
[['R-1'], ['R-2'], ['R-L']]
[['44.26', '27.43', '40.41'], ['47.03', '30.72', '43.10'], ['48.08', '31.19', '42.33'], ['45.13', '30.13', '39.67'], ['47.38', '31.23', '41.81']]
column
['R-1', 'R-2', 'R-L']
['Bottom-Up Summarization']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R-1</th> <th>R-2</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Method || ML*</td> <td>44.26</td> <td>27.43</td> <td>40.41</td> </tr> <tr> <td>Method || ML+RL*</td...
Table 2
table_2
D18-1443
7
emnlp2018
Table 2 shows experiments with the same systems on the NYT corpus. We see that the 2 point improvement compared to the baseline PointerGenerator maximum-likelihood approach carries over to this dataset. Here, the model outperforms the RL based model by Paulus et al. (2017) in ROUGE-1 and 2, but not L, and is comparable...
[1, 1, 1, 1, 2, 2]
['Table 2 shows experiments with the same systems on the NYT corpus.', 'We see that the 2 point improvement compared to the baseline PointerGenerator maximum-likelihood approach carries over to this dataset.', 'Here, the model outperforms the RL based model by Paulus et al. (2017) in ROUGE-1 and 2, but not L, and is co...
[None, ['Bottom-Up Summarization', 'Point.Gen. + Coverage Pen.'], ['Bottom-Up Summarization', 'ML*', 'ML+RL*', 'DCA†', 'R-1', 'R-2', 'R-L'], ['Point.Gen. + Coverage Pen.', 'ML*', 'R-1', 'R-2', 'R-L'], None, ['Bottom-Up Summarization']]
1
D18-1445table_3
Upper-bound performance comparison. Results are averaged over all clusters in DUC’04.
2
[['Method', 'TD(λ)'], ['Method', 'LSTD(λ)'], ['Method', 'ILP']]
1
[['R1'], ['R2'], ['RL'], ['RSU4']]
[['.484', '.184', '.388', '.199'], ['.458', '.159', '.366', '.185'], ['.470', '.212', 'N/A', '.185']]
column
['R1', 'R2', 'RL', 'RSU4']
['TD(λ)', 'LSTD(λ)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>RL</th> <th>RSU4</th> </tr> </thead> <tbody> <tr> <td>Method || TD(λ)</td> <td>.484</td> <td>.184</td> <td>.388</td> <td>.199</td> </tr> ...
Table 3
table_3
D18-1445
7
emnlp2018
Table 3 shows the performance of RL and ILP on the DUC’04 dataset. TD(λ) significantly outperforms LSTD(λ) in terms of all ROUGE scores we consider. Although the least-square RL algorithms (which LSTD belongs to) have been proved to achieve better performance than standard TD methods in large-scale problems (see Lagoud...
[1, 1, 2, 2, 1, 2, 2]
['Table 3 shows the performance of RL and ILP on the DUC’04 dataset.', 'TD(λ) significantly outperforms LSTD(λ) in terms of all ROUGE scores we consider.', 'Although the least-square RL algorithms (which LSTD belongs to) have been proved to achieve better performance than standard TD methods in large-scale problems (se...
[['TD(λ)', 'LSTD(λ)', 'ILP'], ['TD(λ)', 'LSTD(λ)', 'R1', 'R2', 'RL', 'RSU4'], ['TD(λ)', 'LSTD(λ)'], ['LSTD(λ)'], ['TD(λ)', 'ILP', 'R1', 'RL', 'RSU4'], ['ILP', 'R2'], ['ILP', 'TD(λ)', 'LSTD(λ)']]
1
D18-1447table_4
Results of keyphrase generation for news from DUC dataset with F1. Results of unsupervised learning methods are adopted from Hasan and Ng (2010).
3
[['Model', 'Our Models', 'SEQ2SEQ'], ['Model', 'Our Models', 'SYN.UNSUPER.'], ['Model', 'Our Models', 'SYN.SELF-LEARN.'], ['Model', 'Our Models', 'MULTI-TASK'], ['Model', 'Unsupervised', 'TF-IDF'], ['Model', 'Unsupervised', 'TEXTRANK'], ['Model', 'Unsupervised', 'SINGLERANK'], ['Model', 'Unsupervised', 'EXPANDRANK']]
1
[['F1']]
[['0.056'], ['0.083'], ['0.065'], ['0.109'], ['0.270'], ['0.097'], ['0.256'], ['0.269']]
column
['F1']
['Our Models']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || Our Models || SEQ2SEQ</td> <td>0.056</td> </tr> <tr> <td>Model || Our Models || SYN.UNSUPER.</td> <td>0.083</td> </tr> <tr> ...
Table 4
table_4
D18-1447
8
emnlp2018
The experimental results are shown in Table 4 which indicate that: 1) though trained on scientific papers, our models still have the ability to generate keyphrases for news articles, illustrating that our models have learned some universal features between the two domains; and 2) semi-supervised learning by leveraging ...
[1, 1]
['The experimental results are shown in Table 4 which indicate that: 1) though trained on scientific papers, our models still have the ability to generate keyphrases for news articles, illustrating that our models have learned some universal features between the two domains; and 2) semi-supervised learning by leveragin...
[['Our Models'], ['Unsupervised', 'F1']]
1
D18-1454table_2
Results across different metrics on the test set of NarrativeQA-summaries task. † indicates span prediction models trained on the Rouge-L retrieval oracle.
2
[['Model', 'Seq2Seq (Kocisky et al. 2018)'], ['Model', 'ASR (Kocisky et al. 2018)'], ['Model', 'BiDAF (Kocisky et al. 2018)'], ['Model', 'BiAttn + MRU-LSTM (Tay et al. 2018)'], ['Model', 'MHPGM'], ['Model', 'MHPGM+ NOIC']]
1
[['BLEU-1'], ['BLEU-4'], ['METEOR'], ['Rouge-L'], ['CIDEr']]
[['15.89', '1.26', '4.08', '13.15', '-'], ['23.20', '6.39', '7.77', '22.26', '-'], ['33.72', '15.53', '15.38', '36.30', '-'], ['36.55', '19.79', '17.87', '41.44', '-'], ['40.24', '17.40', '17.33', '41.49', '139.23'], ['43.63', '21.07', '19.03', '44.16', '152.98']]
column
['BLEU-1', 'BLEU-4', 'METEOR', 'Rouge-L', 'CIDEr']
['MHPGM', 'MHPGM+ NOIC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-4</th> <th>METEOR</th> <th>Rouge-L</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq (Kocisky et al. 2018)</td> <td>15.89</td> <td>1...
Table 2
table_2
D18-1454
8
emnlp2018
5.1 Main Experiment. The results of our model on both NarrativeQA and WikiHop with and without commonsense incorporation are shown in Table 2 and Table 3. We see empirically that our model outperforms all generative models on NarrativeQA, and is competitive with the top span prediction models. Furthermore, with the NOI...
[2, 1, 1, 1, 2]
['5.1 Main Experiment.', 'The results of our model on both NarrativeQA and WikiHop with and without commonsense incorporation are shown in Table 2 and Table 3.', 'We see empirically that our model outperforms all generative models on NarrativeQA, and is competitive with the top span prediction models.', 'Furthermore, w...
[None, ['MHPGM', 'MHPGM+ NOIC'], ['MHPGM', 'Seq2Seq (Kocisky et al. 2018)', 'ASR (Kocisky et al. 2018)', 'BiDAF (Kocisky et al. 2018)', 'BiAttn + MRU-LSTM (Tay et al. 2018)'], ['MHPGM+ NOIC'], None]
1
D18-1462table_2
Automatic evaluations of the proposed model and the state-of-the-art models.
2
[['Models', 'EE-Seq2Seq'], ['Models', 'DE-Seq2Seq'], ['Models', 'GE-Seq2Seq'], ['Models', 'Proposed Model']]
1
[['BLEU']]
[['0.0029'], ['0.0027'], ['0.0022'], ['0.0042 (+44.8%)']]
column
['BLEU']
['Proposed Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>Models || EE-Seq2Seq</td> <td>0.0029</td> </tr> <tr> <td>Models || DE-Seq2Seq</td> <td>0.0027</td> </tr> <tr> <td>Models || GE-S...
Table 2
table_2
D18-1462
6
emnlp2018
4.5 Experimental Results. Table 2 shows the results of automatic evaluation. The proposed model performs the best according to BLEU. In particular, the differences between the existing state-of-the-art models are within 0.07, while the proposed model supersedes the best of them by 0.13.
[2, 1, 1, 1]
['4.5 Experimental Results.', 'Table 2 shows the results of automatic evaluation.', 'The proposed model performs the best according to BLEU.', 'In particular, the differences between the existing state-of-the-art models are within 0.07, while the proposed model supersedes the best of them by 0.13.']
[None, None, ['Proposed Model', 'BLEU'], ['EE-Seq2Seq', 'DE-Seq2Seq', 'GE-Seq2Seq', 'Proposed Model', 'BLEU']]
1
D18-1462table_6
Human evaluations of the key components.
2
[['Models', 'Seq2Seq'], ['Models', '+Skeleton Extraction Module'], ['Models', '+Reinforcement Learning']]
1
[['Fluency'], ['Coherence'], ['G-Score']]
[['7.54', '4.98', '6.13'], ['7.26', '4.32', '5.60'], ['8.69', '5.62', '6.99']]
column
['Fluency', 'Coherence', 'G-Score']
['+Skeleton Extraction Module', '+Reinforcement Learning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fluency</th> <th>Coherence</th> <th>G-Score</th> </tr> </thead> <tbody> <tr> <td>Models || Seq2Seq</td> <td>7.54</td> <td>4.98</td> <td>6.13</td> </tr> <tr> <td>Model...
Table 6
table_6
D18-1462
8
emnlp2018
Table 6 shows the human evaluation results. The slight BLEU improvement with the skeleton extraction module is accompanied by decreases in both fluency and coherence. It suggests the necessity of human evaluation. The decreased results can be explained by the fact that the style of the dataset for pre-training the skel...
[1, 1, 2, 2, 2, 1]
['Table 6 shows the human evaluation results.', 'The slight BLEU improvement with the skeleton extraction module is accompanied by decreases in both fluency and coherence.', 'It suggests the necessity of human evaluation.', 'The decreased results can be explained by the fact that the style of the dataset for pre-traini...
[None, ['Fluency', 'Coherence', 'G-Score', '+Skeleton Extraction Module'], ['+Skeleton Extraction Module'], ['+Skeleton Extraction Module'], None, ['G-Score', '+Reinforcement Learning']]
1
D18-1463table_1
Results of embedding-based metrics. * indicates statistically significant difference (p < 0.05) from the best baselines. The same mark is used in Table 2
2
[['Model', 'Greedy'], ['Model', 'Beam'], ['Model', 'MMI'], ['Model', 'RL'], ['Model', 'VHRED'], ['Model', 'NEXUS-H'], ['Model', 'NEXUS-F'], ['Model', 'NEXUS']]
2
[['DailyDialog', 'Average'], ['DailyDialog', 'Greedy'], ['DailyDialog', 'Extreme'], ['Twitter', 'Average'], ['Twitter', 'Greedy'], ['Twitter', 'Extreme']]
[['0.443', '0.376', '0.328', '0.510', '0.341', '0.356'], ['0.437', '0.350', '0.369', '0.505', '0.345', '0.352'], ['0.457', '0.371', '0.371', '0.518', '0.353', '0.365'], ['0.405', '0.329', '0.305', '0.460', '0.349', '0.323'], ['0.491', '0.375', '0.313', '0.525', '0.389', '0.372'], ['0.479', '0.381', '0.385', '0.558', '0...
column
['Average', 'Greedy', 'Extreme', 'Average', 'Greedy', 'Extreme']
['NEXUS-H', 'NEXUS-F', 'NEXUS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DailyDialog || Average</th> <th>DailyDialog || Greedy</th> <th>DailyDialog || Extreme</th> <th>Twitter || Average</th> <th>Twitter || Greedy</th> <th>Twitter || Extreme</th> </tr> </thea...
Table 1
table_1
D18-1463
6
emnlp2018
Table 1 reports the embedding scores on both datasets. NEXUS network significantly outperforms the best baseline model in most cases. Notably, NEXUS can absorb the advantages from both NEXUS-H and NEXUS-F. The history and future information seem to help the model from different perspectives. Taking into account both of...
[1, 1, 2, 2, 2, 2]
['Table 1 reports the embedding scores on both datasets.', 'NEXUS network significantly outperforms the best baseline model in most cases.', 'Notably, NEXUS can absorb the advantages from both NEXUS-H and NEXUS-F.', 'The history and future information seem to help the model from different perspectives.', 'Taking into a...
[['DailyDialog', 'Twitter'], ['NEXUS'], ['NEXUS-H', 'NEXUS-F', 'NEXUS'], None, ['NEXUS'], None]
1
D18-1463table_2
Results of BLEU score. It is computed based on the smooth BLEU algorithm (Lin and Och, 2004). p-value interval is computed base on the altered bootstrap resampling algorithm (Riezler and Maxwell, 2005)
2
[['Model', 'Greedy'], ['Model', 'Beam'], ['Model', 'MMI'], ['Model', 'RL'], ['Model', 'VHRED'], ['Model', 'NEXUS-H'], ['Model', 'NEXUS-F'], ['Model', 'NEXUS']]
2
[['DailyDialog', 'BLEU-1'], ['DailyDialog', 'BLEU-2'], ['DailyDialog', 'BLEU-3'], ['Twitter', 'BLEU-1'], ['Twitter', 'BLEU-2'], ['Twitter', 'BLEU-3']]
[['0.394', '0.245', '0.157', '0.340', '0.203', '0.116'], ['0.386', '0.251', '0.163', '0.338', '0.205', '0.112'], ['0.407', '0.269', '0.172', '0.347', '0.208', '0.118'], ['0.298', '0.186', '0.075', '0.314', '0.199', '0.103'], ['0.395', '0.281', '0.190', '0.355', '0.211', '0.124'], ['0.418', '0.279', '0.199', '0.366', '0...
column
['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-1', 'BLEU-2', 'BLEU-3']
['NEXUS-H', 'NEXUS-F', 'NEXUS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DailyDialog || BLEU-1</th> <th>DailyDialog || BLEU-2</th> <th>DailyDialog || BLEU-3</th> <th>Twitter || BLEU-1</th> <th>Twitter || BLEU-2</th> <th>Twitter || BLEU-3</th> </tr> </thead> ...
Table 2
table_2
D18-1463
7
emnlp2018
BLEU Score. BLEU is a popular metric that measures the geometric mean of the modified ngram precision with a length penalty (Papineni et al., 2002). Table 2 reports the BLEU 1-3 scores. Compared with embedding-based metrics, the BLEU score quantifies the word-overlap between generated responses and the ground-truth. On...
[2, 2, 1, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1]
['BLEU Score.', 'BLEU is a popular metric that measures the geometric mean of the modified ngram precision with a length penalty (Papineni et al., 2002).', 'Table 2 reports the BLEU 1-3 scores.', 'Compared with embedding-based metrics, the BLEU score quantifies the word-overlap between generated responses and the groun...
[None, None, ['BLEU-1', 'BLEU-2', 'BLEU-3'], ['BLEU-1', 'BLEU-2', 'BLEU-3'], None, None, ['DailyDialog', 'Twitter'], ['DailyDialog', 'Twitter'], None, ['NEXUS'], ['NEXUS-H', 'NEXUS-F'], ['Greedy', 'Beam', 'MMI', 'VHRED'], ['RL']]
1
D18-1464table_1
Results on readability assessment. The first system is the state-of-the-art coherence model on this dataset. The last one is a full readability system. “∗” indicates statistically significant difference with the bold result.
2
[['Model', 'Mesgar and Strube (2016)'], ['Model', 'CohEmb'], ['Model', 'CohLSTM'], ['Model', 'De Clercq and Hoste (2016)']]
1
[['Accuracy (%)']]
[['85.70'], ['92.17'], ['97.77'], ['96.88']]
column
['Accuracy (%)']
['CohEmb', 'CohLSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Mesgar and Strube (2016)</td> <td>85.70</td> </tr> <tr> <td>Model || CohEmb</td> <td>92.17</td> </tr> <tr> <td>...
Table 1
table_1
D18-1464
8
emnlp2018
Results. Table 1 summarizes the results of different systems for the readability assessment task. CohEmb significantly outperforms the graph-based coherence model proposed by Mesgar and Strube (2016) by a large margin (6%), showing that our model captures coherence better than their model. In our model, the CNN layer a...
[2, 1, 1, 2, 1, 2, 2, 1]
['Results.', 'Table 1 summarizes the results of different systems for the readability assessment task.', 'CohEmb significantly outperforms the graph-based coherence model proposed by Mesgar and Strube (2016) by a large margin (6%), showing that our model captures coherence better than their model.', 'In our model, the ...
[None, ['Mesgar and Strube (2016)', 'CohEmb', 'CohLSTM', 'De Clercq and Hoste (2016)'], ['CohEmb', 'Mesgar and Strube (2016)', 'CohLSTM', 'Accuracy (%)'], ['CohLSTM'], ['CohLSTM', 'Mesgar and Strube (2016)', 'CohEmb', 'Accuracy (%)'], ['CohLSTM', 'Mesgar and Strube (2016)', 'CohEmb'], None, ['CohLSTM', 'De Clercq and H...
1
D18-1465table_4
The performance of correctly predicting the first and the last sentences on arXiv abstract and SIND caption datasets.
2
[['Models', 'Random'], ['Models', 'Pairwise Ranking Model'], ['Models', 'CNN+PtrNet'], ['Models', 'LSTM+PtrNet'], ['Models', 'ATTOrderNet (ATT)'], ['Models', 'ATTOrderNet (CNN)'], ['Models', 'ATTOrderNet']]
2
[['arXiv abstract', 'head'], ['arXiv abstract', 'tail'], ['SIND caption', 'head'], ['SIND caption', 'tail']]
[['23.06', '23.16', '22.78', '22.56'], ['84.85', '62.37', '-', '-'], ['89.43', '65.36', '73.53', '53.26'], ['90.47', '66.49', '74.66', '53.30'], ['89.68', '65.75', '75.88', '54.30'], ['90.86', '67.85', '75.95', '54.37'], ['91.00', '68.08', '76.00', '54.42']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['ATTOrderNet (ATT)', 'ATTOrderNet (CNN)', 'ATTOrderNet']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>arXiv abstract || head</th> <th>arXiv abstract || tail</th> <th>SIND caption || head</th> <th>SIND caption || tail</th> </tr> </thead> <tbody> <tr> <td>Models || Random</td> <td>23.0...
Table 4
table_4
D18-1465
7
emnlp2018
Since the first and the last sentences of the text are more special to discern (Chen et al., 2016; Gong et al., 2016), we also evaluate the ratio of correctly predicting the first and the last sentences. Table 4 summarizes our performances on arXiv abstract and SIND caption. As we see, all models perform fairly well in pred...
[2, 1, 1, 1]
['Since the first and the last sentences of the text are more special to discern (Chen et al., 2016; Gong et al., 2016), we also evaluate the ratio of correctly predicting the first and the last sentences.', 'Table 4 summarizes our performances on arXiv abstract and SIND caption.', 'As we see, all models perform fairly well...
[None, ['arXiv abstract', 'SIND caption'], ['Random', 'Pairwise Ranking Model', 'CNN+PtrNet', 'LSTM+PtrNet', 'ATTOrderNet (ATT)', 'ATTOrderNet (CNN)', 'ATTOrderNet', 'head', 'tail'], ['ATTOrderNet', 'arXiv abstract', 'SIND caption', 'head', 'tail']]
1
D18-1465table_5
Experimental results of Pairwise Accuracy for different approaches on two datasets in the Order Discrimination task.
2
[['Models', 'Random'], ['Models', 'Graph'], ['Models', 'HMM+Entity'], ['Models', 'HMM'], ['Models', 'Entity Grid'], ['Models', 'Recurrent'], ['Models', 'Recursive'], ['Models', 'Discriminative Model'], ['Models', 'Varient-LSTM+PtrNet'], ['Models', 'CNN+PtrNet'], ['Models', 'LSTM+PtrNet'], ['Models', 'ATTOrderNet (ATT)'...
1
[['Accident'], ['Earthquake']]
[['50.0', '50.0'], ['84.6', '63.5'], ['84.2', '91.1'], ['82.2', '93.8'], ['90.4', '87.2'], ['84.0', '95.1'], ['86.4', '97.6'], ['93.0', '99.2'], ['94.4', '99.7'], ['93.5', '99.4'], ['93.7', '99.5'], ['95.4', '99.6'], ['95.8', '99.7'], ['96.2', '99.8']]
column
['accuracy', 'accuracy']
['ATTOrderNet (ATT)', 'ATTOrderNet (CNN)', 'ATTOrderNet']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accident</th> <th>Earthquake</th> </tr> </thead> <tbody> <tr> <td>Models || Random</td> <td>50.0</td> <td>50.0</td> </tr> <tr> <td>Models || Graph</td> <td>84.6</td> ...
Table 5
table_5
D18-1465
9
emnlp2018
3.4.2 Results. Table 5 reports the results of ATTOrderNet and currently competing architectures in this evaluation task. ATTOrderNet also achieves the state-of-the-art performance, showing a remarkable advancement of about 1.8% gain on Accident dataset and further improving the pairwise accuracy to 99.8 on Earthquake da...
[2, 1, 1, 1, 2, 2, 1, 2]
['3.4.2 Results.', 'Table 5 reports the results of ATTOrderNet and currently competing architectures in this evaluation task.', 'ATTOrderNet also achieves the state-of-the-art performance, showing a remarkable advancement of about 1.8% gain on Accident dataset and further improving the pairwise accuracy to 99.8 on Earth...
[None, ['ATTOrderNet', 'Random', 'Graph', 'HMM+Entity', 'HMM', 'Entity Grid', 'Recurrent', 'Recursive', 'Discriminative Model', 'Varient-LSTM+PtrNet', 'CNN+PtrNet', 'LSTM+PtrNet'], ['ATTOrderNet', 'Varient-LSTM+PtrNet', 'Accident', 'Earthquake'], ['Varient-LSTM+PtrNet', 'CNN+PtrNet', 'LSTM+PtrNet'], None, ['Accident', ...
1
D18-1483table_5
Crosslingual clustering results when considering two different approaches to compute distances across crosslingual clusters on the test set for Spanish, German and English. See text for details.
2
[['crosslingual model', 'τsearch (global)'], ['crosslingual model', 'τsearch (pivot)']]
1
[['F1'], ['P'], ['R']]
[['72.7', '89.8', '61.0'], ['84.0', '83.0', '85.0']]
column
['F1', 'P', 'R']
['τsearch (global)', 'τsearch (pivot)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1</th> <th>P</th> <th>R</th> </tr> </thead> <tbody> <tr> <td>crosslingual model || τsearch (global)</td> <td>72.7</td> <td>89.8</td> <td>61.0</td> </tr> <tr> <td>cro...
Table 5
table_5
D18-1483
8
emnlp2018
We test two different scenarios for optimizing the similarity threshold τ for the crosslingual case. Table 5 shows the results for these experiments. First, we consider the simpler case of adjusting a global τ parameter for the crosslingual distances, as also described for the monolingual case. As shown, this method wo...
[2, 1, 2, 1, 2, 1, 2]
['We test two different scenarios for optimizing the similarity threshold τ for the crosslingual case.', 'Table 5 shows the results for these experiments.', 'First, we consider the simpler case of adjusting a global τ parameter for the crosslingual distances, as also described for the monolingual case.', 'As shown, thi...
[None, None, ['τsearch (global)'], ['τsearch (global)'], ['τsearch (pivot)'], ['τsearch (pivot)', 'F1'], ['τsearch (pivot)', 'F1']]
1
D18-1484table_4
Results of Hot Update, Cold Update and Zero Update in different cases
2
[['Model', 'Before Update'], ['Model', 'Cold Update'], ['Model', 'Hot Update'], ['Model', 'Zero Update']]
2
[['Case 1', 'SST-1'], ['Case 1', 'SST-2'], ['Case 1', 'IMDB'], ['Case 2', 'B'], ['Case 2', 'D'], ['Case 2', 'E'], ['Case 2', 'K'], ['Case 3', 'RN'], ['Case 3', 'QC'], ['Case 3', 'IMDB']]
[['48.6', '87.6', '-', '83.7', '84.5', '85.9', '-', '84.8', '93.4', '-'], ['49.8', '88.5', '91.2', '84.4', '85.2', '87.2', '86.9', '85.5', '93.2', '91.0'], ['49.6', '88.1', '91.4', '84.2', '84.9', '87.0', '87.1', '85.2', '92.9', '91.1'], ['-', '-', '90.9', '-', '-', '-', '86.7', '-', '-', '74.2']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Before Update', 'Cold Update', 'Hot Update', 'Zero Update']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Case 1 || SST-1</th> <th>Case 1 || SST-2</th> <th>Case 1 || IMDB</th> <th>Case 2 || B</th> <th>Case 2 || D</th> <th>Case 2 || E</th> <th>Case 2 || K</th> <th>Case 3 || RN</th> ...
Table 4
table_4
D18-1484
7
emnlp2018
where in Zero Update, we ignore the training set of C and just evaluate our model on the testing set. As Table 4 shows, Before Update denotes the model trained on the old tasks before the new tasks are involved, so only evaluations on the old tasks are conducted. Cold Update re-trains the model of Before Update with bo...
[2, 1, 1, 1, 1, 2, 2, 1, 1]
['where in Zero Update, we ignore the training set of C and just evaluate our model on the testing set.', 'As Table 4 shows, Before Update denotes the model trained on the old tasks before the new tasks are involved, so only evaluations on the old tasks are conducted.', 'Cold Update re-trains the model of Before Update...
[None, ['Before Update'], ['Cold Update', 'Before Update'], ['Cold Update', 'Hot Update'], ['Hot Update', 'IMDB', 'K'], ['Zero Update'], ['Before Update'], ['Zero Update', 'Case 1', 'IMDB', 'Case 2', 'K'], ['Zero Update', 'Case 3', 'IMDB']]
1
D18-1484table_5
Comparisons of MTLE against state-of-the-art models
2
[['Model', 'NBOW'], ['Model', 'PV'], ['Model', 'CNN'], ['Model', 'MT-CNN'], ['Model', 'MT-DNN'], ['Model', 'MT-RNN'], ['Model', 'DSM'], ['Model', 'GRNN'], ['Model', 'Tree-LSTM'], ['Model', 'MTLE']]
1
[['SST-1'], ['SST-2'], ['IMDB'], ['Books'], ['DVDs'], ['Electronics'], ['Kitchen'], ['QC']]
[['42.4', '80.5', '83.6', '-', '-', '-', '-', '88.2'], ['44.6', '82.7', '91.7', '-', '-', '-', '-', '91.8'], ['48.0', '88.1', '-', '-', '-', '-', '-', '93.6'], ['-', '-', '-', '80.2', '81.0', '83.4', '83.0', '-'], ['-', '-', '-', '79.7', '80.5', '82.5', '82.8', '-'], ['49.6', '87.9', '91.3', '-', '-', '-', '-', '-'], [...
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['MTLE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-1</th> <th>SST-2</th> <th>IMDB</th> <th>Books</th> <th>DVDs</th> <th>Electronics</th> <th>Kitchen</th> <th>QC</th> </tr> </thead> <tbody> <tr> <td>Model || NBOW</t...
Table 5
table_5
D18-1484
8
emnlp2018
As Table 5 shows, MTLE achieves competitive or better performance on most tasks except for QC, as it contains fewer correlations with other tasks. Tree-LSTM outperforms our model on SST1 (50.6 against 49.8), but it requires an external parser to get the sentence topological structure and utilizes treebank annotations. ...
[1, 1, 1]
['As Table 5 shows, MTLE achieves competitive or better performance on most tasks except for QC, as it contains fewer correlations with other tasks.', 'Tree-LSTM outperforms our model on SST1 (50.6 against 49.8), but it requires an external parser to get the sentence topological structure and utilizes treebank annotati...
[['MTLE', 'SST-1', 'SST-2', 'IMDB', 'Books', 'DVDs', 'Electronics', 'Kitchen'], ['Tree-LSTM', 'MTLE', 'SST-1'], ['PV', 'MTLE', 'IMDB']]
1
D18-1485table_5
Performance of the hierarchical model and our model on the RCV1-V2 test set. Hier refers to hierarchical model, and the subsequent number refers to the length of sentence (word) for sentence-level representations (p < 0.05).
2
[['Models', 'Hier-5'], ['Models', 'Hier-10'], ['Models', 'Hier-15'], ['Models', 'Hier-20'], ['Models', 'Our model']]
1
[['HL(-)'], ['P(+)'], ['R(+)'], ['F1(+)']]
[['0.0075', '0.887', '0.869', '0.878'], ['0.0077', '0.883', '0.873', '0.878'], ['0.0076', '0.879', '0.879', '0.879'], ['0.0076', '0.876', '0.881', '0.878'], ['0.0072', '0.891', '0.873', '0.882']]
column
['HL(-)', 'P(+)', 'R(+)', 'F1(+)']
['Our model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>HL(-)</th> <th>P(+)</th> <th>R(+)</th> <th>F1(+)</th> </tr> </thead> <tbody> <tr> <td>Models || Hier-5</td> <td>0.0075</td> <td>0.887</td> <td>0.869</td> <td>0.878</td...
Table 5
table_5
D18-1485
8
emnlp2018
We present the results of the evaluation in Table 5, where it can be found that our model with fewer parameters still outperforms the hierarchical model with the deterministic setting of sentence or phrase. Moreover, in order to alleviate the influence of the deterministic sentence boundary, we compare the performance ...
[1, 2, 1, 2, 2]
['We present the results of the evaluation in Table 5, where it can be found that our model with fewer parameters still outperforms the hierarchical model with the deterministic setting of sentence or phrase.', 'Moreover, in order to alleviate the influence of the deterministic sentence boundary, we compare the perform...
[['Our model'], ['Hier-5', 'Hier-10', 'Hier-15', 'Hier-20'], ['Hier-5', 'Hier-10', 'Hier-15', 'Hier-20'], None, None]
1