paper: stringlengths 0–839
paper_id: stringlengths 1–12
table_caption: stringlengths 3–2.35k
table_column_names: large_stringlengths 13–1.76k
table_content_values: large_stringlengths 2–11.9k
text: large_stringlengths 69–2.82k
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 5: Results of fine-tuned models on Balanced COPA. Easy: instances with superficial cues, Hard: instances without superficial cues.
['Model', 'Training data', 'Overall', 'Easy', 'Hard']
[['BERT-large-FT', 'B-COPA', '74.5 (± 0.7)', '74.7 (± 0.4)', '[BOLD] 74.4 (± 0.9)'], ['BERT-large-FT', 'B-COPA (50%)', '74.3 (± 2.2)', '76.8 (± 1.9)', '72.8 (± 3.1)'], ['BERT-large-FT', 'COPA', '[BOLD] 76.5 (± 2.7)', '[BOLD] 83.9 (± 4.4)', '71.9 (± 2.5)'], ['RoBERTa-large-FT', 'B-COPA', '[BOLD] 89.0 (± 0.3)', '88.9 (± ...
The results are shown in Table 5. The smaller performance gap between the Easy and Hard subsets indicates that training on B-COPA encourages BERT and RoBERTa to rely less on superficial cues. Moreover, training on B-COPA improves performance on the Hard subset, both when training with all 1000 instances in B-COPA, and when ...
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 1: Reported results on COPA. With the exception of Wang et al. (2019), BERT-large and RoBERTa-large yield substantial improvements over prior approaches. See §2 for model details. * indicates our replication experiments.
['Model', 'Accuracy']
[['BigramPMI\xa0Goodwin et al. ( 2012 )', '63.4'], ['PMI\xa0Gordon et al. ( 2011 )', '65.4'], ['PMI+Connectives\xa0Luo et al. ( 2016 )', '70.2'], ['PMI+Con.+Phrase\xa0Sasaki et al. ( 2017 )', '71.4'], ['BERT-large\xa0Wang et al. ( 2019 )', '70.5'], ['BERT-large\xa0Sap et al. ( 2019 )', '75.0'], ['BERT-large\xa0Li et al...
Recent studies show that BERT and RoBERTa achieve considerable improvements on COPA (see Table 1).
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 2: Applicability (App.), Productivity (Prod.) and Coverage (Cov.) of the various words in the alternatives of the COPA dev set.
['Cue', 'App.', 'Prod.', 'Cov.']
[['in', '47', '55.3', '9.40'], ['was', '55', '61.8', '11.0'], ['to', '82', '40.2', '16.4'], ['the', '85', '38.8', '17.0'], ['a', '106', '57.5', '21.2']]
Table 2 shows the five tokens with highest coverage. For example, a is the token with the highest coverage and appears in either a correct alternative or wrong alternative in 21.2% of COPA training instances. Its productivity of 57.5% expresses that it appears in correct alternatives 7.5% more often than expected by...
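The three cue statistics in Table 2 can be computed directly from the data. A minimal sketch, assuming each instance is a pair of (correct, wrong) alternative token lists and following the definitions the text implies (a cue is applicable when it appears in exactly one of the two alternatives; productivity is the fraction of applicable instances where it sits in the correct one; coverage is applicability over all instances); names are hypothetical:

```python
def cue_statistics(instances, cue):
    """Applicability, productivity, and coverage of a candidate cue token.

    `instances` is a list of (correct_alternative, wrong_alternative)
    token lists.
    """
    applicable = 0
    in_correct = 0
    for correct, wrong in instances:
        in_c, in_w = cue in correct, cue in wrong
        if in_c != in_w:  # appears in exactly one alternative
            applicable += 1
            if in_c:
                in_correct += 1
    productivity = in_correct / applicable if applicable else 0.0
    coverage = applicable / len(instances)
    return applicable, productivity, coverage

# Toy check: "a" is applicable in 2 of 4 instances, in the correct one once.
toy = [
    (["a", "dog"], ["cat"]),    # in correct only -> applicable
    (["bird"], ["a", "fish"]),  # in wrong only   -> applicable
    (["a", "x"], ["a", "y"]),   # in both         -> not applicable
    (["x"], ["y"]),             # in neither      -> not applicable
]
print(cue_statistics(toy, "a"))  # (2, 0.5, 0.5)
```

A productivity above 0.5 means the cue points to the correct alternative more often than chance, which is what a model exploiting superficial cues would pick up.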
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 3: Results of human performance evaluation of the original COPA and Balanced COPA.
['Dataset', 'Accuracy', 'Fleiss’ kappa [ITALIC] k']
[['Original COPA', '100.0', '0.973'], ['Balanced COPA', '97.0', '0.798']]
The human evaluation shows that our mirrored instances are comparable in difficulty to the original ones (see Table 3).
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 4: Model performance on the COPA test set (Overall), on Easy instances with superficial cues, and on Hard instances without superficial cues. p-values according to Approximate Randomization Tests Noreen (1989), with ∗ indicating a significant difference between performance on Easy and Hard p<5%. Methods are point...
['Model', 'Method', 'Training Data', 'Overall', 'Easy', 'Hard', 'p-value (%)']
[['goodwin-etal-2012-utdhlt', 'PMI', 'unsupervised', '61.8', '64.7', '60.0', '19.8'], ['gordon_commonsense_2011-1', 'PMI', 'unsupervised', '65.4', '65.8', '65.2', '83.5'], ['sasaki-etal-2017-handling', 'PMI', 'unsupervised', '71.4', '75.3', '69.0', '4.8∗'], ['Word frequency', 'wordfreq', 'COPA', '53.5', '57.4', '51.3',...
We then compare BERT and RoBERTa with previous models on the Easy and Hard subsets. As Table 4 shows, previous models perform similarly on both subsets, with the exception of Sasaki et al. (2017). Overall both BERT (76.5%) and [CONTINUE] RoBERTa (87.7%) considerably outperform the best previous model (71.4%). However...
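The significance test named in the caption of Table 4 (Approximate Randomization, Noreen 1989) can be sketched as follows, assuming per-instance 0/1 correctness lists for the Easy and Hard subsets; function and variable names are hypothetical:

```python
import random

def approx_randomization_test(easy, hard, trials=5000, seed=0):
    """Two-sided approximate randomization test for the accuracy gap
    between two instance sets. `easy`/`hard` are lists of 0/1
    per-instance correctness indicators."""
    rng = random.Random(seed)
    observed = abs(sum(easy) / len(easy) - sum(hard) / len(hard))
    pooled = easy + hard
    at_least = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # randomly reassign instances to the two groups
        a, b = pooled[:len(easy)], pooled[len(easy):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            at_least += 1
    return (at_least + 1) / (trials + 1)  # smoothed p-value
```

A p-value below 5% (marked ∗ in the table) indicates the Easy/Hard gap is unlikely under random reassignment of instances to the two subsets.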
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 6: Results of non-fine-tuned models on Balanced COPA. Easy: instances with superficial cues, Hard: instances without superficial cues.
['Model', 'Training data', 'Overall', 'Easy', 'Hard']
[['BERT-large', 'B-COPA', '70.5 (± 2.5)', '72.6 (± 2.3)', '[BOLD] 69.1 (± 2.7)'], ['BERT-large', 'B-COPA (50%)', '69.9 (± 1.9)', '71.2 (± 1.3)', '69.0 (± 3.5)'], ['BERT-large', 'COPA', '[BOLD] 71.7 (± 0.5)', '[BOLD] 80.5 (± 0.4)', '66.3 (± 0.8)'], ['RoBERTa-large', 'B-COPA', '[BOLD] 76.7 (± 0.8)', '73.3 (± 1.5)', '[BOL...
The relatively high accuracies of BERT-large, RoBERTa-large and BERT-*-NSP show that these pretrained models are already well-equipped to perform this task "out-of-the-box".
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 7: Sensitivity of BERT-large to superficial cues identified in §2 (unit: 10−2). Cues with top-5 reduction are shown. SCOPA,SB_COPA indicate the mean contributions of BERT-large trained on COPA, and BERT-large trained on B-COPA, respectively.
['Cue', '[ITALIC] SCOPA', '[ITALIC] SB_COPA', 'Diff.', 'Prod.']
[['woman', '7.98', '4.84', '-3.14', '0.25'], ['mother', '5.16', '3.95', '-1.21', '0.75'], ['went', '6.00', '5.15', '-0.85', '0.73'], ['down', '5.52', '4.93', '-0.58', '0.71'], ['into', '4.07', '3.51', '-0.56', '0.40']]
We observe that BERT trained on Balanced COPA is less sensitive to a few highly productive superficial cues than BERT trained on original COPA. Note the decrease in sensitivity for cues with productivity between 0.7 and 0.9. These cues are shown in Table 7. However, for cues with lower productivity, the picture is less c...
Using Structured Representation and Data: A Hybrid Model for Negation and Sentiment in Customer Service Conversations
1906.04706v1
Table 8: Sentiment classification evaluation, using different classifiers on the test set.
['Classifier', 'Positive Sentiment Precision', 'Positive Sentiment Recall', 'Positive Sentiment Fscore']
[['SVM-w/o neg.', '0.57', '0.72', '0.64'], ['SVM-Punct. neg.', '0.58', '0.70', '0.63'], ['SVM-our-neg.', '0.58', '0.73', '0.65'], ['CNN', '0.63', '0.83', '0.72'], ['CNN-LSTM', '0.71', '0.72', '0.72'], ['CNN-LSTM-Our-neg-Ant', '[BOLD] 0.78', '[BOLD] 0.77', '[BOLD] 0.78'], ['[EMPTY]', 'Negative Sentiment', 'Negative Sent...
The results show that the antonym-based learned representations are more useful for the sentiment task than prefixing with NOT_. The proposed CNN-LSTM-Our-neg-Ant improves upon the simple CNN-LSTM-w/o neg. baseline, with F1 scores improving from 0.72 to 0.78 for positive sentiment and from 0.83 to 0.87 for negative sentiment.
Using Structured Representation and Data: A Hybrid Model for Negation and Sentiment in Customer Service Conversations
1906.04706v1
Table 7: Negation classifier performance for scope detection with gold cues and scope.
['[EMPTY]', '[BOLD] Punctuation', '[BOLD] BiLSTM', '[BOLD] Proposed']
[['In-scope (F)', '0.66', '0.88', '0.85'], ['Out-scope (F)', '0.87', '0.97', '0.97'], ['PCS', '0.52', '0.72', '0.72']]
The results in Table 7 show that the method is comparable to the state-of-the-art BiLSTM model from (Fancellu et al., 2016) on gold negation cues for scope prediction. [CONTINUE] We report the F-score for both in-scope and out-of-scope tokens.
Using Structured Representation and Data: A Hybrid Model for Negation and Sentiment in Customer Service Conversations
1906.04706v1
Table 3: Cue and token distribution in the conversational negation corpus.
['Total negation cues', '2921']
[['True negation cues', '2674'], ['False negation cues', '247'], ['Average scope length', '2.9'], ['Average sentence length', '13.6'], ['Average tweet length', '22.3']]
Corpus statistics are shown in Table 3. The average number of tokens per tweet is 22.3, per sentence is 13.6 and average scope length is 2.9.
Using Structured Representation and Data: A Hybrid Model for Negation and Sentiment in Customer Service Conversations
1906.04706v1
Table 4: Cue classification on the test set.
['[EMPTY]', '[BOLD] F-Score [BOLD] Baseline', '[BOLD] F-Score [BOLD] Proposed', '[BOLD] Support']
[['False cues', '0.61', '0.68', '47'], ['Actual cues', '0.97', '0.98', '557']]
The proposed method improves the F-score of false negation cue detection from a 0.61 baseline to 0.68 on a test set containing 47 false and 557 actual negation cues.
Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context
1911.10484v2
Table 5: Human evaluation results. Models with data augmentation are noted as (+). App denotes the average appropriateness score.
['Model', 'Diversity', 'App', 'Good%', 'OK%', 'Invalid%']
[['DAMD', '3.12', '2.50', '56.5%', '[BOLD] 37.4%', '6.1%'], ['DAMD (+)', '[BOLD] 3.65', '[BOLD] 2.53', '[BOLD] 63.0%', '27.1%', '9.9%'], ['HDSA (+)', '2.14', '2.47', '57.5%', '32.5%', '[BOLD] 10.0%']]
The results are shown in Table 5. [CONTINUE] We report the average value of diversity and appropriateness, and the percentage of responses scored for each appropriateness level. [CONTINUE] With data augmentation, our model obtains a significant improvement in diversity score and achieves the best average appropriatenes...
Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context
1911.10484v2
Table 1: Multi-action evaluation results. The “w” and “w/o” column denote with and without data augmentation respectively, and the better score between them is in bold. We report the average performance over 5 runs.
['Model & Decoding Scheme', 'Act # w/o', 'Act # w/', 'Slot # w/o', 'Slot # w/']
[['Single-Action Baselines', 'Single-Action Baselines', 'Single-Action Baselines', 'Single-Action Baselines', 'Single-Action Baselines'], ['DAMD + greedy', '[BOLD] 1.00', '[BOLD] 1.00', '1.95', '[BOLD] 2.51'], ['HDSA + fixed threshold', '[BOLD] 1.00', '[BOLD] 1.00', '2.07', '[BOLD] 2.40'], ['5-Action Generation', '5-Ac...
The results are shown in Table 1. [CONTINUE] After applying our data augmentation, both the action and slot diversity are improved consistently. [CONTINUE] HDSA has worse performance and benefits less from data augmentation compared to our proposed domain-aware multi-decoder network.
Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context
1911.10484v2
Table 2: Comparison of response generation results on MultiWOZ. The oracle/generated denotes either using ground truth or generated results. The results are grouped according to whether and how system action is modeled.
['Model', 'Belief State Type', 'System Action Type', 'System Action Form', 'Inform (%)', 'Success (%)', 'BLEU', 'Combined Score']
[['1. Seq2Seq + Attention ', 'oracle', '-', '-', '71.3', '61.0', '[BOLD] 18.9', '85.1'], ['2. Seq2Seq + Copy', 'oracle', '-', '-', '86.2', '[BOLD] 72.0', '15.7', '94.8'], ['3. MD-Sequicity', 'oracle', '-', '-', '[BOLD] 86.6', '71.6', '16.8', '[BOLD] 95.9'], ['4. SFN + RL (Mehri et al. mehri2019structured)', 'oracle', '...
Results are shown in Table 2. [CONTINUE] The first group shows that after applying our domain-adaptive delexicalization and domain-aware belief span modeling, the task completion ability of seq2seq models becomes better. [CONTINUE] The relatively lower BLEU score [CONTINUE] Our DAMD model significantly outperforms other m...
Improving Generalization by Incorporating Coverage in Natural Language Inference
1909.08940v1
Table 3: Impact of using coverage for improving generalization across the datasets of similar tasks. Both models are trained on the SQuAD training data.
['[EMPTY]', 'in-domain SQuAD', 'in-domain SQuAD', 'out-of-domain QA-SRL', 'out-of-domain QA-SRL']
[['[EMPTY]', 'EM', 'F1', 'EM', 'F1'], ['MQAN', '31.76', '75.37', '<bold>10.99</bold>', '50.10'], ['+coverage', '<bold>32.67</bold>', '<bold>76.83</bold>', '10.63', '<bold>50.89</bold>'], ['BIDAF (ELMO)', '70.43', '79.76', '28.35', '49.98'], ['+coverage', '<bold>71.07</bold>', '<bold>80.15</bold>', '<bold>30.58</bold>',...
Table 3 shows the impact of coverage for improving generalization across these two datasets that belong to the two similar tasks of reading comprehension and QA-SRL. [CONTINUE] The models are evaluated using Exact Match (EM) and F1 measures, [CONTINUE] As the results show, incorporating coverage improves the model's pe...
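The Exact Match (EM) and F1 measures used in Table 3 can be sketched as in SQuAD-style evaluation; this is a simplified illustration (lowercasing and whitespace tokenization only, without the article and punctuation stripping of the official evaluation script):

```python
from collections import Counter

def exact_match(prediction, reference):
    """1.0 iff the normalized strings are identical."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction, reference):
    """Token-overlap F1 between a predicted and a reference answer span."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the cat", "The cat"))              # 1.0
print(round(token_f1("the black cat", "the cat"), 3)) # 0.8
```

EM rewards only exact span matches, while F1 gives partial credit for overlapping tokens, which is why F1 in the table is consistently higher than EM.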
Improving Generalization by Incorporating Coverage in Natural Language Inference
1909.08940v1
Table 2: Impact of using coverage for improving generalization across different datasets of the same task (NLI). All models are trained on MultiNLI.
['[EMPTY]', 'in-domain MultiNLI', 'out-of-domain SNLI', 'out-of-domain Glockner', 'out-of-domain SICK']
[['MQAN', '72.30', '60.91', '41.82', '53.95'], ['+ coverage', '<bold>73.84</bold>', '<bold>65.38</bold>', '<bold>78.69</bold>', '<bold>54.55</bold>'], ['ESIM (ELMO)', '80.04', '68.70', '60.21', '51.37'], ['+ coverage', '<bold>80.38</bold>', '<bold>70.05</bold>', '<bold>67.47</bold>', '<bold>52.65</bold>']]
Table 2 shows the performance for both systems for in-domain (the MultiNLI development set) as well as out-of-domain evaluations on SNLI, Glockner, and SICK datasets. [CONTINUE] The results show that coverage information considerably improves the generalization of both examined models across various NLI datasets. The r...
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 4: KL-divergence between different dialog policy and the human dialog KL(πturns||pturns), where πturns denotes the discrete distribution over the number of dialog turns of simulated sessions between the policy π and the agenda-based user simulator, and pturns for the real human-human dialog.
['GP-MBCM', 'ACER', 'PPO', 'ALDM', 'GDPL']
[['1.666', '0.775', '0.639', '1.069', '[BOLD] 0.238']]
Table 4 shows that GDPL has the smallest KL-divergence to the human on the number of dialog turns over the baselines, which implies that GDPL behaves more like the human.
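The KL-divergence in Table 4 is taken between discrete distributions over dialog-turn counts. A toy sketch of the computation, assuming each policy is represented by a sample of per-session turn counts; names are hypothetical and zero bins in the human distribution are smoothed with `eps`:

```python
import math
from collections import Counter

def turn_kl(policy_turns, human_turns, eps=1e-9):
    """KL(pi_turns || p_turns) between empirical distributions over the
    number of dialog turns, estimated from two samples of per-session
    turn counts."""
    support = set(policy_turns) | set(human_turns)
    pi, p = Counter(policy_turns), Counter(human_turns)
    n_pi, n_p = len(policy_turns), len(human_turns)
    kl = 0.0
    for t in support:
        q_pi = pi[t] / n_pi
        q_p = max(p[t] / n_p, eps)  # avoid log of zero
        if q_pi > 0:
            kl += q_pi * math.log(q_pi / q_p)
    return kl

# Identical samples give zero divergence.
print(turn_kl([4, 5, 5, 6], [4, 5, 5, 6]))  # 0.0
```

A smaller value means the simulated sessions' turn-count distribution is closer to the human-human one, which is the sense in which GDPL "behaves more like the human".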
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 3: Performance of different dialog agents on the multi-domain dialog corpus by interacting with the agenda-based user simulator. All the results except “dialog turns” are shown in percentage terms. Real human-human performance computed from the test set (i.e. the last row) serves as the upper bounds.
['Method', 'Agenda Turns', 'Agenda Inform', 'Agenda Match', 'Agenda Success']
[['GP-MBCM', '2.99', '19.04', '44.29', '28.9'], ['ACER', '10.49', '77.98', '62.83', '50.8'], ['PPO', '9.83', '83.34', '69.09', '59.1'], ['ALDM', '12.47', '81.20', '62.60', '61.2'], ['GDPL-sess', '[BOLD] 7.49', '88.39', '77.56', '76.4'], ['GDPL-discr', '7.86', '93.21', '80.43', '80.5'], ['GDPL', '7.64', '[BOLD] 94.97', ...
The performance of each approach that interacts with the agenda-based user simulator is shown in [CONTINUE] Table 3. GDPL achieves extremely high performance in the task success on account of the substantial improvement in inform F1 and match rate over the baselines. [CONTINUE] Surprisingly, GDPL even outperforms human...
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 5: Performance of different agents on the neural user simulator.
['Method', 'VHUS Turns', 'VHUS Inform', 'VHUS Match', 'VHUS Success']
[['ACER', '22.35', '55.13', '33.08', '18.6'], ['PPO', '[BOLD] 19.23', '[BOLD] 56.31', '33.08', '18.3'], ['ALDM', '26.90', '54.37', '24.15', '16.4'], ['GDPL', '22.43', '52.58', '[BOLD] 36.21', '[BOLD] 19.7']]
The performance of the agent that interacts with VHUS is presented in Table 5. [CONTINUE] All the methods cause a significant drop in performance when interacting with VHUS. ALDM even gets worse performance than ACER and PPO. In comparison, GDPL is still comparable with ACER and PPO, obtains a better match rate, and even ...
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 6: The count of human preference on dialog session pairs that GDPL wins (W), draws with (D) or loses to (L) other methods based on different criteria. One method wins the other if the majority prefer the former one.
['VS.', 'Efficiency W', 'Efficiency D', 'Efficiency L', 'Quality W', 'Quality D', 'Quality L', 'Success W', 'Success D', 'Success L']
[['ACER', '55', '25', '20', '44', '32', '24', '52', '30', '18'], ['PPO', '74', '13', '13', '56', '26', '18', '59', '31', '10'], ['ALDM', '69', '19', '12', '49', '25', '26', '61', '24', '15']]
Table 6 presents the results of human evaluation. GDPL outperforms three baselines significantly in all aspects (sign test, p-value < 0.01) except for the quality compared with ACER. Among all the baselines, GDPL obtains the most preference against PPO.
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 7: Return distribution of GDPL on each metric. The first row counts the dialog sessions that get the full score of the corresponding metric, and the results of the rest sessions are included in the second row.
['Type', 'Inform Mean', 'Inform Num', 'Match Mean', 'Match Num', 'Success Mean', 'Success Num']
[['Full', '8.413', '903', '10.59', '450', '11.18', '865'], ['Other', '-99.95', '76', '-48.15', '99', '-71.62', '135']]
Table 7 provides a quantitative evaluation of the learned rewards by showing the distribution of the return R = Σ_t γ^t r_t according to each metric. [CONTINUE] It can be observed that the learned reward function has good interpretability in that the reward is positive when the dialog gets a full score on each metric, and n...
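The return in Table 7 is the standard discounted sum of per-turn rewards; as a minimal illustration:

```python
def discounted_return(rewards, gamma=0.99):
    """R = sum_t gamma^t * r_t for one dialog session, where rewards[t]
    is the (learned) reward at turn t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# With gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1.75
```

Positive per-turn rewards accumulate into a positive return for full-score dialogs, matching the "Full" row in the table, while failed sessions accumulate large negative returns.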
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE VII: Precision scores for the Analogy Test
['Methods', '# dims', 'Analg. (sem)', 'Analg. (syn)', 'Total']
[['GloVe', '300', '78.94', '64.12', '70.99'], ['Word2Vec', '300', '81.03', '66.11', '73.03'], ['OIWE-IPG', '300', '19.99', '23.44', '21.84'], ['SOV', '3000', '64.09', '46.26', '54.53'], ['SPINE', '1000', '17.07', '8.68', '12.57'], ['Word2Sense', '2250', '12.94', '19.44', '5.84'], ['Proposed', '300', '79.96', '63.52', '...
We present precision scores for the word analogy tests in Table VII. It can be seen that the alternative approaches that aim to improve interpretability have poor performance on the word analogy tests. However, our proposed method has comparable performance with the original GloVe embeddings. Our proposed method outpe...
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE V: Word Intrusion Test Results: Correct Answers out of 300 Questions
['[EMPTY]', 'GloVe', 'Imparted']
[['Participants 1 to 5', '80/88/82/78/97', '212/170/207/229/242'], ['Mean/Std', '85/6.9', '212/24.4']]
We apply the test to five participants. Results tabulated in Table V show that our proposed method significantly improves interpretability, increasing the average true-answer percentage from ∼ 28% for the baseline to ∼ 71% for our method.
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE VI: Correlations for Word Similarity Tests
['Dataset (EN-)', 'GloVe', 'Word2Vec', 'OIWE-IPG', 'SOV', 'SPINE', 'Word2Sense', 'Proposed']
[['WS-353-ALL', '0.612', '0.7156', '0.634', '0.622', '0.173', '0.690', '0.657'], ['SIMLEX-999', '0.359', '0.3939', '0.295', '0.355', '0.090', '0.380', '0.381'], ['VERB-143', '0.326', '0.4430', '0.255', '0.271', '0.293', '0.271', '0.348'], ['SimVerb-3500', '0.193', '0.2856', '0.184', '0.197', '0.035', '0.234', '0.245'],...
The correlation scores for 13 different similarity test sets and their averages are reported in Table VI. We observe that, far from a reduction in performance, the obtained scores indicate an almost uniform improvement in the correlation values for the proposed algorithm, outperforming all the alternatives except Word...
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE VIII: Precision scores for the Semantic Analogy Test
['Questions Subset', '# of Questions Seen', 'GloVe', 'Word2Vec', 'Proposed']
[['All', '8783', '78.94', '81.03', '79.96'], ['At least one', '1635', '67.58', '70.89', '67.89'], ['concept word', '1635', '67.58', '70.89', '67.89'], ['All concept words', '110', '77.27', '89.09', '83.64']]
To investigate the effect of the additional cost term on the performance improvement in the semantic analogy test, we [CONTINUE] present Table VIII. In particular, we present results for the cases where i) all questions in the dataset are considered, ii) only the questions that contain at least one concept word are co...
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE IX: Accuracies (%) for Sentiment Classification Task
['GloVe', 'Word2Vec', 'OIWE-IPG', 'SOV', 'SPINE', 'Word2Sense', 'Proposed']
[['77.34', '77.91', '74.27', '78.43', '74.13', '81.21', '78.26']]
Classification accuracies are presented in Table IX. The proposed method outperforms the original embeddings and performs on par with SOV. Pretrained Word2Sense embeddings outperform our method; however, they have the advantage of being trained on a larger corpus. This result along with the intrinsic evaluations shows that t...
Revisiting Joint Modeling of Cross-documentEntity and Event Coreference Resolution
1906.01753v1
Table 2: Combined within- and cross-document entity coreference results on the ECB+ test set.
['[BOLD] Model', 'R', 'MUC P', '[ITALIC] F1', 'R', 'B3 P', '[ITALIC] F1', 'R', 'CEAF- [ITALIC] e P', '[ITALIC] F1', 'CoNLL [ITALIC] F1']
[['Cluster+Lemma', '71.3', '83', '76.7', '53.4', '84.9', '65.6', '70.1', '52.5', '60', '67.4'], ['Disjoint', '76.7', '80.8', '78.7', '63.2', '78.2', '69.9', '65.3', '58.3', '61.6', '70'], ['Joint', '78.6', '80.9', '79.7', '65.5', '76.4', '70.5', '65.4', '61.3', '63.3', '[BOLD] 71.2']]
Table 2 presents the performance of our method with respect to entity coreference. Our joint model improves upon the strong lemma baseline by 3.8 points in CoNLL F1 score.
Revisiting Joint Modeling of Cross-documentEntity and Event Coreference Resolution
1906.01753v1
Table 3: Combined within- and cross-document event coreference results on the ECB+ test set.
['[BOLD] Model', 'R', 'MUC P', '[ITALIC] F1', 'R', 'B3 P', '[ITALIC] F1', 'R', 'CEAF- [ITALIC] e P', '[ITALIC] F1', 'CoNLL [ITALIC] F1']
[['[BOLD] Baselines', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Cluster+Lemma', '76.5', '79.9', '78.1', '71.7', '85', '77.8', '75.5', '71.7', '73.6', '76.5'], ['CV Cybulska and Vossen ( 2015a )', '71', '75', '73', '71', '78', '74', '-', '-', '64', '...
Table 3 presents the results on event coreference. Our joint model outperforms all the base [CONTINUE] lines with a gap of 10.5 CoNLL F1 points from the last published results (KCP), while surpassing our strong lemma baseline by 3 points. [CONTINUE] The results reconfirm that the lemma baseline, when combined with effe...
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 1: Precisions on the NYT dataset.
['Recall', '0.1', '0.2', '0.3', '0.4', 'AUC']
[['PCNN+ATT', '0.698', '0.606', '0.518', '0.446', '0.323'], ['Rank+ExATT', '0.789', '0.726', '0.620', '0.514', '0.395'], ['Our Model', '0.788', '[BOLD] 0.743', '[BOLD] 0.654', '[BOLD] 0.546', '[BOLD] 0.397']]
We also show the precision numbers for some particular recalls as well as the AUC in Table 1, where our model generally leads to better precision.
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 2: Precisions on the Wikidata dataset.
['Recall', '0.1', '0.2', '0.3', 'AUC']
[['Rank+ExATT', '0.584', '0.535', '0.487', '0.392'], ['PCNN+ATT (m)', '0.365', '0.317', '0.213', '0.204'], ['PCNN+ATT (1)', '0.665', '0.517', '0.413', '0.396'], ['Our Model', '0.650', '0.519', '0.422', '[BOLD] 0.405']]
We show the precision numbers for some particular recalls as well as the AUC in Table 2, where PCNN+ATT (1) refers to training on sentences with two entities and one relation label, and PCNN+ATT (m) to training on sentences with four entities and two relation labels. We observe that our model exhibits the best performance.
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 3: Ablation study of capsule net and word-level attention on Wikidata dataset.
['Recall', '0.1', '0.2', '0.3', 'AUC']
[['-Word-ATT', '0.648', '0.515', '0.395', '0.389'], ['-Capsule', '0.635', '0.507', '0.413', '0.386'], ['Our Model', '0.650', '0.519', '0.422', '0.405']]
The experimental results on the Wikidata dataset are summarized in Table 3. The "-Word-ATT" row reports results without word-level attention. The resulting drop in precision demonstrates that word-level attention is quite useful.
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 4: Precisions on the Wikidata dataset with different choice of d.
['Recall', '0.1', '0.2', '0.3', 'AUC', 'Time']
[['[ITALIC] d=1', '0.602', '0.487', '0.403', '0.367', '4h'], ['[ITALIC] d=32', '0.645', '0.501', '0.393', '0.370', '-'], ['[ITALIC] d=16', '0.655', '0.518', '0.413', '0.413', '20h'], ['[ITALIC] d=8', '0.650', '0.519', '0.422', '0.405', '8h']]
As Table 4 shows, training time increases as d grows.
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 5: Precisions on the Wikidata dataset with different number of dynamic routing iterations.
['Recall', '0.1', '0.2', '0.3', 'AUC']
[['Iteration=1', '0.531', '0.455', '0.353', '0.201'], ['Iteration=2', '0.592', '0.498', '0.385', '0.375'], ['Iteration=3', '0.650', '0.519', '0.422', '0.405'], ['Iteration=4', '0.601', '0.505', '0.422', '0.385'], ['Iteration=5', '0.575', '0.495', '0.394', '0.376']]
Table 5 shows the comparison of 1–5 iterations. We find that performance is best when the number of iterations is set to 3.
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 3: Full-length ROUGE F-scores of some recent RL-based (upper) and supervised (middle) extractive summarisation systems, as well as our system with learned rewards (bottom). R-1/2/L stands for ROUGE-1/2/L. Our system maximises the learned reward instead of ROUGE, hence receives lower ROUGE scores.
['System', 'Reward', 'R-1', 'R-2', 'R-L']
[['Kryscinski et\xa0al. ( 2018 )', 'R-L', '40.2', '17.4', '37.5'], ['Narayan et\xa0al. ( 2018b )', 'R-1,2,L', '40.0', '18.2', '36.6'], ['Chen and Bansal ( 2018 )', 'R-L', '41.5', '18.7', '37.8'], ['Dong et\xa0al. ( 2018 )', 'R-1,2,L', '41.5', '18.7', '37.6'], ['Zhang et\xa0al. ( 2018 )', '[EMPTY]', '41.1', '18.8', '37....
Table 3 presents the ROUGE scores of our system (NeuralTD+LearnedRewards) and multiple state-of-the-art systems. The summaries generated by our system receive decent ROUGE scores, but lower than most of the recent systems, because our learned reward is optimised towards high correlation with human judgement instead...
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 1: Quality of reward metrics. G-Pre and G-Rec are the precision and recall rate of the “good” summaries identified by the metrics, resp. All metrics here require reference summaries. We perform stemming and stop words removal as preprocessing, as they help increase the correlation. For InferSent, the embeddings o...
['Metric', '[ITALIC] ρ', '[ITALIC] r', 'G-Pre', 'G-Rec']
[['ROUGE-1', '.290', '.304', '.392', '.428'], ['ROUGE-2', '.259', '.278', '.408', '.444'], ['ROUGE-L', '.274', '.297', '.390', '.426'], ['ROUGE-SU4', '.282', '.279', '.404', '.440'], ['BLEU-1', '.256', '.281', '.409', '.448'], ['BLEU-2', '.301', '.312', '.411', '.446'], ['BLEU-3', '.317', '.312', '.409', '.444'], ['BLE...
From Table 1, we find that all metrics we consider have low correlation with the human judgement. More importantly, their G-Pre and G-Rec scores are all below .50, which means that more than half of the good summaries identified by the metrics are actually not good, and more than 50%
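G-Pre and G-Rec (Table 1) are the precision and recall of the set of summaries a metric flags as "good" against a human-labeled good set. A toy sketch, with hypothetical inputs as sets of summary ids:

```python
def good_summary_pr(metric_good, human_good):
    """Precision (G-Pre) and recall (G-Rec) of the 'good' summaries a
    metric identifies, against the human-labeled good set."""
    tp = len(metric_good & human_good)  # summaries both call good
    g_pre = tp / len(metric_good) if metric_good else 0.0
    g_rec = tp / len(human_good) if human_good else 0.0
    return g_pre, g_rec

# The metric flags {1,2,3,4}; humans rate {2,3,5} as good.
g_pre, g_rec = good_summary_pr({1, 2, 3, 4}, {2, 3, 5})
print(g_pre)  # 0.5 -> half of the flagged summaries are actually good
```

Scores below .50 on both, as observed in the table, mean the metric's "good" label is wrong more often than it is right.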
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 2: Summary-level correlation of learned reward functions. All results are averaged over 5-fold cross validations. Unlike the metrics in Table 1, all rewards in this table do not require reference summaries.
['Model', 'Encoder', '[ITALIC] Reg. loss (Eq. ( 1 )) [ITALIC] ρ', '[ITALIC] Reg. loss (Eq. ( 1 )) [ITALIC] r', '[ITALIC] Reg. loss (Eq. ( 1 )) G-Pre', '[ITALIC] Reg. loss (Eq. ( 1 )) G-Rec', '[ITALIC] Pref. loss (Eq. ( 3 )) [ITALIC] ρ', '[ITALIC] Pref. loss (Eq. ( 3 )) [ITALIC] r', '[ITALIC] Pref. loss (Eq. ( 3 )) ...
[['MLP', 'CNN-RNN', '.311', '.340', '.486', '.532', '.318', '.335', '.481', '.524'], ['MLP', 'PMeans-RNN', '.313', '.331', '.489', '.536', '.354', '.375', '.502', '.556'], ['MLP', 'BERT', '[BOLD] .487', '[BOLD] .526', '[BOLD] .544', '[BOLD] .597', '[BOLD] .505', '[BOLD] .531', '[BOLD] .556', '[BOLD] .608'], ['SimRed', ...
Table 2 shows the quality of different reward learning models. As a baseline, we also consider the feature-rich reward learning method proposed by Peyrard and Gurevych (2018) (see §2). MLP with BERT as encoder has the best overall performance. Specifically, BERT+MLP+Pref significantly outperforms (p < 0.05) all the oth...
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 4: Human evaluation on extractive summaries. Our system receives significantly higher human ratings on average. “Best%”: in how many percentage of documents a system receives the highest human rating.
['[EMPTY]', 'Ours', 'Refresh', 'ExtAbsRL']
[['Avg. Human Rating', '[BOLD] 2.52', '2.27', '1.66'], ['Best%', '[BOLD] 70.0', '33.3', '6.7']]
Table 4 presents the human evaluation results. Summaries generated by NeuralTD receive significantly higher human evaluation scores than those by Refresh (p = 0.0088, two-tailed t-test) and ExtAbsRL (p ≪ 0.01). Also, the average human rating for Refresh is significantly higher (p ≪ 0.01) than for ExtAbsRL.
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 5: Performance of ExtAbsRL with different reward functions, measured in terms of ROUGE (center) and human judgements (right). Using our learned reward yields significantly (p=0.0057) higher average human rating. “Pref%”: in how many percentage of documents a system receives the higher human rating.
['Reward', 'R-1', 'R-2', 'R-L', 'Human', 'Pref%']
[['R-L (original)', '40.9', '17.8', '38.5', '1.75', '15'], ['Learned (ours)', '39.2', '17.4', '37.5', '[BOLD] 2.20', '[BOLD] 75']]
Table 5 compares the ROUGE scores of using different rewards to train the extractor in ExtAbsRL (the abstractor is pre-trained, and is applied to rephrase the extracted sentences). Again, when ROUGE is used as rewards, the generated summaries have higher ROUGE scores. [CONTINUE] It is clear from Table 5 that using the ...
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 9: An ablation study showing the effect of different model architectures and training regimes on performance on the proprietary help desk dataset.
['[BOLD] Model', '[BOLD] Parameters', '[BOLD] Validation AUC@0.05', '[BOLD] Test AUC@0.05']
[['Base', '8.0M', '[BOLD] 0.871', '0.816'], ['4L SRU → 2L LSTM', '7.3M', '0.864', '[BOLD] 0.829'], ['4L SRU → 2L SRU', '7.8M', '0.856', '[BOLD] 0.829'], ['Flat → hierarchical', '12.4M', '0.825', '0.559'], ['Cross entropy → hinge loss', '8.0M', '0.765', '0.693'], ['6.6M → 1M examples', '8.0M', '0.835', '0.694'], ['6.6M ...
The results are shown in Table 9, [CONTINUE] As Table 9 shows, the training set size and the number of negative responses for each positive response are the most important factors in model performance. The model performs significantly worse when trained with hinge loss instead of cross-entropy loss, indicating the impo...
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 8: Inference time (milliseconds) of our model to encode a context using an SRU or an LSTM encoder on a single CPU core. The last row shows the extra time needed to compare the response encoding to 10,000 cached candidate response encodings in order to find the best response.
['[BOLD] Encoder', '[BOLD] Layer', '[BOLD] Params', '[BOLD] Time']
[['SRU', '2', '3.7M', '14.7'], ['SRU', '4', '8.0M', '21.9'], ['LSTM', '2', '7.3M', '90.9'], ['LSTM', '4', '15.9M', '174.8'], ['+rank response', '-', '-', '0.9']]
The model makes use of a fast recurrent network implementation (Lei et al., 2018) and multiheaded attention (Lin et al., 2017) and achieves over a 4.1x inference speedup over traditional encoders such as LSTM (Hochreiter and Schmidhuber, 1997). [CONTINUE] SRU also exhibits a significant speedup in inference time compar...
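The "+rank response" row measures the final step: scoring one encoding against every cached candidate encoding and taking the argmax. A dot-product sketch with random vectors — the scoring function and the planted match at index 42 are illustrative assumptions, not the production model:

```python
import random

random.seed(0)
d, n = 64, 1000  # smaller than the table's 10,000 candidates, for speed
cands = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# pretend the context encoder produced something close to candidate 42
query = [x + random.gauss(0.0, 0.01) for x in cands[42]]
best = max(range(n), key=lambda i: dot(query, cands[i]))
```

Because the candidate encodings are precomputed and cached, only this cheap scoring pass runs at request time, which is why the row adds under a millisecond.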
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 3: AUC and AUC@p of our model on the propriety help desk dataset.
['[BOLD] Metric', '[BOLD] Validation', '[BOLD] Test']
[['AUC', '0.991', '0.977'], ['AUC@0.1', '0.925', '0.885'], ['AUC@0.05', '0.871', '0.816'], ['AUC@0.01', '0.677', '0.630']]
The performance of our model according to these AUC metrics can be seen in Table 3. The high AUC indicates that our model can easily distinguish between the true response and negative responses. Furthermore, the AUC@p numbers show that the model has a relatively high true positive rate even under the difficult requirem...
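One plausible reading of AUC@p — the area under the ROC curve restricted to false-positive rates up to p, normalized by p — can be sketched as follows; the paper's exact definition may differ:

```python
def auc_at_p(pos_scores, neg_scores, p=1.0):
    """Area under the ROC curve restricted to FPR <= p, normalized by p.
    Score ties between the two classes are broken optimistically here."""
    ranked = sorted([(s, 1) for s in pos_scores] + [(s, 0) for s in neg_scores],
                    reverse=True)
    P, N = len(pos_scores), len(neg_scores)
    tp = fp = 0
    area = prev_fpr = tpr = 0.0
    for _, is_pos in ranked:
        if is_pos:
            tp += 1
            tpr = tp / P          # TPR rises, FPR fixed: no area added
        else:
            fp += 1
            fpr = fp / N
            if fpr > p:           # clip the final step at FPR = p
                area += (p - prev_fpr) * tpr
                return area / p
            area += (fpr - prev_fpr) * tpr
            prev_fpr = fpr
    return area / p

full = auc_at_p([0.9, 0.8, 0.4], [0.7, 0.3, 0.2, 0.1])            # 11/12
partial = auc_at_p([0.9, 0.8, 0.4], [0.7, 0.3, 0.2, 0.1], p=0.5)  # 5/6
```

Restricting to small p is what makes the metric strict: only the low-false-positive region of the curve counts, matching the production requirement the text describes.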
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 4: Recall@k from n response candidates for different values of n using random whitelists. Each random whitelist includes the correct response along with n−1 randomly selected responses.
['[BOLD] Candidates', '[BOLD] R@1', '[BOLD] R@3', '[BOLD] R@5', '[BOLD] R@10']
[['10', '0.892', '0.979', '0.987', '1'], ['100', '0.686', '0.842', '0.894', '0.948'], ['1,000', '0.449', '0.611', '0.677', '0.760'], ['10,000', '0.234', '0.360', '0.421', '0.505']]
Table 4 shows Rn@k on the test set for different values of n and k when using a random [CONTINUE] Table 4 shows that recall drops significantly as n grows, meaning that the R10@k evaluation performed by prior work may significantly overstate model performance in a production setting.
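The Rn@k numbers above can be computed by ranking the true response among the n candidate scores; a minimal sketch, assuming by convention that the true response's score sits at index 0 of each row:

```python
def recall_at_k(score_rows, k):
    """R_n@k: fraction of examples whose true response (index 0 here)
    is ranked within the top k of the n candidate scores."""
    hits = sum(1 for row in score_rows
               if sorted(row, reverse=True).index(row[0]) < k)
    return hits / len(score_rows)

rows = [[0.9, 0.1, 0.5],   # true response ranked 1st
        [0.2, 0.8, 0.3],   # true response ranked 3rd
        [0.7, 0.6, 0.1]]   # true response ranked 1st
```

Growing each row from 10 to 10,000 candidates adds distractors without adding hits, which is exactly why recall drops so sharply with n in the table.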
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 5: Recall@k for random, frequency, and clustering whitelists of different sizes. The “+” indicates that the true response is added to the whitelist.
['[BOLD] Whitelist', '[BOLD] R@1', '[BOLD] R@3', '[BOLD] R@5', '[BOLD] R@10', '[BOLD] BLEU']
[['Random 10K+', '0.252', '0.400', '0.472', '0.560', '37.71'], ['Frequency 10K+', '0.257', '0.389', '0.455', '0.544', '41.34'], ['Clustering 10K+', '0.230', '0.376', '0.447', '0.541', '37.59'], ['Random 1K+', '0.496', '0.663', '0.728', '0.805', '59.28'], ['Frequency 1K+', '0.513', '0.666', '0.726', '0.794', '67.05'], [...
The results in Table 5 show that the three types of whitelists perform comparably to each other when the true response is added. However, in the more realistic second case, when recall is only computed on examples with a response already in the whitelist, performance on the frequency and clustering whitelists drops sig...
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 6: Recall@1 versus coverage for frequency and clustering whitelists.
['[BOLD] Whitelist', '[BOLD] R@1', '[BOLD] Coverage']
[['Frequency 10K', '0.136', '45.04%'], ['Clustering 10K', '0.164', '38.38%'], ['Frequency 1K', '0.273', '33.38%'], ['Clustering 1K', '0.331', '23.28%']]
Table 6 shows R@1 and coverage for the frequency and clustering whitelists. While the clustering whitelists have higher recall, the frequency whitelists have higher coverage.
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 7: Results of the human evaluation of the responses produced by our model. A response is acceptable if it is either good or great. Note: Numbers may not add up to 100% due to rounding.
['[BOLD] Whitelist', '[BOLD] Great', '[BOLD] Good', '[BOLD] Bad', '[BOLD] Accept']
[['Freq. 1K', '54%', '26%', '20%', '80%'], ['Cluster. 1K', '55%', '21%', '23%', '77%'], ['Freq. 10K', '56%', '24%', '21%', '80%'], ['Cluster. 10K', '57%', '23%', '20%', '80%'], ['Real response', '60%', '24%', '16%', '84%']]
The results of the human evaluation are in Table 7. Our proposed system works well, selecting acceptable (i.e. good or great) responses about 80% of the time and selecting great responses more than 50% of the time. Interestingly, the size and type of whitelist seem to have little effect on performance, indicating that ...
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 6: Performance of our baselines on the development set. Parallelism+URL tests the page-context setting; all other test the snippet-context setting. Bold indicates best performance in each setting.
['[EMPTY]', 'M', 'F', 'B', 'O']
[['Random', '43.6', '39.3', '[ITALIC] 0.90', '41.5'], ['Token Distance', '50.1', '42.4', '[ITALIC] 0.85', '46.4'], ['Topical Entity', '51.5', '43.7', '[ITALIC] 0.85', '47.7'], ['Syntactic Distance', '63.0', '56.2', '[ITALIC] 0.89', '59.7'], ['Parallelism', '[BOLD] 67.1', '[BOLD] 63.1', '[ITALIC] [BOLD] 0.94', '[BOLD] ...
Both cues yield strong baselines comparable to the strongest OntoNotes-trained systems (cf. Table 4). In fact, Lee et al. (2017) and PARALLELISM produce remarkably similar output: of the 2000 example pairs in the development set, the two have completely opposing predictions (i.e. Name A vs. Name B) on only 325 examples...
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 4: Performance of off-the-shelf resolvers on the GAP development set, split by Masculine and Feminine (Bias shows F/M), and Overall. Bold indicates best performance.
['[EMPTY]', 'M', 'F', 'B', 'O']
[['Lee et\xa0al. ( 2013 )', '55.4', '45.5', '[ITALIC] 0.82', '50.5'], ['Clark and Manning', '58.5', '51.3', '[ITALIC] 0.88', '55.0'], ['Wiseman et\xa0al.', '[BOLD] 68.4', '59.9', '[ITALIC] 0.88', '64.2'], ['Lee et\xa0al. ( 2017 )', '67.2', '[BOLD] 62.2', '[ITALIC] [BOLD] 0.92', '[BOLD] 64.7']]
We note particularly the large difference in performance between genders, [CONTINUE] Both cues yield strong baselines comparable to the strongest OntoNotes-trained systems (cf. Table 4). In fact, Lee et al. (2017) and PARALLELISM produce remarkably similar output: of the 2000 example pairs in the development set, the t...
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 7: Performance of our baselines on the development set in the gold-two-mention task (access to the two candidate name spans). Parallelism+URL tests the page-context setting; all other test the snippet-context setting. Bold indicates best performance in each setting.
['[EMPTY]', 'M', 'F', 'B', 'O']
[['Random', '47.5', '50.5', '[ITALIC] 1.06', '49.0'], ['Token Distance', '50.6', '47.5', '[ITALIC] 0.94', '49.1'], ['Topical Entity', '50.2', '47.3', '[ITALIC] 0.94', '48.8'], ['Syntactic Distance', '66.7', '66.7', '[ITALIC] [BOLD] 1.00', '66.7'], ['Parallelism', '[BOLD] 69.3', '[BOLD] 69.2', '[ITALIC] [BOLD] 1.00', ...
RANDOM is indeed closer here to the expected 50% and other baselines are closer to gender-parity. [CONTINUE] TOKEN DISTANCE and TOPICAL ENTITY are only weak improvements above RANDOM, [CONTINUE] Further, the cues are markedly gender-neutral, improving the Bias metric by 9% in the standard task formulation and to parity...
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 8: Coreference signal of a Transformer model on the validation dataset, by encoder attention layer and head.
['Head \ Layer', 'L0', 'L1', 'L2', 'L3', 'L4', 'L5']
[['H0', '46.9', '47.4', '45.8', '46.2', '45.8', '45.7'], ['H1', '45.3', '46.5', '46.4', '46.2', '49.4', '46.3'], ['H2', '45.8', '46.7', '46.3', '46.5', '45.7', '45.9'], ['H3', '46.0', '46.3', '46.8', '46.0', '46.6', '48.0'], ['H4', '45.7', '46.3', '46.5', '47.8', '45.1', '47.0'], ['H5', '4...
Consistent with the observations by Vaswani et al. (2017), we observe that the coreference signal is localized on specific heads and that these heads are in the deep layers of the network (e.g. L3H7).
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 9: Comparison of the predictions of the Parallelism and Transformer-Single heuristics over the GAP development dataset.
['[EMPTY]', '[EMPTY]', 'Parallelism Correct', 'Parallelism Incorrect']
[['Transf.', 'Correct', '48.7%', '13.4%'], ['Transf.', 'Incorrect', '21.6%', '16.3%']]
We find that the instances of coreference that TRANSFORMER-SINGLE can handle are substantially
Effective Attention Modeling for Neural Relation Extraction
1912.03832v1
Table 3: Performance comparison of our model with different values of m on the two datasets.
['[ITALIC] m', 'NYT10 Prec.', 'NYT10 Rec.', 'NYT10 F1', 'NYT11 Prec.', 'NYT11 Rec.', 'NYT11 F1']
[['1', '0.541', '0.595', '[BOLD] 0.566', '0.495', '0.621', '0.551'], ['2', '0.521', '0.597', '0.556', '0.482', '0.656', '0.555'], ['3', '0.490', '0.617', '0.547', '0.509', '0.633', '0.564'], ['4', '0.449', '0.623', '0.522', '0.507', '0.652', '[BOLD] 0.571'], ['5', '0.467', '0.609', '0.529', '0.488', '0.677', '0.567']]
We investigate the effects of the multi-factor count (m) in our final model on the test datasets in Table 3. We observe that for the NYT10 dataset, m = {1, 2, 3} gives good performance with m = 1 achieving the highest F1 score. On the NYT11 dataset, m = 4 gives the best performance. These experiments show that the numb...
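The F1 column is the usual harmonic mean of precision and recall; recomputing the m=1 NYT10 row shows the relationship (the last digit can differ from the table because P and R are themselves rounded):

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# m=1 row on NYT10: P=0.541, R=0.595 -> rounds to 0.567 (table: 0.566)
recomputed = f1(0.541, 0.595)
```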
Effective Attention Modeling for Neural Relation Extraction
1912.03832v1
Table 2: Performance comparison of different models on the two datasets. * denotes a statistically significant improvement over the previous best state-of-the-art model with p<0.01 under the bootstrap paired t-test. † denotes the previous best state-of-the-art model.
['Model', 'NYT10 Prec.', 'NYT10 Rec.', 'NYT10 F1', 'NYT11 Prec.', 'NYT11 Rec.', 'NYT11 F1']
[['CNN zeng2014relation', '0.413', '0.591', '0.486', '0.444', '0.625', '0.519'], ['PCNN zeng2015distant', '0.380', '[BOLD] 0.642', '0.477', '0.446', '0.679', '0.538†'], ['EA huang2016attention', '0.443', '0.638', '0.523†', '0.419', '0.677', '0.517'], ['BGWA jat2018attention', '0.364', '0.632', '0.462', '0.417', '[BOLD]...
We present the results of our final model on the relation extraction task on the two datasets in Table 2. Our model outperforms the previous state-of-the-art models on both datasets in terms of F1 score. On the NYT10 dataset, it achieves a 4.3% higher F1 score compared to the previous best state-of-the-art model EA. Simil...
Effective Attention Modeling for Neural Relation Extraction
1912.03832v1
Table 4: Effectiveness of model components (m=4) on the NYT11 dataset.
['[EMPTY]', 'Prec.', 'Rec.', 'F1']
[['(A1) BiLSTM-CNN', '0.473', '0.606', '0.531'], ['(A2) Standard attention', '0.466', '0.638', '0.539'], ['(A3) Window size ( [ITALIC] ws)=5', '0.507', '0.652', '[BOLD] 0.571'], ['(A4) Window size ( [ITALIC] ws)=10', '0.510', '0.640', '0.568'], ['(A5) Softmax', '0.490', '0.658', '0.562'], ['(A6) Max-pool', '0.492', '0....
We include the ablation results on the NYT11 dataset in Table 4. When we add multi-factor attention to the baseline BiLSTM-CNN model without the dependency distance-based weight factor in the attention mechanism, we get 0.8% F1 score improvement (A2−A1). Adding the dependency weight factor with a window size of 5 impro...
Zero-Shot Grounding of Objects from Natural Language Queries
1908.07129v1
Table 3: Category-wise performance with the default split of Flickr30k Entities.
['Method', 'Overall', 'people', 'clothing', 'bodyparts', 'animals', 'vehicles', 'instruments', 'scene', 'other']
[['QRC - VGG(det)', '60.21', '75.08', '55.9', '20.27', '73.36', '68.95', '45.68', '65.27', '38.8'], ['CITE - VGG(det)', '61.89', '[BOLD] 75.95', '58.50', '30.78', '[BOLD] 77.03', '[BOLD] 79.25', '48.15', '58.78', '43.24'], ['ZSGNet - VGG (cls)', '60.12', '72.52', '60.57', '38.51', '63.61', '64.47', '49.59', '64.66', '4...
For Flickr30k we also note entity-wise accuracy in Table 3 and compare against [7, 34]. [CONTINUE] As these models use object detectors pretrained on Pascal-VOC, they have somewhat higher performance on classes that are common to both Flickr30k and Pascal-VOC ("animals", "people" and "vehicles"). However, on the class...
Zero-Shot Grounding of Objects from Natural Language Queries
1908.07129v1
Table 2: Comparison of our model with other state of the art methods. We denote those networks which use classification weights from ImageNet [41] using “cls” and those networks which use detection weights from Pascal VOC [12] using “det”. The reported numbers are all Accuracy@IoU=0.5 or equivalently Recall@1. Models m...
['Method', 'Net', 'Flickr30k', 'ReferIt']
[['SCRC ', 'VGG', '27.8', '17.9'], ['GroundeR (cls) ', 'VGG', '42.43', '24.18'], ['GroundeR (det) ', 'VGG', '48.38', '28.5'], ['MCB (det) ', 'VGG', '48.7', '28.9'], ['Li (cls) ', 'VGG', '-', '40'], ['QRC* (det) ', 'VGG', '60.21', '44.1'], ['CITE* (cls) ', 'VGG', '61.89', '34.13'], ['QRG* (det)', 'VGG', '60.1', '-'], ['...
Table 2 compares ZSGNet with prior works on Flickr30k Entities and ReferIt. We use "det" and "cls" to denote models using Pascal VOC detection weights and ImageNet [10, 41] classification weights. Networks marked [CONTINUE] with "*" fine-tune their object detector pretrained on Pascal-VOC on the fixed entities of Fl...
Zero-Shot Grounding of Objects from Natural Language Queries
1908.07129v1
Table 4: Accuracy across various unseen splits. For Flickr-Split-0,1 we use Accuracy with IoU threshold of 0.5. Since Visual Genome annotations are noisy we additionally report Accuracy with IoU threshold of 0.3. The second row denotes the IoU threshold at which the Accuracy is calculated. “B” and “UB” denote the balan...
['Method', 'Net', 'Flickr- Split-0', 'Flickr- Split-1', 'VG-2B 0.3', 'VG-2B 0.5', 'VG-2UB 0.3', 'VG-2UB 0.5', 'VG-3B 0.3', 'VG-3B 0.5', 'VG-3UB 0.3', 'VG-3UB 0.5']
[['QRG', 'VGG', '35.62', '24.42', '13.17', '7.64', '12.39', '7.15', '14.21', '8.35', '13.03', '7.52'], ['ZSGNet', 'VGG', '39.32', '29.35', '17.09', '11.02', '16.48', '10.55', '17.63', '11.42', '17.35', '10.97'], ['ZSGNet', 'Res50', '[BOLD] 43.02', '[BOLD] 31.23', '[BOLD] 19.95', '[BOLD] 12.90', '[BOLD] 19.12', '[BOLD] ...
Table 4 shows the performance of our ZSGNet model compared to QRG on the four unseen splits [CONTINUE] Across all splits, ZSGNet shows 4−8% higher performance than QRG even though the latter has seen more data [CONTINUE] we observe that the accuracy obtained on Flickr-Split-0,1 are higher than the VG-split likely due t...
Zero-Shot Grounding of Objects from Natural Language Queries
1908.07129v1
Table 6: Ablation study: BM=Base Model, softmax means we classify only one candidate box as foreground, BCE = Binary Cross Entropy means we classify each candidate box as the foreground or background, FL = Focal Loss, Img-Resize: use images of dimension 600×600
['Model', 'Accuracy on RefClef']
[['BM + Softmax', '48.54'], ['BM + BCE', '55.20'], ['BM + FL', '57.13'], ['BM + FL + Img-Resize', '[BOLD] 61.75']]
We show the performance of our model with different loss functions using the base model of ZSGNet on the validation set of ReferIt in Table 6. [CONTINUE] Note that using softmax loss by itself places us higher than the previous methods. [CONTINUE] Further, using Binary Cross-Entropy loss and Focal loss gives a signific...
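Table 6's BCE-vs-FL contrast comes from how focal loss down-weights easy examples. A per-box sketch (the alpha balancing term is omitted; gamma=0 recovers plain binary cross-entropy):

```python
import math

def focal_loss(p, y, gamma=2.0):
    """Focal loss for one candidate box; p is the predicted foreground
    probability, y is 1 for foreground and 0 for background.
    gamma=0.0 recovers plain binary cross-entropy."""
    pt = p if y == 1 else 1.0 - p
    return -((1.0 - pt) ** gamma) * math.log(pt)

# an easy background box (p=0.1, y=0) contributes far less under FL:
easy_bce = focal_loss(0.1, 0, gamma=0.0)
easy_fl = focal_loss(0.1, 0, gamma=2.0)
```

With most candidate boxes being easy background, this re-weighting keeps the many easy negatives from swamping the loss, consistent with the BM+FL gain in the table.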
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 4: Test BLEU for en-de adaptive training, with sequential adaptation to a third task. EWC-tuned models give the best performance on each domain.
['[EMPTY]', '[BOLD] Training scheme', '[BOLD] News', '[BOLD] TED', '[BOLD] IT']
[['1', 'News', '37.8', '25.3', '35.3'], ['2', 'TED', '23.7', '24.1', '14.4'], ['3', 'IT', '1.6', '1.8', '39.6'], ['4', 'News and TED', '38.2', '25.5', '35.4'], ['5', '1 then TED, No-reg', '30.6', '[BOLD] 27.0', '22.1'], ['6', '1 then TED, L2', '37.9', '26.7', '31.8'], ['7', '1 then TED, EWC', '[BOLD] 38.3', '[BOLD] 27....
In the en-de News/TED task (Table 4), all fine-tuning schemes give similar improvements on TED. However, EWC outperforms no-reg and L2 on News, not only reducing forgetting but giving 0.5 BLEU improvement over the baseline News model. [CONTINUE] The IT task is very small: training on IT data alone results in over-fitti...
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 3: Test BLEU for es-en adaptive training. EWC reduces forgetting compared to other fine-tuning methods, while offering the greatest improvement on the new domain.
['[EMPTY]', '[BOLD] Training scheme', '[BOLD] Health', '[BOLD] Bio']
[['1', 'Health', '[BOLD] 35.9', '33.1'], ['2', 'Bio', '29.6', '36.1'], ['3', 'Health and Bio', '35.8', '37.2'], ['4', '1 then Bio, No-reg', '30.3', '36.6'], ['5', '1 then Bio, L2', '35.1', '37.3'], ['6', '1 then Bio, EWC', '35.2', '[BOLD] 37.8']]
For es-en, the Health and Bio tasks overlap, but catastrophic forgetting still occurs under no-reg (Table 3). Regularization reduces forgetting and allows further improvements on Bio over no-reg fine-tuning. We find EWC outperforms the L2 approach
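EWC adds a quadratic penalty weighted by diagonal Fisher estimates, so parameters the old domain relies on move less during fine-tuning; with a uniform Fisher it degenerates to the L2 scheme. A toy sketch with made-up values:

```python
def ewc_penalty(theta, theta_star, fisher, lam):
    """(lam/2) * sum_i F_i * (theta_i - theta*_i)^2, where theta* are the
    old-domain parameters and F_i their diagonal Fisher estimates."""
    return 0.5 * lam * sum(f * (t - ts) ** 2
                           for f, t, ts in zip(fisher, theta, theta_star))

# moving a high-Fisher parameter is punished; a low-Fisher one is nearly free
pen = ewc_penalty([1.0, 2.0], [0.5, 2.0], fisher=[4.0, 1.0], lam=2.0)
```

The Fisher weighting is what lets EWC reduce forgetting on Health while still improving on Bio, where plain L2 penalizes all parameter movement equally.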
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 5: Test BLEU for 2-model es-en and 3-model en-de unadapted model ensembling, compared to oracle unadapted model chosen if test domain is known. Uniform ensembling generally underperforms the oracle, while BI+IS outperforms the oracle.
['[BOLD] Decoder configuration', '[BOLD] es-en [BOLD] Health', '[BOLD] es-en [BOLD] Bio', '[BOLD] en-de [BOLD] News', '[BOLD] en-de [BOLD] TED', '[BOLD] en-de [BOLD] IT']
[['Oracle model', '35.9', '36.1', '37.8', '24.1', '39.6'], ['Uniform', '33.1', '36.4', '21.9', '18.4', '38.9'], ['Identity-BI', '35.0', '36.6', '32.7', '25.3', '42.6'], ['BI', '35.9', '36.5', '38.0', '26.1', '[BOLD] 44.7'], ['IS', '[BOLD] 36.0', '36.8', '37.5', '25.6', '43.3'], ['BI + IS', '[BOLD] 36.0', '[BOLD] 36.9',...
Table 5 shows improvements on data without domain labelling using our adaptive decoding schemes with unadapted models trained only on one domain [CONTINUE] Uniform ensembling under-performs all oracle models except es-en Bio, especially on general domains. Identity-BI strongly improves over uniform ensembling, and BI w...
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 6: Test BLEU for 2-model es-en and 3-model en-de model ensembling for models adapted with EWC, compared to oracle model last trained on each domain, chosen if test domain is known. BI+IS outperforms uniform ensembling and in some cases outperforms the oracle.
['[BOLD] Decoder configuration', '[BOLD] es-en [BOLD] Health', '[BOLD] es-en [BOLD] Bio', '[BOLD] en-de [BOLD] News', '[BOLD] en-de [BOLD] TED', '[BOLD] en-de [BOLD] IT']
[['Oracle model', '35.9', '37.8', '37.8', '27.0', '57.0'], ['Uniform', '36.0', '36.4', '[BOLD] 38.9', '26.0', '43.5'], ['BI + IS', '[BOLD] 36.2', '[BOLD] 38.0', '38.7', '[BOLD] 26.1', '[BOLD] 56.4']]
In Table 6 we apply the best adaptive decoding scheme, BI+IS, to models fine-tuned with EWC. [CONTINUE] EWC models perform well over multiple domains, so the improvement over uniform ensembling is less striking than for unadapted models. Nevertheless adaptive decoding improves over both uniform ensembling and the oracl...
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 7: Total BLEU for test data concatenated across domains. Results from 2-model es-en and 3-model en-de ensembles, compared to oracle model chosen if test domain is known. No-reg uniform corresponds to the approach of Freitag and Al-Onaizan (2016). BI+IS performs similarly to strong oracles with no test domain labe...
['[BOLD] Language pair', '[BOLD] Model type', '[BOLD] Oracle model', '[BOLD] Decoder configuration [BOLD] Uniform', '[BOLD] Decoder configuration [BOLD] BI + IS']
[['es-en', 'Unadapted', '36.4', '34.7', '36.6'], ['es-en', 'No-reg', '36.6', '34.8', '-'], ['es-en', 'EWC', '37.0', '36.3', '[BOLD] 37.2'], ['en-de', 'Unadapted', '36.4', '26.8', '38.8'], ['en-de', 'No-reg', '41.7', '31.8', '-'], ['en-de', 'EWC', '42.1', '38.6', '[BOLD] 42.0']]
Uniform no-reg ensembling outperforms unadapted uniform ensembling, since fine-tuning gives better in-domain performance. [CONTINUE] BI+IS decoding with single-domain trained models achieves gains over both the naive uniform approach and over oracle single-domain models. BI+IS with EWC-adapted models gives a 0.9 / 3.4 ...
Filling Conversation Ellipsis for Better Social Dialog Understanding
1911.10776v1
Table 8: Semantic role labeling results. Hybrid-EL-CMP1 represents rule-based model and Hybrid-EL-CMP2 represents probability-based model.
['[BOLD] Model', '[BOLD] Prec.(%)', '[BOLD] Rec.(%)', '[BOLD] F1(%)']
[['EL', '96.02', '81.89', '88.39'], ['CMP', '86.39', '[BOLD] 88.64', '87.50'], ['Hybrid-EL-CMP1', '[BOLD] 97.42', '84.70', '90.62'], ['Hybrid-EL-CMP2', '95.82', '86.42', '[BOLD] 90.87']]
Our results in Table 8 show that when only using original utterances with ellipsis, precision is relatively high while recall is low.
Filling Conversation Ellipsis for Better Social Dialog Understanding
1911.10776v1
Table 6: Dialog act prediction performance using different selection methods.
['[BOLD] Selection Method', '[BOLD] Prec.(%)', '[BOLD] Rec.(%)', '[BOLD] F1(%)']
[['Max Logits', '80.19', '80.50', '79.85'], ['Add Logits', '81.30', '81.28', '80.85'], ['Add Logits+Expert', '[BOLD] 81.30', '[BOLD] 81.41', '[BOLD] 80.90'], ['Concat Hidden', '80.24', '80.04', '79.65'], ['Max Hidden', '80.30', '80.04', '79.63'], ['Add Hidden', '80.82', '80.28', '80.08']]
We can see from Table 6 that empirically adding logits from two models after classifiers performs the best.
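"Add Logits" selection sums the two classifiers' pre-softmax outputs and takes the argmax (summing logits corresponds to multiplying the two softmax distributions, up to normalization). A sketch with hypothetical three-class logits:

```python
def add_logits_predict(logits_a, logits_b):
    """Ensemble two classifiers by summing their logits, then argmax."""
    summed = [a + b for a, b in zip(logits_a, logits_b)]
    return max(range(len(summed)), key=summed.__getitem__)

# model A mildly prefers class 0; model B strongly prefers class 2
pred = add_logits_predict([2.0, 1.0, 1.5], [0.1, 0.2, 4.0])
```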
Towards Scalable and Reliable Capsule Networksfor Challenging NLP Applications
1906.02829v1
Table 6: Experimental results on TREC QA dataset.
['Method', 'MAP', 'MRR']
[['CNN + LR (unigram)', '54.70', '63.29'], ['CNN + LR (bigram)', '56.93', '66.13'], ['CNN', '66.91', '68.80'], ['CNTN', '65.80', '69.78'], ['LSTM (1 layer)', '62.04', '66.85'], ['LSTM', '59.75', '65.33'], ['MV-LSTM', '64.88', '68.24'], ['NTN-LSTM', '63.40', '67.72'], ['HD-LSTM', '67.44', '<bold>75.11</bold>'], ['Capsul...
In Table 6, the best performance on MAP metric is achieved by our approach, which verifies the effectiveness of our model. We also observe that our approach exceeds traditional neural models like CNN, LSTM and NTNLSTM by a noticeable margin.
Towards Scalable and Reliable Capsule Networksfor Challenging NLP Applications
1906.02829v1
Table 2: Comparisons of our NLP-Cap approach and baselines on two text classification benchmarks, where ’-’ denotes methods that failed to scale due to memory issues.
['<bold>Datasets</bold>', '<bold>Metrics</bold>', '<bold>FastXML</bold>', '<bold>PD-Sparse</bold>', '<bold>FastText</bold>', '<bold>Bow-CNN</bold>', '<bold>CNN-Kim</bold>', '<bold>XML-CNN</bold>', '<bold>Cap-Zhao</bold>', '<bold>NLP-Cap</bold>', '<bold>Impv</bold>']
[['RCV1', 'PREC@1', '94.62', '95.16', '95.40', '96.40', '93.54', '96.86', '96.63', '<bold>97.05</bold>', '+0.20%'], ['RCV1', 'PREC@3', '78.40', '79.46', '79.96', '81.17', '76.15', '81.11', '81.02', '<bold>81.27</bold>', '+0.20%'], ['RCV1', 'PREC@5', '54.82', '55.61', '55.64', '<bold>56.74</bold>', '52.94', '56.07', '56...
In Table 2, we can see a noticeable margin brought by our capsule-based approach over the strong baselines on EUR-Lex, and competitive results on RCV1. These results appear to indicate that our approach has superior generalization ability on datasets with fewer training examples, i.e., RCV1 has 729.67 examples per labe...
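PREC@k in Table 2 is the precision of the top-k ranked labels against the gold label set, averaged over documents. For a single document, with hypothetical labels:

```python
def prec_at_k(ranked_labels, gold, k):
    """Fraction of the top-k ranked labels that appear in the gold label set."""
    return sum(1 for lab in ranked_labels[:k] if lab in gold) / k

ranked = ["econ", "trade", "sport"]   # hypothetical labels, sorted by score
gold = {"econ", "sport"}
```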
Suggestion Mining from Online Reviews using ULMFiT
1904.09076v1
Table 1: Dataset Distribution for Sub Task A - Task 9: Suggestion Mining from Online Reviews.
['[BOLD] Label', '[BOLD] Train', '[BOLD] Trial']
[['[BOLD] Suggestion', '2085', '296'], ['[BOLD] Non Suggestion', '6415', '296']]
As evident from Table 1, there is a significant imbalance in the distribution of training instances that are suggestions and non-suggestions. [CONTINUE] For Sub Task A, the organizers shared a training and a validation dataset whose label distribution (suggestion or a non-suggestion) is pres...
Suggestion Mining from Online Reviews using ULMFiT
1904.09076v1
Table 3: Performance of different models on the provided train and test dataset for Sub Task A.
['[BOLD] Model', '[BOLD] F1 (train)', '[BOLD] F1 (test)']
[['[BOLD] Multinomial Naive Bayes (using Count Vectorizer)', '0.641', '0.517'], ['[BOLD] Logistic Regression (using Count Vectorizer)', '0.679', '0.572'], ['[BOLD] SVM (Linear Kernel) (using TfIdf Vectorizer)', '0.695', '0.576'], ['[BOLD] LSTM (128 LSTM Units)', '0.731', '0.591'], ['[BOLD] Provided Baseline', '0.720', ...
Table 3 shows the performances of all the models that we trained on the provided training dataset. [CONTINUE] The ULMFiT model achieved the best results with an F1-score of 0.861 on the training dataset and an F1-score of 0.701 on the test dataset.
Suggestion Mining from Online Reviews using ULMFiT
1904.09076v1
Table 4: Best performing models for SemEval Task 9: Sub Task A.
['[BOLD] Ranking', '[BOLD] Team Name', '[BOLD] Performance (F1)']
[['[BOLD] 1', 'OleNet', '0.7812'], ['[BOLD] 2', 'ThisIsCompetition', '0.7778'], ['[BOLD] 3', 'm_y', '0.7761'], ['[BOLD] 4', 'yimmon', '0.7629'], ['[BOLD] 5', 'NTUA-ISLab', '0.7488'], ['[BOLD] 10', '[BOLD] MIDAS (our team)', '[BOLD] 0.7011*']]
Table 4 shows the performance of the top 5 models for Sub Task A of SemEval 2019 Task 9. Our team ranked 10th out of 34 participants.
Unpaired Speech Enhancement by Acoustic and Adversarial Supervision for Speech Recognition
1811.02182v1
TABLE III: WERs (%) of obtained using different training data of CHiME-4
['Method', 'Training Data', 'Test WER (%) simulated', 'Test WER (%) real']
[['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', 'simulated', '26.1', '25.2'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', 'real', '37.3', '35.2'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', 'simulated + real', '25.9', '24.7'], ['FSEGAN', 'simulated', '29.1', '29.6']]
Table III shows the WERs on the simulated and real test sets when AAS is trained with different training data. With the simulated dataset as the training data, FSEGAN (29.6%) does not generalize well compared to AAS (25.2%) in terms of WER. With the real dataset as the training data, AAS shows severe overfitting since ...
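WER, the metric reported throughout these tables, is the word-level edit distance between hypothesis and reference divided by the reference length. A minimal sketch:

```python
def wer(ref, hyp):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))  # rolling DP row over hypothesis positions
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            cur = min(d[j] + 1,            # deletion
                      d[j - 1] + 1,        # insertion
                      prev + (rw != hw))   # substitution / match
            prev, d[j] = d[j], cur
    return d[-1] / len(r)
```

For example, `wer("the cat sat", "a cat sat down")` counts one substitution and one insertion against three reference words.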
Unpaired Speech Enhancement by Acoustic and Adversarial Supervision for Speech Recognition
1811.02182v1
TABLE I: WERs (%) and DCE of different speech enhancement methods on Librispeech + DEMAND test set
['Method', 'WER (%)', 'DCE']
[['No enhancement', '17.3', '0.828'], ['Wiener filter', '19.5', '0.722'], ['Minimizing DCE', '15.8', '[BOLD] 0.269'], ['FSEGAN', '14.9', '0.291'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=0)', '15.6', '0.330'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', '[BOLD] 14.4', '0.303'], ['Clean speech', '5.7', '0.0']]
Tables I [CONTINUE] and II show the WER and DCE (normalized by the number of frames) on the test sets of Librispeech + DEMAND and CHiME-4. The Wiener filtering method shows lower DCE but higher WER than no enhancement. We conjecture that the Wiener filter removes some fraction of the noise; however, the remaining speech is distorted ...
Unpaired Speech Enhancement by Acoustic and Adversarial Supervision for Speech Recognition
1811.02182v1
TABLE II: WERs (%) and DCE of different speech enhancement methods on CHiME4-simulated test set
['Method', 'WER (%)', 'DCE']
[['No enhancement', '38.4', '0.958'], ['Wiener filter', '41.0', '0.775'], ['Minimizing DCE', '31.1', '[BOLD] 0.392'], ['FSEGAN', '29.1', '0.421'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=0)', '27.7', '0.476'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', '[BOLD] 26.1', '0.462'], ['Clean speech', '9.3', '0.0']]
Tables I [CONTINUE] and II show the WER and DCE (normalized by the number of frames) on the test sets of Librispeech + DEMAND and CHiME-4. The Wiener filtering method shows lower DCE but higher WER than no enhancement. We conjecture that the Wiener filter removes some fraction of the noise; however, the remaining speech is distorted ...
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE IV: Results investigating RQ1 on the Nepal and Kerala datasets. (b) Kerala
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '56.25%', '37.17%', '55.71%', '44.33%'], ['Manual', '65.00%', '47.82%', '[BOLD] 55.77%', '50.63%'], ['Wiki', '63.25%', '42.07%', '46.67%', '44.00%'], ['Local-Manual', '64.50%', '46.90%', '51.86%', '48.47%'], ['Wiki-Manual', '62.25%', '43.56%', '52.63%', '46.93%'], ['Wiki-Manual', '[BOLD] 68.75%∗∗∗', '51.04%'...
Concerning the transfer learning experiments (RQ2), we note that the source-domain embedding model can improve performance for the target model, and upsampling has a generally positive effect (Tables V-VIII). As expected, transfer learning [CONTINUE] Table VII, a result not found to be significant even at the 90% level). Our a...
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE II: Details on datasets used for experiments.
['Dataset', 'Unlabeled / Labeled Messages', 'Urgent / Non-urgent Messages', 'Unique Tokens', 'Avg. Tokens / Message', 'Time Range']
[['Nepal', '6,063/400', '201/199', '1,641', '14', '04/05/2015-05/06/2015'], ['Macedonia', '0/205', '92/113', '129', '18', '09/18/2018-09/21/2018'], ['Kerala', '92,046/400', '125/275', '19,393', '15', '08/17/2018-08/22/2018']]
For evaluating the approaches laid out in Section IV, we consider three real-world datasets described in Table II. [CONTINUE] Originally, all the raw messages for the datasets described in Table II were unlabeled, in that their urgency status was [CONTINUE] unknown. Since the Macedonia dataset only contains 205 message...
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE IV: Results investigating RQ1 on the Nepal and Kerala datasets. (a) Nepal
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '63.97%', '64.27%', '64.50%', '63.93%'], ['Manual', '64.25%', '[BOLD] 70.84%∗∗', '48.50%', '57.11%'], ['Wiki', '67.25%', '66.51%', '69.50%', '67.76%'], ['Local-Manual', '65.75%', '67.96%', '59.50%', '62.96%'], ['Wiki-Local', '67.40%', '65.54%', '68.50%', '66.80%'], ['Wiki-Manual', '67.75%', '70.38%', '63.00%...
Table IV illustrates the results for RQ1 on the Nepal and Kerala datasets. The results demonstrate the viability of urgency detection in low-supervision settings (with our approach yielding 69.44% F-Measure on Nepal, at 99% significance compared to the Local baseline), with different feature sets contributing differently ...
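The RQ1 tables report precision, recall, and F-Measure. A minimal sketch of the F1 harmonic mean; note the tables likely macro-average over per-class scores, so plugging a single precision/recall pair into this formula need not reproduce the reported numbers exactly:

```python
def f_measure(precision, recall):
    """F1: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```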
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE V: Results investigating RQ2 using the Nepal dataset as source and Macedonia dataset as target.
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '58.76%', '52.96%', '59.19%', '54.95%'], ['Transform', '58.62%', '51.40%', '[BOLD] 60.32%∗', '55.34%'], ['Upsample', '59.38%', '52.35%', '57.58%', '54.76%'], ['[ITALIC] Our Approach', '[BOLD] 61.79%∗', '[BOLD] 55.08%', '59.19%', '[BOLD] 56.90%']]
Concerning the transfer learning experiments (RQ2), we note that the source domain embedding model can improve the performance of the target model, and that upsampling has a generally positive effect (Tables V-VIII).
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE VI: Results investigating RQ2 using the Kerala dataset as source and Macedonia dataset as target.
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '58.76%', '52.96%', '59.19%', '54.95%'], ['Transform', '62.07%', '55.45%', '64.52%', '59.09%'], ['Upsample', '[BOLD] 64.90%∗∗∗', '[BOLD] 57.98%∗', '[BOLD] 65.48%∗∗∗', '[BOLD] 61.30%∗∗∗'], ['[ITALIC] Our Approach', '62.90%', '56.28%', '62.42%', '58.91%']]
uncertain in low-supervision settings. Concerning the transfer learning experiments (RQ2), we note that the source domain embedding model can improve the performance of the target model, and that upsampling has a generally positive effect (Tables V-VIII). As expected, transfer learning [CONTINUE] supervision urgency detection on a sin...
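The RQ2 rows compare an "Upsample" variant. A hedged sketch of one common upsampling scheme, duplicating minority-class examples until the classes balance; the paper's exact procedure may differ:

```python
import random

def upsample(examples, labels, seed=0):
    """Duplicate randomly chosen minority-class examples until every
    class has as many examples as the largest one."""
    rng = random.Random(seed)
    by_label = {}
    for x, y in zip(examples, labels):
        by_label.setdefault(y, []).append(x)
    target = max(len(xs) for xs in by_label.values())
    out = []
    for y, xs in by_label.items():
        padded = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out.extend((x, y) for x in padded)
    rng.shuffle(out)
    return out
```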
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE VII: Results investigating RQ2 using the Nepal dataset as source and Kerala dataset as target.
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '58.65%', '[BOLD] 42.40%', '47.47%', '36.88%'], ['Transform', '53.74%', '32.89%', '[BOLD] 57.47%∗', '41.42%'], ['Upsample', '53.88%', '31.71%', '56.32%', '40.32%'], ['[ITALIC] Our Approach', '[BOLD] 58.79%', '35.26%', '55.89%', '[BOLD] 43.03%∗']]
uncertain in low-supervision settings. Concerning the transfer learning experiments (RQ2), we note that the source domain embedding model can improve the performance of the target model, and that upsampling has a generally positive effect (Tables V-VIII). As expected, transfer learning [CONTINUE] The best F-Measure achieved on Nepal ...
One-to-X analogical reasoning on word embeddings: a case for diachronic armed conflict prediction from news texts
1907.12674v1
Table 4: Average synchronic performance
['[EMPTY]', '[BOLD] Algorithm', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1']
[['Giga', 'Baseline', '0.28', '0.74', '0.41'], ['Giga', 'Threshold', '0.60', '0.69', '[BOLD] 0.63'], ['NOW', 'Baseline', '0.39', '0.88', '0.53'], ['NOW', 'Threshold', '0.50', '0.77', '[BOLD] 0.60']]
As a sanity check, we also evaluated it synchronically, that is when Tn and rn are tested on the locations from the same year (including peaceful ones). In this easier setup, we observed exactly the same trends (Table 4).
One-to-X analogical reasoning on word embeddings: a case for diachronic armed conflict prediction from news texts
1907.12674v1
Table 2: Average recall of diachronic analogy inference
['[BOLD] Dataset', '[BOLD] @1', '[BOLD] @5', '[BOLD] @10']
[['Gigaword', '0.356', '0.555', '0.610'], ['NOW', '0.442', '0.557', '0.578']]
A replication experiment: In Table 2 we replicate the experiments from Kutuzov et al. (2017) on both sets. It follows their evaluation scheme, where only the presence of the correct armed group name in the k nearest neighbours of î mattered, and only conflict areas were present in the yearly test sets. Essentially...
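The replication uses recall@k: a prediction counts as correct if the gold armed group name appears among the k nearest neighbours of the predicted vector. A sketch assuming cosine similarity over row-normalised embeddings (helper name and interface are illustrative, not the authors' code):

```python
import numpy as np

def recall_at_k(pred_vectors, gold_indices, embedding_matrix, k=5):
    """Fraction of test items whose gold word index is among the k
    nearest neighbours (by cosine similarity) of the predicted vector."""
    # Normalise rows so the dot product equals cosine similarity.
    emb = embedding_matrix / np.linalg.norm(embedding_matrix, axis=1, keepdims=True)
    preds = pred_vectors / np.linalg.norm(pred_vectors, axis=1, keepdims=True)
    sims = preds @ emb.T
    topk = np.argsort(-sims, axis=1)[:, :k]
    hits = [gold in row for gold, row in zip(gold_indices, topk)]
    return float(np.mean(hits))
```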
One-to-X analogical reasoning on word embeddings: a case for diachronic armed conflict prediction from news texts
1907.12674v1
Table 3: Average diachronic performance
['[EMPTY]', '[BOLD] Algorithm', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1']
[['Giga', 'Baseline', '0.19', '0.51', '0.28'], ['Giga', 'Threshold', '0.46', '0.41', '[BOLD] 0.41'], ['NOW', 'Baseline', '0.26', '0.53', '0.34'], ['NOW', 'Threshold', '0.42', '0.41', '[BOLD] 0.41']]
Table 3 shows the diachronic performance of our system in the setup where the matrix Tn and the threshold rn are applied to the year n + 1. For both the Gigaword and NOW datasets (and the corresponding embeddings), using the cosine-based threshold decreases recall and increases precision (differences are statistically signif...
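The threshold variant accepts a candidate only if its cosine similarity clears the learned cut-off r_n, which is how precision rises at the cost of recall in Table 3. A minimal sketch (function name hypothetical):

```python
def predict_with_threshold(sim_scores, candidates, r):
    """Keep only candidates whose cosine similarity clears the threshold r.
    An empty result means no armed conflict is predicted for that location."""
    return [c for c, s in zip(candidates, sim_scores) if s >= r]
```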
Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation
1909.00754v2
Table 4: The ablation study on the WoZ2.0 dataset with the joint goal accuracy on the test set. For “- Hierachical-Attn”, we remove the residual connections between the attention modules in the CMR decoders and all the attention memory access are based on the output from the LSTM. For “- MLP”, we further replace the ML...
['[BOLD] Model', '[BOLD] Joint Acc.']
[['COMER', '88.64%'], ['- Hierachical-Attn', '86.69%'], ['- MLP', '83.24%']]
Table 4: The ablation study on the WoZ2.0 dataset with the joint goal accuracy on the test set. [CONTINUE] The effectiveness of our hierarchical attention design is proved by an accuracy drop of 1.95% after removing residual connections and the hierarchical stack of our attention modules.
Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation
1909.00754v2
Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model Mrksic et al. (2017), while the basel...
['[BOLD] DST Models', '[BOLD] Joint Acc. WoZ 2.0', '[BOLD] Joint Acc. MultiWoZ', '[BOLD] ITC']
[['Baselines Mrksic et al. ( 2017 )', '70.8%', '25.83%', '[ITALIC] O( [ITALIC] mn)'], ['NBT-CNN Mrksic et al. ( 2017 )', '84.2%', '-', '[ITALIC] O( [ITALIC] mn)'], ['StateNet_PSI Ren et al. ( 2018 )', '[BOLD] 88.9%', '-', '[ITALIC] O( [ITALIC] n)'], ['GLAD Nouri and Hosseini-Asl ( 2018 )', '88.5%', '35.58%', '[ITALIC] ...
Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability [CONTINUE] Table 3 compares our model with the previous state-of-the-art on both the WoZ2.0 test set and the MultiWoZ test s...
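Joint goal accuracy, the metric in Table 3, counts a dialogue turn as correct only when the entire predicted state matches the gold state. A sketch assuming states are represented as slot-to-value dicts (a common convention, not necessarily the authors' data format):

```python
def joint_goal_accuracy(pred_states, gold_states):
    """A turn is correct only if every slot-value pair matches exactly."""
    correct = sum(p == g for p, g in zip(pred_states, gold_states))
    return correct / len(gold_states)
```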
Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation
1909.00754v2
Table 5: The ablation study on the MultiWoZ dataset with the joint domain accuracy (JD Acc.), joint domain-slot accuracy (JDS Acc.) and joint goal accuracy (JG Acc.) on the test set. For “- moveDrop”, we move the dropout layer to be in front of the final linear layer before the Softmax. For “- postprocess”, we further ...
['[BOLD] Model', '[BOLD] JD Acc.', '[BOLD] JDS Acc.', '[BOLD] JG Acc.']
[['COMER', '95.52%', '55.81%', '48.79%'], ['- moveDrop', '95.34%', '55.08%', '47.19%'], ['- postprocess', '95.53%', '54.74%', '45.72%'], ['- ShareParam', '94.96%', '54.40%', '44.38%'], ['- Order', '95.55%', '55.06%', '42.84%'], ['- Nested', '-', '49.58%', '40.57%'], ['- BlockGrad', '-', '49.36%', '39.15%']]
Table 5: The ablation study on the MultiWoZ dataset with the joint domain accuracy (JD Acc.), joint domain-slot accuracy (JDS Acc.) and joint goal accuracy (JG Acc.) on the test set. [CONTINUE] From Table 5, we can further calculate that, given the correct slot prediction, COMER has an 87.42% chance to make the correct val...
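The 87.42% figure quoted above follows from dividing joint goal accuracy by joint domain-slot accuracy (treating a correct goal as requiring a correct domain-slot prediction):

```python
jds_acc = 0.5581  # joint domain-slot accuracy (Table 5, COMER)
jg_acc = 0.4879   # joint goal accuracy (Table 5, COMER)
# P(correct value | correct domain-slot) under the nesting assumption
print(round(jg_acc / jds_acc * 100, 2))  # → 87.42
```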
Deriving Machine Attention from Human Rationales
1808.09367v1
Table 4: Accuracy of transferring between domains. Models with † use labeled data from source domains and unlabeled data from the target domain. Models with ‡ use human rationales on the target task.
['Source', 'Target', 'Svm', 'Ra-Svm‡', 'Ra-Cnn‡', 'Trans†', 'Ra-Trans‡†', 'Ours‡†', 'Oracle†']
[['Beer look + Beer aroma + Beer palate', 'Hotel location', '78.65', '79.09', '79.28', '80.42', '82.10', '[BOLD] 84.52', '85.43'], ['Beer look + Beer aroma + Beer palate', 'Hotel cleanliness', '86.44', '86.68', '89.01', '86.95', '87.15', '[BOLD] 90.66', '92.09'], ['Beer look + Beer aroma + Beer palate', 'Hotel service'...
Table 4 presents the results of domain transfer using 200 training examples. We use the three aspects of the beer review data together as our source tasks, and the three aspects of the hotel review data as the target. Our model (OURS) shows marked performance improvement. The error reduction over the best baseline is ...
Deriving Machine Attention from Human Rationales
1808.09367v1
Table 3: Accuracy of transferring between aspects. Models with † use labeled data from source aspects. Models with ‡ use human rationales on the target aspect.
['Source', 'Target', 'Svm', 'Ra-Svm‡', 'Ra-Cnn‡', 'Trans†', 'Ra-Trans‡†', 'Ours‡†', 'Oracle†']
[['Beer aroma+palate', 'Beer look', '74.41', '74.83', '74.94', '72.75', '76.41', '[BOLD] 79.53', '80.29'], ['Beer look+palate', 'Beer aroma', '68.57', '69.23', '67.55', '69.92', '76.45', '[BOLD] 77.94', '78.11'], ['Beer look+aroma', 'Beer palate', '63.88', '67.82', '65.72', '74.66', '73.40', '[BOLD] 75.24', '75.50']]
Table 3 summarizes the results of aspect transfer on the beer review dataset. Our model (OURS) obtains substantial gains in accuracy over the baselines across all three target aspects. It closely matches the performance of ORACLE with only 0.40% absolute difference. [CONTINUE] Specifically, all rationale-augmented meth...
Deriving Machine Attention from Human Rationales
1808.09367v1
Table 5: Ablation study on domain transfer from beer to hotel.
['Model', 'Hotel location', 'Hotel cleanliness', 'Hotel service']
[['Ours', '[BOLD] 84.52', '[BOLD] 90.66', '[BOLD] 89.93'], ['w/o L [ITALIC] wd', '82.36', '89.79', '89.61'], ['w/o L [ITALIC] lm', '82.47', '90.05', '89.75']]
Table 5 presents the results of an ablation study of our model in the setting of domain transfer. As this table indicates, both the language modeling objective and the Wasserstein [CONTINUE] distance contribute similarly to the task, with the Wasserstein distance having a bigger impact.
Keyphrase Generation for Scientific Articles using GANs
1909.12229v1
Table 1: Extractive and Abstractive Keyphrase Metrics
['Model', 'Score', 'Inspec', 'Krapivin', 'NUS', 'KP20k']
[['Catseq(Ex)', 'F1@5', '0.2350', '0.2680', '0.3330', '0.2840'], ['[EMPTY]', 'F1@M', '0.2864', '0.3610', '0.3982', '0.3661'], ['catSeq-RL(Ex.)', 'F1@5', '[BOLD] 0.2501', '[BOLD] 0.2870', '[BOLD] 0.3750', '[BOLD] 0.3100'], ['[EMPTY]', 'F1@M', '[BOLD] 0.3000', '0.3630', '[BOLD] 0.4330', '[BOLD] 0.3830'], ['GAN(Ex.)', 'F1...
We compare our proposed approach against two baseline models, catSeq (Yuan et al. 2018) and the RL-based catSeq model (Chan et al. 2019), in terms of F1 scores as explained in Yuan et al. (2018). The results, summarized in Table 1, are broken down in terms of performance on extractive and abstractive keyphrases. For extractive ...
Keyphrase Generation for Scientific Articles using GANs
1909.12229v1
Table 2: α-nDCG@5 metrics
['Model', 'Inspec', 'Krapivin', 'NUS', 'KP20k']
[['Catseq', '0.87803', '0.781', '0.82118', '0.804'], ['Catseq-RL', '0.8602', '[BOLD] 0.786', '0.83', '0.809'], ['GAN', '[BOLD] 0.891', '0.771', '[BOLD] 0.853', '[BOLD] 0.85']]
We also evaluated the models in terms of α-nDCG@5 (Clarke et al. 2008). The results are summarized in Table 2. Our model obtains the best performance on three out of the four datasets. The difference is most prevalent in KP20k, the largest of the four datasets, where our GAN model (at 0.85) is nearly 5% better than bot...
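α-nDCG@5 (Clarke et al. 2008) rewards rankings that cover distinct subtopics early, discounting repeated coverage. A sketch of the α-DCG numerator, assuming each ranked item is labelled with a set of subtopic ids; full α-nDCG divides by the score of the ideal (greedily ordered) ranking, which is omitted here for brevity:

```python
import math

def alpha_dcg(ranked_subtopic_sets, alpha=0.5, k=5):
    """alpha-DCG@k: an item's gain for subtopic t decays by (1 - alpha)
    for every earlier item that already covered t."""
    seen = {}      # subtopic -> how many earlier items covered it
    score = 0.0
    for i, subtopics in enumerate(ranked_subtopic_sets[:k]):
        gain = sum((1 - alpha) ** seen.get(t, 0) for t in subtopics)
        score += gain / math.log2(i + 2)   # rank-position discount
        for t in subtopics:
            seen[t] = seen.get(t, 0) + 1
    return score
```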
Assessing Gender Bias in Machine Translation – A Case Study with Google Translate
1809.02208v4
Table 7: Percentage of female, male and neutral gender pronouns obtained for each of the merged occupation category, averaged over all occupations in said category and tested languages detailed in Table
['Category', 'Female (%)', 'Male (%)', 'Neutral (%)']
[['Service', '10.5', '59.548', '16.476'], ['STEM', '4.219', '71.624', '11.181'], ['Farming / Fishing / Forestry', '12.179', '62.179', '14.744'], ['Corporate', '9.167', '66.042', '14.861'], ['Healthcare', '23.305', '49.576', '15.537'], ['Legal', '11.905', '72.619', '10.714'], ['Arts / Entertainment', '10.36', '67.342', ...
To simplify our dataset, we have decided to focus our work on job positions – which, we believe, are an interesting window into the nature of gender bias – and were able to obtain a comprehensive list of professional occupations from the Bureau of Labor Statistics' detailed occupations table, from the United States D...
Assessing Gender Bias in Machine Translation – A Case Study with Google Translate
1809.02208v4
Table 6: Percentage of female, male and neutral gender pronouns obtained for each BLS occupation category, averaged over all occupations in said category and tested languages detailed in Table
['Category', 'Female (%)', 'Male (%)', 'Neutral (%)']
[['Office and administrative support', '11.015', '58.812', '16.954'], ['Architecture and engineering', '2.299', '72.701', '10.92'], ['Farming, fishing, and forestry', '12.179', '62.179', '14.744'], ['Management', '11.232', '66.667', '12.681'], ['Community and social service', '20.238', '62.5', '10.119'], ['Healthcare s...
What we have found is that Google Translate does indeed translate sentences with male pronouns with greater probability than it does either with female or gender-neutral pronouns, in general. Furthermore, this bias is seemingly aggravated for fields suggested to be troubled by male stereotypes, such as life and physica...
Racial Bias in Hate Speech and Abusive Language Detection Datasets
1905.12516v1
Table 4: Experiment 2, t= “b*tch”
['Dataset', 'Class', 'ˆ [ITALIC] piblack', 'ˆ [ITALIC] piwhite', '[ITALIC] t', '[ITALIC] p', 'ˆ [ITALIC] piblack/ˆ [ITALIC] piwhite']
[['[ITALIC] Waseem and Hovy', 'Racism', '0.010', '0.010', '-0.632', '[EMPTY]', '0.978'], ['[EMPTY]', 'Sexism', '0.963', '0.944', '20.064', '***', '1.020'], ['[ITALIC] Waseem', 'Racism', '0.011', '0.011', '-1.254', '[EMPTY]', '0.955'], ['[EMPTY]', 'Sexism', '0.349', '0.290', '28.803', '***', '1.203'], ['[EMPTY]', 'Racis...
The results for the second variation of Experiment 2 where we conditioned on the word "b*tch" are shown in Table 4. [CONTINUE] We see similar results for Waseem and Hovy (2016) and Waseem (2016). In both cases the classifiers trained upon their data are still more likely to flag black-aligned tweets as sexism. The Wase...
Racial Bias in Hate Speech and Abusive Language Detection Datasets
1905.12516v1
Table 1: Classifier performance
['Dataset', 'Class', 'Precision', 'Recall', 'F1']
[['[ITALIC] W. & H.', 'Racism', '0.73', '0.79', '0.76'], ['[EMPTY]', 'Sexism', '0.69', '0.73', '0.71'], ['[EMPTY]', 'Neither', '0.88', '0.85', '0.86'], ['[ITALIC] W.', 'Racism', '0.56', '0.77', '0.65'], ['[EMPTY]', 'Sexism', '0.62', '0.73', '0.67'], ['[EMPTY]', 'R. & S.', '0.56', '0.62', '0.59'], ['[EMPTY]', 'Neither',...
The performance of these models on the 20% held-out validation data is reported in Table 1. [CONTINUE] Overall we see varying performance across the classifiers, with some performing much [CONTINUE] better out-of-sample than others. In particular, we see that hate speech and harassment are particularly difficult to det...
Racial Bias in Hate Speech and Abusive Language Detection Datasets
1905.12516v1
Table 2: Experiment 1
['Dataset', 'Class', 'ˆ [ITALIC] piblack', 'ˆ [ITALIC] piwhite', '[ITALIC] t', '[ITALIC] p', 'ˆ [ITALIC] piblack/ˆ [ITALIC] piwhite']
[['[ITALIC] Waseem and Hovy', 'Racism', '0.001', '0.003', '-20.818', '***', '0.505'], ['[EMPTY]', 'Sexism', '0.083', '0.048', '101.636', '***', '1.724'], ['[ITALIC] Waseem', 'Racism', '0.001', '0.001', '0.035', '[EMPTY]', '1.001'], ['[EMPTY]', 'Sexism', '0.023', '0.012', '64.418', '***', '1.993'], ['[EMPTY]', 'Racism a...
The results of Experiment 1 are shown in Table 2. [CONTINUE] We observe substantial racial disparities in the performance of all classifiers. In all but one of the comparisons, there are statistically significant (p < 0.001) differences and in all but one of these we see that tweets in the black-aligned corpus are assi...
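The significance stars in these tables come from testing the difference between the black- and white-aligned classification rates. A hedged sketch using a pooled two-proportion z statistic; the paper reports t values, but for samples this large the two statistics are nearly interchangeable:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled standard error."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```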
Racial Bias in Hate Speech and Abusive Language Detection Datasets
1905.12516v1
Table 3: Experiment 2, t= “n*gga”
['Dataset', 'Class', 'ˆ [ITALIC] piblack', 'ˆ [ITALIC] piwhite', '[ITALIC] t', '[ITALIC] p', 'ˆ [ITALIC] piblack/ˆ [ITALIC] piwhite']
[['[ITALIC] Waseem and Hovy', 'Racism', '0.010', '0.011', '-1.462', '[EMPTY]', '0.960'], ['[EMPTY]', 'Sexism', '0.147', '0.100', '31.932', '***', '1.479'], ['[ITALIC] Waseem', 'Racism', '0.010', '0.010', '0.565', '[EMPTY]', '1.027'], ['[EMPTY]', 'Sexism', '0.040', '0.026', '18.569', '***', '1.554'], ['[EMPTY]', 'Racism...
Table 3 shows that for tweets containing the word "n*gga", classifiers trained on Waseem and Hovy (2016) and Waseem (2016) both predict black-aligned tweets to be instances of sexism approximately 1.5 times as often as white-aligned tweets. The classifier trained on the Davidson et al. (2017) data is significantly ...
Sparse and Structured Visual Attention
2002.05556v1
Table 1: Automatic evaluation of caption generation on MSCOCO and Flickr30k.
['[EMPTY]', 'MSCOCO spice', 'MSCOCO cider', 'MSCOCO rouge [ITALIC] L', 'MSCOCO bleu4', 'MSCOCO meteor', 'MSCOCO rep↓', 'Flickr30k spice', 'Flickr30k cider', 'Flickr30k rouge [ITALIC] L', 'Flickr30k bleu4', 'Flickr30k meteor', 'Flickr30k rep↓']
[['softmax', '18.4', '0.967', '52.9', '29.9', '24.9', '3.76', '13.5', '0.443', '44.2', '19.9', '19.1', '6.09'], ['sparsemax', '[BOLD] 18.9', '[BOLD] 0.990', '[BOLD] 53.5', '[BOLD] 31.5', '[BOLD] 25.3', '3.69', '[BOLD] 13.7', '[BOLD] 0.444', '[BOLD] 44.3', '[BOLD] 20.7', '[BOLD] 19.3', '5.84'], ['TVmax', '18.5', '0.974'...
As can be seen in Table 1, sparsemax and TVMAX achieve better results overall when compared with softmax, indicating that the use of selective attention leads to better captions. [CONTINUE] Moreover, for TVMAX, the automatic metric results are slightly worse than sparsemax's but still superior to softmax on MSCOCO and simil...
Sparse and Structured Visual Attention
2002.05556v1
Table 2: Human evaluation results on MSCOCO.
['[EMPTY]', 'caption', 'attention relevance']
[['softmax', '3.50', '3.38'], ['sparsemax', '3.71', '3.89'], ['TVmax', '[BOLD] 3.87', '[BOLD] 4.10']]
Despite performing slightly worse than sparsemax under automatic metrics, TVMAX outperforms sparsemax and softmax in the caption human evaluation and the attention relevance human evaluation, reported in Table 2. The superior score on attention relevance shows that TVMAX is better at selecting the relevant features and...
Sparse and Structured Visual Attention
2002.05556v1
Table 3: Automatic evaluation of VQA on VQA-2.0. Sparse-TVmax and soft-TVmax correspond to using sparsemax or softmax on the image self-attention and TVmax on the output attention. Other models use softmax or sparsemax on self-attention and output attention.
['[EMPTY]', 'Att. to image', 'Att. to bounding boxes', 'Test-Dev Yes/No', 'Test-Dev Number', 'Test-Dev Other', 'Test-Dev Overall', 'Test-Standard Yes/No', 'Test-Standard Number', 'Test-Standard Other', 'Test-Standard Overall']
[['softmax', '✓', '[EMPTY]', '83.08', '42.65', '55.74', '65.52', '83.55', '42.68', '56.01', '65.97'], ['sparsemax', '✓', '[EMPTY]', '83.08', '43.19', '55.79', '65.60', '83.33', '42.99', '56.06', '65.94'], ['soft-TVmax', '✓', '[EMPTY]', '83.13', '43.53', '56.01', '65.76', '83.63', '43.24', '56.10', '66.11'], ['sparse-TV...
As can be seen in the results presented in Table 3, the models using TVMAX in the output attention layer outperform the models using softmax and sparsemax. Moreover, the results are slightly superior when the sparsemax transformation is used in the self-attention layers of the decoder. It can also be observed that, when...
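Softmax assigns non-zero attention everywhere, whereas sparsemax projects the scores onto the probability simplex and can zero out irrelevant regions entirely, which is what makes the attention selective. A standard sketch of sparsemax (Martins and Astudillo, 2016); this is the published algorithm, not the authors' TVMAX code:

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of a score vector onto the probability simplex.
    Unlike softmax, the output can contain exact zeros."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]           # scores in descending order
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = 1 + k * z_sorted > cssv     # which coordinates stay non-zero
    k_z = k[support][-1]                  # size of the support
    tau = (cssv[k_z - 1] - 1) / k_z       # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)
```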