Columns: paper, paper_id, table_caption, table_column_names, table_content_values, text
Reciprocal Attention Fusion for Visual Question Answering
1805.04247
Table 2: Comparison of the state-of-the-art methods with our single model performance on VQAv2.0 test-dev and test-standard server.
['Methods', 'Test-dev Y/N', 'Test-dev No.', 'Test-dev Other', 'Test-dev All', 'Test-standard Y/N', 'Test-standard No.', 'Test-standard Other', 'Test-standard All']
[['RAF (Ours)', '[BOLD] 84.1', '[BOLD] 44.9', '[BOLD] 57.8', '[BOLD] 67.2', '[BOLD] 84.2', '[BOLD] 44.4', '[BOLD] 58.0', '[BOLD] 67.4'], ['BU, adaptive K Teney et\xa0al. ( 2017 )', '81.8', '44.2', '56.1', '65.3', '82.2', '43.9', '56.3', '65.7'], ['MFB Yu et\xa0al. ( 2018 )', '-', '-', '-', '64.9', '-', '-', '-', '-'], ...
in all question categories and overall by a significant margin of 1.7%. The bottom-up, adaptive-k model of Teney et al. currently reports the best performance on the VQAv2 test-standard dataset. This indicates our model's superior capability to interpret and incorporate multi-modal relationships for visual reasoning.
Sequential Attention-based Network for Noetic End-to-End Response Selection
1901.02609
Table 4: Ablation analysis on the development set for the DSTC7 Ubuntu dataset.
['[BOLD] Sub', '[BOLD] Models', '[BOLD] R@1', '[BOLD] R@10', '[BOLD] R@50', '[BOLD] MRR']
[['1', 'ESIM', '0.534', '0.854', '0.985', '0.6401'], ['1', '-CtxDec', '0.508', '0.845', '0.982', '0.6210'], ['1', '-CtxDec & -Rev', '0.504', '0.840', '0.982', '0.6174'], ['1', 'Ensemble', '0.573', '0.887', '0.989', '0.6790'], ['2', 'Sent-based', '0.021', '0.082', '0.159', '0.0416'], ['2', 'Ensemble1', '0.023', '0.091',...
For Ubuntu subtask 1, ESIM achieved 0.854 R@10 and 0.6401 MRR. If we removed the context's local matching and matching composition to accelerate the training process (“-CtxDec”), R@10 and MRR dropped to 0.845 and 0.6210. Further discarding the last words of the context instead of the preceding ones (“-CtxDec & -Rev”) deg...
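For reference, the R@k and MRR numbers above follow the standard response-selection definitions; a minimal sketch (hypothetical helper names, not the authors' code):

```python
def recall_at_k(ranked_lists, gold, k):
    # fraction of dialogues whose gold response appears in the top-k candidates
    return sum(g in r[:k] for r, g in zip(ranked_lists, gold)) / len(gold)

def mrr(ranked_lists, gold):
    # mean reciprocal rank of the gold response over all dialogues
    return sum(1.0 / (r.index(g) + 1) for r, g in zip(ranked_lists, gold)) / len(gold)

# toy check: gold ranked 2nd in the first dialogue, 1st in the second
ranked = [["b", "a", "c"], ["x", "y"]]
gold = ["a", "x"]
print(recall_at_k(ranked, gold, 1))  # 0.5
print(mrr(ranked, gold))             # (1/2 + 1/1) / 2 = 0.75
```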
Sequential Attention-based Network for Noetic End-to-End Response Selection
1901.02609
Table 5: Ablation analysis on the development set for the DSTC7 Advising dataset.
['[BOLD] Sub', '[BOLD] Models', '[BOLD] R@1', '[BOLD] R@10', '[BOLD] R@50', '[BOLD] MRR']
[['1', '-CtxDec', '0.222', '0.656', '0.954', '0.3572'], ['1', '-CtxDec & -Rev', '0.214', '0.658', '0.942', '0.3518'], ['1', 'Ensemble', '0.252', '0.720', '0.960', '0.4010'], ['3', '-CtxDec', '0.320', '0.792', '0.978', '0.4704'], ['3', '-CtxDec & -Rev', '0.310', '0.788', '0.978', '0.4550'], ['3', 'Ensemble', '0.332', '0...
For Ubuntu subtask 1, ESIM achieved 0.854 R@10 and 0.6401 MRR. If we removed the context's local matching and matching composition to accelerate the training process (“-CtxDec”), R@10 and MRR dropped to 0.845 and 0.6210. Further discarding the last words of the context instead of the preceding ones (“-CtxDec & -Rev”) deg...
Recursive Graphical Neural Networks for Text Classification
1909.08166
Table 5: Ablation study on R52 and Reuters21578. It is clear that LSTM plays an important role by alleviating the over-smoothing problem, especially in multi-label classification, which is more prone to over-smoothing.
['Model', 'R52', 'Reuters21578']
[['w/o LSTM', '84.74', '43.82'], ['w/o Attention', '94.39', '81.31'], ['w/o Global node', '93.85', '76.81'], ['Proposal', '95.29', '82.01']]
From the results we can see that removing any of the three parts of our proposed model leads to a decline in accuracy. Among the three parts, the accuracy of the model without LSTM decreases most significantly. We assume that this is because the over-smoothing problem becomes very severe with relatively big l...
Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information
1805.11360
(b) SelQA
['[BOLD] Models', '[BOLD] MAP', '[BOLD] MRR']
[['CNN-DAN ', '0.866', '0.873'], ['CNN-hinge ', '0.876', '0.881'], ['ACNN ', '0.874', '0.880'], ['AdaQA ', '0.891', '0.898'], ['[BOLD] DRCN', '[BOLD] 0.925', '[BOLD] 0.930']]
However, the proposed DRCN, using collective attention over multiple layers, achieves new state-of-the-art performance, significantly exceeding the previous best on both datasets.
Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information
1805.11360
Table 3: Classification accuracy for natural language inference on MultiNLI test set. * denotes ensemble methods.
['[BOLD] Models', '[BOLD] Accuracy (%) [BOLD] matched', '[BOLD] Accuracy (%) [BOLD] mismatched']
[['ESIM ', '72.3', '72.1'], ['DIIN ', '78.8', '77.8'], ['CAFE ', '78.7', '77.9'], ['LM-Transformer ', '[BOLD] 82.1', '[BOLD] 81.4'], ['[BOLD] DRCN', '79.1', '78.4'], ['DIIN* ', '80.0', '78.7'], ['CAFE* ', '80.2', '79.0'], ['[BOLD] DRCN*', '[BOLD] 80.6', '[BOLD] 79.5'], ['[BOLD] DRCN+ELMo*', '[BOLD] 82.3', '[BOLD] 81.4'...
Our plain DRCN achieves competitive performance without any contextualized knowledge. By combining DRCN with ELMo, one of the contextualized embeddings from language models, our model outperforms the LM-Transformer (85M parameters) while using only 61M parameters. From this point of view, the combination of ...
Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information
1805.11360
(a) TrecQA: raw and clean
['[BOLD] Models', '[BOLD] MAP', '[BOLD] MRR']
[['[ITALIC] [BOLD] Raw version', '[ITALIC] [BOLD] Raw version', '[ITALIC] [BOLD] Raw version'], ['aNMM ', '0.750', '0.811'], ['PWIM ', '0.758', '0.822'], ['MP CNN ', '0.762', '0.830'], ['HyperQA ', '0.770', '0.825'], ['PR+CNN ', '0.780', '0.834'], ['[BOLD] DRCN', '[BOLD] 0.804', '[BOLD] 0.862'], ['[ITALIC] [BOLD] c...
However, the proposed DRCN, using collective attention over multiple layers, achieves new state-of-the-art performance, significantly exceeding the previous best on both datasets.
Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information
1805.11360
Table 6: Accuracy (%) of Linguistic correctness on MultiNLI dev sets.
['[BOLD] Category', '[BOLD] ESIM', '[BOLD] DIIN', '[BOLD] CAFE', '[BOLD] DRCN']
[['[BOLD] Matched', '[BOLD] Matched', '[BOLD] Matched', '[BOLD] Matched', '[BOLD] Matched'], ['Conditional', '[BOLD] 100', '57', '70', '65'], ['Word overlap', '50', '79', '82', '[BOLD] 89'], ['Negation', '76', '78', '76', '[BOLD] 80'], ['Antonym', '67', '[BOLD] 82', '[BOLD] 82', '[BOLD] 82'], ['Long Sentence', '75', '8...
We used the annotated subset provided by the MultiNLI dataset; each sample belongs to one of 13 linguistic categories. In particular, ours performs much better on the Quantity/Time category, which is one of the most difficult problems. Furthermore, our DRCN shows the highest mean and the lowest stdde...
Tale of tails using rule augmented sequence labeling for event extraction
1908.07018
Table 1: InDEE-2019 dataset for five languages, namely, Marathi, Hindi, English, Tamil and Bengali. Number of tags or labels for each dataset and their respective train, validation and test split used in the experiments.
['Languages', 'Marathi(Mr) Doc', 'Marathi(Mr) Sen', 'Hindi(Hi) Doc', 'Hindi(Hi) Sen', 'English(En) Doc', 'English(En) Sen', 'Tamil(Ta) Doc', 'Tamil(Ta) Sen', 'Bengali(Bn) Doc', 'Bengali(Bn) Sen']
[['Train', '815', '15920', '678', '13184', '456', '5378', '1085', '15302', '699', '18533'], ['Val', '117', '2125', '150', '2775', '56', '642', '155', '2199', '100', '2621'], ['Test', '233', '4411', '194', '3790', '131', '1649', '311', '4326', '199', '4661'], ['#Labels', '43', '43', '44', '44', '48', '48', '47', '47', '...
We focus on the EE task and model it as a sequence labeling problem. We use the following terminologies throughout the paper: Event Trigger: the main word that identifies the occurrence of the event mentioned in the document. Event Arguments: several words that define an event, such as place, time, reason, after-effects ...
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 6: BLEU and METEOR scores for human description generation experiments.
['Method', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'METEOR']
[['Image + sentences', '54.30', '35.95', '23.28', '15.06', '39.16'], ['Image only', '51.26', '34.74', '22.63', '15.01', '38.06'], ['Sentence only', '39.37', '23.27', '13.73', '8.40', '32.98'], ['Our system', '60.61', '44.35', '31.65', '21.95', '33.59']]
To evaluate the importance of the different modalities for the image description generation task, we have performed an experiment where we replace the computer algorithm with human participants. The two modalities are the five English description sentences, and the image. The output is a single description sentence in ...
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 1: Training Data for Task 1.
['Side', 'Vocabulary', 'Words']
[['English', '10211', '377K'], ['German', '15820', '369K']]
This dataset consists of 29K parallel sentences (direct translations of image descriptions from English to German) for training, 1014 for validation and finally 1000 for the test set. We preprocessed the dataset using the punctuation normalization, tokenization and lowercasing scripts from Moses. This reduces the targe...
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 2: BLEU and METEOR scores on detokenized outputs of baseline and submitted Task 1 systems. The METEOR scores in parenthesis are computed with -norm parameter.
['System Description', 'Validation Set METEOR (norm)', 'Validation Set BLEU', 'Test Set METEOR (norm)', 'Test Set BLEU']
[['Phrase-based Baseline (BL)', '53.71 (58.43)', '35.61', '52.83 (57.37)', '33.45'], ['BL+3Features', '54.29 (58.99)', '36.52', '53.19 (57.76)', '34.31'], ['BL+4Features', '54.40 (59.08)', '36.63', '53.18 (57.76)', '34.28'], ['Monomodal NMT', '51.07 (54.87)', '35.93', '49.20 (53.10)', '32.50'], ['Multimodal NMT', '44.5...
Overall, we were able to improve test set scores by around 0.4 and 0.8 on METEOR and BLEU respectively over a strong phrase-based baseline using auxiliary features.
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 3: BLEU scores for various deep features on the image description generation task using the system of Xu et al. [Xu et al.2015].
['Network', 'BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4']
[['VGG-19', '58.2', '31.4', '18.5', '11.3'], ['ResNet-50', '68.4', '45.2', '30.9', '21.1'], ['ResNet-152', '68.3', '44.9', '30.7', '21.1']]
The results increase during the first layers but stabilize from Block-4 on. Based on these results and considering that a higher spatial resolution is better, we have selected layer 'res4f_relu' (end of Block-4, after ReLU) for the experiments on multimodal MT. We also compared the features from different networks on t...
Does Multimodality Help Human and Machine for Translation and Image Captioning?
1605.09186
Table 5: BLEU and METEOR scores of our NMT based submissions for Task 2.
['System', 'Validation METEOR', 'Validation BLEU', 'Test METEOR', 'Test BLEU']
[['Monomodal', '36.3', '24.0', '35.1', '23.8'], ['Multimodal', '34.4', '19.3', '32.3', '19.2']]
Several explanations can clarify this behavior. First, the architecture is not well suited for integrating image and text representations. This is possible as we did not explore all the possibilities for benefiting from both modalities. Another explanation is that the image context contains too much irrelevant information w...
Exploiting stance hierarchies for cost-sensitive stance detection of Web documents
2007.15121
Table 3. Class distribution for discuss/stance classification.
['[EMPTY]', '[BOLD] Instances', '[BOLD] Neutral', '[BOLD] Stance']
[['[BOLD] Train', '13,427', '8,909', '4,518'], ['[BOLD] Test', '7,064', '4,464', '2,600']]
For the first stage of our pipeline (relevance classification), we merge the classes discuss, agree and disagree into one related class, facilitating a binary unrelated-related classification task. For the second stage (neutral/stance classification), we consider only related documents and merge the classes agree and di...
Exploiting stance hierarchies for cost-sensitive stance detection of Web documents
2007.15121
Table 3. Class distribution for discuss/stance classification.
['[EMPTY]', '[BOLD] Instances', '[BOLD] Unrelated', '[BOLD] Related']
[['[BOLD] Train', '49,972', '36,545', '13,427'], ['[BOLD] Test', '25,413', '18,349', '7,064']]
For the first stage of our pipeline (relevance classification), we merge the classes discuss, agree and disagree into one related class, facilitating a binary unrelated-related classification task. For the second stage (neutral/stance classification), we consider only related documents and merge the classes agree and di...
Exploiting stance hierarchies for cost-sensitive stance detection of Web documents
2007.15121
Table 3. Class distribution for discuss/stance classification.
['[EMPTY]', '[BOLD] Instances', '[BOLD] Agree', '[BOLD] Disagree']
[['[BOLD] Train', '4,518', '3,678', '840'], ['[BOLD] Test', '2,600', '1,903', '697']]
For the first stage of our pipeline (relevance classification), we merge the classes discuss, agree and disagree into one related class, facilitating a binary unrelated-related classification task. For the second stage (neutral/stance classification), we consider only related documents and merge the classes agree and di...
Exploiting stance hierarchies for cost-sensitive stance detection of Web documents
2007.15121
Table 5. Document stance classification performance.
['System', 'FNC', 'F1 [ITALIC] m', 'F1Unrel.', 'F1Neutral', 'F1Agree', 'F1Disagr.', 'F1 [ITALIC] mAgr/Dis']
[['Majority vote', '0.39', '0.21', '0.84', '0.00', '0.00', '0.00', '0.00'], ['FNC baseline', '0.75', '0.45', '0.96', '0.69', '0.15', '0.02', '0.09'], ['SOLAT (Baird et al., 2017 )', '0.82', '0.58', '[BOLD] 0.99', '0.76', '[BOLD] 0.54', '0.03', '0.29'], ['Athene (Hanselowski et al., 2017 )', '0.82', '0.60', '[BOLD] 0....
First, we notice that, considering the problematic FNC-I evaluation measure, CombNSE achieves the highest performance (0.83), followed by SOLAT, Athene, UCLMR, StackLSTM, 3-Stage Trad. (0.82) and our pipeline approach (0.81). However, we see that the ranking of the top performing systems is very different if we conside...
Exploiting stance hierarchies for cost-sensitive stance detection of Web documents
2007.15121
Table 7. Confusion matrix of our pipeline system.
['[EMPTY]', 'Agree', 'Disagree', 'Neutral', 'Unrelated']
[['Agree', '1,006', '278', '495', '124'], ['Disagree', '237', '160', '171', '129'], ['Neutral', '555', '252', '3,381', '276'], ['Unrelated', '127', '31', '523', '17,668']]
Overall, we note that the neutral and agree classes seem to be frequently confused, which seems intuitive given the very similar nature of these classes: a document which discusses a claim without explicitly taking a stance is likely to agree with it. The results for the disagree class illustrate that stage 1 miscl...
Exploiting stance hierarchies for cost-sensitive stance detection of Web documents
2007.15121
Table 8. Confusion matrix of different stages.
['Stage', 'True class', 'Predicted (col 1)', 'Predicted (col 2)']
[['Stage1', 'Unrelated', 'Unrelated 17,668', 'Related 681'], ['Stage1', 'Related', 'Unrelated 529', 'Related 6,535'], ['Stage2', 'Neutral', 'Neutral 3,575', 'Stance 889'], ['Stage2', 'Stance', 'Neutral 760', 'Stance 1,840'], ['Stage3', 'Agree', 'Agree 1,436', 'Disagree 467'], ['Stage3', 'Disagree', 'Agree 387', 'Disagree 310']]
We note the increasing difficulty of each stage. Stage 1 misclassifies a small number of related instances as unrelated (less than 8%). Stage 2 misclassifies 29% of the stance instances as neutral. Finally, stage 3 misclassifies the majority of disagree instances (55%) as agree, and around 25% of the agree instances as...
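The stage-wise error rates quoted above can be recomputed directly from the Table 8 counts; a quick sanity check (counts transcribed from the table, helper name hypothetical):

```python
def error_rate(wrong, right):
    # share of a true class routed to the wrong branch of the stage
    return wrong / (wrong + right)

# rows are the true class; counts from Table 8
stage1_related_as_unrelated = error_rate(529, 6535)   # ~0.075, i.e. less than 8%
stage2_stance_as_neutral    = error_rate(760, 1840)   # ~0.292, i.e. 29%
stage3_disagree_as_agree    = error_rate(387, 310)    # ~0.555, i.e. 55%
stage3_agree_as_disagree    = error_rate(467, 1436)   # ~0.245, i.e. around 25%
```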
Data Augmentation with Atomic Templates for Spoken Language Understanding
1908.10770
Table 3: SLU performances on the DSTC3 evaluation set when removing different modules of our method.
['[BOLD] Method', '[BOLD] F1-score']
[['HD + AT(seed abr. + comb.)', '88.6'], ['- dstc2', '86.2'], ['- dstc3_seed', '84.5'], ['- dstc2, - dstc3_seed', '84.3'], ['- sentence generator', '74.0'], ['- atomic templates', '70.5']]
Ablation Study By removing SLU model pretraining on DSTC2 (“- dstc2”) and finetuning on the seed data (“- dstc3_seed”), we can see a significant decrease in SLU performance. When we additionally remove the sentence generator (“- sentence generator”, i.e. using the atomic exemplars directly as inputs to the SLU model), ...
Data Augmentation with Atomic Templates for Spoken Language Understanding
1908.10770
Table 2: SLU performances of different systems on the DSTC3 evaluation set.
['[BOLD] SLU', '[BOLD] Augmentation', '[BOLD] F1-score']
[['ZS', 'w/o', '68.3'], ['HD', 'w/o', '78.5'], ['HD', 'Naive', '82.9'], ['HD', 'AT (seed abridgement)', '85.5'], ['HD', 'AT (combination)', '87.9'], ['HD', 'AT (seed abr. + comb.)', '[BOLD] 88.6'], ['HD', 'Human Zhu et al. ( 2014 )', '90.4'], ['HD', 'Oracle', '96.9']]
We can see that: The hierarchical decoding (HD) model gets better performance than the zero-shot learning (ZS) method of SLU. The seed data dstc3_seed limits the power of the SLU model, and even the naive augmentation can enhance it. Our data augmentation method with atomic templates (AT) improves the SLU performance d...
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 4: Results of child following a parent with swapped direction. “Baseline” is child-only training. “Aligned” is the more natural setup with English appearing on the “correct” side of the parent, the numbers in this column thus correspond to those in Table 2.
['Parent - Child', 'Transfer', 'Baseline', 'Aligned']
[['enFI - ETen', '22.75‡', '21.74', '24.18'], ['FIen - enET', '18.19‡', '17.03', '19.74'], ['enRU - ETen', '23.12‡', '21.74', '23.54'], ['enCS - ETen', '22.80‡', '21.74', 'not run'], ['RUen - enET', '18.16‡', '17.03', '20.09'], ['enET - ETen', '22.04‡', '21.74', '21.74'], ['ETen - enET', '17.46', '17.03', '17.03']]
This interesting result should be studied in more detail. \newcitefirat-cho-bengio:etal:2016 hinted at possible gains even when both languages are distinct from the low-resource languages, but in a multilingual setting. Not surprisingly, the improvements are better when the common language is aligned. We see gains in both ...
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 2: Transfer learning with English reused either in source (encoder) or target (decoder). The column “Transfer” is our method, baselines correspond to training on one of the corpora only. Scores (BLEU) are always for the child language pair and they are comparable only within lines or when the child language pair ...
['Parent - Child', 'Transfer', 'Baselines: Only Child', 'Baselines: Only Parent']
[['enFI - enET', '19.74‡', '17.03', '2.32'], ['FIen - ETen', '24.18‡', '21.74', '2.44'], ['[BOLD] enCS - enET', '20.41‡', '17.03', '1.42'], ['[BOLD] enRU - enET', '20.09‡', '17.03', '0.57'], ['[BOLD] RUen - ETen', '23.54‡', '21.74', '0.80'], ['enCS - enSK', '17.75‡', '16.13', '6.51'], ['CSen - SKen', '22.42‡', '19.19',...
Furthermore, the improvement is not restricted to related languages such as Estonian and Finnish, as shown in previous works. We reach an improvement of 3.38 BLEU for ENET when the parent model was ENCS, compared to an improvement of 2.71 from the ENFI parent. This statistically significant improvement contradicts \newcitedabre201...
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 3: Maximal score reached by ENET child for decreasing sizes of child training data, trained off an ENFI parent (all ENFI data are used and models are trained for 800k steps). The baselines use only the reduced ENET data.
['Child Training Sents', 'Transfer BLEU', 'Baseline BLEU']
[['800k', '19.74', '17.03'], ['400k', '19.04', '14.94'], ['200k', '17.95', '11.96'], ['100k', '17.61', '9.39'], ['50k', '15.95', '5.74'], ['10k', '12.46', '1.95']]
It is common knowledge that gains from transfer learning are more pronounced for smaller child datasets. Our transfer learning (“start with a model for whatever parent pair”) may thus resolve the issue of the applicability of NMT to low-resource languages as pointed out by \newcitekoehn-knowles:2017:NMT.
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 7: Summary of vocabulary overlaps for the various language sets. All figures in % of the shared vocabulary.
['Languages', 'Unique in a Lang.', 'In All', 'From Parent']
[['ET-EN-FI', '24.4-18.2-26.2', '19.5', '49.4'], ['ET-EN-RU', '29.9-20.7-29.0', '8.9', '41.0'], ['ET-EN-CS', '29.6-17.5-21.2', '20.3', '49.2'], ['AR-RU-ET-EN', '28.6-27.7-21.2-9.1', '4.6', '6.2'], ['ES-FR-ET-EN', '15.7-13.0-24.8-8.8', '18.4', '34.1'], ['ES-RU-ET-EN', '14.7-31.1-21.3-9.3', '6.0', '21.4'], ['FR-RU-ET-EN'...
, what portion is shared by all the languages and what portion of subwords benefits from the parent training. We see a similar picture across the board, only AR-RU-ET-EN stands out with the very low number of subwords (6.2%) available already in the parent. The parent AR-RU thus offered very little word knowledge to th...
Trivial Transfer Learning for Low-Resource Neural Machine Translation
1809.00357
Table 9: Candidate total length, BLEU n-gram precisions and brevity penalty (BP). The reference length in the matching tokenization was 36062.
['[EMPTY]', 'Length', 'BLEU Components', 'BP']
[['Base ENET', '35326', '48.1/21.3/11.3/6.4', '0.979'], ['ENRU+ENET', '35979', '51.0/24.2/13.5/8.0', '0.998'], ['ENCS+ENET', '35921', '51.7/24.6/13.7/8.1', '0.996']]
In the table, we show also individual n-gram precisions and brevity penalty (BP) of BLEU. The longer output clearly helps to reduce the incurred BP but the improvements are also apparent in n-gram precisions. In other words, the observed gain cannot be attributed solely to producing longer outputs.
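The BP column can be reproduced from the lengths in Table 9 with BLEU's standard brevity-penalty formula; a quick check:

```python
import math

def brevity_penalty(cand_len, ref_len):
    # BLEU brevity penalty: 1 for candidates at least as long as the reference,
    # exp(1 - ref/cand) otherwise
    return 1.0 if cand_len >= ref_len else math.exp(1.0 - ref_len / cand_len)

REF_LEN = 36062  # reference length from Table 9
for name, cand_len in [("Base ENET", 35326), ("ENRU+ENET", 35979), ("ENCS+ENET", 35921)]:
    print(name, round(brevity_penalty(cand_len, REF_LEN), 3))
# Base ENET 0.979, ENRU+ENET 0.998, ENCS+ENET 0.996 — matching the BP column
```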
MAttNet: Modular Attention Network for Referring Expression Comprehension
1801.08186
Table 1: Comparison with state-of-the-art approaches on ground-truth MS COCO regions.
['[EMPTY]', '[EMPTY]', 'feature', 'RefCOCO val', 'RefCOCO testA', 'RefCOCO testB', 'RefCOCO+ val', 'RefCOCO+ testA', 'RefCOCO+ testB', 'RefCOCOg val*', 'RefCOCOg val', 'RefCOCOg test']
[['1', 'Mao\xa0', 'vgg16', '-', '63.15', '64.21', '-', '48.73', '42.13', '62.14', '-', '-'], ['2', 'Varun\xa0', 'vgg16', '76.90', '75.60', '78.00', '-', '-', '-', '-', '-', '68.40'], ['3', 'Luo\xa0', 'vgg16', '-', '74.04', '73.43', '-', '60.26', '55.03', '65.36', '-', '-'], ['4', 'CMN\xa0', 'vgg16-frcn', '-', '-', '-',...
First, we compare our model with previous methods using COCO’s ground-truth object bounding boxes as proposals. Results are shown in Table. As all of the previous methods (Line 1-8) used a 16-layer VGGNet (vgg16) as the feature extractor, we run our experiments using the same feature for fair comparison. Note the flat ...
MAttNet: Modular Attention Network for Referring Expression Comprehension
1801.08186
Table 4: Comparison of segmentation performance on RefCOCO, RefCOCO+, and our results on RefCOCOg.
['RefCOCOg Model', 'RefCOCOg Backbone Net', 'RefCOCOg Split', 'RefCOCOg Pr@0.5', 'RefCOCOg Pr@0.6', 'RefCOCOg Pr@0.7', 'RefCOCOg Pr@0.8', 'RefCOCOg Pr@0.9', 'RefCOCOg IoU']
[['MAttNet', 'res101-mrcn', 'val', '64.48', '61.52', '56.50', '43.97', '14.67', '47.64'], ['MAttNet', 'res101-mrcn', 'test', '65.60', '62.92', '57.31', '44.44', '12.55', '48.61']]
(res101-mrcn). We apply the same procedure described in Sec. Then we feed the predicted bounding box to the mask branch to obtain a pixel-wise segmentation. We use Precision@X (X∈{0.5,0.6,0.7,0.8,0.9}) and IoU as evaluation metrics. Results are shown in Table. Some referential segmentation examples are shown in Fig. Note this is not a strictly fair ...
MAttNet: Modular Attention Network for Referring Expression Comprehension
1801.08186
Table 7: Object detection results.
['net', '[ITALIC] APbb', '[ITALIC] APbb50', '[ITALIC] APbb75']
[['res101-frcn', '34.1', '53.7', '36.8'], ['res101-mrcn', '35.8', '55.3', '38.6']]
We first show the comparison between Faster R-CNN and Mask R-CNN on object detection in Table. Both models are based on ResNet101 and were trained using the same settings. In the main paper, we denote them as res101-frcn and res101-mrcn, respectively. It shows that Mask R-CNN has higher AP than Faster R-CNN due to the mult...
MAttNet: Modular Attention Network for Referring Expression Comprehension
1801.08186
Table 8: Instance segmentation results.
['net', '[ITALIC] AP', '[ITALIC] AP50', '[ITALIC] AP75']
[['res101-mrcn (ours)', '30.7', '52.3', '32.4'], ['res101-mrcn\xa0', '32.7', '54.2', '34.0']]
Note this is not a strictly fair comparison as our model was trained with fewer images. Overall, the AP of our implementation is ∼2 points lower. The main reason may be the shorter 600-pixel edge setting and the smaller training batch size. Even so, our pixel-wise comprehension results already outperform the state-...
Code-Mixed to Monolingual Translation Framework
1911.03772
Table 1: Evaluation results.
['[BOLD] Model', '[BOLD] BLEU', '[BOLD] TER', '[BOLD] Adq.', '[BOLD] Flu.']
[['GNMT', '2.44', '75.09', '0.90', '1.12'], ['CMT1', '15.09', '58.04', '3.18', '3.57'], ['CMT2', '16.47', '55.45', '3.19', '3.97']]
Two variants of our system were tested, one without token reordering (CMT1) and one with it (CMT2). Manual scoring (in the range 1-5, low to high quality) of Adequacy and Fluency banchs:2015adequacy was done by a bilingual linguist, fluent in both En and Bn, with Bn as mother tongue. We can clearly see that our pipeline o...
Code-Mixed to Monolingual Translation Framework
1911.03772
Table 2: Error contribution.
['[BOLD] Module', '[BOLD] Contribution']
[['Language Tagger', '36'], ['Back Transliteration', '12'], ['Machine Translation', '25']]
Exp 1. We randomly took 100 instances where BLEU score achieved was less than 15. Then we fed this back to our pipeline and collected outputs from each of the modules. We manually associated each of the errors with the respective module causing it, considering the input to it was correct. Language tagger being the star...
Adapting End-to-End Speech Recognition for Readable Subtitles
2005.12143
Table 8: Word error rate and summarization quality by the models with length count-down. Adaptation results in large gains. The model with length encoding outperforms the baseline and the one with length embedding.
['[BOLD] Model', '[BOLD] Ratio (output to desired length)', '[BOLD] WER', '[BOLD] R-1', '[BOLD] R-2', '[BOLD] R-L']
[['(1): Baseline (satisfy length)', '0.97', '55.1', '62.5', '41.5', '60.4'], ['(2): + adapt', '0.96', '39.9', '74.6', '[BOLD] 57.0', '72.6'], ['(3): Length embedding', '1.00', '57.8', '61.9', '40.5', '59.8'], ['(4): + adapt', '0.96', '39.3', '74.3', '55.2', '72.5'], ['(5): Length encoding', '1.00', '57.4', '62.6', '40....
While the baseline ASR model achieves some degree of compression after adaptation, it cannot fully comply with length constraints. Therefore, the following experiments examine the effects of training with explicit length count-downs.
Collecting Entailment Data for Pretraining:New Protocols and Negative Results
2004.11997
Table 6: Model performance on the SuperGLUE validation and diagnostic sets. The Avg. column shows the overall SuperGLUE score—an average across the eight tasks, weighting each task equally—as a mean and standard deviation across three restarts.
['[BOLD] Intermediate- [BOLD] Training Data', '[BOLD] Avg. [ITALIC] μ ( [ITALIC] σ)', '[BOLD] BoolQ [BOLD] Acc.', '[BOLD] CB [BOLD] F1/Acc.', '[BOLD] CB [BOLD] F1/Acc.', '[BOLD] COPA [BOLD] Acc.', '[BOLD] MultiRC [BOLD] F1 [ITALIC] a/EM', '[BOLD] MultiRC [BOLD] F1 [ITALIC] a/EM', '[BOLD] ReCoRD [BOLD] F1/EM', ...
[['[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD] RoBERTa (large)', '[BOLD...
Our first observation is that our overall data collection pipeline worked well for our purposes: Our Base data yields models that transfer substantially better than the plain RoBERTa or XLNet baseline, and at least slightly better than 8.5k-example samples of MNLI, MNLI Government or ANLI.
Collecting Entailment Data for Pretraining:New Protocols and Negative Results
2004.11997
Table 4: NLI modeling experiments with RoBERTa, reporting results on the validation sets for MNLI and for the task used for training each model (Self), and the GLUE diagnostic set. We compare the two-class Contrast with a two-class version of MNLI.
['[BOLD] Training Data', '[BOLD] Self', '[BOLD] MNLI', '[BOLD] GLUE Diag.']
[['Base', '84.8', '81.5', '40.5'], ['Paragraph', '78.3', '78.2', '31.7'], ['EditPremise', '82.9', '79.8', '35.5'], ['EditOther', '82.5', '82.6', '33.9'], ['MNLI8.5k', '87.5', '87.5', '44.6'], ['MNLIGov8.5k', '87.7', '85.4', '40.7'], ['ANLI8.5k', '35.7', '85.6', '39.8'], ['MNLI', '90.4', '[BOLD] 90.4', '49.2'], ['ANLI',...
As NLI classifiers trained on Contrast cannot produce the neutral labels used in MNLI, we evaluate them separately and compare them with two-class variants of the MNLI models. Our first three interventions, especially EditPremise, show much lower hypothesis-only performance than Base. This indicates that these r...
Named Entities in Medical Case Reports: Corpus and Experiments
2003.13032
Table 4: Annotated relations between entities. Relations appear within a sentence (intra-sentential) or across sentences (inter-sentential)
['Type of Relation', 'Intra-sentential', 'Intra-sentential %', 'Inter-sentential', 'Inter-sentential %', 'Total', 'Total %']
[['case has condition', '28', '18.1%', '127', '81.9%', '155', '4.0%'], ['case has finding', '169', '7.2%', '2180', '92.8%', '2349', '61.0%'], ['case has factor', '153', '52.9%', '136', '47.1%', '289', '7.5%'], ['modifier modifies finding', '994', '98.5%', '15', '1.5%', '1009', '26.2%'], ['condition causes finding', '44', '93.6%', '3', '6.4%', '47', '1.2%']]
Entities can appear in a discontinuous way. We model this as a relation between two spans, which we call “discontinuous”. Findings in particular often appear as discontinuous entities; we found 543 discontinuous finding relations. The numbers for conditions and factors are lower, with seven and two, respectively. Entiti...
Named Entities in Medical Case Reports: Corpus and Experiments
2003.13032
Table 5: Span-level precision (P), recall (R) and F1-scores (F1) on four distinct baseline NER systems. All scores are computed as average over five-fold cross validation.
['[EMPTY]', 'CRF P', 'CRF R', 'CRF F1', 'BiLSTM CRF P', 'BiLSTM CRF R', 'BiLSTM CRF F1', 'MTL P', 'MTL R', 'MTL F1', 'BioBERT P', 'BioBERT R', 'BioBERT F1']
[['case', '0.59', '0.76', '[BOLD] 0.66', '0.40', '0.22', '0.28', '0.55', '0.38', '0.44', '0.43', '0.64', '0.51'], ['condition', '0.45', '0.18', '0.26', '0.00', '0.00', '0.00', '0.62', '0.62', '[BOLD] 0.62', '0.33', '0.37', '0.34'], ['factor', '0.40', '0.05', '0.09', '0.23', '0.04', '0.06', '0.6', '0.53', '[BOLD] 0.56',...
To evaluate the performance of the four systems, we calculate the span-level precision (P), recall (R) and F1 scores, along with corresponding micro and macro scores.
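For reference, micro averaging pools the span counts over all entity classes while macro averaging means the per-class scores; a minimal sketch with made-up counts (not the paper's data):

```python
def prf(tp, fp, fn):
    # span-level precision, recall and F1 from true/false positives and false negatives
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# hypothetical (tp, fp, fn) counts per entity class
counts = {"case": (10, 5, 3), "condition": (20, 10, 8)}

macro_f1 = sum(prf(*c)[2] for c in counts.values()) / len(counts)  # mean of per-class F1
tp, fp, fn = (sum(col) for col in zip(*counts.values()))
micro_f1 = prf(tp, fp, fn)[2]                                      # F1 over pooled counts
```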
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 2: Results for Validation Seen and Validation Unseen, when trained with a small fraction of Fried-Augmented ordered by scores given by model trained on cma. SPL and SR are reported as percentages and NE and PL in meters.
['Dataset size', 'Strategy', '[BOLD] Validation Seen PL', '[BOLD] Validation Seen NE ↓', '[BOLD] Validation Seen SR ↑', '[BOLD] Validation Seen SPL ↑', '[BOLD] Validation Unseen PL', '[BOLD] Validation Unseen NE ↓', '[BOLD] Validation Unseen SR ↑', '[BOLD] Validation Unseen SPL ↑']
[['1%', 'Top', '11.1', '8.5', '21.2', '17.6', '11.2', '8.5', '20.4', '16.6'], ['1%', 'Bottom', '10.7', '9.0', '16.3', '13.1', '10.8', '8.9', '15.4', '14.1'], ['2%', 'Top', '11.7', '7.9', '25.5', '21.0', '11.3', '8.2', '22.3', '18.5'], ['2%', 'Bottom', '14.5', '9.1', '17.7', '12.7', '11.4', '8.4', '17.5', '14.1']]
Scoring generated instructions. We use this trained model to rank all the paths in Fried-Augmented and train the RCM agent on different portions of the data. Agents trained on high-quality examples, as judged by the model, outperform those trained on low-quality examples. Note that the performance is low in both cases because...
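The Top/Bottom selection in Table 2 amounts to keeping a fixed fraction of the highest- or lowest-scoring examples. A sketch with a hypothetical `select_fraction` helper (not the authors' code; `scores` stand in for the instruction-quality scores produced by the cma-trained model):

```python
def select_fraction(examples, scores, fraction, strategy="top"):
    """Return the top- or bottom-scoring fraction of examples."""
    # Sort indices by score, descending for "top", ascending for "bottom".
    order = sorted(range(len(examples)),
                   key=lambda i: scores[i],
                   reverse=(strategy == "top"))
    k = max(1, int(len(examples) * fraction))
    return [examples[i] for i in order[:k]]
```

For the 1% and 2% rows of Table 2, `fraction` would be 0.01 and 0.02 over Fried-Augmented.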
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 3: AUC performance when the model is trained on different combinations of the two tasks and evaluated on the dataset containing only PR and RW negatives.
['Training', 'Val. Seen', 'Val. Unseen']
[['cma', '82.6', '72.0'], ['nvs', '63.0', '62.1'], ['cma + nvs', '[BOLD] 84.0', '[BOLD] 79.2']]
Improvements from Adding Coherence Loss. Finally, we show that training a model on cma and nvs simultaneously improves the model’s performance when evaluated on cma alone. The model is trained using the combined loss α·L_alignment + (1−α)·L_coherence with α=0.5 and is evaluated on its ability to differentiate incorrect instructio...
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 4: Comparison on R2R Leaderboard Test Set. Our navigation model benefits from transfer learned representations and outperforms the known SOTA on SPL. SPL and SR are reported as percentages and NE and PL in meters.
['Model', 'PL', 'NE ↓', 'SR ↑', 'SPL ↑']
[['Random\xa0', '9.89', '9.79', '13.2', '12.0'], ['Seq-to-Seq\xa0', '8.13', '7.85', '20.4', '18.0'], ['Look Before You Leap\xa0', '9.15', '7.53', '25.3', '23.0'], ['Speaker-Follower\xa0', '14.8', '6.62', '35.0', '28.0'], ['Self-Monitoring\xa0', '18.0', '5.67', '[BOLD] 48.0', '35.0'], ['Reinforced Cross-Modal\xa0', '12....
Our ALTR agent significantly outperforms the SOTA at the time on SPL, the primary metric for R2R, improving it by 5% absolute, and it has the lowest navigation error (NE). It furthermore ties the two other best models on SR. Compared to RCM, our ALTR agent learns a more efficient policy, resulting in sh...
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 5: Ablations on R2R Validation Seen and Validation Unseen sets, showing results in VLN for different combinations of pre-training tasks. SPL and SR are reported as percentages and NE and PL in meters.
['Method', 'cma', 'nvs', '[BOLD] Validation Seen PL', '[BOLD] Validation Seen NE ↓', '[BOLD] Validation Seen SR ↑', '[BOLD] Validation Seen SPL ↑', '[BOLD] Validation Unseen PL', '[BOLD] Validation Unseen NE ↓', '[BOLD] Validation Unseen SR ↑', '[BOLD] Validation Unseen SPL ↑']
[['Speaker-Follower ', '-', '-', '-', '3.36', '66.4', '-', '-', '6.62', '35.5', '-'], ['RCM', '-', '-', '12.1', '3.25', '67.6', '-', '15.0', '6.01', '40.6', '-'], ['Speaker-Follower (Ours)', '✗', '✗', '15.9', '4.90', '51.9', '43.0', '15.6', '6.40', '36.0', '29.0'], ['Speaker-Follower (Ours)', '✓', '✗', '14.9', '5.04', ...
The first ablation study analyzes the effectiveness of each task individually in learning representations that benefit the navigation agent. When pre-training on CMA and NVS jointly, we see a consistent 11-12% improvement in SR for both the SF and RCM agents, as well as an improvement in the agent’s path length, thereby also...
Transferable Representation Learning in Vision-and-Language Navigation
1908.03409
Table 6: Ablations showing the effect of adapting (or not) the learned representations in each branch of our RCM agent on Validation Seen and Validation Unseen. SPL and SR are reported as percentages and NE and PL in meters.
['Image encoder', 'Language encoder', '[BOLD] Validation Seen PL', '[BOLD] Validation Seen NE ↓', '[BOLD] Validation Seen SR ↑', '[BOLD] Validation Seen SPL ↑', '[BOLD] Validation Unseen PL', '[BOLD] Validation Unseen NE ↓', '[BOLD] Validation Unseen SR ↑', '[BOLD] Validation Unseen SPL ↑']
[['✗', '✗', '13.7', '4.48', '55.3', '47.9', '14.8', '6.00', '41.1', '32.7'], ['✓', '✗', '15.9', '5.05', '50.6', '38.2', '14.9', '5.94', '42.5', '33.1'], ['✗', '✓', '13.8', '4.68', '56.3', '46.6', '13.5', '5.66', '43.9', '35.8'], ['✓', '✓', '13.2', '4.68', '55.8', '52.7', '9.8', '5.61', '46.1', '43.0']]
The second ablation analyzes the effect of transferring representations to the language encoder, the visual encoder, or both. The learned representations help the agent generalize to previously unseen environments. When either encoder is warm-started, the agent outperforms the baseline success rates and SPL on vali...
Empower Entity Set Expansion via Language Model Probing
2004.13897
Table 2: Mean Average Precision on Wiki and APR. “∇” means the number is directly from the original paper.
['[BOLD] Methods', '[BOLD] Wiki MAP@10', '[BOLD] Wiki MAP@20', '[BOLD] Wiki MAP@50', '[BOLD] APR MAP@10', '[BOLD] APR MAP@20', '[BOLD] APR MAP@50']
[['Egoset\xa0Rong et al. ( 2016 )', '0.904', '0.877', '0.745', '0.758', '0.710', '0.570'], ['SetExpan\xa0Shen et al. ( 2017 )', '0.944', '0.921', '0.720', '0.789', '0.763', '0.639'], ['SetExpander\xa0Mamou et al. ( 2018 )', '0.499', '0.439', '0.321', '0.287', '0.208', '0.120'], ['CaSE\xa0Yu et al. ( 2019b )', '0.897', ...
Overall Performance. We can see that CGExpan and its ablations generally outperform all the baselines by a large margin. Compared with SetExpan, the full CGExpan model achieves a 24% improvement in MAP@50 on the Wiki dataset and a 49% improvement in MAP@50 on the APR dataset, which verifies that our class-guided m...
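MAP@k averages, over all queries, the average precision of the top-k ranked entities. A sketch of the usual definition (the papers' exact evaluation scripts may normalize slightly differently):

```python
def average_precision_at_k(ranked, gold, k):
    """AP@k for one query: mean of precision@i over the ranks i
    at which the i-th ranked entity belongs to the gold set."""
    hits, score = 0, 0.0
    for i, entity in enumerate(ranked[:k], start=1):
        if entity in gold:
            hits += 1
            score += hits / i          # precision at this hit
    return score / min(len(gold), k) if gold else 0.0

def map_at_k(queries, k):
    """queries: list of (ranked_list, gold_set) pairs."""
    return sum(average_precision_at_k(r, g, k) for r, g in queries) / len(queries)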
AliCoCo: Alibaba E-commerce Cognitive Concept Net
2003.13230
Table 6. Experimental results in semantic matching between e-commerce concepts and items.
['Model', 'AUC', 'F1', 'P@10']
[['BM25', '-', '-', '0.7681'], ['DSSM (huang2013learning)', '0.7885', '0.6937', '0.7971'], ['MatchPyramid (pang2016text)', '0.8127', '0.7352', '0.7813'], ['RE2 (yang2019simple)', '0.8664', '0.7052', '0.8977'], ['Ours', '0.8610', '0.7532', '0.9015'], ['Ours + Knowledge', '[BOLD] 0.8713', '[BOLD] 0.7769', '[BOLD] 0.9048'...
Our knowledge-aware deep semantic matching model outperforms all the baselines in terms of AUC, F1 and Precision at 10, showing the benefits brought by external knowledge. To further investigate how knowledge helps, we dig into cases. Using our base model without knowledge injected, the matching score of concept “中秋节礼物...
AliCoCo: Alibaba E-commerce Cognitive Concept Net
2003.13230
Table 3. Experimental results of different sampling strategy in hypernym discovery.
['Strategy', 'Labeled Size', 'MRR', 'MAP', 'P@1', 'Reduce']
[['Random', '500k', '58.97', '45.30', '45.50', '-'], ['US', '375k', '59.66', '45.73', '46.00', '150k'], ['CS', '400k', '58.96', '45.22', '45.30', '100k'], ['UCS', '325k', '59.87', '46.32', '46.00', '175k']]
While the four active learning strategies achieve similar MAP scores, all of them reduce the amount of labeled data required, saving considerable manual effort. UCS is the most economical sampling strategy, requiring only 325k samples and reducing the number of labeled samples by 35% compared to the random strategy...
AliCoCo: Alibaba E-commerce Cognitive Concept Net
2003.13230
Table 4. Experimental results in shopping concept generation.
['Model', 'Precision']
[['Baseline (LSTM + Self Attention)', '0.870'], ['+Wide', '0.900'], ['+Wide & BERT', '0.915'], ['+Wide & BERT & Knowledge', '[BOLD] 0.935']]
Compared to the baseline, a base BiLSTM with self-attention architecture, adding wide features such as different syntactic features of the concept improves precision by 3% absolute. When we replace the input embedding with the BERT output, performance improves by another 1.5%, which shows the advantag...
AliCoCo: Alibaba E-commerce Cognitive Concept Net
2003.13230
Table 5. Experimental results in shopping concept tagging.
['Model', 'Precision', 'Recall', 'F1']
[['Baseline', '0.8573', '0.8474', '0.8523'], ['+Fuzzy CRF', '0.8731', '0.8665', '0.8703'], ['+Fuzzy CRF & Knowledge', '[BOLD] 0.8796', '[BOLD] 0.8748', '[BOLD] 0.8772']]
Compared to the baseline, a basic sequence labeling model with a Bi-LSTM and CRF, adding the fuzzy CRF improves F1 by 1.8%, which indicates that the multi-path optimization in the CRF layer indeed contributes to disambiguation. Equipped with external knowledge embeddings to further enhance the textual information, our model c...
End-to-end Named Entity Recognition from English Speech* *submitted to Interspeech-2020
2005.11184
Table 4: E2E NER from speech: Micro-Average scores, with and without LM.
['[BOLD] E2E NER', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1']
[['without LM', '0.38', '0.21', '0.27'], ['with LM', '[BOLD] 0.96', '[BOLD] 0.85', '[BOLD] 0.90']]
We also studied the effect of the language model on the F1 scores. Based on this quantitative analysis, we conclude that the NER results depend closely on the language model; if an LM were trained on a bigger corpus, recall could be increased further.
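The micro-averaged scores in Table 4 pool the per-type counts before computing the ratios. A minimal sketch, assuming per-entity-type (tp, fp, fn) counts are available:

```python
def micro_prf(counts):
    """Micro-averaged P/R/F1 from per-type (tp, fp, fn) counts:
    sum the counts over all entity types, then compute the ratios."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

Because the counts are pooled, frequent entity types dominate the micro average, unlike a macro average over per-type F1 scores.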
Latent Suicide Risk Detection on Microblog via Suicide-Oriented Word Embeddings and Layered Attention
1910.12038
Table 5: Performance comparison for different word embedding and different detection model, where “So-W2v”, “So-Glove” and “So-FastText” represent suicide-oriented word embeddings based on Word2vec, GloVe and FastText respectively. Acc and F1 represent accuracy and F1-score.
['Model', 'Metric', 'Word2vec', 'GloVe', 'FastText', 'Bert', 'So-W2v', 'So-GloVe', 'So-FastText']
[['LSTM', 'Acc(%)', '79.21', '80.17', '82.59', '85.15', '86.00', '86.45', '[BOLD] 88.00'], ['LSTM', 'F1(%)', '78.58', '79.98', '82.18', '85.69', '86.17', '86.69', '[BOLD] 88.14'], ['SDM', 'Acc(%)', '86.54', '86.55', '87.08', '88.89', '90.83', '91.00', '[BOLD] 91.33'], ['SDM', 'F1(%)', '86.63', '85.13', '86.91', '87.44', '90.55', '90.56', '[BOLD] 90.92']]
5.4.1 Effectiveness of Suicide-oriented Word Embeddings. We find that without the suicide-related dictionary, BERT outperforms the other three word embeddings, with 2% higher accuracy and 1.5% higher F1-score on both models. After leveraging the suicide-related dictionary, the suicide-oriented word embedding based on FastText achieves ...
Latent Suicide Risk Detection on Microblog via Suicide-Oriented Word Embeddings and Layered Attention
1910.12038
Table 6: Performance comparison between different suicide risk detection model, where Acc and F1 represent accuracy and F1-score respectively.
['[EMPTY]', 'Full testset Acc', 'Full testset F1', 'Harder sub-testset Acc', 'Harder sub-testset F1']
[['SVM', '70.34', '69.01', '61.17', '64.11'], ['NB', '69.59', '70.12', '65.14', '62.20'], ['LSTM', '88.00', '88.14', '76.89', '75.32'], ['SDM', '[BOLD] 91.33', '[BOLD] 90.92', '[BOLD] 85.51', '[BOLD] 84.77']]
In this case, LSTM and SDM both employ So-FastText word embeddings as input. SDM improves accuracy by over 3.33% and obtains a 2.78% higher F1-score on the full test set.
Latent Suicide Risk Detection on Microblog via Suicide-Oriented Word Embeddings and Layered Attention
1910.12038
Table 7: Ablation test for SDM with different inputs.
['Inputs', 'Accuracy', 'F1-score']
[['Text', '88.56', '87.99'], ['Text+Image', '89.22', '89.22'], ['Text+User’s feature', '90.66', '90.17'], ['Text+Image +User’s feature', '[BOLD] 91.33', '[BOLD] 90.92']]
To show the contribution of each input to the final classification performance, we design an ablation test of SDM, removing one input at a time. All SDM variants use the So-FastText embedding. Since not every post contains an image and users’ features contain missing values, we do not use images or users’ features alone ...
Learning to Stop in Structured Prediction for Neural Machine Translation
1904.01032
Table 5: BLEU and length ratio of models on Zh→En validation set. †indicates our own implementation.
['Model', 'Train', 'Decode', 'BLEU', 'Len.']
[['Model', 'Beam', 'Beam', 'BLEU', 'Len.'], ['Seq2Seq†', '-', '7', '37.74', '0.96'], ['w/ Len. reward†', '-', '7', '38.28', '0.99'], ['BSO†', '4', '3', '36.91', '1.03'], ['BSO†', '8', '7', '35.57', '1.07'], ['This work', '4', '3', '38.41', '1.00'], ['This work', '8', '7', '39.51', '1.00']]
We compare our model with seq2seq, BSO, and seq2seq with a length reward (Huang et al.); our proposed method does not require tuning of this hyper-parameter. Our proposed model achieves a comparatively better length ratio across almost all source sentence lengths on the dev set.
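The length ratio column in Table 5 is the corpus-level ratio of hypothesis to reference length. A sketch under the assumption of whitespace tokenization (the paper's scoring pipeline may tokenize differently):

```python
def length_ratio(hypotheses, references):
    """Corpus-level length ratio: total hypothesis tokens
    divided by total reference tokens."""
    hyp_len = sum(len(h.split()) for h in hypotheses)
    ref_len = sum(len(r.split()) for r in references)
    return hyp_len / ref_len
```

A ratio of 1.00, as achieved by the proposed model, means translations are on average exactly as long as the references; values above 1 indicate over-translation.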
Language Models with Transformers
1904.09408
Table 2: Ablation study. Compare CAS with not adding LSTM layers (CAS-Subset) and not updating Transformer block parameters (CAS-LSTM).
['[BOLD] Model', '[BOLD] Datasets [BOLD] PTB', '[BOLD] Datasets [BOLD] PTB', '[BOLD] Datasets [BOLD] WT-2', '[BOLD] Datasets [BOLD] WT-2', '[BOLD] Datasets [BOLD] WT-103', '[BOLD] Datasets [BOLD] WT-103']
[['[BOLD] Model', '[BOLD] Val', '[BOLD] Test', '[BOLD] Val', '[BOLD] Test', '[BOLD] Val', '[BOLD] Test'], ['BERT-CAS-Subset', '42.53', '36.57', '[BOLD] 51.15', '[BOLD] 44.96', '[BOLD] 44.34', '[BOLD] 43.33'], ['BERT-CAS-LSTM', '[BOLD] 40.22', '[BOLD] 35.32', '53.82', '47.00', '53.66', '51.60'], ['GPT-CAS-Subset', '47.5...
CAS-LSTM adds LSTM layers but fixes all Transformer blocks during fine-tuning. As can be seen, both CAS-Subset and CAS-LSTM improve significantly upon a naive use of BERT and GPT. This is to be expected, since fine-tuning improves performance. On the smaller dataset, i.e. PTB, adding LSTMs is more effective. This might be due to the overfitti...
Language Models with Transformers
1904.09408
Table 1: Performance of Coordinate Architecture Search (CAS). ‘Val’ and ‘Test’ denote validation and test perplexity respectively.
['[BOLD] Model', '[BOLD] Datasets [BOLD] PTB', '[BOLD] Datasets [BOLD] PTB', '[BOLD] Datasets [BOLD] WT-2', '[BOLD] Datasets [BOLD] WT-2', '[BOLD] Datasets [BOLD] WT-103', '[BOLD] Datasets [BOLD] WT-103']
[['[BOLD] Model', '[BOLD] Val', '[BOLD] Test', '[BOLD] Val', '[BOLD] Test', '[BOLD] Val', '[BOLD] Test'], ['AWD-LSTM-MoS-BERTVocab', '43.47', '38.04', '48.48', '42.25', '54.94', '52.91'], ['BERT', '72.99', '62.40', '79.76', '69.32', '109.54', '107.30'], ['BERT-CAS (Our)', '39.97', '34.47', '38.43', '34.64', '40.70', '3...
First, note that GPT and BERT are significantly worse than AWD-LSTM-MoS. This confirms our hypothesis that neither BERT nor GPT is an effective tool for language modeling. Applying them naively leads to significantly worse results than AWD-LSTM-MoS on all three datasets. It demonstrates that language modeling requires s...
Language Models with Transformers
1904.09408
Table 5: Efficiency of different search methods on PTB and WT-2.
['[BOLD] Search Method', '[BOLD] Search Cost (GPU days) [BOLD] PTB', '[BOLD] Search Cost (GPU days) [BOLD] WT-2', '[BOLD] Method Class']
[['NAS Zoph and Le ( 2016 )', '1,000 CPU days', 'n.a.', 'reinforcement'], ['ENAS Pham et al. ( 2018 )', '0.5', '0.5', 'reinforcement'], ['DARTS (first order) Liu et al. ( 2018 )', '0.5', '1', 'gradient descent'], ['DARTS (second order) Liu et al. ( 2018 )', '1', '1', 'gradient descent'], ['BERT-CAS (Our)', '[BOLD] 0.15...
As can be seen, BERT-CAS is cheaper than all the others. The results indicate that by leveraging prior knowledge of neural network design for specific tasks, we can restrict the search to a small, confined sub-space of architectures, which speeds up the search process. For example, BERT-CAS is directly b...
Language Models with Transformers
1904.09408
Table 6: Compare model parameter size and results with GPT-2. The GPT-2 model size and results are from Radford et al. (2019).
['[BOLD] Model', '[BOLD] Parameters', '[BOLD] Datasets PTB', '[BOLD] Datasets WT-2', '[BOLD] Datasets WT-103']
[['GPT-2', '345M', '47.33', '22.76', '26.37'], ['GPT-2', '762M', '40.31', '19.93', '22.05'], ['GPT-2', '1542M', '35.76', '[BOLD] 18.34', '[BOLD] 17.48'], ['BERT-Large-CAS', '395M', '[BOLD] 31.34', '34.11', '20.42']]
We specifically compare the proposed model with the recent state-of-the-art language model GPT-2 (Radford et al., 2019). More surprisingly, on PTB the proposed method performs better than GPT-2 (1542M), which has around 4 times more parameters. On WT-103, BERT-Large-CAS is better than GPT-2 (762M), which has around 2 times more ...
Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents
1805.01542
Table 2: Results for around 200 custom developer domains. For F1, higher values are better, while for SER lower values are better. * denotes statistically significant SER difference compared to both baselines.
['Approach', '[ITALIC] F1 [ITALIC] Intent Mean', '[ITALIC] F1 [ITALIC] Intent Median', '[ITALIC] F1 [ITALIC] Slot Mean', '[ITALIC] F1 [ITALIC] Slot Median', '[ITALIC] SER Mean', '[ITALIC] SER Median']
[['Baseline CRF/MaxEnt', '94.6', '96.6', '80.0', '91.5', '14.5', '9.2'], ['Baseline DNN', '91.9', '95.9', '85.1', '92.9', '14.7', '9.2'], ['Proposed Pretrained DNN *', '[BOLD] 95.2', '[BOLD] 97.2', '[BOLD] 88.6', '[BOLD] 93.0', '[BOLD] 13.1', '[BOLD] 7.9']]
For the custom domain experiments, we focus on a low-resource experimental setup, where we assume that our only target training data is the data provided by the external developer. We report results for around 200 custom domains, a subset of all the domains we support. For training the baselines, we use the availa...
Fast and Scalable Expansion of Natural Language Understanding Functionality for Intelligent Agents
1805.01542
Table 3: Results on domains A, B and C for the proposed pretrained DNN method and the baseline CRF/MaxEnt method during experimental early stages of domain development. * denotes statistically significant SER difference between proposed and baseline
['Train Set', 'Size', 'Method', '[ITALIC] F1 [ITALIC] intent', '[ITALIC] F1 [ITALIC] slot', '[ITALIC] SER']
[['Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)', 'Domain A (5 intents, 36 slots)'], ['Core*', '500', 'Baseline', '85.0', '63.9', '51.9'], ['data', '500', 'Proposed', '86.6', '66.6', '48.2'], ['Boot...
We evaluate our methods on three new built-in domains, referred to here as domain A (5 intents, 36 slot types), domain B (2 intents, 17 slot types) and domain C (22 intents, 43 slot types). Core data refers to core example utterances; bootstrap data refers to domain data collection and generation of synthetic (grammar) utt...
Non-autoregressive Machine Translation with Disentangled Context Transformer
2001.05136
Table 1: The performance of non-autoregressive machine translation methods on the WMT14 EN-DE and WMT16 EN-RO test data. The Step columns indicate the average number of sequential transformer passes. Shaded results use a small transformer (d_{model}=d_{hidden}=512). Our EN-DE results show the scores after conventional ...
['[BOLD] Model n: # rescored candidates', '[BOLD] en\\rightarrowde Step', '[BOLD] en\\rightarrowde BLEU', '[BOLD] de\\rightarrowen Step', '[BOLD] de\\rightarrowen BLEU', '[BOLD] en\\rightarrowro Step', '[BOLD] en\\rightarrowro BLEU', '[BOLD] ro\\rightarrowen Step', '[BOLD] ro\\rightarrowen BLEU']
[['Gu2017NonAutoregressiveNM (n=100)', '1', '19.17', '1', '23.20', '1', '29.79', '1', '31.44'], ['Wang2019NonAutoregressiveMT (n=9)', '1', '24.61', '1', '28.90', '–', '–', '–', '–'], ['Li2019HintBasedTF (n=9)', '1', '25.20', '1', '28.80', '–', '–', '–', '–'], ['Ma2019FlowSeqNC (n=30)', '1', '25.31', '1', '30.68', '1', ...
First, our re-implementations of CMLM + Mask-Predict outperform Ghazvininejad2019MaskPredictPD (e.g., 31.24 vs. 30.53 in de\rightarrowen with 10 steps). This is probably due to our tuning of the dropout rate and weight averaging over the 5 best epochs based on validation BLEU performance (Sec.
Non-autoregressive Machine Translation with Disentangled Context Transformer
2001.05136
Table 4: Effects of distillation across different models and inference. All results are BLEU scores from the dev data. T and b denote the max number of iterations and beam size respectively.
['Model', 'T', '[BOLD] en\\rightarrowde raw', '[BOLD] en\\rightarrowde dist.', '[BOLD] en\\rightarrowde \\Delta', '[BOLD] ro\\rightarrowen raw', '[BOLD] ro\\rightarrowen dist.', '[BOLD] ro\\rightarrowen \\Delta']
[['CMLM + MaskP', '4', '22.7', '25.5', '2.8', '33.2', '34.8', '1.6'], ['CMLM + MaskP', '10', '24.5', '25.9', '1.4', '34.5', '34.9', '0.4'], ['DisCo + MaskP', '4', '21.4', '24.6', '3.2', '32.3', '34.1', '1.8'], ['DisCo + MaskP', '10', '23.6', '25.3', '1.7', '33.4', '34.3', '0.9'], ['DisCo + EasyF', '10', '23.9', '25.6',...
Consistent with previous models (Gu2017NonAutoregressiveNM; UnderstandingKD), we find that distillation facilitates all of the non-autoregressive models. Moreover, the DisCo transformer benefits more from distillation compared to the CMLM under the same mask-predict inference. This is in line with UnderstandingKD who s...
Non-autoregressive Machine Translation with Disentangled Context Transformer
2001.05136
Table 6: Dev results from bringing training closer to inference.
['Training Variant', '[BOLD] en\\rightarrowde Step', '[BOLD] en\\rightarrowde BLEU', '[BOLD] ro\\rightarrowen Step', '[BOLD] ro\\rightarrowen BLEU']
[['Random Sampling', '4.29', '[BOLD] 25.60', '3.17', '[BOLD] 34.97'], ['Easy-First Training', '4.03', '24.76', '2.94', '34.96']]
Easy-First Training. So far, we have trained our models to predict every word given a random subset of the other words. But this training scheme yields a gap between training and inference, which might harm the model. We attempt to bring training closer to inference by training the DisCo transformer in the easy-first orde...
Non-autoregressive Machine Translation with Disentangled Context Transformer
2001.05136
Table 7: Dev results with different decoding strategies.
['[BOLD] Inference Strategy', '[BOLD] en\\rightarrowde Step', '[BOLD] en\\rightarrowde BLEU', '[BOLD] ro\\rightarrowen Step', '[BOLD] ro\\rightarrowen BLEU']
[['Left-to-Right Order', '6.80', '21.25', '4.86', '33.87'], ['Right-to-Left Order', '6.79', '20.75', '4.67', '34.38'], ['All-But-Itself', '6.90', '20.72', '4.80', '33.35'], ['Parallel Easy-First', '4.29', '[BOLD] 25.60', '3.17', '[BOLD] 34.97'], ['Mask-Predict', '10', '25.34', '10', '34.54']]
Alternative Inference Algorithms. In parallel easy-first inference, each word’s probability is conditioned on the easier positions from the previous iteration. We evaluate two alternative orderings: left-to-right and right-to-left. We see that both of them yield much degraded performance. We also attempt to use even broader context than parallel easy-first by compu...
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
1910.08381
Table 8. Extremely large Q&A dataset results.
['[BOLD] Performance (ACC) [BOLD] BERTlarge', '[BOLD] Performance (ACC) [BOLD] KD', '[BOLD] Performance (ACC) [BOLD] MKD', '[BOLD] Performance (ACC) [BOLD] TMKD']
[['77.00', '73.22', '77.32', '[BOLD] 79.22']]
To further evaluate the potential of TMKD, we conduct extensive experiments on the CommQA-Unlabeled extremely large-scale corpus (0.1 billion unlabeled Q&A pairs) and CommQA-Labeled (12M labeled Q&A pairs). Four separate teacher models (T1-T4) are trained with a batch size of 128 and learning rates in {2,3,4,5}...
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
1910.08381
Table 4. Model comparison between our methods and baseline methods. ACC denotes accuracy (all ACC metrics in the table are percentage numbers with % omitted). Specially for MNLI, we average the results of matched and mismatched validation set.
['[BOLD] Model', '[BOLD] Model', '[BOLD] Performance (ACC) [BOLD] DeepQA', '[BOLD] Performance (ACC) [BOLD] MNLI', '[BOLD] Performance (ACC) [BOLD] SNLI', '[BOLD] Performance (ACC) [BOLD] QNLI', '[BOLD] Performance (ACC) [BOLD] RTE', '[BOLD] Inference Speed(QPS)', '[BOLD] Parameters (M)']
[['[BOLD] Original Model', '[BOLD] BERT-3', '75.78', '70.77', '77.75', '78.51', '57.42', '207', '50.44'], ['[BOLD] Original Model', '[BOLD] BERTlarge', '81.47', '79.10', '80.90', '90.30', '68.23', '16', '333.58'], ['[BOLD] Original Model', '[BOLD] BERTlarge ensemble', '81.66', '79.57', '81.39', '90.91', '70.75', '16/3'...
In this section, we conduct experiments to compare TMKD with the baselines along three dimensions: inference speed, parameter size, and performance on the task-specific test sets. 1-o-1 and 1avg-o-1 (BERT-3 and Bi-LSTM) obtain good results regarding inference speed and memory capacity. However, there are still s...
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
1910.08381
Table 5. Comparison between KD and TKD
['[BOLD] Model', '[BOLD] Performance (ACC) [BOLD] DeepQA', '[BOLD] Performance (ACC) [BOLD] MNLI', '[BOLD] Performance (ACC) [BOLD] SNLI', '[BOLD] Performance (ACC) [BOLD] QNLI', '[BOLD] Performance (ACC) [BOLD] RTE']
[['[BOLD] KD (1-o-1)', '77.35', '71.07', '78.62', '77.65', '55.23'], ['[BOLD] TKD', '[BOLD] 80.12', '[BOLD] 72.34', '[BOLD] 78.23', '[BOLD] 85.89', '[BOLD] 67.35']]
On the DeepQA dataset, TKD shows significant gains by leveraging large-scale unsupervised Q&A pairs for distillation pre-training. (2) Although the Q&A task differs from the GLUE tasks, the student models for the GLUE tasks still benefit substantially from the distillation pre-training stage that leverages the Q&A task. This proves the effect of t...
Model Compression with Two-stage Multi-teacher Knowledge Distillation for Web Question Answering System
1910.08381
Table 9. Compare different number of transformer layer.
['[BOLD] Dataset', '[BOLD] Metrics', '[BOLD] Layer Number [BOLD] 1', '[BOLD] Layer Number [BOLD] 3', '[BOLD] Layer Number [BOLD] 5']
[['[BOLD] DeepQA', 'ACC', '74.59', '78.46', '79.54'], ['[BOLD] MNLI', 'ACC', '61.23', '71.13', '72.76'], ['[BOLD] SNLI', 'ACC', '70.21', '77.67', '78.20'], ['[BOLD] QNLI', 'ACC', '70.60', '82.04', '84.94'], ['[BOLD] RTE', 'ACC', '54.51', '65.70', '66.07'], ['[EMPTY]', 'QPS', '511', '217', '141']]
(2) As n increases, the performance gain between two consecutive trials decreases. That is, when n increases from 1 to 3, the ACC gains on the 5 datasets are (3.87, 9.90, 7.46, 11.44, 11.19), a very big jump; when n increases from 3 to 5, the gains decrease to (1.08, 1.63, 0.53, 2.89, 0.37), without decent add-...
Guiding Corpus-based Set Expansion by Auxiliary Sets Generation and Co-Expansion
2001.10106
Table 1. A frequency submatrix ϕ of president entities and their co-occured skip-grams.
['Entities', 'President __ and', 'President __ ,', 'President __ said', 'President __ ’s', 'President __ *']
[['Bill Clinton', '33', '17', '2', '23', '75'], ['Hu Jintao', '9', '8', '3', '0', '20'], ['Gorbachev', '2', '3', '0', '2', '7']]
The infrequent “Gorbachev” does not have enough co-occurrence with type-indicative skip-grams and might even be filtered out of the candidate pool. What’s more, the skip-gram “President __ said” might be neglected in the context feature selection process due to its lack of interaction with president entities. These...
Guiding Corpus-based Set Expansion by Auxiliary Sets Generation and Co-Expansion
2001.10106
Table 3. Mean Average Precision across all queries on Wiki and APR.
['Methods', '[ITALIC] Wiki MAP@10', '[ITALIC] Wiki MAP@20', '[ITALIC] Wiki MAP@50', '[ITALIC] APR MAP@10', '[ITALIC] APR MAP@20', '[ITALIC] APR MAP@50']
[['CaSE', '0.897', '0.806', '0.588', '0.619', '0.494', '0.330'], ['SetExpander', '0.499', '0.439', '0.321', '0.287', '0.208', '0.120'], ['SetExpan', '0.944', '0.921', '0.720', '0.789', '0.763', '0.639'], ['BERT', '0.970', '0.945', '0.853', '0.890', '0.896', '0.777'], ['Set-CoExpan (no aux.)', '0.964', '0.950', '0.861',...
The results clearly show that our model performs best among all the methods on both datasets. Among the baseline methods, BERT is the strongest one, since it relies on a very large pre-trained language model to represent word semantics in context, which also explains why we have a very large margi...
Component Analysis for Visual Question Answering Architectures
2002.05104
TABLE II: Experiment using different fusion strategies.
['[ITALIC] Embedding', 'RNN', 'Fusion', 'Training', 'Validation']
[['BERT', 'GRU', 'Mult', '78.28', '[BOLD] 58.75'], ['BERT', 'GRU', 'Concat', '67.85', '55.07'], ['BERT', 'GRU', 'Sum', '68.21', '54.93']]
The best result is obtained using element-wise multiplication. Such an approach functions as a filtering strategy that scales down the importance of irrelevant dimensions of the visual-question feature vectors. In other words, vector dimensions with high cross-modal affinity will have their magnitudes i...
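The three fusion strategies of Table II can be sketched on plain feature vectors; an illustrative sketch with a hypothetical `fuse` helper (in the paper these operations sit inside a larger network, with concat doubling the dimensionality of the fused vector):

```python
def fuse(q, v, strategy="mult"):
    """Fuse a question vector q and an image vector v.

    'mult' (element-wise product) damps dimensions where either
    modality has low activation; 'sum' and 'concat' do not.
    """
    if strategy == "mult":
        return [a * b for a, b in zip(q, v)]
    if strategy == "sum":
        return [a + b for a, b in zip(q, v)]
    if strategy == "concat":
        return list(q) + list(v)
    raise ValueError(f"unknown strategy: {strategy}")
```

Note how a zero in either modality zeroes out that dimension under 'mult', which is the filtering effect described above.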
Component Analysis for Visual Question Answering Architectures
2002.05104
TABLE III: Experiment using different attention mechanisms.
['[ITALIC] Embedding', 'RNN', 'Attention', 'Training', 'Validation']
[['BERT', 'GRU', '-', '78.20', '58.75'], ['BERT', 'GRU', 'Co-Attention', '71.10', '58.54'], ['BERT', 'GRU', 'Co-Attention (L2 norm)', '86.03', '64.03'], ['BERT', 'GRU', 'Top Down', '82.64', '62.37'], ['BERT', 'GRU', 'Top Down ( [ITALIC] σ=ReLU)', '87.02', '[BOLD] 64.12']]
For these experiments we used only element-wise multiplication as the fusion strategy, given that it presented the best performance in our previous experiments. We observe that attention is a crucial mechanism for VQA, leading to an ≈6% accuracy improvement.
Investigating the Effectiveness of Representations Based on Word-Embeddings in Active Learning for Labelling Text DatasetsSupported by organization x.
1910.03505
Table 3: P values and win/draw/lose of pairwise comparison of QBC-based methods.
['[EMPTY]', 'BERT', 'FT', 'FT_T', 'LDA', 'TF-IDF', 'TF']
[['BERT', '[EMPTY]', '6/0/2', '5/0/3', '8/0/0', '8/0/0', '8/0/0'], ['FT', '0.0687', '[EMPTY]', '3/1/4', '8/0/0', '8/0/0', '8/0/0'], ['FT_T', '0.0929', '0.4982', '[EMPTY]', '8/0/0', '7/0/1', '7/0/1'], ['LDA', '[BOLD] 0.0117', '[BOLD] 0.0117', '[BOLD] 0.0117', '[EMPTY]', '0/0/8', '1/0/7'], ['TF-IDF', '[BOLD] 0.0117', '[B...
This table shows the win/draw/lose counts for each pair of representations and the p-values from the related significance tests. The table demonstrates that all embedding-based methods are significantly different from the methods based on TF, TF-IDF and LDA with p<0.05. However, the embedding-based methods do not have significant differe...
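The win/draw/lose tallies in Table 3 come from comparing the two representations' scores on each of the eight datasets. A minimal sketch, assuming paired per-dataset scores:

```python
def win_draw_lose(scores_a, scores_b):
    """Win/draw/lose counts for representation A vs. B over
    paired per-dataset scores."""
    win = sum(a > b for a, b in zip(scores_a, scores_b))
    draw = sum(a == b for a, b in zip(scores_a, scores_b))
    lose = sum(a < b for a, b in zip(scores_a, scores_b))
    return win, draw, lose
```

For example, BERT's 8/0/0 row against LDA means BERT scored higher on all eight datasets; the p-values then come from a separate paired significance test over the same scores.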
Abstract
1604.08120
Table .5: CauseRelPro-beta’s micro-averaged scores using different degrees of polynomial kernel.
['[EMPTY]', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['2 [ITALIC] nd degree', '0.5031', '0.2516', '0.3354'], ['3 [ITALIC] rd degree', '0.5985', '0.2484', '0.3511'], ['4 [ITALIC] th degree', '0.6337', '0.2013', '0.3055']]
We also conduct experiments using different degrees of the polynomial kernel. Note that even though the best F1-score is achieved with the 3rd-degree polynomial kernel, the best precision of 0.6337 is achieved with degree 4.
Abstract
1604.08120
Table .6: TempRelPro performances evaluated on the TimeBank-Dense test set and compared with CAEVO.
['[BOLD] System', '[BOLD] T-T [BOLD] P/R/F1', '[BOLD] E-D [BOLD] P/R/F1', '[BOLD] E-T [BOLD] P/R/F1', '[BOLD] E-E [BOLD] P/R/F1', '[BOLD] Overall [BOLD] P', '[BOLD] Overall [BOLD] R', '[BOLD] Overall [BOLD] F1']
[['[BOLD] TempRelPro', '[BOLD] 0.780', '0.518', '[BOLD] 0.556', '0.487', '[BOLD] 0.512', '[BOLD] 0.510', '[BOLD] 0.511'], ['CAEVO', '0.712', '[BOLD] 0.553', '0.494', '[BOLD] 0.494', '0.508', '0.506', '0.507']]
We report the performance of TempRelPro compared with CAEVO. We achieve a small improvement in the overall F1-score, i.e., 51.1% vs. 50.7%. For each temporal entity pair type, since we label all possible links, precision and recall are the same. TempRelPro is significantly better than CAEVO in labelling T-T and E-T pai...
Abstract
1604.08120
Table .9: TempRelPro performances in terms of coverage (Cov), precision (P), recall (R) and F1-score (F1) for all domains, compared with systems in QA TempEval augmented with TREFL.
['[BOLD] System', '[BOLD] Cov', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['TempRelPro', '0.53', '0.65', '0.34', '0.45'], ['[BOLD] TempRelPro + coref', '0.53', '0.66', '[BOLD] 0.35', '[BOLD] 0.46'], ['HLT-FBK + trefl', '0.48', '0.61', '0.29', '0.39'], ['HLT-FBK + coref + trefl', '[BOLD] 0.67', '0.51', '0.34', '0.40'], ['HITSZ-ICRC + trefl', '0.15', '0.58', '0.09', '0.15'], ['CAEVO + trefl',...
The QA TempEval organizers also provide an extra evaluation, augmenting the participating systems with a time expression reasoner (TREFL) as a post-processing step \parencite{llorens-EtAl:2015:SemEval}. The TREFL component adds TLINKs between timexes based on their resolved values. \parencitellorens-saquete-navarro:201...
Abstract
1604.08120
Table .1: Classifier performances (F1-scores) in different experimental settings S1 and S2, compared with using only traditional features. TP: true positives and FP: false positives.
['[EMPTY]', '[BOLD] Feature vector Traditional features', '[BOLD] Feature vector → [ITALIC] f', '[BOLD] Total 5271', '[BOLD] TP 2717', '[BOLD] FP 2554', '[BOLD] F1 [BOLD] 0.5155']
[['[BOLD] S1', 'GloVe', '(→ [ITALIC] w1⊕→ [ITALIC] w2)', '5271', '2388', '2883', '0.4530'], ['[EMPTY]', '[EMPTY]', '(→ [ITALIC] w1+→ [ITALIC] w2)', '5271', '2131', '3140', '0.4043'], ['[EMPTY]', '[EMPTY]', '(→ [ITALIC] w1−→ [ITALIC] w2)', '5271', '2070', '3201', '0.3927'], ['[EMPTY]', 'Word2Vec', '(→ [ITALIC] w1⊕→ [ITA...
we report the performances of the classifier in different experimental settings S1 and S2, compared with the classifier performance using only traditional features. Since we classify all possible event pairs in the dataset, precision and recall are the same.
Abstract
1604.08120
Table .2: F1-scores per TLINK type with different feature vectors. Pairs of word vectors (→w1, →w2) are retrieved from Word2Vec pre-trained vectors.
['[BOLD] TLINK type', '(→ [ITALIC] w1⊕→ [ITALIC] w2)', '(→ [ITALIC] w1+→ [ITALIC] w2)', '(→ [ITALIC] w1−→ [ITALIC] w2)', '→ [ITALIC] f', '((→ [ITALIC] w1⊕→ [ITALIC] w2)⊕→ [ITALIC] f)', '((→ [ITALIC] w1+→ [ITALIC] w2)⊕→ [ITALIC] f)', '((→ [ITALIC] w1−→ [ITALIC] w2)⊕→ [ITALIC] f)']
[['BEFORE', '[BOLD] 0.6120', '0.5755', '0.5406', '0.6156', '[BOLD] 0.6718', '0.6440', '0.6491'], ['AFTER', '[BOLD] 0.4674', '0.3258', '0.4450', '0.5294', '[BOLD] 0.5800', '0.5486', '0.5680'], ['IDENTITY', '0.5142', '0.4528', '[BOLD] 0.5201', '0.6262', '[BOLD] 0.6650', '0.6456', '0.6479'], ['SIMULTANEOUS', '[BOLD] 0.257...
Pairs of word vectors (→w1,→w2) are retrieved from Word2Vec pre-trained vectors. (→w1−→w2) is shown to be the best in identifying IDENTITY, BEGINS, BEGUN_BY and ENDS relation types, while the rest are best identified by (→w1⊕→w2). Combining (→w1⊕→w2) and →f improves the identification of all TLINK types in general, p...
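The three feature-vector combinations (concatenation ⊕, element-wise sum, element-wise difference) can be sketched in plain Python (illustrative helpers on toy 3-dimensional vectors, not the paper's code):

```python
def concat(w1, w2):
    """(w1 ⊕ w2): concatenation, dimensionality doubles."""
    return list(w1) + list(w2)

def vec_sum(w1, w2):
    """(w1 + w2): element-wise addition."""
    return [a + b for a, b in zip(w1, w2)]

def vec_diff(w1, w2):
    """(w1 − w2): element-wise difference."""
    return [a - b for a, b in zip(w1, w2)]

w1, w2 = [0.2, -0.1, 0.4], [0.1, 0.3, -0.2]  # toy "word vectors"
print(len(concat(w1, w2)))  # → 6
print(vec_sum(w1, w2))
print(vec_diff(w1, w2))
```

Concatenation preserves both vectors' information at the cost of doubled feature dimensionality, while sum and difference keep the original dimensionality but lose which word contributed what.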
Abstract
1604.08120
Table .3: Classifier performance per TLINK type with different feature vectors, evaluated on TempEval-3-platinum. Pairs of word vectors (→w1, →w2) are retrieved from Word2Vec pre-trained vectors.
['[BOLD] TLINK type', '(→ [ITALIC] w1⊕→ [ITALIC] w2) [BOLD] P', '(→ [ITALIC] w1⊕→ [ITALIC] w2) [BOLD] R', '(→ [ITALIC] w1⊕→ [ITALIC] w2) [BOLD] F1', '→ [ITALIC] f [BOLD] P', '→ [ITALIC] f [BOLD] R', '→ [ITALIC] f [BOLD] F1', '((→ [ITALIC] w1⊕→ [ITALIC] w2)⊕→ [ITALIC] f) [BOLD] P', '((→ [ITALIC] w1⊕→ [ITALIC] w2)...
[['[BOLD] BEFORE', '0.4548', '0.7123', '0.5551', '0.5420', '0.7381', '[BOLD] 0.6250', '0.5278', '0.7170', '0.6080'], ['[BOLD] AFTER', '0.5548', '0.4649', '0.5059', '0.5907', '0.6196', '[BOLD] 0.6048', '0.6099', '0.6000', '[BOLD] 0.6049'], ['[BOLD] IDENTITY', '0.0175', '0.0667', '0.0278', '0.2245', '0.7333', '0.3438', '...
In general, using only (→w1⊕→w2) as features does not give any benefit since the performance is significantly worse compared to using only traditional features →f, i.e. 0.4271 vs 0.5043 F1-scores. Combining the word embedding and traditional features ((→w1⊕→w2)⊕→f) also does not improve the classifier performance in ge...
Abstract
1604.08120
Table .4: CauseRelPro-beta’s micro-averaged scores.
['[BOLD] System', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['mirza-tonelli:2014:Coling', '0.6729', '0.2264', '0.3388'], ['[BOLD] CauseRelPro-beta', '[BOLD] 0.5985', '[BOLD] 0.2484', '[BOLD] 0.3511']]
The main difference between CauseRelPro-beta and the system reported in \textcite{mirza-tonelli:2014:Coling} is the elimination of the middle step in which causal signals are identified. This, together with the use of supersenses, contributes to increasing recall. However, using token-based features and having a specific step...
Abstract
1604.08120
Table .6: Impact of increased training data, using EMM-clusters and propagated (prop.) CLINKs, on system performances (micro-averaged scores). TP: true positives, FP: false positives and FN: false negatives.
['[EMPTY]', '[BOLD] TP', '[BOLD] FP', '[BOLD] FN', '[BOLD] P', '[BOLD] R', '[BOLD] F1']
[['Causal-TimeBank', '79', '53', '239', '0.5985', '0.2484', '0.3511'], ['Causal-TimeBank + EMM-clusters', '95', '54', '223', '0.6376', '0.2987', '0.4069'], ['[BOLD] Causal-TimeBank + EMM-clusters + prop. CLINKs', '108', '74', '210', '[BOLD] 0.5934', '[BOLD] 0.3396', '[BOLD] 0.4320']]
Adding EMM-clusters to the training data (Causal-TimeBank + EMM-clusters) significantly improves the system's micro-averaged F1-score by 5.57% (p < 0.01). We evaluate the performance of the system trained with the enriched training data from CLINK propagation in the same five-fold cross-validation setting as the previous experiment. (Causal-TimeBank + EMM-c...
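The micro-averaged scores in the table follow directly from the TP/FP/FN counts; a minimal Python sketch reproduces the reported rows (note that micro F1 simplifies to 2·TP / (2·TP + FP + FN)):

```python
def micro_scores(tp: int, fp: int, fn: int):
    """Micro-averaged precision, recall and F1 from raw counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f1 = 2 * p * r / (p + r)  # equivalently 2*tp / (2*tp + fp + fn)
    return round(p, 4), round(r, 4), round(f1, 4)

# bottom row: Causal-TimeBank + EMM-clusters + propagated CLINKs
print(micro_scores(108, 74, 210))  # → (0.5934, 0.3396, 0.432)
```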
Leveraging Discourse Information Effectively for Authorship AttributionThe first two authors make equal contribution.
1709.02271
Table 7: Macro-averaged F1 score for multi-class author classification on the large datasets, using either no discourse (None), grammatical relations (GR), or RST relations (RST).
['Disc. Type', 'Model', 'novel-50', 'IMDB62']
[['None', 'SVM2', '92.9', '90.4'], ['None', 'CNN2', '95.3', '91.5'], ['GR', 'SVM2-PV', '93.3', '90.4'], ['GR', 'CNN2-PV', '95.1', '90.5'], ['GR', 'CNN2-DE (local)', '96.9', '90.8'], ['GR', 'CNN2-DE (global)', '97.5', '90.9'], ['RST', 'SVM2-PV', '93.8', '90.9'], ['RST', 'CNN2-PV', '95.5', '90.7'], ['RST', 'CNN2-DE (loca...
Generalization-dataset experiments. On novel-50, most discourse-enhanced models improve the performance of the baseline non-discourse CNN2 to varying degrees. The clear pattern again emerges that RST features work better, with the best F1 score evidenced in the CNN2-DE (global) model (3.5 improvement in F1). On IMDB62,...
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 3: Decoding accuracy of neural networks of different sizes (Word Error Rate, %)
['[BOLD] Model', '[BOLD] dev-clean', '[BOLD] dev-other', '[BOLD] test-clean', '[BOLD] test-other']
[['nnet-256', '7.3', '19.2', '7.6', '19.6'], ['nnet-512', '6.4', '17.1', '6.6', '17.6'], ['nnet-768', '6.4', '16.8', '6.6', '17.5'], ['KALDI', '4.3', '11.2', '4.8', '11.5']]
In the next experiment, the neural networks are trained with the same architecture but different layer sizes on the 460x6+500 hours dataset. This shows that larger models are capable of fitting the data and generalizing better, as expected. This allows us to choose the best tradeoff between precision and computational ...
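Word Error Rate is the word-level edit distance between hypothesis and reference divided by the reference length; a minimal Python sketch of the standard definition (not the evaluation script used in the paper) is:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # one-row dynamic-programming table over hypothesis prefixes
    d = list(range(len(hyp) + 1))
    for i, rw in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(hyp, 1):
            cur = min(
                d[j] + 1,           # deletion (reference word dropped)
                d[j - 1] + 1,       # insertion (extra hypothesis word)
                prev + (rw != hw),  # substitution or match
            )
            prev, d[j] = d[j], cur
    return d[-1] / len(ref)

# one substitution in a five-word reference
print(round(wer("turn on the kitchen lights", "turn on the chicken lights"), 2))  # → 0.2
```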
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 4: Comparison of speed and memory performance of nnet-256 and nnet-768. RTF refers to real time ratio.
['[BOLD] Model', '[BOLD] Num. Params (M)', '[BOLD] Size (MB)', '[BOLD] RTF (Raspberry Pi 3)']
[['nnet-256', '2.6', '10', '<1'], ['nnet-768', '15.4', '59', '>1']]
The gain is similar in RAM. In terms of speed, the nnet-256 is 6 to 10 times faster than the nnet-768. These tradeoffs and comparison with other trained models led us to select the nnet-256. It has a reasonable speed and memory footprint, and the loss in accuracy is compensated by the adapted LM and robust NLU.
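The size and real-time figures can be approximated from first principles; a small Python sketch, assuming 32-bit float weights and using hypothetical decoding timings:

```python
def model_size_mb(num_params_millions: float, bytes_per_param: int = 4) -> float:
    """Rough model size assuming 32-bit float weights (4 bytes each)."""
    return num_params_millions * bytes_per_param

def real_time_factor(processing_seconds: float, audio_seconds: float) -> float:
    """RTF < 1 means decoding keeps up with the incoming audio stream."""
    return processing_seconds / audio_seconds

print(model_size_mb(2.6))           # ≈ 10.4 MB, close to the reported 10 MB
print(model_size_mb(15.4))          # ≈ 61.6 MB vs. the reported 59 MB
print(real_time_factor(8.0, 10.0))  # 0.8 → faster than real time (RTF < 1)
```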
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 6: Precision, recall and F1-score on Braun et al. corpora. *Benchmark run in August 2017 by the authors of bench17 . **Benchmark run in January 2018 by the authors of this paper.
['corpus', 'NLU provider', 'precision', 'recall', 'F1-score']
[['chatbot', 'Luis*', '0.970', '0.918', '0.943'], ['chatbot', 'IBM Watson*', '0.686', '0.8', '0.739'], ['chatbot', 'API.ai*', '0.936', '0.532', '0.678'], ['chatbot', 'Rasa*', '0.970', '0.918', '0.943'], ['chatbot', 'Rasa**', '0.933', '0.921', '0.927'], ['chatbot', 'Snips**', '0.963', '0.899', '0.930'], ['web apps', 'Lu...
For the raw results and methodology, see https://github.com/snipsco/nlu-benchmark. The main metric used in this benchmark is the average F1-score of intent classification and slot filling. The data consists of three corpora. Two of the corpora were extracted from StackExchange, one from a Telegram chatbot. The exact sa...
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 7: Precision, recall and F1-score averaged on all slots and on all intents of an in-house dataset, run in June 2017.
['NLU provider', 'train size', 'precision', 'recall', 'F1-score']
[['Luis', '70', '0.909', '0.537', '0.691'], ['Luis', '2000', '0.954', '0.917', '[BOLD] 0.932'], ['Wit', '70', '0.838', '0.561', '0.725'], ['Wit', '2000', '0.877', '0.807', '0.826'], ['API.ai', '70', '0.770', '0.654', '0.704'], ['API.ai', '2000', '0.905', '0.881', '0.884'], ['Alexa', '70', '0.680', '0.495', '0.564'], ['...
In this experiment, the comparison is done separately on each intent to focus on slot filling (rather than intent classification). The main metric used in this benchmark is the average F1-score of slot filling on all slots. Three training sets of 70 and 2000 queries have been drawn from the total pool of queries to gai...
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 16: Precision, recall and F1-score averaged on all slots in an in-house dataset, run in June 2017.
['intent', 'NLU provider', 'train size', 'precision', 'recall', 'F1-score']
[['SearchCreativeWork', 'Luis', '70', '0.993', '0.746', '0.849'], ['SearchCreativeWork', 'Luis', '2000', '1.000', '0.995', '0.997'], ['SearchCreativeWork', 'Wit', '70', '0.959', '0.569', '0.956'], ['SearchCreativeWork', 'Wit', '2000', '0.974', '0.955', '0.964'], ['SearchCreativeWork', 'API.ai', '70', '0.915', '0.711', ...
In this experiment, the comparison is done separately on each intent to focus on slot filling (rather than intent classification). The main metric used in this benchmark is the average F1-score of slot filling on all slots. Three training sets of 70 and 2000 queries have been drawn from the total pool of queries to gai...
Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces
1805.10190
Table 17: Precision, recall and F1-score averaged on all slots in an in-house dataset, run in June 2017.
['intent', 'NLU provider', 'train size', 'precision', 'recall', 'F1-score']
[['AddToPlaylist', 'Luis', '70', '0.759', '0.575', '0.771'], ['AddToPlaylist', 'Luis', '2000', '0.971', '0.938', '0.953'], ['AddToPlaylist', 'Wit', '70', '0.647', '0.478', '0.662'], ['AddToPlaylist', 'Wit', '2000', '0.862', '0.761', '0.799'], ['AddToPlaylist', 'API.ai', '70', '0.830', '0.740', '0.766'], ['AddToPlaylist...
In this experiment, the comparison is done separately on each intent to focus on slot filling (rather than intent classification). The main metric used in this benchmark is the average F1-score of slot filling on all slots. Three training sets of 70 and 2000 queries have been drawn from the total pool of queries to gai...
Generating Sequences WithRecurrent Neural Networks
1308.0850
Table 1: Penn Treebank Test Set Results. ‘BPC’ is bits-per-character. ‘Error’ is next-step classification error rate, for either characters or words.
['Input', 'Regularisation', 'Dynamic', 'BPC', 'Perplexity', 'Error (%)', 'Epochs']
[['Char', 'none', 'no', '1.32', '167', '28.5', '9'], ['char', 'none', 'yes', '1.29', '148', '28.0', '9'], ['char', 'weight noise', 'no', '1.27', '140', '27.4', '25'], ['char', 'weight noise', 'yes', '1.24', '124', '26.9', '25'], ['char', 'adapt. wt. noise', 'no', '1.26', '133', '27.4', '26'], ['char', 'adapt. wt. noise...
For example, he records a perplexity of 141 for a 5-gram with Kneser-Ney smoothing, 141.8 for a word level feedforward neural network, 131.1 for the state-of-the-art compression algorithm PAQ8 and 123.2 for a dynamically evaluated word-level RNN. However by combining multiple RNNs, a 5-gram and a cache model in an ense...
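Bits-per-character and word-level perplexity are related by perplexity = 2^(BPC × avg. chars per word); a short Python sketch, assuming roughly 5.6 characters per word (including the space, a figure consistent with the table's char-level rows on Penn Treebank):

```python
def word_perplexity(bpc: float, chars_per_word: float = 5.6) -> float:
    """Convert bits-per-character to word-level perplexity.

    Each word costs bpc * chars_per_word bits on average, so
    perplexity = 2 ** (bpc * chars_per_word).
    """
    return 2 ** (bpc * chars_per_word)

# char-level, adaptive weight noise, dynamic evaluation: BPC 1.24
print(round(word_perplexity(1.24)))  # → 123, close to the reported perplexity of 124
```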
Generating Sequences WithRecurrent Neural Networks
1308.0850
Table 2: Wikipedia Results (bits-per-character)
['Train', 'Validation (static)', 'Validation (dynamic)']
[['1.42', '1.67', '1.33']]
As with the Penn data, we tested the network on the validation data with and without dynamic evaluation (where the weights are updated as the data is predicted). This is probably because of the long range coherence of Wikipedia data; for example, certain words are much more frequent in some articles than others, and be...
Generating Sequences WithRecurrent Neural Networks
1308.0850
Table 4: Handwriting Synthesis Results. All results recorded on the validation set. ‘Log-Loss’ is the mean value of L(x) in nats. ‘SSE’ is the mean sum-squared-error per data point.
['Regularisation', 'Log-Loss', 'SSE']
[['none', '-1096.9', '0.23'], ['adaptive weight noise', '-1128.2', '0.23']]
The regularised network appears to generate slightly more realistic sequences, although the difference is hard to discern by eye. Both networks performed considerably better than the best prediction network. In particular the sum-squared-error was reduced by 44%. This is likely due in large part to the improved predict...
Simplified End-to-End MMI training and voting for ASR
1703.10356
Table 1: WER of various models on the WSJ corpus. In [13] the CTC baseline is better than ours (5.48%/9.12% with the same architecture and ext. LM), and the eval92 set is used as a validation set.
['Model', 'LM', 'WER% eval92', 'WER% dev93']
[['CTC, EESEN ', 'Std.', '7.87', '11.39'], ['CTC, ours', 'Std.', '7.66', '11.61'], ['EEMMI bi-gram', 'Std.', '7.37', '[BOLD] 10.85'], ['EEMMI trigram', 'Std.', '[BOLD] 7.05', '11.08'], ['Attention seq2seq ', 'Ext.', '6.7', '9.7'], ['CTC, ours', 'Ext.', '5.87', '9.38'], ['EEMMI bi-gram', 'Ext.', '5.83', '[BOLD] 9.02'], ...
We consider two decoding LMs: the WSJ standard pruned trigram model (std.) and the extended-vocabulary pruned trigram model (ext.). We compare our end-to-end MMI (EEMMI) model to CTC under the same conditions. We see a consistent WER improvement for EEMMI with a bi-gram training LM compared to CTC. It can be observed, ...
Robust Neural Machine Translation with Doubly Adversarial Inputs
1906.02443
Table 4: Results on WMT’14 English-German translation.
['Method', 'Model', 'BLEU']
[['Vaswani et\xa0al.', 'Trans.-Base', '27.30'], ['Vaswani et\xa0al.', 'Trans.-Big', '28.40'], ['Chen et\xa0al.', 'RNMT+', '28.49'], ['Ours', 'Trans.-Base', '28.34'], ['Ours', 'Trans.-Big', '[BOLD] 30.01']]
We compare our approach with Transformer for different numbers of hidden units (i.e. 1024 and 512) and a related RNN-based NMT model, RNMT+ Chen et al. Recall that our approach is built on top of the Transformer model. The notable BLEU gain verifies the effectiveness of our approach on English-German translation.
Robust Neural Machine Translation with Doubly Adversarial Inputs
1906.02443
Table 2: Comparison with baseline methods trained on different backbone models (second column). * indicates the method trained using an extra corpus.
['Method', 'Model', 'MT06', 'MT02', 'MT03', 'MT04', 'MT05', 'MT08']
[['Vaswani:17', 'Trans.-Base', '44.59', '44.82', '43.68', '45.60', '44.57', '35.07'], ['Miyato:17', 'Trans.-Base', '45.11', '45.95', '44.68', '45.99', '45.32', '35.84'], ['Sennrich:16c', 'Trans.-Base', '44.96', '46.03', '44.81', '46.01', '45.69', '35.32'], ['Wang:18', 'Trans.-Base', '45.47', '46.31', '45.30', '46.45', ...
Among all methods trained without extra corpora, our approach achieves the best result across datasets. After incorporating the back-translated corpus, our method yields an additional gain of 1-3 points over Sennrich et al. Since all methods are built on top of the same backbone, the result substantiates the efficacy o...
Robust Neural Machine Translation with Doubly Adversarial Inputs
1906.02443
Table 3: Results on NIST Chinese-English translation.
['Method', 'Model', 'MT06', 'MT02', 'MT03', 'MT04', 'MT05', 'MT08']
[['Vaswani:17', 'Trans.-Base', '44.59', '44.82', '43.68', '45.60', '44.57', '35.07'], ['Ours', 'Trans.-Base', '[BOLD] 46.95', '[BOLD] 47.06', '[BOLD] 46.48', '[BOLD] 47.39', '[BOLD] 46.58', '[BOLD] 37.38']]
We first compare our approach with the Transformer model Vaswani et al. As we see, the introduction of our method to the standard backbone model (Trans.-Base) leads to substantial improvements across the validation and test sets. Specifically, our approach achieves an average gain of 2.25 BLEU points and up to 2.8 BLEU...
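The reported average and maximum gains follow directly from the table; a quick Python check:

```python
# BLEU scores from Table 3 (MT06, MT02, MT03, MT04, MT05, MT08)
base = [44.59, 44.82, 43.68, 45.60, 44.57, 35.07]  # Vaswani et al., Trans.-Base
ours = [46.95, 47.06, 46.48, 47.39, 46.58, 37.38]  # our approach, Trans.-Base

gains = [o - b for o, b in zip(ours, base)]
avg_gain = sum(gains) / len(gains)
print(round(avg_gain, 2), round(max(gains), 2))  # → 2.25 2.8
```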
Robust Neural Machine Translation with Doubly Adversarial Inputs
1906.02443
Table 9: Effect of the ratio value γsrc and γtrg on Chinese-English Translation.
['[13mm] [ITALIC] γsrcγtrg', '0.00', '0.25', '0.50', '0.75']
[['0.00', '44.59', '46.19', '46.26', '46.14'], ['0.25', '45.23', '46.72', '[BOLD] 46.95', '46.52'], ['0.50', '44.25', '45.34', '45.39', '45.94'], ['0.75', '44.18', '44.98', '45.35', '45.37']]
The hyper-parameters γsrc and γtrg control the ratio of word replacement in the source and target inputs. As we see, the performance is relatively insensitive to the values of these hyper-parameters, and the best configuration on the Chinese-English validation set is obtained at γsrc=0.25 and γtrg=0.50. We found that a...
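Selecting the best (γsrc, γtrg) configuration is a simple grid maximization over the table above; a Python sketch:

```python
from itertools import product

# validation BLEU from the table: rows index gamma_src, columns gamma_trg
gammas = [0.00, 0.25, 0.50, 0.75]
bleu = [
    [44.59, 46.19, 46.26, 46.14],  # gamma_src = 0.00
    [45.23, 46.72, 46.95, 46.52],  # gamma_src = 0.25
    [44.25, 45.34, 45.39, 45.94],  # gamma_src = 0.50
    [44.18, 44.98, 45.35, 45.37],  # gamma_src = 0.75
]

best = max(
    ((gs, gt, bleu[i][j])
     for (i, gs), (j, gt) in product(enumerate(gammas), repeat=2)),
    key=lambda t: t[2],
)
print(best)  # → (0.25, 0.5, 46.95)
```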
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE IV: Comparison with the state-of-the-art systems proposed by [16] on emotion dataset. The metrics P, R and F stand for Precision, Recall and F1-Score.
['[BOLD] Models', 'Metric', '[BOLD] Emotion Anger', '[BOLD] Emotion Anticipation', '[BOLD] Emotion Disgust', '[BOLD] Emotion Fear', '[BOLD] Emotion Joy', '[BOLD] Emotion Sadness', '[BOLD] Emotion Surprise', '[BOLD] Emotion Trust', '[BOLD] Emotion Micro-Avg']
[['MaxEnt', 'P', '76', '72', '62', '57', '55', '65', '62', '62', '66'], ['MaxEnt', 'R', '72', '61', '47', '31', '50', '65', '15', '38', '52'], ['MaxEnt', 'F', '74', '66', '54', '40', '52', '65', '24', '47', '58'], ['SVM', 'P', '76', '70', '59', '55', '52', '64', '46', '57', '63'], ['SVM', 'R', '69', '60', '53', '40', '...
Overall, our proposed system achieves an improvement of 5 F-Score points over the existing state-of-the-art system for emotion analysis. Individually, the proposed system improves the existing F-scores for all the emotions except surprise. This could be attributed to the data scarcity and a very low agreement between t...
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE III: Comparison with the state-of-the-art systems of SemEval 2016 task 6 on sentiment dataset.
['[BOLD] Models', '[BOLD] Sentiment (F-score)']
[['UWB ', '42.02'], ['INF-UFRGS-OPINION-MINING ', '42.32'], ['LitisMind', '44.66'], ['pkudblab ', '56.28'], ['SVM + n-grams + sentiment ', '78.90'], ['M2 (proposed)', '[BOLD] 82.10']]
Our system comfortably surpasses the existing best system at SemEval, improving on the best system of SemEval 2016 Task 6 by 3.2 F-score points for sentiment analysis.
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE V: Confusion matrix for sentiment analysis
['Actual', 'Predicted negative', 'Predicted positive']
[['negative', '1184', '88'], ['positive', '236', '325']]
We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be due to the reason that this particular class is the most underrepresented in the training set. These three emotions have the least share of training instances, making the system less confident towards these emotions.
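Per-class precision, recall and F1 follow directly from the confusion matrix; a Python sketch for the positive class of Table V (treating "positive" as the target class):

```python
def prf_from_confusion(tp: int, fp: int, fn: int):
    """Precision, recall and F1 for one class of a confusion matrix."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

# Table V (sentiment), rows = actual, columns = predicted:
#                pred_negative  pred_positive
# negative           1184            88
# positive            236           325
p, r, f1 = prf_from_confusion(tp=325, fp=88, fn=236)
print(round(p, 3), round(r, 3), round(f1, 3))  # → 0.787 0.579 0.667
```

The low recall on the positive class is consistent with the text's point that the underrepresented class attracts most of the errors.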
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE V: Confusion matrix for sentiment analysis
['Actual', 'Predicted NO', 'Predicted YES']
[['NO', '388', '242'], ['YES', '201', '1002']]
We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be due to the reason that this particular class is the most underrepresented in the training set. These three emotions have the least share of training instances, making the system less confident towards these emotions.
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion Analysis
1911.12569
TABLE V: Confusion matrix for sentiment analysis
['Actual', 'Predicted NO', 'Predicted YES']
[['NO', '445', '249'], ['YES', '433', '706']]
We perform quantitative error analysis for both sentiment and emotion for the M2 model. This may be due to the reason that this particular class is the most underrepresented in the training set. These three emotions have the least share of training instances, making the system less confident towards these emotions.