paper | paper_id | table_caption | table_column_names | table_content_values | text |
|---|---|---|---|---|---|
Neural Aspect and Opinion Term Extraction with Mined Rules as Weak Supervision | 1907.03750 | Table 2: Aspect and opinion term extraction performance of different approaches. F1 score is reported. IHS_RD, DLIREC, Elixa and WDEmb* use manually designed features. For different versions of RINANTE, “Shared” and “Double” means shared BiLSTM model and double BiLSTM model, respectively; “Alt” and “Pre” means the first and the second training method, respectively. RINANTE-Double-Pre†: fine-tune the pre-trained model for only extracting aspect terms, and use the same number of validation samples as DE-CNN Xu et al. (2018). The results of RINANTE-Double-Pre† are obtained after this paper gets accepted and are not included in the ACL version. RINANTE-Double-Pre† achieves the best performance on SE15-R. | ['Approach', 'SE14-R Aspect', 'SE14-R Opinion', 'SE14-L Aspect', 'SE14-L Opinion', 'SE15-R Aspect', 'SE15-R Opinion'] | [['DP Qiu et\xa0al. ( 2011 )', '38.72', '65.94', '19.19', '55.29', '27.32', '46.31'], ['IHS_RD Chernyshevich ( 2014 )', '79.62', '-', '74.55', '-', '-', '-'], ['DLIREC Toh and Wang ( 2014 )', '84.01', '-', '73.78', '-', '-', '-'], ['Elixa Vicente et\xa0al. ( 2017 )', '-', '-', '-', '-', '[BOLD] 70.04', '-'], ['WDEmb Yin et\xa0al. ( 2016 )', '84.31', '-', '74.68', '-', '69.12', '-'], ['WDEmb* Yin et\xa0al. ( 2016 )', '84.97', '-', '75.16', '-', '69.73', '-'], ['RNCRF Wang et\xa0al. ( 2016 )', '82.23', '83.93', '75.28', '77.03', '65.39', '63.75'], ['CMLA Wang et\xa0al. ( 2017 )', '82.46', '84.67', '73.63', '79.16', '68.22', '70.50'], ['NCRF-AE Zhang et\xa0al. ( 2017 )', '83.28', '85.23', '74.32', '75.44', '65.33', '70.16'], ['HAST Li et\xa0al. ( 2018 )', '85.61', '-', '79.52', '-', '69.77', '-'], ['DE-CNN Xu et\xa0al. ( 2018 )', '85.20', '-', '[BOLD] 81.59', '-', '68.28', '-'], ['Mined Rules', '70.82', '79.60', '67.67', '76.10', '57.67', '64.29'], ['RINANTE (No Rule)', '84.06', '84.59', '73.47', '75.41', '66.17', '68.16'], ['RINANTE-Shared-Alt', '[BOLD] 86.76', '86.05', '77.92', '79.20', '67.47', '71.41'], ['RINANTE-Shared-Pre', '85.09', '85.63', '79.16', '79.03', '68.15', '70.44'], ['RINANTE-Double-Alt', '85.80', '[BOLD] 86.34', '78.59', '78.94', '67.42', '70.53'], ['RINANTE-Double-Pre', '86.45', '85.67', '80.16', '[BOLD] 81.96', '69.90', '[BOLD] 72.09'], ['RINANTE-Double-Pre†', '86.20', '-', '81.37', '-', '71.89', '-']] | From the results, we can see that the mined rules alone do not perform well. However, by learning from the data automatically labeled by these rules, all four versions of RINANTE achieve better performance than RINANTE (no rule). This verifies that we can indeed use the results of the mined rules to improve the performance of neural models. Moreover, the improvement over RINANTE (no rule) can be especially significant on SE14-L and SE15-R. We think this is because SE14-L is relatively more difficult and SE15-R has much less manually labeled training data. Our algorithm is able to mine hundreds of effective rules, while Double Propagation only has eight manually designed rules. |
Exploring Models and Data for Image Question Answering | 1505.02074 | Table 1: COCO-QA question type break-down | ['Category Object', 'Train 54992', '% 69.84%', 'Test 27206', '% 69.85%'] | [['Number', '5885', '7.47%', '2755', '7.07%'], ['Color', '13059', '16.59%', '6509', '16.71%'], ['Location', '4800', '6.10%', '2478', '6.36%'], ['Total', '78736', '100.00%', '38948', '100.00%']] | It should be noted that since we applied the QA pair rejection process, mode-guessing performs very poorly on COCO-QA. However, COCO-QA questions are actually easier to answer than DAQUAR from a human point of view. This encourages the model to exploit salient object relations instead of exhaustively searching all possible relations. COCO-QA dataset can be downloaded at http://www.cs.toronto.edu/~mren/imageqa/data/cocoqa |
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | 1910.01108 | Table 1: DistilBERT retains 97% of BERT performance. Comparison on the dev sets of the GLUE benchmark. ELMo results as reported by the authors. BERT and DistilBERT results are the medians of 5 runs with different seeds. | ['Model', '[BOLD] Score', 'CoLA', 'MNLI', 'MRPC', 'QNLI', 'QQP', 'RTE', 'SST-2', 'STS-B', 'WNLI'] | [['ELMo', '68.7', '44.1', '68.6', '76.6', '71.1', '86.2', '53.4', '91.5', '70.4', '56.3'], ['BERT-base', '79.5', '56.3', '86.7', '88.6', '91.8', '89.6', '69.3', '92.7', '89.0', '53.5'], ['DistilBERT', '77.0', '51.3', '82.2', '87.5', '89.2', '88.5', '59.9', '91.3', '86.9', '56.3']] | Among the 9 tasks, DistilBERT is always on par or improving over the ELMo baseline (up to 19 points of accuracy on STS-B). DistilBERT also compares surprisingly well to BERT, retaining 97% of the performance with 40% fewer parameters. |
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | 1910.01108 | Table 2: DistilBERT yields to comparable performance on downstream tasks. Comparison on downstream tasks: IMDb (test accuracy) and SQuAD 1.1 (EM/F1 on dev set). D: with a second step of distillation during fine-tuning. | ['Model', 'IMDb', 'SQuAD'] | [['[EMPTY]', '(acc.)', '(EM/F1)'], ['BERT-base', '93.46', '81.2/88.5'], ['DistilBERT', '92.82', '77.7/85.8'], ['DistilBERT (D)', '-', '79.1/86.9']] | On SQuAD, DistilBERT is within 3.9 points of the full BERT. the number of parameters of each model along with the inference time needed to do a full pass on the STS-B development set on CPU (Intel Xeon E5-2690 v3 Haswell @2.9GHz) using a batch size of 1. DistilBERT has 40% fewer parameters than BERT and is 60% faster than BERT. |
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | 1910.01108 | Table 2: DistilBERT yields to comparable performance on downstream tasks. Comparison on downstream tasks: IMDb (test accuracy) and SQuAD 1.1 (EM/F1 on dev set). D: with a second step of distillation during fine-tuning. | ['Model', '# param.', 'Inf. time'] | [['[EMPTY]', '(Millions)', '(seconds)'], ['ELMo', '180', '895'], ['BERT-base', '110', '668'], ['DistilBERT', '66', '410']] | On SQuAD, DistilBERT is within 3.9 points of the full BERT. the number of parameters of each model along with the inference time needed to do a full pass on the STS-B development set on CPU (Intel Xeon E5-2690 v3 Haswell @2.9GHz) using a batch size of 1. DistilBERT has 40% fewer parameters than BERT and is 60% faster than BERT. |
Updating Pre-trained Word Vectors and Text Classifiers using Monolingual Alignment | 1910.06241 | Table 3: Comparison between the accuracy of our aligned model (RCSLS+Fine.), the accuracy of models trained on each split separately (Train on …) and two baselines. We observe that a model obtained using our procedure matches the performance of the Vote baseline while not requiring to store two separate models. | ['[EMPTY]', 'Sogou 3k', 'Sogou 30k', 'Sogou Full', 'Amazon 3k', 'Amazon 30k', 'Amazon Full', 'Yelp 3k', 'Yelp 30k', 'Yelp Full'] | [['Train on [ITALIC] S0', '91.3', '94.5', '96.2', '46.1', '53.2', '59.5', '50.7', '58.7', '62.8'], ['Train on [ITALIC] S1', '91.4', '94.3', '96.1', '47.2', '53.8', '59.6', '50.2', '59.0', '62.8'], ['Train on [ITALIC] S0∪ [ITALIC] S1', '92.3', '95.1', '96.7', '48.6', '54.9', '60.2', '54.2', '60.1', '63.7'], ['Fine-tune', '91.5', '94.3', '96.1', '47.2', '53.8', '59.6', '52.4', '59.0', '62.9'], ['Vote', '[BOLD] 92.0', '[BOLD] 94.8', '[BOLD] 96.3', '48.1', '54.3', '59.9', '51.1', '59.8', '63.9'], ['RCSLS+Fine.', '[BOLD] 92.0', '[BOLD] 94.8', '96.2', '[BOLD] 48.2', '[BOLD] 54.4', '[BOLD] 60.0', '[BOLD] 52.7', '[BOLD] 60.2', '[BOLD] 64.0']] | We report the performance of a model trained on each of the two splits alone, the two baselines, and a topline obtained by training on S0∪S1. First of all, we observe that our approach reduces the gap between training on a single set with the topline. This effect is especially true on small versions of the datasets (3k or 30k). The main reason for this improvement is that each split has an incomplete coverage of the discriminative words and n-grams. |
Updating Pre-trained Word Vectors and Text Classifiers using Monolingual Alignment | 1910.06241 | Table 1: Performance of updated word vectors on the word analogy task. We split the English word analogy datasets (Mikolov et al., 2013a) into Out of vocab and In vocab questions. Out of vocab questions are composed of words that are out of vocabulary for S0, hence the null accuracy. | ['[EMPTY]', 'Out of vocab', 'In vocab'] | [['Train on [ITALIC] S0', '00.0', '70.1'], ['Train on [ITALIC] S1', '66.9', '66.1'], ['Train on [ITALIC] S0∪ [ITALIC] S1', '68.6', '71.2'], ['Fine-tune', '67.7', '66.8'], ['Subwords', '37.0', '71.6'], ['RCSLS+Fine.', '[BOLD] 67.9', '[BOLD] 72.1']] | Results. First, we see that fine-tuning X on S1 leads to decent performance on the Out of vocab questions, but saps the accuracy on In vocab questions. As mentioned before, learning vectors initialized with X on S1 may lead to a loss of important statistics learnt on S0: the total accuracy on In vocab questions drops by 3.3%. |
Updating Pre-trained Word Vectors and Text Classifiers using Monolingual Alignment | 1910.06241 | Table 2: Classification accuracy on Yelp reviews from 2018. We compare models trained on 1.2M reviews from 2013-2014 (S0) and those trained on a smaller sample of reviews from 2018 (S1). We vary the size of S1 from 10k to 500k samples, while keeping S0 of fixed size. | ['[EMPTY]', 'Size of [ITALIC] S1 10k', 'Size of [ITALIC] S1 30k', 'Size of [ITALIC] S1 100k', 'Size of [ITALIC] S1 500k'] | [['Train on [ITALIC] S0', '74.9', '74.9', '74.9', '74.9'], ['Train on [ITALIC] S1', '70.8', '72.7', '74.4', '76.2'], ['Train on [ITALIC] S0∪ [ITALIC] S1', '75.1', '75.1', '75.3', '76.2'], ['Fine-tune', '72.9', '73.8', '75.0', '[BOLD] 76.3'], ['RCSLS+Fine.', '[BOLD] 75.1', '[BOLD] 75.3', '[BOLD] 75.8', '[BOLD] 76.3']] | First of all, we observe that the performance of models trained on S0 and S1 strongly depends on the size of S1. When the two datasets are of the same size (500k), the best performing model is the one trained on S1, as in that case there is no train/test distribution discrepancy. However, when S1 is small (10k or 30k), it is better to use a larger yet ill-distributed dataset (70.8% versus 74.9%). We also observe that when training on a concatenation of the two, the model performs at least as well as the best one. |
A Novel Cascade Binary Tagging Framework for Relational Triple Extraction | 1909.03227 | Table 2: Results of different methods on NYT and WebNLG datasets. Our re-implementation is marked by *. | ['Method', 'NYT [ITALIC] Prec.', 'NYT [ITALIC] Rec.', 'NYT [ITALIC] F1', 'WebNLG [ITALIC] Prec.', 'WebNLG [ITALIC] Rec.', 'WebNLG [ITALIC] F1'] | [['NovelTagging \xa0zheng2017Joint', '62.4', '31.7', '42.0', '52.5', '19.3', '28.3'], ['CopyR [ITALIC] OneDecoder \xa0zeng2018Extracting', '59.4', '53.1', '56.0', '32.2', '28.9', '30.5'], ['CopyR [ITALIC] MultiDecoder \xa0zeng2018Extracting', '61.0', '56.6', '58.7', '37.7', '36.4', '37.1'], ['GraphRel1 [ITALIC] p \xa0fu2019GraphRel', '62.9', '57.3', '60.0', '42.3', '39.2', '40.7'], ['GraphRel2 [ITALIC] p \xa0fu2019GraphRel', '63.9', '60.0', '61.9', '44.7', '41.1', '42.9'], ['CopyR [ITALIC] RL \xa0zeng2019Learning', '77.9', '67.2', '72.1', '63.3', '59.9', '61.6'], ['CopyR∗ [ITALIC] RL', '72.8', '69.4', '71.1', '60.9', '61.1', '61.0'], ['CasRel [ITALIC] random', '81.5', '75.7', '78.5', '84.7', '79.5', '82.0'], ['CasRel [ITALIC] LSTM', '84.2', '83.0', '83.6', '86.9', '80.6', '83.7'], ['CasRel', '[BOLD] 89.7', '[BOLD] 89.5', '[BOLD] 89.6', '[BOLD] 93.4', '[BOLD] 90.1', '[BOLD] 91.8']] | The CasRel model overwhelmingly outperforms all the baselines in terms of all three evaluation metrics and achieves encouraging 17.5% and 30.2% improvements in F1-score over the best state-of-the-art method zeng2019Learning on NYT and WebNLG datasets respectively. Even without taking advantage of the pre-trained BERT, CasRelrandom and CasRelLSTM are still competitive to existing state-of-the-art models. This validates the utility of the proposed cascade decoder that adopts a novel binary tagging scheme. The performance improvements from CasRelrandom to CasRel highlight the importance of the prior knowledge in a pre-trained language model. |
A Novel Cascade Binary Tagging Framework for Relational Triple Extraction | 1909.03227 | Table 3: F1-score of extracting relational triples from sentences with different number (denoted as N) of triples. | ['Method', 'NYT [ITALIC] N=1', 'NYT [ITALIC] N=2', 'NYT [ITALIC] N=3', 'NYT [ITALIC] N=4', 'NYT [ITALIC] N≥5', 'WebNLG [ITALIC] N=1', 'WebNLG [ITALIC] N=2', 'WebNLG [ITALIC] N=3', 'WebNLG [ITALIC] N=4', 'WebNLG [ITALIC] N≥5'] | [['CopyR [ITALIC] OneDecoder', '66.6', '52.6', '49.7', '48.7', '20.3', '65.2', '33.0', '22.2', '14.2', '13.2'], ['CopyR [ITALIC] MultiDecoder', '67.1', '58.6', '52.0', '53.6', '30.0', '59.2', '42.5', '31.7', '24.2', '30.0'], ['GraphRel1 [ITALIC] p', '69.1', '59.5', '54.4', '53.9', '37.5', '63.8', '46.3', '34.7', '30.8', '29.4'], ['GraphRel2 [ITALIC] p', '71.0', '61.5', '57.4', '55.1', '41.1', '66.0', '48.3', '37.0', '32.1', '32.1'], ['CopyR∗ [ITALIC] RL', '71.7', '72.6', '72.5', '77.9', '45.9', '63.4', '62.2', '64.4', '57.2', '55.7'], ['CasRel', '[BOLD] 88.2', '[BOLD] 90.3', '[BOLD] 91.9', '[BOLD] 94.2', '[BOLD] 83.7 (+37.8)', '[BOLD] 89.3', '[BOLD] 90.8', '[BOLD] 94.2', '[BOLD] 92.4', '[BOLD] 90.9 (+35.2)']] | It can be seen that the performance of most baselines on Normal, EPO and SEO presents a decreasing trend, reflecting the increasing difficulty of extracting relational triples from sentences with different overlapping patterns. That is, among the three overlapping patterns, Normal class is the easiest pattern while EPO and SEO classes are the relatively harder ones for baseline models to extract. In contrast, the proposed CasRel model attains consistently strong performance over all three overlapping patterns, especially for those hard patterns. We also validate the CasRel’s capability in extracting relational triples from sentences with different number of triples. Again, the CasRel model achieves excellent performance over all five classes. Though it’s not surprising to find that the performance of most baselines decreases with the increasing number of relational triples that a sentence contains, some patterns still can be observed from the performance changes of different models. Compared to previous works devoted to solving the overlapping problem in relational triple extraction, our model suffers the least from the increasing complexity of the input sentence. Though the CasRel model gains considerable improvements on all five classes compared to the best state-of-the-art method CopyRRL zeng2019Learning, the greatest improvements in F1-score on the two datasets both come from the most difficult class (N≥5), indicating that our model is more suitable for complicated scenarios than the baselines. |
A Novel Cascade Binary Tagging Framework for Relational Triple Extraction | 1909.03227 | Table 4: Results on relational triple elements. | ['Element', 'NYT [ITALIC] Prec.', 'NYT [ITALIC] Rec.', 'NYT [ITALIC] F1', 'WebNLG [ITALIC] Prec.', 'WebNLG [ITALIC] Rec.', 'WebNLG [ITALIC] F1'] | [['[ITALIC] E1', '94.6', '92.4', '93.5', '98.7', '92.8', '95.7'], ['[ITALIC] E2', '94.1', '93.0', '93.5', '97.7', '93.0', '95.3'], ['[ITALIC] R', '96.0', '93.8', '94.9', '96.6', '91.5', '94.0'], ['[ITALIC] (E1, R)', '93.6', '90.9', '92.2', '94.8', '90.3', '92.5'], ['[ITALIC] (R, E2)', '93.1', '91.3', '92.2', '95.4', '91.1', '93.2'], ['[ITALIC] (E1, E2)', '89.2', '90.1', '89.7', '95.3', '91.7', '93.5'], ['[ITALIC] (E1, R, E2)', '89.7', '89.5', '89.6', '93.4', '90.1', '91.8']] | For NYT, the performance on E1 and E2 are consistent with that on (E1, R) and (R, E2), demonstrating the effectiveness of our proposed framework in identifying both subject and object entity mentions. We also find that there is only a trivial gap between the F1-score on (E1, E2) and (E1, R, E2), but an obvious gap between (E1, R, E2) and (E1, R)/(R, E2). It reveals that most relations for the entity pairs in extracted triples are correctly identified while some extracted entities fail to form a valid relational triple. In other words, it implies that identifying relations is somehow easier than identifying entities for our model. |
Market Trend Prediction using Sentiment Analysis:Lessons Learned and Paths Forward | 1903.05440 | Table 8. Market trend prediction using main technical indicators — the baseline model. | ['Type', 'Method', '3-day ahead Acc', '3-day ahead [ITALIC] Fup1', '3-day ahead [ITALIC] Fdown1', '5-day ahead Acc', '5-day ahead [ITALIC] Fup1', '5-day ahead [ITALIC] Fdown1'] | [['DJIA', 'SVM', '0.616', '0.738', '0.282', '[BOLD] 0.700', '[BOLD] 0.754', '[BOLD] 0.615'], ['DJIA', 'LSTM', '0.559', '0.706', '0.120', '0.585', '0.728', '0.127'], ['AAPL', 'SVM', '0.577', '0.676', '0.391', '[BOLD] 0.685', '[BOLD] 0.723', '[BOLD] 0.634'], ['AAPL', 'LSTM', '0.547', '0.693', '0.138', '0.521', '0.641', '0.282'], ['JPM', 'SVM', '0.677', '0.747', '0.552', '[BOLD] 0.673', '[BOLD] 0.733', '[BOLD] 0.578'], ['JPM', 'LSTM', '0.541', '0.665', '0.269', '0.573', '0.676', '0.373'], ['EUR/USD', 'SVM', '0.642', '0.607', '0.672', '[BOLD] 0.671', '[BOLD] 0.620', '[BOLD] 0.710'], ['EUR/USD', 'LSTM', '0.509', '0.423', '0.572', '0.563', '0.370', '0.665'], ['GBP/USD', 'SVM', '0.610', '0.589', '0.630', '[BOLD] 0.714', '[BOLD] 0.705', '[BOLD] 0.723'], ['GBP/USD', 'LSTM', '0.500', '0.604', '0.323', '0.633', '0.646', '0.618']] | The F1 scores suggest that LSTM often favoured the positive class over the negative class and produced unbalanced results. The reason could be that the size of the dataset is relatively small: there are 670 data points in the analysed time period 2011-2015. Contrary to LSTM, SVM always yielded balanced and stable results. |
Market Trend Prediction using Sentiment Analysis:Lessons Learned and Paths Forward | 1903.05440 | Table 2. Sentiment attitudes Granger-causality on the FT I dataset. | ['Stock', 'Model', 'Lag', 'Attitude', 'Price⇒'] | [['Stock', 'Model', 'Lag', '⇒Price', 'Attitude'], ['S&P 500', 'Standard', '1', '0.1929', '0.1105'], ['S&P 500', 'Standard', '2', '0.2611', '[BOLD] 0.0780'], ['S&P 500', 'Temporal', '1', '0.2689', '[BOLD] 0.0495'], ['S&P 500', 'Temporal', '2', '0.1692', '[BOLD] 0.0940'], ['APPL', 'Standard', '1', '0.7351', '0.4253'], ['APPL', 'Standard', '2', '0.9117', '0.6426'], ['APPL', 'Temporal', '1', '0.9478', '0.6725'], ['APPL', 'Temporal', '2', '0.9715', '0.8245'], ['GOOGL', 'Standard', '1', '0.5285', '0.4035'], ['GOOGL', 'Standard', '2', '0.8075', '[BOLD] 0.0418'], ['GOOGL', 'Temporal', '1', '0.6920', '0.5388'], ['GOOGL', 'Temporal', '2', '0.8516', '[BOLD] 0.0422'], ['HPQ', 'Standard', '1', '0.1534', '0.3996'], ['HPQ', 'Standard', '2', '0.1877', '0.5322'], ['HPQ', 'Temporal', '1', '0.4069', '[BOLD] 0.0836'], ['HPQ', 'Temporal', '2', '0.5097', '0.1180'], ['JPM', 'Standard', '1', '0.8991', '[BOLD] 0.0461'], ['JPM', 'Standard', '2', '0.9963', '[BOLD] 0.0435'], ['JPM', 'Temporal', '1', '0.9437', '0.1204'], ['JPM', 'Temporal', '2', '0.7722', '0.2720']] | In all the experiments, we failed to discover any sign that sentiment attitudes Granger-cause stock price changes, which would suggest that in general sentiment attitudes probably cannot be useful for the prediction of stock price movements. However, in many cases, we found that the opposite was true — stock price changes Granger-cause sentiment attitudes in the news, with the strongest causality found using the temporal sentiment analysis model. The individual stocks also produced mixed results, with each company behaving differently. For the Apple stock, we failed to detect any causality. For the Google stock, we identified that the prices would Granger-cause sentiment attitudes, but only with a two-day lag. For the HP stock, we detected causality only in temporal sentiment and only with a one-day lag. For the JPM stock, we found causality using standard sentiment, but it was absent using temporal sentiment. It is difficult to draw general conclusions from such varying results. According to the Granger-causality test with a one-day or two-day lag, sentiment attitudes do not seem to be useful for predicting stock price movements. However, the opposite seems to be true — the sentiment attitudes should be predicted using stock price movements. It is still possible that the Granger-causality from sentiment attitudes to stock price changes is present at a finer time granularity (e.g., minutes), but we are unable to perform such an analysis using our current datasets. |
Market Trend Prediction using Sentiment Analysis:Lessons Learned and Paths Forward | 1903.05440 | Table 9. Market trend prediction using FT news articles and RWNC headlines (2011-2015). | ['Type', 'Baseline Acc', 'Baseline [ITALIC] Fup1', 'Baseline [ITALIC] Fdown1', 'Financial Times Acc', 'Financial Times [ITALIC] Fup1', 'Financial Times [ITALIC] Fdown1', 'Reddit Headlines Acc', 'Reddit Headlines [ITALIC] Fup1', 'Reddit Headlines [ITALIC] Fdown1'] | [['DJIA', '0.700', '[BOLD] 0.754', '0.615', '[BOLD] 0.706', '0.752', '[BOLD] 0.639', '0.618', '0.716', '0.417'], ['AAPL', '[BOLD] 0.685', '[BOLD] 0.723', '[BOLD] 0.634', '0.652', '0.723', '0.531', '0.624', '0.700', '0.496'], ['JPM', '0.673', '0.733', '0.578', '[BOLD] 0.679', '[BOLD] 0.739', '[BOLD] 0.583', '0.615', '0.713', '0.415'], ['EUR/USD', '0.641', '0.715', '0.518', '[BOLD] 0.653', '0.691', '[BOLD] 0.605', '0.638', '0.684', '0.578'], ['GBP/USD', '[BOLD] 0.714', '0.705', '[BOLD] 0.723', '0.711', '[BOLD] 0.708', '0.715', '0.615', '0.625', '0.605']] | Sentiment attitudes and emotions were extracted from the FT news articles and the RWNC headlines in the time period from 2011 to 2015. This is consistent with the previous section, in which no correlation or causality link was established between headlines sentiments and stock prices. It might be explained by the fact that headlines are very short text snippets and therefore provide little chance for us to reliably detect sentiment attitudes and sentiment emotions. The sentiments extracted from FT news articles painted a quite different picture. The sentiment enriched model outperformed the baseline model in half of the scenarios: it demonstrated slightly better results for DJIA, JPM, and EUR/USD, but slightly worse results for AAPL and GBP/USD. These experimental results are consistent with the previous section and confirm again that for some stocks sentiment emotions could be used to improve the baseline model for market trend prediction. |
Market Trend Prediction using Sentiment Analysis:Lessons Learned and Paths Forward | 1903.05440 | Table 10. Market trend prediction using financial tweets from Twitter (01/04/2014 – 01/04/2015). | ['Type', 'baseline Acc', 'baseline [ITALIC] Fup1', 'baseline [ITALIC] Fdown1', 'all+attitude+emotion Acc', 'all+attitude+emotion [ITALIC] Fup1', 'all+attitude+emotion [ITALIC] Fdown1', 'all+emotion Acc', 'all+emotion [ITALIC] Fup1', 'all+emotion [ITALIC] Fdown1', 'filtering+emotion Acc', 'filtering+emotion [ITALIC] Fup1', 'filtering+emotion [ITALIC] Fdown1'] | [['DJIA', '[BOLD] 0.810', '[BOLD] 0.854', '0.727', '0.810', '0.846', '[BOLD] 0.750', '0.778', '0.829', '0.682', '-', '-', '-'], ['AAPL', '[BOLD] 0.889', '[BOLD] 0.918', '[BOLD] 0.829', '0.810', '0.860', '0.700', '0.794', '0.847', '0.683', '0.794', '0.831', '0.735'], ['JPM', '0.746', '0.800', '0.652', '0.730', '0.779', '0.653', '0.746', '0.789', '0.680', '[BOLD] 0.778', '[BOLD] 0.829', '[BOLD] 0.682'], ['GBP/USD', '[BOLD] 0.708', '[BOLD] 0.387', '[BOLD] 0.808', '0.662', '0.389', '0.766', '0.631', '0.294', '0.750', '-', '-', '-'], ['EUR/USD', '0.685', '[BOLD] 0.627', '0.727', '0.685', '[BOLD] 0.627', '0.727', '[BOLD] 0.692', '0.626', '[BOLD] 0.739', '-', '-', '-']] | Only for the JPM stock we could see noticeable performance improvements in the “filtering+emotion” setting. Our results have also confirmed that sentiment attitudes on their own are probably not very useful for market trend prediction, but at least for some particular stocks sentiment emotions could be exploited to improve machine learning models like LSTM to get better market trend prediction. |
Pretrained Semantic Speech Embeddings for End-to-End Spoken Language Understanding via Cross-Modal Teacher-Student Learning | 2007.01836 | Table 5: Effect of objective function | ['Objective', 'Accuracy on Test, %', 'Accuracy on Test, %', 'Accuracy on Test, %', 'Valid value', 'Valid value', 'Valid value'] | [['function', 'SwBD', 'MRDA', 'FSC', 'Cosine', 'L2', 'L1'], ['Cosine', '55.56', '59.64', '89.45', '0.13', '0.08', '0.21'], ['L2', '53.73', '59.91', '88.64', '0.13', '0.07', '0.20'], ['L1', '[BOLD] 56.32', '[BOLD] 60.39', '[BOLD] 89.98', '0.13', '0.07', '0.20']] | Overall, these results indicate that the evaluated objective functions behave similarly in this task; however, the L1-distance-based objective function yields slightly better results. |
Pretrained Semantic Speech Embeddings for End-to-End Spoken Language Understanding via Cross-Modal Teacher-Student Learning | 2007.01836 | Table 3: Effect of layers fine-tuning | ['ASR', 'NLU', 'Accuracy on Test, %', 'Accuracy on Test, %', 'Accuracy on Test, %', 'Validation'] | [['layers', 'layers', 'SwBD', 'MRDA', 'FSC', 'loss'], ['0', '0', '43.76', '56.08', '68.07', '0.26'], ['0', '1', '37.61', '56.47', '85.53', '0.19'], ['1', '0', '52.37', '[BOLD] 60.21', '86.42', '0.16'], ['1', '1', '52.05', '58.32', '[BOLD] 86.82', '0.17'], ['2', '0', '52.93', '59.42', '85.76', '[BOLD] 0.15'], ['3', '0', '[BOLD] 53.81', '58.90', '85.53', '0.16']] | While it is not completely clear how many layers should be fine-tuned, we can conclusively tell that fine-tuning of former ASR encoder layers is more beneficial than former NLU layers. We decide to fine-tune the two top former ASR encoder layers. The results also illustrate that the optimization of SLU model for smaller distance of its output from the output of NLU model is general enough and translates to accuracy improvements in the downstream tasks, although not in all cases. |
Pretrained Semantic Speech Embeddings for End-to-End Spoken Language Understanding via Cross-Modal Teacher-Student Learning | 2007.01836 | Table 4: Effect of learning rate schedule | ['Warmup', 'LR', 'Epochs', 'Accuracy on Test, %', 'Accuracy on Test, %', 'Accuracy on Test, %', 'Validation'] | [['steps', 'constant', '[EMPTY]', 'SwBD', 'MRDA', 'FSC', 'loss'], ['200,000', '50', '10', '52.65', '59.90', '83.94', '0.17'], ['300,000', '50', '10', '52.93', '59.42', '85.76', '0.15'], ['400,000', '50', '10', '51.90', '59.43', '85.79', '0.15'], ['600,000', '50', '10', '51.18', '59.95', '86.84', '0.14'], ['600,000', '50', '20', '54.00', '[BOLD] 60.12', '88.64', '0.14'], ['700,000', '50', '20', '51.89', '58.37', '88.24', '0.14'], ['700,000', '70', '20', '53.73', '59.67', '88.08', '0.14'], ['700,000', '30', '20', '[BOLD] 55.56', '59.64', '[BOLD] 89.45', '[BOLD] 0.13'], ['700,000', '20', '20', '52.69', '59.68', '89.27', '[BOLD] 0.13']] | After deciding which layers to fine-tune, we run a series of experiments to determine the best learning rate schedule. When we increase the number of warmup steps, we notice a positive effect from a slower learning rate ramp-up. However, as the number of warmup steps becomes close to the total number of fine-tuning steps, we have to increase the number of epochs from 10 to 20 in order to see the whole fine-tuning process. |
One-to-Many Multilingual End-to-End Speech Translation | 1910.03320 | Table 3: Results for multilingual direct SLT systems with 6 and 8 target languages. | ['[EMPTY]', 'De', 'Nl', 'Es', 'Fr', 'It', 'Pt', 'Ro', 'Ru'] | [['[EMPTY]', '[BOLD] Baseline', '[BOLD] Baseline', '[BOLD] Baseline', '[BOLD] Baseline', '[BOLD] Baseline', '[BOLD] Baseline', '[BOLD] Baseline', '[BOLD] Baseline'], ['[EMPTY]', '17.3', '18.8', '20.8', '26.9', '16.8', '20.1', '16.5', '10.5'], ['[EMPTY]', '[BOLD] Multilingual', '[BOLD] Multilingual', '[BOLD] Multilingual', '[BOLD] Multilingual', '[BOLD] Multilingual', '[BOLD] Multilingual', '[BOLD] Multilingual', '[BOLD] Multilingual'], ['6', '17.3', '18.4', '20.0', '25.4', '16.9', '21.8', '-', '-'], ['8', '16.5', '17.8', '18.9', '24.5', '16.2', '20.8', '15.9', '9.8'], ['[EMPTY]', '[BOLD] + ASR', '[BOLD] + ASR', '[BOLD] + ASR', '[BOLD] + ASR', '[BOLD] + ASR', '[BOLD] + ASR', '[BOLD] + ASR', '[BOLD] + ASR'], ['6', '17.4', '19.2', '19.7', '26.0', '17.2', '21.8', '-', '-'], ['8', '15.9', '17.2', '18.3', '23.7', '15.1', '19.9', '15.5', '9.7']] | Number of languages. Results for a system trained with all the 6 target languages (De, Nl, Es, Fr, It, Pt) are reported in the table. When adding ASR data to the 6 languages, we observe improvements in most languages, and the new system is worse than the baseline only for Spanish and French, although the gap for French has been reduced to −0.9. However, using all the 8 target languages leads to worse results, and adding ASR data contributes to worsening the performance in this case. We think that the reason is related to the relatively low number of parameters of our models (∼33 millions), which reduces their capability to learn and discriminate between a larger number of languages. |
One-to-Many Multilingual End-to-End Speech Translation | 1910.03320 | Table 2: Results with concat (C-*) and merge (M-*) target forcing on 6 languages. The baselines are one-to-one systems. All the other results are computed with one multilingual system for En→De,NL and one for En→Es,Fr,It,Pt. | ['[EMPTY]', 'De', 'Nl', 'Es', 'Fr', 'It', 'Pt'] | [['Baseline', '17.3', '18.8', '20.8', '[BOLD] 26.9', '16.8', '20.1'], ['C-Pre', '14.0', '11.6', '13.0', '16.3', '10.7', '14.5'], ['C-Post', '12.0', '13.8', '12.3', '18.0', '9.3', '14.6'], ['C-Final', '14.5', '12.1', '13.6', '16.7', '10.2', '16.2'], ['M-Pre', '17.6', '19.5', '20.5', '26.2', '17.2', '22.3'], ['M-Post', '17.1', '19.2', '20.5', '26.2', '17.4', '22.3'], ['M-Final', '17.4', '18.8', '20.4', '26.7', '17.2', '22.2'], ['M-Dec.', '17.3', '19.1', '20.6', '26.2', '17.2', '22.0'], ['M-Pre + ASR', '[BOLD] 17.7', '[BOLD] 20.0', '[BOLD] 20.9', '26.5', '[BOLD] 18.0', '[BOLD] 22.6']] | Concat vs Merge. Our first experiment consists in comparing the baselines with the multilingual models based on the target forcing mechanism. By looking at the translations, we found that the cause of the degradation is that many sentences are acceptable translations, but in a wrong language. We first hypothesize that the processing performed in the layer preceding the encoder self-attentional stack loses the language information. Thus, we concatenate the language embedding to the representations of the Post and Final positions (see Fig. Our second hypothesis is that the networks are not able to learn the joint probabilities of language embeddings and character sequences because of a combination of factors: character-level translations (instead of sub-words), very long source-side sequences and the source sides in the corpus are highly overlapping between languages. Thus, we assume that our networks can learn to discriminate better among target languages by giving a stronger language signal. For this reason, we introduce the merge target forcing that forces the network to generate target-language dependent encoder representations by translating them in different portions of the space according to the target language. Merge (M-*) is definitely better than concat for all the target languages, and also obtains performance that is on par with or better than the baselines. M-Pre is the system that shows high results more consistently in all languages, and the largest improvement is observed in En-Pt, with over 2.0 BLEU points of gain, followed by +0.7 in En-Nl. The BLEU score slightly degrades for Spanish by 0.2∼0.4, and for French by 0.2∼0.7. Besides the three different language embedding positions in the encoder, we also performed experiments by applying target forcing on the decoder, but they show slightly worse performance. Then, for the following experiments we will continue only with the Pre position that results in the best average performance on all the language directions. |
One-to-Many Multilingual End-to-End Speech Translation | 1910.03320 | Table 4: Comparison of the Baseline and the best multilingual system with the single language cascade (BL-Cascade) and the multilingual cascade (M-Cascade) | ['[EMPTY]', 'De', 'Nl', 'Es', 'Fr', 'It', 'Pt'] | [['Baseline', '17.3', '18.8', '20.8', '26.9', '16.8', '20.1'], ['M-Pre + ASR', '17.7', '20.0', '20.9', '26.5', '18.0', '22.6'], ['BL-Cascade', '18.5', '22.2', '22.5', '27.9', '18.9', '21.5'], ['M-Cascade', '18.6', '22.0', '22.1', '27.3', '18.5', '22.8']] | Comparison with cascade. As expected, our direct SLT baselines are significantly worse than the BL-Cascade systems, with differences that range from −1.0 for French to −3.4 for Dutch. Comparing the BL-Cascade with the M-Cascade systems, we observe no significant variation for the Germanic languages, but lower results in 3 out of 4 Romance languages (−0.6 for French), with the only improvement being +1.3 for Portuguese. Thus, multilingual MT generally affects performance negatively, being beneficial only for the lowest-resourced languages. |
One-to-Many Multilingual End-to-End Speech Translation | 1910.03320 | Table 5: Percentage of sentences in the correct language computed with langdetect. | ['[EMPTY]', 'De', 'Nl', 'Es', 'Fr', 'It', 'Pt'] | [['M-Pre', '95.7', '98.5', '97.2', '94.6', '95.3', '96.6'], ['M-Pre + ASR', '96.1', '98.7', '97.9', '95.3', '95.4', '95.2']] | Language analysis. When ASR data is also used, the percentage of sentences in the correct language increases slightly in all languages except Portuguese. However, the improvement in the correct language does not correlate with the improvement in BLEU score. This suggests that the improvement in BLEU score of M-Pre + ASR comes from better translations and not from more sentences translated in the correct language. |
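The percentages in Table 5 are obtained by running a language detector (langdetect in the paper) over each system output; a minimal sketch with a pluggable `detect` function, so any detector can be substituted:

```python
def correct_language_rate(sentences, target_lang, detect):
    """Percentage of sentences whose detected language matches target_lang.

    `detect` is any callable mapping a sentence to a language code,
    e.g. langdetect.detect; here it is left pluggable for illustration."""
    hits = sum(1 for s in sentences if detect(s) == target_lang)
    return 100.0 * hits / len(sentences)
```

With langdetect installed, `detect` from `langdetect` can be passed in directly.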
eRevise: Using Natural Language Processing to Provide Formative Feedback on Text Evidence Usage in Student Writing | 1908.01992 | Table 3: Quadratic Weighted Kappa (QWK) of different AES models. The CO-ATTN model significantly outperforms the Rubric and SG models (p<0.05). | ['[BOLD] AES Model', '[BOLD] QWK'] | [['Rubric', '0.632'], ['SG', '0.653'], ['CO-ATTN', '0.697']] | We have developed several AES systems for RTA assessment: our first model (denoted by Rubric), a subsequent model (denoted by SG), and, most recently, a neural network model with a co-attention layer (denoted by CO-ATTN) that Zhang and Litman (2018) developed to eliminate human feature engineering. Although the neural CO-ATTN model has the best performance, to select formative feedback messages that address essay weaknesses in terms of rubric constructs, a more interpretable representation of the essay is necessary. Therefore, SG is the AES system used in eRevise. In particular, two of the features used by SG for score prediction, namely Number of Pieces of Evidence (NPE) and Specificity (SPC), form the basis of eRevise’s feedback selection algorithm. |
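Quadratic Weighted Kappa, the agreement metric reported in Table 3, can be computed directly from two integer rating sequences; a self-contained sketch (not the authors' code):

```python
def quadratic_weighted_kappa(rater_a, rater_b):
    """Quadratic Weighted Kappa between two integer rating sequences.

    1.0 indicates perfect agreement, 0.0 chance-level agreement."""
    lo = min(min(rater_a), min(rater_b))
    hi = max(max(rater_a), max(rater_b))
    n = hi - lo + 1
    # Observed co-occurrence matrix of ratings.
    observed = [[0.0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        observed[a - lo][b - lo] += 1
    hist_a = [sum(row) for row in observed]
    hist_b = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    total = float(len(rater_a))
    numer = denom = 0.0
    for i in range(n):
        for j in range(n):
            # Quadratic disagreement weight, 0 on the diagonal.
            weight = ((i - j) ** 2) / ((n - 1) ** 2) if n > 1 else 0.0
            expected = hist_a[i] * hist_b[j] / total  # chance agreement
            numer += weight * observed[i][j]
            denom += weight * expected
    if denom == 0:
        return 1.0
    return 1.0 - numer / denom
```

scikit-learn's `cohen_kappa_score(..., weights="quadratic")` computes the same quantity.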
eRevise: Using Natural Language Processing to Provide Formative Feedback on Text Evidence Usage in Student Writing | 1908.01992 | Table 5: Lookup table for feedback selection. | ['[BOLD] Feature [ITALIC] NPE', '[BOLD] Value 0', '[BOLD] Value 0', '[BOLD] Value 0', '[BOLD] Value 1', '[BOLD] Value 2', '[BOLD] Value 3', '[BOLD] Value 4', '[BOLD] Value 1', '[BOLD] Value 1', '[BOLD] Value 2', '[BOLD] Value 2', '[BOLD] Value 3', '[BOLD] Value 4', '[BOLD] Value 3', '[BOLD] Value 4'] | [['[ITALIC] SPClmh', 'L', 'M', 'H', 'L', 'L', 'L', 'L', 'M', 'H', 'H', 'M', 'M', 'M', 'H', 'H'], ['[BOLD] Feedback Messages', '1,2', '1,2', '1,2', '1,2', '1,2', '1,2', '1,2', '1,2', '1,2', '1,2', '2,3', '2,3', '2,3', '3,4', '3,4']] | SPCAWE=3, and SPClmh =M. We are about to begin the next deployment of eRevise, which will extend our work in two ways. First, to better determine the benefit of using AES to adaptively guide revision, we have added a control condition where eRevise will display the same generic feedback message to all students: “MAKE YOUR ESSAY MORE CONVINCING - Help readers understand why you believe the fight against poverty is/isn’t achievable in our lifetime.” This is in contrast to the existing eRevise adaptive feedback, where students receive different messages based on AES. Second, students will use eRevise for two different forms of the RTA (i.e., RTAspace in addition to RTAMVP). While we have already trained a SG model for scoring RTAspace We are also exploring adding CO-ATTN scores to the lookup table. |
Graph-Based Decoding for Event Sequencing and Coreference Resolution | 1806.05099 | Table 2: Test Results for Event Coreference with the Singleton and Matching baselines. | ['[EMPTY]', '[ITALIC] B3', 'CEAF-E', 'MUC', 'BLANC', 'AVG.'] | [['ALL', '81.97', '74.80', '76.33', '76.07', '77.29'], ['-Distance', '81.92', '74.48', '76.02', '77.55', '77.50'], ['-Frame', '82.14', '75.01', '76.28', '77.74', '77.79'], ['-Syntactic', '81.87', '74.89', '75.79', '76.22', '77.19']] | Comparing to the top 3 coreference systems in TAC-KBP 2015, we outperform the best system by about 2 points absolute F-score on average. Our system is also competitive on individual metrics. Our model performs the best based on B3 and CEAF-E, and is comparable to the top performing systems on MUC and BLANC. Note that while the Matching baseline only links event mentions based on event type and realis status, it is very competitive and performs close to the top systems. This is not surprising since these two attributes are based on the gold standard. To take a closer look, we conduct an ablation study by removing the simple match features one by one. We observe that some features produce mixed results on different metrics: they provide improvements on some metrics but not all. This is partially caused by the different characteristics of different metrics. On the other hand, these features (parsing and frames) are automatically predicted, which make them less stable. Furthermore, the Frame features contain duplicate information to event types, which makes it less useful in this setting. |
Graph-Based Decoding for Event Sequencing and Coreference Resolution | 1806.05099 | Table 2: Test Results for Event Coreference with the Singleton and Matching baselines. | ['[EMPTY]', '[ITALIC] B3', 'CEAF-E', 'MUC', 'BLANC', 'AVG.'] | [['Singleton', '78.10', '68.98', '0.00', '48.88', '52.01'], ['Matching', '78.40', '65.82', '[BOLD] 69.83', '76.29', '71.94'], ['LCC', '82.85', '74.66', '68.50', '[BOLD] 77.61', '75.69'], ['UI-CCG', '83.75', '75.81', '63.78', '73.99', '74.28'], ['LTI', '82.27', '75.15', '60.93', '71.57', '72.60'], ['This work', '[BOLD] 85.59', '[BOLD] 79.65', '67.81', '77.37', '[BOLD] 77.61']] | Comparing to the top 3 coreference systems in TAC-KBP 2015, we outperform the best system by about 2 points absolute F-score on average. Our system is also competitive on individual metrics. Our model performs the best based on B3 and CEAF-E, and is comparable to the top performing systems on MUC and BLANC. Note that while the Matching baseline only links event mentions based on event type and realis status, it is very competitive and performs close to the top systems. This is not surprising since these two attributes are based on the gold standard. To take a closer look, we conduct an ablation study by removing the simple match features one by one. We observe that some features produce mixed results on different metrics: they provide improvements on some metrics but not all. This is partially caused by the different characteristics of different metrics. On the other hand, these features (parsing and frames) are automatically predicted, which make them less stable. Furthermore, the Frame features contain duplicate information to event types, which makes it less useful in this setting. |
Graph-Based Decoding for Event Sequencing and Coreference Resolution | 1806.05099 | Table 4: Test Results for event sequencing. The Oracle Cluster+Temporal system is using Caevo’s result on the Oracle Clusters. | ['[EMPTY]', 'Prec.', 'Recall', 'F-Score'] | [['Oracle Cluster+Temporal', '[BOLD] 46.21', '8.72', '14.68'], ['Our Model', '18.28', '[BOLD] 16.91', '[BOLD] 17.57']] | Because the baseline system has access to the oracle script clusters, it produces high precision. However, the low recall value shows that it fails to produce enough After links. Our analysis shows that a lot of After relations are not indicated by clear temporal clues, but can only be solved with script knowledge. However, it fails to identify that “extradited” is after “arrested”, which requires knowledge about prototypical event sequences. |
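The F-scores in Table 4 are the usual harmonic mean of precision and recall; e.g. the baseline's 46.21 precision and 8.72 recall give roughly 14.7, matching the table up to rounding:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (0 if both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```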
Graph-Based Decoding for Event Sequencing and Coreference Resolution | 1806.05099 | Table 5: Ablation Study for Event Sequencing. | ['[EMPTY]', 'Prec.', 'Recall', 'F-Score', 'Δ'] | [['Full', '37.92', '36.79', '36.36', '[EMPTY]'], ['- Mention Type', '32.78', '29.81', '30.07', '6.29'], ['- Sentence', '33.90', '30.75', '31.00', '5.36'], ['- Temporal', '37.21', '36.53', '35.81', '0.55'], ['- Dependency', '38.18', '36.44', '36.23', '0.13'], ['- Function words', '38.08', '36.51', '36.18', '0.18']] | While most of the features affect performance by less than 1 absolute F1 point, removing the mention-type or sentence feature sets causes a significant drop in both precision and recall. This shows that discourse proximity is the most significant of these features. In addition, the mention feature set captures the following explaining-away intuition: event mentions A and B are less likely to be related if there are similar mentions in between. |
Impact of Batch Size on Stopping Active Learning for Text Classification | 1801.07887 | TABLE I: Stopping method results on 20Newsgroups for different batch sizes using various window sizes. The top number in each row shows the number of annotations at the stopping point and the bottom number shows the F-Measure at the stopping point. | ['Stopping Method / Batch Percent', '1%', '5%', '10%'] | [['Oracle Method', '1514.20', '2490.40', '3901.95'], ['Oracle Method', '76.17', '75.55', '75.49'], ['BV2009 (Window Size = 3)', '1299.50', '3877.10', '6446.70'], ['BV2009 (Window Size = 3)', '74.44', '75.17', '75.19'], ['BV2009 (Window Size = 1)', '1101.75', '3141.30', '5089.50'], ['BV2009 (Window Size = 1)', '74.40', '75.04', '75.11']] | We considered different batch sizes in our experiments, based on percentages of the entire set of training data. We ran BV2009 with smaller window sizes for each of our different batch sizes. When using a window size of one, BV2009 is able to stop with a smaller number of annotations than when using a window size of three. This is done without losing much F-Measure. The next subsection provides an explanation as to why smaller window sizes are more effective than larger window sizes when larger batch sizes are used. |
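A window-based stopping rule in the spirit of BV2009 stops once model behavior has stabilized across several consecutive checks; a schematic sketch (the agreement measure and the threshold are placeholders, not the exact BV2009 procedure):

```python
def should_stop(agreements, window=3, threshold=0.99):
    """Stop active learning once the last `window` agreement values
    (e.g. Kappa between successive models' predictions on held-out data)
    all exceed the threshold. A smaller window stops earlier, as in the
    window-size-1 rows of TABLE I."""
    if len(agreements) < window:
        return False
    return all(a > threshold for a in agreements[-window:])
```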
Visually grounded cross-lingual keyword spotting in speech | 1806.05030 | Table 3: Analysis of errors by a human annotator of the top ten retrievals on development data. Percentages (%) indicate the absolute drop in P@10 due to that error type. | ['Error type', 'XVisionSpeech Count', 'XVisionSpeech %', 'XBoWCNN Count', 'XBoWCNN %'] | [['(1) Correct (exact)', '032', '08.2', '45', '11.5'], ['(2) Semantically related', '086', '22.1', '13', '03.3'], ['(3) Incorrect retrieval', '035', '09.0', '19', '04.9'], ['Total', '153', '39.3', '77', '19.7']] | Transcriptions of the top English utterances retrieved using XVisionSpeechCNN for a selection of German keywords. Errors from both XVisionSpeechCNN and XBoWCNN were presented to the annotator in shuffled order. For both models, around 10% of the retrievals marked as errors are actually correct. The bulk of errors from XVisionSpeechCNN is due to semantically related retrievals. These retrievals are marked as errors, but could actually be useful depending on the type of retrieval application. If type (1) and type (2) errors are not counted as incorrect, XVisionSpeechCNN and XBoWCNN would achieve a P@10 of 91% and 95%, respectively (but, again, this will depend on the use-case). We leave a larger analysis, which will also measure recall (not only top retrievals), for future work. |
Visually grounded cross-lingual keyword spotting in speech | 1806.05030 | Table 2: Cross-lingual keyword spotting results (%) on test data. | ['Model', '[ITALIC] P@10', '[ITALIC] P@ [ITALIC] N', 'EER', 'AP'] | [['DETextPrior', '07.2', '06.3', '50', '10.4'], ['DEVisionCNN', '41.5', '32.9', '25.9', '29.7'], ['XVisionSpeechCNN', '58.2', '40.4', '23.5', '40.0'], ['XBoWCNN', '80.8', '54.3', '19.1', '54.3']] | Without seeing any speech transcriptions or translated text, XVisionSpeechCNN achieves a P@10 of 58%, with XBoWCNN being the only model to outperform the visually grounded model. By comparing performance to DETextPrior, we see that XVisionSpeechCNN is not just predicting common German words. Interestingly, XVisionSpeechCNN also outperforms DEVisionCNN over all metrics. If the former were perfectly predicting the German visual tags (which is what it is trained to do), then the performance of these two models would be the same. We see, however, that XVisionSpeechCNN is doing more than simply mapping the acoustics to the visual tags; we speculate that it is therefore picking up information in the speech which cannot be obtained from the corresponding test images. |
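P@10 and P@N above are precision over the top of the ranked retrieval list; a minimal sketch:

```python
def precision_at_k(ranked_relevance, k):
    """Fraction of the top-k retrieved items that are relevant.

    `ranked_relevance` is a list of booleans, best-ranked first; for P@N,
    k is the number of true occurrences of the keyword in the corpus."""
    top = ranked_relevance[:k]
    return sum(top) / len(top)
```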
Visually grounded cross-lingual keyword spotting in speech | 1806.05030 | Table 4: Cross-lingual keyword spotting results (%) for different variants of XVisionSpeechCNN on development data. | ['Model', '[ITALIC] P@10', '[ITALIC] P@ [ITALIC] N', 'EER', 'AP'] | [['XVisionSpeechCNN', '60.8', '39.3', '23.1', '38.0'], ['KeyXVisionSpeechCNN', '60.0', '39.6', '24.5', '36.9'], ['OracleXVisionSpeechCNN', '57.4', '37.6', '24.8', '36.5']] | Variants and ideal supervision. We compare different variants of XVisionSpeechCNN to gain insight into properties of the model. XVisionSpeechCNN produces scores →f(X)∈[0,1]W for all W=1k words in its output vocabulary. But we are actually only interested in those dimensions w corresponding to the test keywords. If we knew the keywords at training time, we could train a model which only tries to predict the visual tags corresponding to these keywords. Performance is similar to that of XVisionSpeechCNN, with the latter being slightly better on most metrics. This effectively regularises our model (improving results). XVisionSpeechCNN is trained on soft scores from a visual tagger. What if we had the true hard assignments from the manual annotations for the training images? OracleXVisionSpeechCNN is trained on such oracle targets. They described this as a student-teacher approach, where the student (in our case the speech network) is trying to distil knowledge from the teacher network (in our case the visual tagger). |
Flexible End-to-End Dialogue System for Knowledge Grounded Conversation | 1709.04264 | Table 4: Human Evaluation on the MusicConvers dataset. | ['Models', 'Grammar', 'Context Relevance', 'Correctness'] | [['S2SA', '1.76', '0.87', '0.16'], ['GenQA', '1.28', '0.95', '0.41'], ['GenQAD', '1.67', '1.11', '0.51'], ['GenDS-Single', '[BOLD] 2.16', '[BOLD] 1.67', '[BOLD] 1.18'], ['GenDS-Static', '1.97', '1.42', '0.96'], ['GenDS', '2.03', '1.55', '0.89']] | MusicConvers: We compute the mean score of each metric. For automatic evaluation, GenDS shows the best performance on BLEU and entity-accuracy, while GenDS-Single achieves the highest entity-recall. Although GenDS does not decisively beat S2SA in terms of BLEU, it improves entity accuracy and recall by 39% and 14%, respectively. This indicates that GenDS can reply to the message with more correct information. S2SA cannot respond with correct information, which is mainly due to the lack of grounding in external knowledge. Although GenQA and GenQAD can incorporate the KB in responses, their performance on entity-accuracy and entity-recall still cannot compete with GenDS. For GenQA, the entity generation probability is fixed during decoding. As a consequence, GenQA cannot generate different entities. For GenQAD, its update mechanism for the entity generation probability is less effective than our dynamic knowledge enquirer. The BLEU scores of GenQA and GenQAD are the lowest among all models. We think that MusicConvers does not have enough data for GenQA and GenQAD to learn reliable entity representations. This illustrates that GenDS can achieve decent performance even on a small dataset. GenDS achieves higher BLEU than GenDS-Single, which confirms the benefit of multi-task learning for improving fluency. For human evaluation, GenDS-Single achieves the best performance in terms of grammar, context relevance and information correctness. 
Although S2SA can generate fluent responses, these responses contain little correct information and are less semantically relevant to the message. The performance of GenDS is slightly worse than GenDS-Single. We infer that this is due to task 2, where GenDS tends to generate fluent responses instead of correct information. |
Flexible End-to-End Dialogue System for Knowledge Grounded Conversation | 1709.04264 | Table 3: Automatic Evaluation on the Music dataset | ['Models', 'BLEU', 'Precision', 'Recall'] | [['S2SA', '0.11', '0.01±0.01', '0.004±0.02'], ['GenQA', '0.05', '0.1134±0.14', '0.05±0.1'], ['GenQAD', '0.06', '0.15±0.16', '0.05±0.1'], ['GenDS-Single', '0.108', '0.28±0.19', '[BOLD] 0.19±0.18'], ['GenDS-Static', '0.108', '0.14±0.15', '0.10±0.14'], ['GenDS', '[BOLD] 0.122', '[BOLD] 0.40\xa0±0.25', '0.14±0.16']] | MusicConvers: We compute the mean score of each metric. For automatic evaluation, GenDS shows the best performance on BLEU and entity-accuracy, while GenDS-Single achieves the highest entity-recall. Although GenDS does not decisively beat S2SA in terms of BLEU, it improves entity accuracy and recall by 39% and 14%, respectively. This indicates that GenDS can reply to the message with more correct information. S2SA cannot respond with correct information, which is mainly due to the lack of grounding in external knowledge. Although GenQA and GenQAD can incorporate the KB in responses, their performance on entity-accuracy and entity-recall still cannot compete with GenDS. For GenQA, the entity generation probability is fixed during decoding. As a consequence, GenQA cannot generate different entities. For GenQAD, its update mechanism for the entity generation probability is less effective than our dynamic knowledge enquirer. The BLEU scores of GenQA and GenQAD are the lowest among all models. We think that MusicConvers does not have enough data for GenQA and GenQAD to learn reliable entity representations. This illustrates that GenDS can achieve decent performance even on a small dataset. GenDS achieves higher BLEU than GenDS-Single, which confirms the benefit of multi-task learning for improving fluency. For human evaluation, GenDS-Single achieves the best performance in terms of grammar, context relevance and information correctness. 
Although S2SA can generate fluent responses, these responses contain little correct information and are less semantically relevant to the message. The performance of GenDS is slightly worse than GenDS-Single. We infer that this is due to task 2, where GenDS tends to generate fluent responses instead of correct information. |
Flexible End-to-End Dialogue System for Knowledge Grounded Conversation | 1709.04264 | Table 5: Automatic Evaluation on the QA dataset | ['Models', 'BLEU', 'Precision', 'Recall'] | [['S2SA', '0.05', '0.08±0.125', '0.07±0.13'], ['GenQA', '0.12', '0.06±0.11', '0.04±0.09'], ['GenQAD', '0.13', '0.25±0.2', '0.34±0.235'], ['GenDS-Single', '0.226', '0.76±0.205', '[BOLD] 0.77±0.21'], ['GenDS-Static', '0.19', '0.64±0.23', '0.66±0.235'], ['GenDS', '[BOLD] 0.227', '[BOLD] 0.77±0.205', '0.76±0.215']] | MusicQA: GenDS improves the BLEU score, entity-accuracy and entity-recall significantly compared with S2SA, GenQA and GenQAD. GenQA does not obtain performance comparable with the original QA. This may be due to our mitigation of redundancy in the dataset. Unlike on MusicConvers, GenDS-Static exhibits decent performance on entity-accuracy and entity-recall. 99% of questions in MusicQA contain only one entity in the answer. Thus, most messages do not need the dynamic knowledge enquirer to generate multiple entities. However, GenDS still achieves higher entity-accuracy and entity-recall than GenDS-Static. This verifies that the proposed dynamic knowledge enquirer is useful even when only one entity is generated. |
RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition | 1805.05225 | Table 1: Training speed and memory consumption on WMT 2017 German→English. Train time is for seeing the full train dataset once. Batch size is in words, such that it almost maximizes the GPU memory consumption. The BLEU score is for the converged models, reported for newstest2015 (dev) and newstest2017. The encoder has one bidirectional LSTM layer and either 3 or 5 unidirectional LSTM layers. | ['toolkit', 'encoder n. layers', 'time [h]', 'batch size', 'BLEU [%] 2015', 'BLEU [%] 2017'] | [['RETURNN', '4', '[BOLD] 11.25', '8500', '28.0', '28.4'], ['Sockeye', '[EMPTY]', '11.45', '3000', '[BOLD] 28.9', '[BOLD] 29.2'], ['RETURNN', '6', '[BOLD] 12.87', '7500', '28.7', '28.7'], ['Sockeye', '[EMPTY]', '14.76', '2500', '[BOLD] 29.4', '[BOLD] 29.1']] | We want to compare different toolkits in training and decoding for a recurrent attention model in terms of speed on a GPU. Here, we try to maximize the batch size such that it still fits into the GPU memory of our reference GPU card, the Nvidia GTX 1080 Ti with 11 GB of memory. We keep the maximum sequence length in a batch the same, which is 60 words. We always use Adam (Kingma and Ba). For these speed experiments, we did not tune any of the hyperparameters of RETURNN, which explains its worse performance. The aim here is to match Sockeye’s exact architecture for speed and memory comparison. During training, we observed that the learning rate scheduling settings of Sockeye are more pessimistic, i.e. the decrease is slower and it sees the data more often until convergence. This greatly increases the total training time but in our experience also improves the model. |
RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition | 1805.05225 | Table 3: Comparison on German→English. | ['toolkit', 'BLEU [%] 2015', 'BLEU [%] 2017'] | [['RETURNN', '[BOLD] 31.2', '[BOLD] 31.3'], ['Sockeye', '29.7', '30.2']] | We report the best performing Sockeye model we trained, which has 1 bidirectional and 3 unidirectional encoder layers, 1 pre-attention target recurrent layer, and 1 post-attention decoder layer. We trained with a max sequence length of 75, and used the ‘coverage’ RNN attention type. For Sockeye, the final model is an average of the 4 best runs according to the development perplexity. We obtain the best results with Sockeye using a Transformer network model Vaswani et al. , where we achieve 32.0% BLEU on newstest2017. |
RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition | 1805.05225 | Table 4: Performance comparison on WMT 2017 English→German. The baseline systems (upper half) are trained on the parallel data of the WMT English→German 2017 task. We downloaded the hypotheses from here. The WMT 2017 system hypotheses (lower half) are generated using systems having additional back-translation (bt) data. These hypotheses are downloaded from here. | ['System', 'BLEU [%]'] | [['[EMPTY]', 'newstest2017'], ['RETURNN', '[BOLD] 26.1'], ['OpenNMT-py', '21.8'], ['OpenNMT-lua', '22.6'], ['Marian', '25.6'], ['Nematus', '23.5'], ['Sockeye', '25.3'], ['WMT 2017 Single Systems + bt data', 'WMT 2017 Single Systems + bt data'], ['LMU', '26.4'], ['+ reranking', '27.0'], ['Systran', '26.5'], ['Edinburgh', '26.5']] | We observe that our toolkit outperforms all other toolkits. The best result obtained by other toolkits is achieved by Marian (25.6% BLEU). In comparison, RETURNN achieves 26.1%. We also compare RETURNN to the best performing single systems of WMT 2017. In comparison to the fine-tuned evaluation systems that also include back-translated data, our model performs worse by only 0.3 to 0.9 BLEU. We did not run experiments with back-translated data, which can potentially boost the performance by several BLEU points. |
RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition | 1805.05225 | Table 5: Performance comparison on Switchboard, trained on 300h. hybrid1 is the IBM 2017 ResNet model Saon et al. (2017). hybrid2 is trained with Lattice-free MMI Hadian et al. (2018). CTC3 is the Baidu 2014 DeepSpeech model Hannun et al. (2014). Our attention model does not use any language model. | ['model', 'training', 'WER [%] Hub5’00', 'WER [%] Hub5’00', 'WER [%] Hub5’00', 'WER [%] Hub5’01'] | [['[EMPTY]', '[EMPTY]', 'Σ', 'SWB', 'CH', '[EMPTY]'], ['hybrid1', 'frame-wise', '[EMPTY]', '11.2', '[EMPTY]', '[EMPTY]'], ['hybrid2', 'LF-MMI', '15.8', '10.8', '[EMPTY]', '[EMPTY]'], ['CTC3', 'CTC', '25.9', '20.0', '31.8', '[EMPTY]'], ['hybrid', 'frame-wise', '[BOLD] 14.4', '[BOLD] 9.8', '[BOLD] 19.0', '14.7'], ['[EMPTY]', 'full-sum', '15.9', '10.1', '21.8', '[BOLD] 14.5'], ['attention', 'frame-wise', '20.3', '13.5', '27.1', '19.9']] | We also have preliminary results with recurrent attention models for speech recognition on the Switchboard task, which we trained on the 300h trainset. We report on both the Switchboard (SWB) and the CallHome (CH) part of Hub5’00 and Hub5’01. We also compare to a conventional frame-wise trained hybrid deep bidirectional LSTM with 6 layers Zeyer et al. The frame-wise trained hybrid model also uses focal loss Lin et al. All the hybrid models use a phonetic lexicon and an external 4-gram language model which was trained on the transcripts of both the Switchboard and the Fisher corpus. The attention model does not use an external language model or a phonetic lexicon. Its output labels are byte-pair encoded subword units Sennrich et al. It has a 6-layer bidirectional encoder, which also applies max-pooling in the time dimension, i.e. it reduces the input sequence by a factor of 8. To our knowledge, this is the best reported result for an end-to-end system on Switchboard 300h without using a language model or the lexicon. 
For comparison, we also selected comparable results from the literature. From these, the Baidu DeepSpeech CTC model is modeled on characters and does not use the lexicon but it does use a language model. |
RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition | 1805.05225 | Table 6: Pretraining comparison. | ['encoder num. layers', 'BLEU [%] no pretrain', 'BLEU [%] with pretrain'] | [['2', '29.3', '-'], ['3', '29.9', '-'], ['4', '29.1', '30.3'], ['5', '-', '30.3'], ['6', '-', '30.6'], ['7', '-', '[BOLD] 30.9']] | RETURNN supports very generic and flexible pretraining which iteratively starts with a small model and adds new layers in the process. A similar pretraining scheme for deep bidirectional LSTMs acoustic speech models was presented earlier Zeyer et al. Here, we only study a layer-wise construction of the deep bidirectional LSTM encoder network of an encoder-decoder-attention model for translation on the WMT 2017 German→English task. The observations very clearly match our expectations, that we can both greatly improve the overall performance, and we are able to train deeper models. A minor benefit is faster training speed of the initial pretrain epochs. |
Assessing Language Proficiency from Eye Movements in Reading | 1804.07329 | Table 1: Pearson’s r of EyeScore for different feature sets with MET (training/development set, 88 participants) and TOEFL (all 53 participants). Fixed denotes the Fixed Text regime in which all the participants read the same sentences, and Any denotes the Any Text regime where different readers read different sentences. | ['[BOLD] Features', '[BOLD] MET Fixed', '[BOLD] MET Any', '[BOLD] TOEFL Fixed', '[BOLD] TOEFL Any'] | [['Reading Speed', '0.28', '0.27', '0.15', '0.13'], ['WP-Coefficients', '0.38', '0.37', '0.21', '0.13'], ['S-Clusters', '0.45', '[BOLD] 0.48', '0.50', '[BOLD] 0.45'], ['Transitions', '0.45', '[EMPTY]', '0.44', '[EMPTY]'], ['WFC', '[BOLD] 0.50', '[EMPTY]', '[BOLD] 0.54', '[EMPTY]']] | We evaluate the ability of EyeScore to capture language proficiency by comparing it against our two external proficiency tests, MET and TOEFL. Similarly to the EyeScore outcomes, the best performance in the Fixed Text regime is obtained using the WFC feature set, with a Pearson’s r of 0.7 and MAE of 3.31 for MET. This result is highly competitive with correlations between different standardized English proficiency tests. On TOEFL, WFC features obtain the strongest MAE of 6.68, while S-Clusters have a higher r coefficient of 0.55. |
Assessing Language Proficiency from Eye Movements in Reading | 1804.07329 | Table 2: Pearson’s r and Mean Absolute Error (MAE) for prediction of MET scores (test set, 57 participants) and TOEFL scores (leave-one-out cross validation, all 53 participants) from eye movement patterns in reading. We consider two baselines which do not use eyetracking information: (1) the average proficiency score in the training set, which yields 4.82 MAE on MET and 8.29 MAE on TOEFL, and (2) the reading speed of the participant. | ['[EMPTY]', '[BOLD] MET Fixed', '[BOLD] MET Fixed', '[BOLD] MET Any', '[BOLD] MET Any', '[BOLD] TOEFL Fixed', '[BOLD] TOEFL Fixed', '[BOLD] TOEFL Any', '[BOLD] TOEFL Any'] | [['[BOLD] Features', '[ITALIC] r', 'MAE', '[ITALIC] r', 'MAE', '[ITALIC] r', 'MAE', '[ITALIC] r', 'MAE'], ['Reading Speed', '0.27', '4.58', '0.24', '4.62', '0.09', '7.92', '0.06', '7.96'], ['WP-Coefficients', '0.43', '4.11', '0.44', '4.14', '0.34', '7.76', '0.31', '[BOLD] 7.49'], ['S-Clusters', '0.56', '3.87', '[BOLD] 0.49', '[BOLD] 4.11', '[BOLD] 0.55', '7.45', '[BOLD] 0.50', '7.76'], ['Transitions', '0.52', '3.93', '[EMPTY]', '[EMPTY]', '0.38', '7.11', '[EMPTY]', '[EMPTY]'], ['WFC', '[BOLD] 0.70', '[BOLD] 3.31', '[EMPTY]', '[EMPTY]', '0.50', '[BOLD] 6.68', '[EMPTY]', '[EMPTY]']] | We consider two baselines; the first is assigning all test set participants with the average score of the training participants. This baseline yields an MAE of 4.82 on MET and 8.29 on TOEFL. The second baseline uses reading speed as the sole feature for prediction. In all cases, our eyetracking based features outperform the average score and reading speed baselines. |
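Pearson's r and MAE, the two evaluation measures in Table 2, can be computed as follows (illustrative stdlib-only sketch):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mae(predicted, gold):
    """Mean absolute error between predicted and gold scores."""
    return sum(abs(p - g) for p, g in zip(predicted, gold)) / len(predicted)
```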
Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext | 1706.01847 | Table 9: Differences in entropy and repetition of unigrams/trigrams in references and translations. Negative values indicate translations have a higher value, so references show consistently higher entropies and lower repetition rates. | ['Lang.', 'Data', 'Ent. (uni)', 'Ent. (tri)', 'Rep. (uni)', 'Rep. (tri)'] | [['[EMPTY]', 'CC', '0.50', '1.13', '-7.57%', '-5.58%'], ['CS', 'EP', '0.14', '0.31', '-0.88%', '-0.11%'], ['[EMPTY]', 'News', '0.16', '0.31', '-0.96%', '-0.16%'], ['[EMPTY]', 'CC', '0.97', '1.40', '-8.50%', '-7.53%'], ['FR', 'EP', '0.51', '0.69', '-1.85%', '-0.58%'], ['FR', 'Giga', '0.97', '1.21', '-5.30%', '-7.74%'], ['[EMPTY]', 'News', '0.67', '0.75', '-2.98%', '-0.85%'], ['[EMPTY]', 'CC', '0.29', '0.57', '-1.09%', '-0.73%'], ['DE', 'EP', '0.32', '0.53', '-0.14%', '-0.11%'], ['[EMPTY]', 'News', '0.40', '0.37', '-1.02%', '-0.24%'], ['All', 'All', '0.46', '0.74', '-2.80%', '-2.26%']] | The translated text has lower n-gram entropies and higher rates of repetition. This appears for all datasets, but is strongest for Common Crawl and the French-English 10⁹ corpus. |
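The per-corpus statistics in Table 9 are based on n-gram entropy and repetition; a sketch of how such statistics might be computed for unigrams (the exact repetition-rate definition used here is an assumption, not necessarily the paper's):

```python
import math
from collections import Counter

def unigram_entropy(tokens):
    """Shannon entropy (bits) of the empirical unigram distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def repetition_rate(tokens):
    """Fraction of token occurrences that repeat an earlier occurrence
    (assumed definition: 1 - unique types / total tokens)."""
    return 1.0 - len(set(tokens)) / len(tokens)
```

The same functions apply to trigrams by passing a list of trigram tuples instead of word tokens.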
Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext | 1706.01847 | Table 5: Test correlations for our models when trained on sentences with particular length ranges (averaged over languages and data sources for the NMT rows). Results are on STS datasets (Pearson’s r×100). | ['Data', 'Model', 'Length Range 0-10', 'Length Range 10-20', 'Length Range 20-30', 'Length Range 30-100'] | [['SimpWiki', 'GRAN', '67.4', '67.7', '67.1', '67.3'], ['SimpWiki', 'Avg', '65.9', '65.7', '65.6', '65.9'], ['NMT', 'GRAN', '66.6', '66.5', '66.0', '64.8'], ['NMT', 'Avg', '65.7', '65.6', '65.3', '65.0']] | These results are averages across all language pairs and data sources of training data for each length range shown. We find it best to select NMT data where the translations have between 0 and 10 tokens, with performance dropping as sentence length increases. This is true for both the GRAN and Avg models. We do the same filtering for the SimpWiki data, though the trend is not nearly as strong. This may be because machine translation quality drops as sentence length increases. This trend appears even though the datasets with higher ranges have more tokens of training data, since only the number of training sentence pairs is kept constant across configurations. |
Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext | 1706.01847 | Table 6: Length filtering test results after tuning length ranges on development data (averaged over languages and data sources for the NMT rows). Results are on STS datasets (Pearson’s r×100). | ['Filtering Method', 'NMT GRAN', 'NMT Avg', 'SimpWiki GRAN', 'SimpWiki Avg'] | [['None (Random)', '66.9', '65.5', '67.2', '65.8'], ['Length', '67.3', '66.0', '67.4', '66.2'], ['Tuned Len. Range', '[0,10]', '[0,10]', '[0,10]', '[0,15]']] | We then tune the length range using our development data, considering the following length ranges: [0,10], [0,15], [0,20], [0,30], [0,100], [10,20], [10,30], [10,100], [15,25], [15,30], [15,100], [20,30], [20,100], [30,100]. We tune over ranges as well as language, data source, and stopping epoch, each time training on 24,000 sentence pairs. We compare to a baseline that draws a random set of data, showing that length-based filtering leads to gains of nearly half a point on average across our test sets. The tuned length ranges are short for both NMT and SimpWiki. |
Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext | 1706.01847 | Table 7: Quality filtering test results after tuning quality hyperparameters on development data (averaged over languages and data sources for the NMT rows). Results are on STS datasets (Pearson’s r×100). | ['Filtering Method', 'GRAN', 'Avg'] | [['None (Random)', '66.9', '65.5'], ['Translation Cost', '66.6', '65.4'], ['Language Model', '66.7', '65.5'], ['Reference Classification', '67.0', '65.5']] | The translation cost and language model are not helpful for filtering, as random selection outperforms them. Both methods are outperformed by the reference classifier, which slightly outperforms random selection when using the stronger GRAN model. We now discuss further how we trained the reference classifier and the data characteristics that it reveals. We did not experiment with quality filtering for SimpWiki since it is human-written text. |
Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext | 1706.01847 | Table 8: Results of reference/translation classification (accuracy×100). The highest score in each column is in boldface. Final two columns show accuracies of positive (reference) and negative classes, respectively. | ['Model', 'Lang.', 'Data', 'Test Acc.', '+ Acc.', '- Acc.'] | [['[EMPTY]', '[EMPTY]', 'CC', '72.2', '72.2', '72.3'], ['[EMPTY]', 'CS', 'EP', '72.3', '64.3', '80.3'], ['[EMPTY]', '[EMPTY]', 'News', '79.7', '73.2', '86.3'], ['[EMPTY]', '[EMPTY]', 'CC', '80.7', '82.1', '79.3'], ['LSTM', 'FR', 'EP', '79.3', '75.2', '83.4'], ['LSTM', 'FR', 'Giga', '[BOLD] 93.1', '[BOLD] 92.3', '93.8'], ['[EMPTY]', '[EMPTY]', 'News', '84.2', '81.2', '87.3'], ['[EMPTY]', '[EMPTY]', 'CC', '79.3', '71.7', '86.9'], ['[EMPTY]', 'DE', 'EP', '85.1', '78.0', '92.2'], ['[EMPTY]', '[EMPTY]', 'News', '89.8', '82.3', '[BOLD] 97.4'], ['[EMPTY]', '[EMPTY]', 'CC', '71.2', '68.9', '73.5'], ['[EMPTY]', 'CS', 'EP', '69.1', '63.0', '75.1'], ['[EMPTY]', '[EMPTY]', 'News', '77.6', '71.7', '83.6'], ['[EMPTY]', '[EMPTY]', 'CC', '78.8', '80.4', '77.2'], ['Avg', 'FR', 'EP', '78.9', '75.5', '82.3'], ['Avg', 'FR', 'Giga', '[BOLD] 92.5', '[BOLD] 91.5', '93.4'], ['[EMPTY]', '[EMPTY]', 'News', '82.8', '81.1', '84.5'], ['[EMPTY]', '[EMPTY]', 'CC', '77.3', '70.4', '84.1'], ['[EMPTY]', 'DE', 'EP', '82.7', '73.4', '91.9'], ['[EMPTY]', '[EMPTY]', 'News', '87.6', '80.0', '[BOLD] 95.3']] | While performance varies greatly across data sources, the LSTM always outperforms the word averaging model. For our translation-reference classification, we note that our results can be further improved. We also trained models on 90,000 examples, essentially doubling the amount of data, and the results improved by about 2% absolute on each dataset on both the validation and testing data. |
Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext | 1706.01847 | Table 12: Diversity filtering test results after tuning filtering hyperparameters on development data (averaged over languages and data sources for the NMT rows). Results are on STS datasets (Pearson’s r×100). | ['Filtering Method', 'NMT GRAN', 'NMT Avg', 'SimpWiki GRAN', 'SimpWiki Avg'] | [['Random', '66.9', '65.5', '67.2', '65.8'], ['Unigram Overlap', '66.6', '66.1', '67.8', '67.4'], ['Bigram Overlap', '67.0', '65.5', '68.0', '67.2'], ['Trigram Overlap', '66.9', '65.4', '67.8', '66.6'], ['BLEU Score', '67.1', '65.3', '67.5', '66.5']] | We find that the diversity filtering methods lead to consistent improvements when training on SimpWiki. We believe this is because many of the sentence pairs in SimpWiki are near-duplicates and these filtering methods favor data with more differences. |
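The unigram/bigram/trigram-overlap filters in the Table 12 record can be sketched as follows. This is an illustrative reconstruction, assuming overlap is counted as clipped n-gram matches normalized by the shorter side, and that filtering keeps the lowest-overlap (most diverse) pairs; the paper's exact normalization may differ.

```python
from collections import Counter

def ngram_overlap(src_tokens, tgt_tokens, n):
    """Clipped n-gram overlap between two token sequences, normalized by
    the number of n-grams on the shorter side (one plausible variant)."""
    src = Counter(tuple(src_tokens[i:i + n]) for i in range(len(src_tokens) - n + 1))
    tgt = Counter(tuple(tgt_tokens[i:i + n]) for i in range(len(tgt_tokens) - n + 1))
    matches = sum((src & tgt).values())  # multiset intersection = clipped matches
    denom = min(sum(src.values()), sum(tgt.values()))
    return matches / denom if denom else 0.0

def diversity_filter(pairs, n=1, keep=0.5):
    """Keep the `keep` fraction of sentence pairs with the LOWEST n-gram
    overlap, i.e. the most diverse paraphrase pairs."""
    ranked = sorted(pairs, key=lambda p: ngram_overlap(p[0], p[1], n))
    return ranked[: max(1, int(len(ranked) * keep))]
```

Under this scheme a near-duplicate pair (overlap close to 1.0) is discarded before a genuinely rephrased pair, which matches the intuition given for why diversity filtering helps on SimpWiki.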
Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext | 1706.01847 | Table 13: Test results when using more training data. More data helps both Avg and GRAN, where both models get close to or surpass training on SimpWiki and comfortably surpass the PPDB baseline. The amount of training examples used is in parentheses. | ['Data', 'GRAN', 'Avg'] | [['PPDB', '64.6', '66.3'], ['SimpWiki (100k/168k)', '67.4', '[BOLD] 67.7'], ['CC-CS (24k)', '66.8', '-'], ['CC-DE (24k)', '-', '66.6'], ['CC-CS (100k)', '[BOLD] 68.5', '-'], ['CC-DE (168k)', '-', '67.6']] | PPDB is soundly beaten by both data sources. Moreover, the CC-CS and CC-DE results improve greatly over training on just 24,000 examples (from 66.77 and 66.61, respectively), providing more evidence that this approach does indeed scale. |
From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification | 1602.02068 | Table 3: Accuracies for the natural language inference task. Shown are our implementations of a system without attention, and with logistic, soft, and sparse attentions. | ['[EMPTY]', 'Dev Acc.', 'Test Acc.'] | [['NoAttention', '81.84', '80.99'], ['LogisticAttention', '82.11', '80.84'], ['SoftAttention', '82.86', '82.08'], ['SparseAttention', '82.52', '[BOLD] 82.20']] | We observe that the soft and sparse-activated attention systems perform similarly, the latter being slightly more accurate on the test set, and that both outperform the NoAttention and LogisticAttention systems. |
DuTongChuan: Context-aware Translation Model for Simultaneous Interpreting | 1907.12984 | Table 5: Comparison between machine translation and human interpretation. The interpretation reference consists of a collection of interpretations from S, A and B. Our model is trained on the large-scale corpus. | ['Models', 'Translation Reference BLEU', 'Translation Reference Brevity Penalty', 'Interpretation Reference (3-references) BLEU', 'Interpretation Reference (3-references) Brevity Penalty'] | [['[ITALIC] Our Model', '20.93', '1.000', '28.08', '1.000'], ['[ITALIC] S', '16.02', '0.845', '-', '-'], ['[ITALIC] A', '16.38', '0.887', '-', '-'], ['[ITALIC] B', '12.08', '0.893', '-', '-']] | We concatenate the translation of each talk into one big sentence, and then evaluate it by BLEU score. Moreover, the interpretations are relatively short, which results in a high length penalty from the evaluation script. The result is unsurprising, because human interpreters often deliberately skip non-primary information to keep a reasonable ear-voice span, which may bring a loss of adequacy and yet a shorter lag time, whereas the machine translation model translates the content adequately. We also use human interpreting results as references. |
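The Brevity Penalty columns in the Table 5 record are the standard corpus-level BLEU brevity penalty (Papineni et al., 2002); a minimal sketch:

```python
import math

def brevity_penalty(candidate_len, reference_len):
    """BLEU brevity penalty: 1 when the candidate is at least as long as
    the reference, exp(1 - r/c) when the candidate is shorter."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1.0 - reference_len / candidate_len)
```

For interpreter S, for example, a penalty of 0.845 implies the concatenated interpretation is roughly 86% of the reference length (solving exp(1 − r/c) = 0.845 for c/r), which is the length shortfall the record attributes to interpreters skipping non-primary information.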
DuTongChuan: Context-aware Translation Model for Simultaneous Interpreting | 1907.12984 | Table 2: The overall results on the NIST Chinese-English translation task. | ['Models', 'NIST02', 'NIST03', 'NIST04', 'NIST05', 'NIST08', 'Average'] | [['[ITALIC] baseline', '49.40', '49.71', '50.03', '48.83', '44.38', '40.39'], ['[ITALIC] sub-sentence', '45.41', '45.62', '46.06', '43.63', '43.11', '37.31'], ['[ITALIC] wait-1', '38.37', '36.87', '38.17', '36.09', '35.31', '30.80'], ['[ITALIC] wait-3', '40.75', '39.30', '40.57', '38.18', '38.29', '32.85'], ['[ITALIC] wait-5', '42.76', '41.43', '43.29', '40.43', '39.62', '34.59'], ['[ITALIC] wait-7', '44.05', '42.94', '44.17', '42.25', '40.61', '35.67'], ['[ITALIC] wait-9', '45.71', '44.49', '45.74', '43.14', '41.63', '36.78'], ['[ITALIC] wait-12', '[BOLD] 46.67', '45.63', '46.86', '44.59', '42.83', '37.76'], ['[ITALIC] wait-15', '46.41', '[BOLD] 46.43', '[BOLD] 47.38', '[BOLD] 45.63', '[BOLD] 43.60', '[BOLD] 38.24'], ['treat the information unit as sub-sentence ( [BOLD] IU=sub-sentence)', 'treat the information unit as sub-sentence ( [BOLD] IU=sub-sentence)', 'treat the information unit as sub-sentence ( [BOLD] IU=sub-sentence)', 'treat the information unit as sub-sentence ( [BOLD] IU=sub-sentence)', 'treat the information unit as sub-sentence ( [BOLD] IU=sub-sentence)', 'treat the information unit as sub-sentence ( [BOLD] IU=sub-sentence)', 'treat the information unit as sub-sentence ( [BOLD] IU=sub-sentence)'], ['[ITALIC] +context-aware', '47.79', '48.11', '48.29', '46.55', '44.57', '39.22'], ['[ITALIC] +partial decoding', '48.46', '48.51', '48.53', '47.05', '45.43', '39.66'], ['[ITALIC] +discard 2 tokens', '48.61', '48.54', '48.68', '47.11', '45.08', '39.67'], ['[ITALIC] +discard 3 tokens', '48.62', '48.52', '48.87', '47.16', '[BOLD] 45.30', '39.75'], ['[ITALIC] +discard 4 tokens', '48.71', '48.69', '[BOLD] 49.10', '[BOLD] 47.32', '45.11', '[BOLD] 39.82'], ['[ITALIC] +discard 5 tokens', '48.82', '[BOLD] 48.78', '48.98', '47.31', '44.48', '39.73'], ['[ITALIC] +discard 6 tokens', '[BOLD] 48.94', '48.70', '48.77', '47.21', '44.33', '39.66'], ['treat the information unit as segment ( [BOLD] IU=segment)', 'treat the information unit as segment ( [BOLD] IU=segment)', 'treat the information unit as segment ( [BOLD] IU=segment)', 'treat the information unit as segment ( [BOLD] IU=segment)', 'treat the information unit as segment ( [BOLD] IU=segment)', 'treat the information unit as segment ( [BOLD] IU=segment)', 'treat the information unit as segment ( [BOLD] IU=segment)'], ['[ITALIC] +discard 1 tokens', '46.89', '45.40', '47.05', '45.36', '43.06', '37.96'], ['[ITALIC] +discard 2 tokens', '48.09', '46.98', '48.45', '46.50', '44.00', '39.00'], ['[ITALIC] +discard 3 tokens', '48.70', '47.87', '48.85', '47.01', '44.48', '39.49'], ['[ITALIC] +discard 4 tokens', '48.75', '48.09', '[BOLD] 48.99', '46.86', '[BOLD] 45.07', '39.63'], ['[ITALIC] +discard 5 tokens', '48.84', '48.37', '48.71', '[BOLD] 46.95', '44.76', '39.56'], ['[ITALIC] +discard 6 tokens', '[BOLD] 48.88', '[BOLD] 48.60', '48.85', '47.17', '44.84', '[BOLD] 39.72']] | Effectiveness on Translation Quality. On average, the sub-sentence model shows weaker performance, with a 3.08-point drop in BLEU score (40.39 → 37.31). Similarly, the wait-k model also brings an obvious decrease in translation quality; even with the best wait-15 policy, its performance is still worse than the baseline system, with an average drop of 2.15 BLEU (40.39 → 38.24). For a machine translation product, a large degradation in translation quality will strongly affect the user experience even if latency is low. Lastly, the discard 6 tokens setting obtains an impressive result, with an average improvement of 1.76 BLEU (37.96 → 39.72). |
DuTongChuan: Context-aware Translation Model for Simultaneous Interpreting | 1907.12984 | Table 3: The comparison between our sequence detector and previous work. The latency represents the number of words required to make an explicit decision. | ['Models', 'Precision (%)', 'Recall (%)', 'F-score (%)', 'Average Latency', 'Max Latency'] | [['[ITALIC] 5-LM', '55.30', '72.63', '62.79', '8.68', '46'], ['[ITALIC] RNN', '67.61', '70.35', '68.95', '9.79', '48'], ['[ITALIC] Our model', '[BOLD] 75.09', '[BOLD] 81.70', '[BOLD] 78.26', '10.49', '39']] | In our context-sensitive model, the dynamic context based information unit boundary detector is essential to determine the IU boundaries in the streaming input. Both contrastive models are trained on approximately 2 million monolingual Chinese sentences. This observation indicates that with bidirectional context, the model can learn better representation to help the downstream tasks. In the next experiments, we will evaluate models given testing data with IU boundaries detected by our detector. |
DuTongChuan: Context-aware Translation Model for Simultaneous Interpreting | 1907.12984 | Table 4: The overall results on the BSTC Chinese-English translation task (pre-train represents training on the NIST dataset, and fine-tune represents fine-tuning on the BSTC dataset). Clean input indicates the input is from human-annotated transcription, while the ASR input contains ASR errors. ASR + Auto IU indicates that the sentence boundary as well as sub-sentence is detected by our IU detector. Therefore, this data basically reflects the real environment of a practical product. | ['Models', 'Clean Input Pre-train', 'Clean Input Fine-tune', 'ASR Input Pre-train', 'ASR Input Fine-tune', 'ASR + Auto IU Pre-train', 'ASR + Auto IU Fine-tune'] | [['[ITALIC] baseline', '[BOLD] 15.85', '[BOLD] 21.98', '[BOLD] 14.60', '[BOLD] 19.91', '[BOLD] 14.41', '[BOLD] 17.35'], ['[ITALIC] sub-sentence', '14.39', '18.61', '13.50', '16.99', '13.76', '16.29'], ['[ITALIC] wait-3', '12.23', '16.74', '11.62', '15.59', '11.75', '14.68'], ['[ITALIC] wait-5', '12.84', '17.70', '11.96', '16.23', '12.25', '15.45'], ['[ITALIC] wait-7', '13.34', '19.32', '12.67', '17.41', '12.55', '16.08'], ['[ITALIC] wait-9', '13.92', '19.77', '13.05', '18.29', '13.12', '16.49'], ['[ITALIC] wait-12', '14.35', '20.15', '13.34', '19.07', '13.48', '[BOLD] 17.25'], ['[ITALIC] wait-15', '[BOLD] 14.70', '[BOLD] 21.11', '[BOLD] 13.56', '[BOLD] 19.53', '[BOLD] 13.70', '17.21'], ['[ITALIC] context-aware', '15.25', '20.72', '14.24', '18.42', '13.52', '16.83'], ['[ITALIC] +discard 2 tokens', '15.26', '21.07', '14.35', '19.17', '13.73', '17.02'], ['[ITALIC] +discard 3 tokens', '15.37', '21.09', '14.42', '19.39', '14.00', '17.41'], ['[ITALIC] +discard 4 tokens', '15.40', '21.02', '14.45', '19.41', '14.11', '17.36'], ['[ITALIC] +discard 5 tokens', '[BOLD] 15.59', '[BOLD] 21.23', '14.72', '[BOLD] 19.65', '[BOLD] 14.54', '17.37'], ['[ITALIC] +discard 6 tokens', '15.53', '21.21', '[BOLD] 14.77', '19.48', '14.58', '[BOLD] 17.49']] | Due to the relatively low CER of the ASR errors (10.32%), the distinction between the clean input and the noisy input results in a BLEU score difference smaller than 2 points (15.85 vs. 14.60 for pre-train, and 21.98 vs. 19.91 for fine-tune). Despite the small size of the training data in BSTC, fine-tuning on this data is essential to improve the performance of all models. In all settings, the best context-aware system beats the wait-15 model. Pre-trained models are not sensitive to errors from Auto IU, while fine-tuned models are. |
XGPT: Cross-modal Generative Pre-Training for Image Captioning | 2003.01473 | Table 5: Results of image retrieval task on Flickr30k. | ['[EMPTY]', 'R@1', 'R@5', 'R@10'] | [['ViLBERT ', '58.2', '84.9', '91.5'], ['ViLBERT + augmentation', '[BOLD] 60.4', '[BOLD] 86.4', '[BOLD] 91.9']] | The higher relative gain on R@1 also indicates that the generator can produce high-quality image captions which can help the model better understand images. |
XGPT: Cross-modal Generative Pre-Training for Image Captioning | 2003.01473 | Table 4: Comparison of two masking methods on COCO Captions. | ['[EMPTY]', 'C', 'B@4', 'M', 'S'] | [['multi [MASK]', '117.8', '36.1', '28.2', '21.3'], ['single [MASK]', '[BOLD] 118.1', '[BOLD] 36.4', '[BOLD] 28.3', '[BOLD] 21.3']] | Compared with one-stage pre-training, we find that each task combination pre-trained after the second stage (Rows 6-8) gains approximately +2 on CIDEr. This indicates that two-stage pre-training with in-domain data enables the model to adapt to the downstream data better than only using out-of-domain pre-training. Combining all three tasks leads to the highest score on all metrics. We use this as the optimal pre-training setting for further experiments. How to Efficiently Mask? We conduct further experiments to compare two masking strategies for IDA: (1) Multi [MASK]: replace tokens in the sampled fragment with exactly the same number of [MASK] tokens, (2) Single [MASK]: replace the fragment with a single [MASK] token. Single [MASK] is obviously better than Multi [MASK]. Multi [MASK] provides the decoder with the full position information of the masked tokens, which reduces the difficulty for the decoder of predicting correct words and thus gives worse performance (-0.3 on CIDEr). Therefore, we use the single [MASK] as the optimal pre-training setting. |
Movement Pruning: Adaptive Sparsity by Fine-Tuning | 2005.07683 | Table 3: Distillation-augmented performances for selected high sparsity levels. All pruning methods benefit from distillation signal further enhancing the ratio Performance VS Model Size. | ['[EMPTY]', 'BERT base fine-tuned', 'Remaining Weights (%)', 'MaP', '[ITALIC] L0 Regu', 'MvP', 'soft MvP'] | [['SQuAD - Dev EM/F1', '80.4/88.1', '10%', '70.2/80.1', '72.4/81.9', '75.6/84.3', '[BOLD] 76.6/ [BOLD] 84.9'], ['SQuAD - Dev EM/F1', '80.4/88.1', '3%', '45.5/59.6', '65.5/75.9', '67.5/78.0', '[BOLD] 72.9/ [BOLD] 82.4'], ['MNLI - Dev acc/MM acc', '84.5/84.9', '10%', '78.3/79.3', '78.7/79.8', '80.1/80.4', '[BOLD] 81.2/ [BOLD] 81.8'], ['MNLI - Dev acc/MM acc', '84.5/84.9', '3%', '69.4/70.6', '76.2/76.5', '76.5/77.4', '[BOLD] 79.6/ [BOLD] 80.2'], ['QQP - Dev acc/F1', '91.4/88.4', '10%', '79.8/65.0', '88.1/82.8', '89.7/86.2', '[BOLD] 90.5/ [BOLD] 87.1'], ['QQP - Dev acc/F1', '91.4/88.4', '3%', '72.4/57.8', '87.1/82.0', '86.1/81.5', '[BOLD] 89.3/ [BOLD] 85.6']] | The training objective is a linear combination of the training loss and a knowledge distillation loss on the output distributions. Overall, we observe that the relative comparisons of the pruning methods remain unchanged while the performances are strictly increased. When combined with distillation, soft movement pruning yields the strongest performances across all pruning methods and studied datasets: it reaches 95% of BERT-base with only a fraction of the weights in the encoder (∼5% on SQuAD and MNLI). |
Movement Pruning: Adaptive Sparsity by Fine-Tuning | 2005.07683 | Table 2: Performance at high sparsity levels. (Soft) movement pruning outperforms current state-of-the art pruning methods at different high sparsity levels. | ['[EMPTY]', 'BERT base fine-tuned', 'Remaining Weights (%)', 'MaP', '[ITALIC] L0 Regu', 'MvP', 'soft MvP'] | [['SQuAD - Dev EM/F1', '80.4/88.1', '10%', '67.7/78.5', '69.9/80.1', '[BOLD] 71.9/ [BOLD] 81.7', '71.3/81.5'], ['SQuAD - Dev EM/F1', '80.4/88.1', '3%', '40.1/54.5', '61.6/73.6', '65.2/76.3', '[BOLD] 69.6/ [BOLD] 79.9'], ['MNLI - Dev acc/MM acc', '84.5/84.9', '10%', '77.8/79.0', '77.9/78.5', '79.3/79.5', '[BOLD] 80.7/ [BOLD] 81.2'], ['MNLI - Dev acc/MM acc', '84.5/84.9', '3%', '68.9/69.8', '75.2/75.6', '76.1/76.7', '[BOLD] 79.0/ [BOLD] 79.7'], ['QQP - Dev acc/F1', '91.4/88.4', '10%', '78.8/75.1', '87.6/81.9', '89.1/85.5', '[BOLD] 90.2/ [BOLD] 86.8'], ['QQP - Dev acc/F1', '91.4/88.4', '3%', '72.1/58.4', '86.5/81.1', '85.6/81.0', '[BOLD] 89.2/ [BOLD] 85.5']] | Magnitude pruning on SQuAD achieves 54.5 F1 with 3% of the weights compared to 73.6 F1 with L0 regularization, 76.3 F1 for movement pruning, and 79.9 F1 with soft movement pruning. These experiments indicate that in high sparsity regimes, importance scores derived from the movement accumulated during fine-tuning induce significantly better pruned models compared to absolute values. |
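The contrast between magnitude pruning (MaP) and movement pruning (MvP) in the two records above can be illustrated with a toy scoring sketch. This is a hedged simplification of the paper's method, which learns importance scores with a straight-through estimator; here we use only the accumulated-movement approximation S_i ≈ −Σ W_i · ∂L/∂W_i, so weights moving away from zero during fine-tuning earn positive importance.

```python
def topv_mask(scores, v):
    """Binary mask keeping the v highest-scoring entries."""
    keep = set(sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:v])
    return [1 if i in keep else 0 for i in range(len(scores))]

def magnitude_scores(weights):
    """Magnitude pruning: importance = |W| of the fine-tuned weight."""
    return [abs(w) for w in weights]

def movement_scores(weight_grad_history):
    """Movement pruning (toy): accumulate -W * dL/dW over fine-tuning steps,
    so a weight being pushed away from zero accumulates positive score."""
    scores = None
    for weights, grads in weight_grad_history:
        step = [-w * g for w, g in zip(weights, grads)]
        scores = step if scores is None else [s + d for s, d in zip(scores, step)]
    return scores
```

The point of the toy example below: a large weight being pushed toward zero is kept by magnitude pruning but pruned by movement pruning, which is the behavioral difference the paper argues matters at high sparsity.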
Closed-Book Training to Improve Summarization Encoder Memory | 1809.04585 | Table 7: ROUGE F1 and METEOR scores of sanity check ablations, evaluated on CNN/DM validation set. | ['[EMPTY]', 'ROUGE 1', 'ROUGE 2', 'ROUGE L'] | [['pg baseline', '37.73', '16.52', '34.49'], ['pg + ptrdec', '37.66', '16.50', '34.47'], ['pg-2layer', '37.92', '16.48', '34.62'], ['pg-big', '38.03', '16.71', '34.84'], ['pg + cbdec', '[BOLD] 38.87', '[BOLD] 16.93', '[BOLD] 35.38']] | Model Capacity: To validate and sanity-check that the improvements are the result of the inclusion of our closed-book decoder and not due to some trivial effects of having two decoders or larger model capacity (more parameters), we train a variant of our model with two duplicated (initialized to be different) attention-pointer decoders. We also evaluate a pointer-generator baseline with 2-layer encoder and decoder (pg-2layer) and increase the LSTM hidden dimension and word embedding dimension of the pointer-generator baseline (pg-big) to exceed the total number of parameters of our 2-decoder model (34.5M versus 34.4M parameters). Moreover, as shown in Sec. |
Closed-Book Training to Improve Summarization Encoder Memory | 1809.04585 | Table 1: ROUGE F1 and METEOR scores (non-coverage) on CNN/Daily Mail test set of previous works and our models. ‘pg’ is the pointer-generator baseline, and ‘pg + cbdec’ is our 2-decoder model with closed-book decoder (cbdec). The model marked with ⋆ is trained and evaluated on the anonymized version of the data. | ['[EMPTY]', 'ROUGE 1', 'ROUGE 2', 'ROUGE L', 'MTR Full'] | [['previous works', 'previous works', 'previous works', 'previous works', 'previous works'], ['⋆(Nallapati16)', '35.46', '13.30', '32.65', '[EMPTY]'], ['pg (See17)', '36.44', '15.66', '33.42', '16.65'], ['our models', 'our models', 'our models', 'our models', 'our models'], ['pg (baseline)', '36.70', '15.71', '33.74', '16.94'], ['pg + cbdec', '38.21', '16.45', '34.70', '18.37'], ['RL + pg', '37.02', '15.79', '34.00', '17.55'], ['RL + pg + cbdec', '[BOLD] 38.58', '[BOLD] 16.57', '[BOLD] 35.03', '[BOLD] 18.86']] | We first report our evaluation results on the CNN/Daily Mail dataset. In the reinforced setting, our 2-decoder model still maintains a significant (p<0.001) advantage in all metrics over the pointer-generator baseline. All reported scores have a 95% ROUGE-significance interval of at most ±0.25. |
Closed-Book Training to Improve Summarization Encoder Memory | 1809.04585 | Table 2: ROUGE F1 and METEOR scores (with-coverage) on the CNN/Daily Mail test set. Coverage mechanism See et al. (2017) is used in all models except the RL model Paulus et al. (2018). The model marked with ⋆ is trained and evaluated on the anonymized version of the data. | ['[EMPTY]', 'ROUGE 1', 'ROUGE 2', 'ROUGE L', 'MTR Full'] | [['previous works', 'previous works', 'previous works', 'previous works', 'previous works'], ['pg (See17)', '39.53', '17.28', '36.38', '18.72'], ['RL⋆ (Paulus17)', '39.87', '15.82', '36.90', '[EMPTY]'], ['our models', 'our models', 'our models', 'our models', 'our models'], ['pg (baseline)', '39.22', '17.02', '35.95', '18.70'], ['pg + cbdec', '40.05', '17.66', '36.73', '19.48'], ['RL + pg', '39.59', '17.18', '36.16', '19.70'], ['RL + pg + cbdec', '[BOLD] 40.66', '[BOLD] 17.87', '[BOLD] 37.06', '[BOLD] 20.51']] | In the reinforced setting, our 2-decoder model (RL + pg + cbdec) outperforms our strong RL baseline (RL + pg) by a considerable margin (stat. significance of p<0.001). Fig. |
Closed-Book Training to Improve Summarization Encoder Memory | 1809.04585 | Table 6: ROUGE F1 scores of ablation studies, evaluated on CNN/Daily Mail validation set. | ['[EMPTY]', 'ROUGE 1', 'ROUGE 2', 'ROUGE L'] | [['Fixed-encoder ablation', 'Fixed-encoder ablation', 'Fixed-encoder ablation', 'Fixed-encoder ablation'], ['pg baseline’s encoder', '37.59', '16.27', '34.33'], ['2-decoder’s encoder', '[BOLD] 38.44', '[BOLD] 16.85', '[BOLD] 35.17'], ['Gradient-Flow-Cut ablation', 'Gradient-Flow-Cut ablation', 'Gradient-Flow-Cut ablation', 'Gradient-Flow-Cut ablation'], ['pg baseline', '37.73', '16.52', '34.49'], ['stop ①', '37.72', '16.58', '34.54'], ['stop ②', '[BOLD] 38.35', '[BOLD] 16.79', '[BOLD] 35.13']] | Fixed-Encoder Ablation: Next, we conduct an ablation study in order to prove the qualitative superiority of our 2-decoder model’s encoder to the baseline encoder. To do this, we train two pointer-generators with randomly initialized decoders and word embeddings. For the first model, we restore the pre-trained encoder from our pointer-generator baseline; for the second model, we restore the pre-trained encoder from our 2-decoder model. We then fix the encoder’s parameters for both models during the training, only updating the embeddings and decoders with gradient descent. Since these two models have the exact same structure with only the encoders initialized according to different pre-trained models, the significant improvements in metric scores suggest that our 2-decoder model does have a stronger encoder than the pointer-generator baseline. Gradient-Flow-Cut Ablation: We further design another ablation test to identify how the gradients from the closed-book decoder influence the entire model during training. Fig. As we can see, the closed-book decoder only depends on the word embeddings and encoder. Therefore it can affect the entire model during training by influencing either the encoder or the word-embedding matrix. When we stop the gradient flow between the encoder and closed-book decoder (① in Fig. This proves that the gradients back-propagated from closed-book decoder to the encoder can strengthen the entire model, and hence verifies the gradient-flow intuition discussed in introduction (Sec. |
Towards Unsupervised Grammatical Error Correction using Statistical Machine Translation with Synthetic Comparable Corpus | 1907.09724 | Table 3: The effect of source languages for comparable corpus creation. These News Crawl corpora are the 2017 versions. The number of sentences in each dataset is approximately 20M. These results are obtained by USMTforward in iter 1. | ['Src', 'Precision', 'Recall', 'F0.5'] | [['Fi News Crawl', '29.17', '28.52', '29.04'], ['Ru News Crawl', '27.11', '29.84', '27.62'], ['Fr News Crawl', '25.05', '30.27', '25.94'], ['De News Crawl', '23.26', '26.04', '25.04']] | We also examine how source languages of machine translation affect performance. The output using Finnish data achieves the best score among the various languages; the more similar to English the source-side data is, the lower the F0.5 score of the output. |
Towards Unsupervised Grammatical Error Correction using Statistical Machine Translation with Synthetic Comparable Corpus | 1907.09724 | Table 2: M2 and GLEU results. The bold scores represent the best score in unsupervised SMT. The underlined scores represent the best overall score. | ['[EMPTY]', 'iter', 'CoNLL-14 (M2) P', 'CoNLL-14 (M2) R', 'CoNLL-14 (M2) F0.5', 'JFLEG GLEU'] | [['No edit', '-', '-', '-', '-', '40.54'], ['Supervised NMT', '-', '53.11', '26.47', '44.21', '54.04'], ['Supervised SMT', '-', '43.02', '33.18', '40.61', '55.93'], ['Unsupervised SMT', '0', '21.82', '[BOLD] 36.75', '23.75', '49.94'], ['w/ forward_refine', '1', '[BOLD] 25.92', '32.65', '[BOLD] 27.04', '[BOLD] 50.65'], ['[EMPTY]', '2', '25.58', '31.02', '26.51', '50.19'], ['[EMPTY]', '3', '23.95', '33.13', '24.54', '50.40'], ['w/ backward_refine', '1', '22.39', '33.39', '23.97', '49.02'], ['[EMPTY]', '2', '24.96', '27.13', '25.36', '48.90'], ['[EMPTY]', '3', '26.07', '21.01', '24.87', '48.75']] | The F0.5 score for USMTforward in iter 1 is 13.57 points lower than that of supervised SMT and 17.17 points lower than that of supervised NMT. On JFLEG, the highest score was achieved with USMTforward in iter 1 among the unsupervised SMT models; its GLEU scores are 5.28 points and 3.39 points lower than those of supervised SMT and supervised NMT, respectively. |
Towards Unsupervised Grammatical Error Correction using Statistical Machine Translation with Synthetic Comparable Corpus | 1907.09724 | Table 4: GEC results with W&I+LOCNESS test data. | ['Team', 'TP', 'FP', 'FN', 'P', 'R', 'F0.5'] | [['UEDIN-MS', '2,312', '982', '2,506', '70.19', '47.99', '64.24'], ['Kakao&Brain', '2,412', '1,413', '2,797', '63.06', '46.30', '58.80'], ['LAIX', '1,443', '884', '3,175', '62.01', '31.25', '51.81'], ['CAMB-CUED', '1,814', '1,450', '2,956', '55.58', '38.03', '50.88'], ['UFAL, Charles University, Prague', '1,245', '1,222', '2,993', '50.47', '29.38', '44.13'], ['Siteimprove', '1,299', '1,619', '3,199', '44.52', '28.88', '40.17'], ['WebSpellChecker.com', '2,363', '3,719', '3,031', '38.85', '43.81', '39.75'], ['TMU', '1,638', '4,314', '3,486', '27.52', '31.97', '28.31'], ['Buffalo', '446', '1,243', '3,556', '26.41', '11.14', '20.73']] | The F0.5 score for our system (TMU) is 28.31; this score is eighth among the nine teams. In particular, the number of false positives of our system is 4,314; this is the worst result of all. |
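The P, R, and F0.5 columns in the three GEC records above follow directly from the TP/FP/FN counts; a minimal sketch of the standard F-beta arithmetic (not the shared-task scorer itself):

```python
def f_beta(tp, fp, fn, beta=0.5):
    """Precision, recall, and F_beta from correction counts.
    F0.5 (beta=0.5) weights precision twice as heavily as recall,
    the conventional metric for grammatical error correction."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    f = (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
    return p, r, f
```

For the UEDIN-MS row of the W&I+LOCNESS record (TP=2,312, FP=982, FN=2,506) this reproduces P=70.19, R=47.99, F0.5=64.24, and it also explains why the TMU system's large false-positive count (4,314) drags its precision, and hence its F0.5, down so sharply.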
A Hybrid Retrieval-Generation Neural Conversation Model | 1904.09068 | Table 4. The hyper-parameter settings in the generation-based baselines and the generation module in the proposed hybrid neural conversation model. These settings are the optimized settings tuned with the validation data. | ['Models', 'Seq2Seq', 'Seq2Seq-Facts'] | [['Embedding size', '512', '256'], ['# LSTM layers in encoder/decoder', '2', '2'], ['LSTM hidden state size', '512', '256'], ['Learning rate', '0.0001', '0.001'], ['Learning rate decay', '0.5', '0.5'], ['# Steps between validation', '10000', '5000'], ['Patience of early stopping', '10', '10'], ['Dropout', '0.3', '0.3']] | Parameter Settings. Hyper-parameters are tuned with the validation data. For the hyper-parameter settings in the hybrid ranking module, we set the window size of the convolution and pooling kernels as (6,6). The number of convolution kernels is 64. The dropout rate is set to 0.5. The margin in the pairwise-ranking hinge loss is 1.0. The distant supervision signals and the number of positive samples per context in the hybrid ranking module are tuned with validation data. The distant supervision signal used is BLEU-1, and we treat the top 3 response candidates ranked by BLEU-1 as positive samples. All models are trained on a single Nvidia Titan X GPU by stochastic gradient descent with the Adam (DBLP: journals/corr/KingmaB14) algorithm. The initial learning rate is 0.0001. The parameters of Adam, β1 and β2, are 0.9 and 0.999 respectively. The batch size is 500. The maximum conversation context/response length is 30. |
A Hybrid Retrieval-Generation Neural Conversation Model | 1904.09068 | Table 9. The response generation performance when we vary the ratios of positive samples in distant supervision. | ['Model', 'Supervision # Positive', 'BLEU-1 BLEU', 'BLEU-1 ROUGE-L', 'BLEU-2 BLEU', 'BLEU-2 ROUGE-L', 'ROUGE-L BLEU', 'ROUGE-L'] | [['HybridNCM-RS', 'k’=1', '0.9022', '8.9596', '0.7547', '8.8351', '1.0964', '8.9234'], ['HybridNCM-RS', 'k’=2', '1.0649', '9.7241', '1.1099', '9.9168', '1.1019', '9.6216'], ['HybridNCM-RS', 'k’=3', '[BOLD] 1.3450', '[BOLD] 10.4078', '[BOLD] 1.1165', '[BOLD] 10.1584', '[BOLD] 1.1435', '[BOLD] 10.0928'], ['HybridNCM-RSF', 'k’=1', '1.0223', '9.2996', '[BOLD] 1.1027', '9.2453', '1.0035', '9.2812'], ['HybridNCM-RSF', 'k’=2', '1.3284', '9.8637', '1.0175', '9.8562', '[BOLD] 1.0999', '[BOLD] 9.8061'], ['HybridNCM-RSF', 'k’=3', '[BOLD] 1.3695', '[BOLD] 10.3445', '0.8239', '[BOLD] 9.8575', '0.9838', '9.7961']] | We further analyze the impact of the ratios of positive/ negative training samples on the response generation performance. The value of k′ is the number of positive response candidates for each conversation context when we train the hybrid ranking module. When k′=1, we select one positive candidate from the ground truth responses in the training data, which is equivalent to the negative sampling technique. As k′ increases, we construct the positive candidates by selecting one positive sample from the ground truth responses and k′−1 positive samples from the top ranked candidates by distant supervision. We find that larger k′ can improve the response generation performance. This is reasonable since larger k′ means the model can observe more positive training samples and positive/ negative response pairs in the pairwise ranking loss minimization process. However, increasing the value of k′ also adds risks of introducing noisy positive training data. Thus, the value of k′ is a hyper-parameter, and needs to be tweaked via trial and error. |
A Hybrid Retrieval-Generation Neural Conversation Model | 1904.09068 | Table 10. The response generation performance when we vary different distant supervision signals. This table shows the results for the setting “k’=3”, where there are 3 positive response candidates for each conversation context. “SentBLEU” denotes using sentence-level BLEU scores as distant supervision signals. | ['Model Supervision', 'HybridNCM-RS BLEU', 'HybridNCM-RS ROUGE-L', 'HybridNCM-RSF BLEU', 'HybridNCM-RSF ROUGE-L'] | [['BLEU-1', '[BOLD] 1.3450', '[BOLD] 10.4078', '[BOLD] 1.3695', '[BOLD] 10.3445'], ['BLEU-2', '1.1165', '10.1584', '0.8239', '9.8575'], ['ROUGE-L', '1.1435', '10.0928', '0.9838', '9.7961'], ['SentBLEU', '0.8326', '9.2887', '1.0631', '9.6338']] | We find that distant supervision signals like BLEU-1 are quite effective for training the hybrid ranking module. The sentence-level BLEU is not a good choice for the distant supervision signal. The reason is that the sentence-level BLEU is computed only based on the n-gram precision statistics for a given sentence pair. This score has a larger variance compared with the corpus-level BLEU: sentence-level BLEU scores become very small smoothed values if there are no 4-gram or trigram matches between two sentences, which happens frequently in short text pairs. |
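The brittleness of sentence-level BLEU described above can be seen directly from the modified n-gram precisions: for short pairs, higher-order matches vanish entirely. A minimal pure-Python sketch (not the paper's implementation; the example sentences are invented):

```python
from collections import Counter

def ngram_precision(hyp, ref, n):
    """Clipped n-gram precision for a single hypothesis/reference pair."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    total = sum(hyp_ngrams.values())
    return overlap / total if total else 0.0

hyp = "the cat on mat".split()
ref = "the cat sat on the mat".split()
precisions = [ngram_precision(hyp, ref, n) for n in (1, 2, 3, 4)]
# Unigram precision is high, but the 3- and 4-gram precisions are zero,
# so unsmoothed sentence-level BLEU-4 collapses to 0; smoothing only
# replaces the zeros with very small values.
```

Corpus-level BLEU avoids this collapse because match counts are pooled over all sentence pairs before the geometric mean is taken.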
Contextual Graph Attention for Answering Logical Queries over Incomplete Knowledge Graphs | 1910.00084 | Table 2. Macro-average AUC and APR over test queries with different DAG structures are used to evaluate the performance. All and H-Neg. denote macro-averaged across all query types and query types with hard negative sampling (see Section 3.2.3). | ['Dataset Metric', 'Bio AUC', 'Bio AUC', 'Bio APR', 'Bio APR', 'DB18 AUC', 'DB18 AUC', 'DB18 APR', 'DB18 APR', 'WikiGeo19 AUC', 'WikiGeo19 AUC', 'WikiGeo19 APR', 'WikiGeo19 APR'] | [['[EMPTY]', 'All', 'H-Neg', 'All', 'H-Neg', 'All', 'H-Neg', 'All', 'H-Neg', 'All', 'H-Neg', 'All', 'H-Neg'], ['Billinear[mean_simple]', '81.65', '67.26', '82.39', '70.07', '82.85', '64.44', '85.57', '71.72', '81.82', '60.64', '82.35', '64.22'], ['Billinear[min_simple]', '82.52', '69.06', '83.65', '72.7', '82.96', '64.66', '86.22', '73.19', '82.08', '61.25', '82.84', '64.99'], ['TransE[mean]', '80.64', '73.75', '81.37', '76.09', '82.76', '65.74', '85.45', '72.11', '80.56', '65.21', '81.98', '68.12'], ['TransE[min]', '80.26', '72.71', '80.97', '75.03', '81.77', '63.95', '84.42', '70.06', '80.22', '64.57', '81.51', '67.14'], ['GQE[mean]', '83.4', '71.76', '83.82', '73.41', '83.38', '65.82', '85.63', '71.77', '83.1', '63.51', '83.81', '66.98'], ['GQE[min]', '83.12', '70.88', '83.59', '73.38', '83.47', '66.25', '86.09', '73.19', '83.26', '63.8', '84.3', '67.95'], ['GQE+KG[min]', '83.69', '72.23', '84.07', '74.3', '84.23', '68.06', '86.32', '73.49', '83.66', '64.48', '84.73', '68.51'], ['[BOLD] CGA+KG+1[min]', '84.57', '74.87', '85.18', '77.11', '84.31', '67.72', '87.06', '74.94', '83.91', '64.83', '85.03', '69'], ['[BOLD] CGA+KG+4[min]', '[BOLD] 85.13', '[BOLD] 76.12', '85.46', '[BOLD] 77.8', '84.46', '67.88', '87.05', '74.66', '83.96', '64.96', '85.36', '69.64'], ['[BOLD] CGA+KG+8[min]', '85.04', '76.05', '[BOLD] 85.5', '77.76', '[BOLD] 84.67', '[BOLD] 68.56', '[BOLD] 87.29', '[BOLD] 75.23', '[BOLD] 84.15', '[BOLD] 65.23', '[BOLD] 85.69', 
'[BOLD] 70.28'], ['Relative Δ over GQE', '2.31', '[BOLD] 7.29', '2.28', '[BOLD] 5.97', '1.44', '[BOLD] 3.49', '1.39', '[BOLD] 2.79', '1.07', '[BOLD] 2.24', '1.65', '[BOLD] 3.43']] | We use the ROC AUC score and average percentile rank (APR) as two evaluation metrics. All evaluation results are macro-averaged across queries with different DAG structures. The hyper-parameters for the baseline models GQE are tuned using grid search and the best ones are selected. Then we follow the practice of Hamilton et al. We use the Adam optimizer for model optimization. The overall delta of CGA over GQE reported in Tab. This is because CGA will significantly outperform GQE in query types with intersection structures, e.g., the 9th query type in Fig. Macro-average computation over all query types makes the improvement less obvious. In order to compare the performance of different models on different query structures (different query types), we show the individual AUC and APR scores on each query type in three datasets for all models. To highlight the difference, we subtract the minimum score from the other scores in each figure. We can see that our model consistently outperforms the baseline models in almost all query types on all datasets except for the sixth and tenth query types. In both of these query types, GQE+KG[min] has the best performance. The advantage of our attention-based models is more obvious for query types with the hard negative sampling strategy. For example, as for the 9th query type (Hard-3-inter) in Fig. Note that this query type has the largest number of neighboring nodes (3 nodes), which shows that our attention mechanism becomes more effective when a query type contains more neighboring nodes in an intersection structure. This indicates that the attention mechanism as well as the original KG training phase are effective in discriminating the correct answer from misleading answers. |
How would you say that? Pictures elicit better NLG data from the crowd. | 1608.00339 | Table 3: Human evaluation of the data collected with each MR (** = p<0.01 and *** = p<0.001 for Pictorial versus Textual conditions). Italics denote averages across all numbers of attributes. | ['[EMPTY]', '[BOLD] Textual MR Mean', '[BOLD] Textual MR StDev', '[BOLD] Pictorial MR Mean', '[BOLD] Pictorial MR StDev'] | [['[ITALIC] Informativeness', '[ITALIC] 4.28**', '[ITALIC] 1.54', '[ITALIC] 4.51**', '[ITALIC] 1.37'], ['3 attributes', '4.02', '1.39', '4.11', '1.32'], ['5 attributes', '4.31', '1.54', '4.46', '1.36'], ['8 attributes', '4.52', '1.65', '4.98', '1.29'], ['[ITALIC] Naturalness', '[ITALIC] 4.09***', '[ITALIC] 1.56', '[ITALIC] 4.43***', '[ITALIC] 1.35'], ['3 attributes', '4.13', '1.47', '4.35', '1.29'], ['5 attributes', '4.07', '1.56', '4.41', '1.36'], ['8 attributes', '4.07', '1.65', '4.55', '1.42'], ['[ITALIC] Phrasing', '[ITALIC] 4.01***', '[ITALIC] 1.69', '[ITALIC] 4.40***', '[ITALIC] 1.52'], ['3 attributes', '4.01', '1.62', '4.37', '1.47'], ['5 attributes', '4.04', '1.70', '4.28', '1.57'], ['8 attributes', '3.98', '1.75', '4.53', '1.54']] | Informativeness was defined (on the questionnaires) as whether the utterance “provides enough useful information about the venue”. A two-way ANOVA was conducted to examine the effect of MR modality and the number of attributes on the perceived Informativeness. There was no statistically significant interaction between the effects of modality and the number of attributes in the MR, F(2,1236) = 1.79, p=0.17. A main effects analysis showed that the average Informativeness of utterances elicited through the pictorial method (4.51) was significantly higher than that of utterances elicited using the textual/logical modality (4.28), with p<0.01. This is an increase of 0.23 points on the 6-point scale (=4.6%) in average Informativeness rating for the pictorial condition. 
Naturalness was defined (on the questionnaires) as whether the utterance “could have been produced by a native speaker”. A two-way ANOVA was conducted to examine the effects of MR modality and the number of attributes on the perceived Naturalness. There was no statistically significant interaction between the effects of modality and the number of attributes in the MR, F(2,1236) = 0.73, p=0.48. A main effects analysis showed that the average Naturalness of utterances elicited using the pictorial modality (4.43) was significantly higher than that of utterances elicited using the textual/logical modality (4.09), with p<0.001. This is an increase of about 0.34 points on the scale (=6.8%) for average Naturalness rating for the pictorial condition. A two-way ANOVA was conducted to examine the effect of MR modality and the number of attributes on the perceived Phrasing. There was no statistically significant interaction between the effects of modality and the number of attributes in MR, F(2,1236) = 0.85 , p=0.43. A main effects analysis showed that the average Phrasing score for the utterances elicited using the pictorial modality was significantly higher than that of the utterances elicited using the textual/logical modality, with p<0.001. This is an increase of +0.39 points (about 7.8%) in average Phrasing rating for the pictorial condition. |
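The main-effects analyses above rest on comparing between-group to within-group variance. The paper runs two-way ANOVAs (modality × number of attributes); as a simpler runnable illustration of the same logic for a single factor, here is a pure-Python one-way ANOVA F statistic on hypothetical ratings (the data values are invented, not the study's):

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical ratings: pictorial scores shifted ~0.4 above textual
textual = [4.0, 4.2, 4.1, 4.3]
pictorial = [4.4, 4.6, 4.5, 4.7]
F = one_way_anova_F(textual, pictorial)
```

A large F (relative to the F distribution with the given degrees of freedom) corresponds to the small p-values reported for the modality main effect; the full two-way analysis additionally partitions variance for the second factor and the interaction term.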
How would you say that? Pictures elicit better NLG data from the crowd. | 1608.00339 | Table 2: Nature of the data collected with each MR. Italics denote averages across all numbers of attributes. | ['[EMPTY]', '[BOLD] Textual MR Mean', '[BOLD] Textual MR StDev', '[BOLD] Pictorial MR Mean', '[BOLD] Pictorial MR StDev'] | [['[ITALIC] Time, sec', '[ITALIC] 347.18', '[ITALIC] 301.74', '[ITALIC] 352.05', '[ITALIC] 249.34'], ['3 attributes', '283.37', '265.82', '298.97', '272.44'], ['5 attributes', '321.75', '290.89', '355.56', '244.57'], ['8 attributes', '433.41', '325.04', '405.56', '215.43'], ['[ITALIC] Length, char', '[ITALIC] 100.83', '[ITALIC] 46.40', '[ITALIC] 93.06', '[ITALIC] 37.78'], ['3 attributes', '61.25', '19.44', '67.98', '22.30'], ['5 attributes', '95.18', '26.71', '91.13', '21.19'], ['8 attributes', '144.79', '41.84', '121.94', '40.13'], ['[ITALIC] No of sentences', '[ITALIC] 1.43', '[ITALIC] 0.69', '[ITALIC] 1.31', '[ITALIC] 0.54'], ['3 attributes', '1.06', '0.24', '1.07', '0.25'], ['5 attributes', '1.37', '0.51', '1.25', '0.49'], ['8 attributes', '1.84', '0.88', '1.63', '0.64']] | A two-way ANOVA was conducted to examine the effect of MR modality and the number of attributes on average task duration. The difference between two modalities was not significant, with p=0.76. There was no statistically significant interaction between the effects of modality and the number of attributes in the MR, on time taken to collect the data. A main effects analysis showed that the average duration of utterance creation was significantly longer for larger numbers of attributes, F(2,1236) = 24.99, p<0.001, as expected. A two-way ANOVA was conducted to examine the effect of MR modality and the number of attributes on the length of utterance. There was a statistically significant interaction between the effects of modality and the number of attributes in the MR, F(2,1236) = 23.74, p<0.001. 
A main effects analysis showed that the average length of utterance was significantly larger not only for a larger number of attributes, with p<0.001, but also for the utterances created based on a textual/logical MR which had a higher number of attributes, p<0.001. A two-way ANOVA was conducted to examine the effect of MR modality and the number of attributes on the number of sentences per utterance. There was a statistically significant interaction between the effects of modality and the number of attributes in the MR, F(2,1236) = 3.83, p<0.05. A main effects analysis showed that the average number of sentences was significantly larger not only for a larger number of attributes, with p<0.001, but also for the utterances created based on a textual/logical MR which had a higher number of attributes, p<0.001. |
Feature Assisted bi-directional LSTM Model for Protein-Protein Interaction Identification from Biomedical Texts | 1807.02162 | Table 6: Comparative results of the proposed model (sdpLSTM) with different baselines and state-of-the-art systems. Ref. choi2016extraction ∗ and li2015approach ∗ denote the reimplementation of the systems proposed in choi2016extraction and li2015approach with the authors reported experimental setups. | ['[BOLD] Model', '[BOLD] Approach', '[BOLD] AIMED [BOLD] Precision', '[BOLD] AIMED [BOLD] Recall', '[BOLD] AIMED [BOLD] F1-Score', '[BOLD] BioInfer [BOLD] Precision', '[BOLD] BioInfer [BOLD] Recall', '[BOLD] BioInfer [BOLD] F1-Score'] | [['Baseline 1', 'MLP (SDP+Feature Embedding)', '59.73', '75.93', '66.46', '68.56', '72.05', '70.22'], ['Baseline 2', 'RNN (SDP+Feature Embedding)', '66.23', '74.72', '70.22', '71.89', '74.59', '73.21'], ['Proposed Model', 'sdpLSTM (SDP+Feature Embedding)', '91.10', '82.2', '86.45', '72.40', '83.10', '77.35'], ['hua2016shortest ', 'sdpCNN (SDP+CNN)', '64.80', '67.80', '66.00', '73.40', '77.00', '75.20'], ['choi2016extraction ', 'DCNN (CNN+word/position embeddings+ Semantic (WordNet) feature embeddings)', '-', '-', '85.2', '-', '-', '-'], ['choi2016extraction ∗', 'DCNN', '88.61', '81.72', '85.03', '72.05', '77.51', '74.68'], ['qian2012tree ', 'Single kernel+ Multiple Parser+SVM', '59.10', '57.60', '58.10', '63.61', '61.24', '62.40'], ['peng2017deep ', 'McDepCNN (CNN+word+PoS+Chunk+NEs Multi-channel embedding)', '67.3', '60.1', '63.5', '62.7', '68.2', '65.3'], ['zhao2016protein ', 'Deep neutral network', '51.5', '63.4', '56.1', '53.9', '72.9', '61.6'], ['tikk2010comprehensive ', 'All-path graph kernel', '49.2', '64.6', '55.3', '53.3', '70.1', '60.0'], ['li2015approach ', 'Multiple kernel+ Word Embedding+ SVM', '-', '-', '69.70', '-', '-', '74.0'], ['li2015approach ∗', 'Multiple kernel+ Word Embedding+ SVM', '67.18', '69.35', '68.25', '72.33', '74.94', '73.61'], ['choi2010simplicity ', 'Tuned tree kernels +SVM', 
'72.80', '62.10', '67.00', '74.5', '70.9', '72.60']] | We perform 10-fold cross-validation on both datasets. With no official development set available, cross-validation seems to be the most reliable way of evaluating our proposed model. To evaluate performance, we use standard recall, precision, and F1-score. The results clearly show the effectiveness of our proposed sdpLSTM-based model over the other models, whether they explore neural network architectures or conventional kernel- or SDP-based machine learning. Statistical significance tests verify that the improvements over both baselines are statistically significant (p-value < 0.05). Our proposed model obtains significant F1-score improvements of 19.99 and 16.23 points over the first two baselines on the AIMed dataset, respectively. On the BioInfer dataset, our system shows F1-score improvements of 7.13 and 4.14 points over these two baselines, respectively. For comparative analysis with existing approaches, we choose recent approaches exploiting neural network models. We observe that sdpLSTM performs significantly better than all the state-of-the-art techniques on both datasets. We further observe that incorporating the latent features embedded into the neural network architecture improves system performance. |
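The 10-fold cross-validation protocol used above can be sketched as a simple index partition. This is a generic sketch (scikit-learn's `KFold` is the usual tool), not the authors' code:

```python
def k_fold_indices(n, k=10):
    """Partition indices 0..n-1 into k contiguous folds (sizes differing
    by at most 1), yielding (train, test) index lists for each fold."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(25, k=10))
# Each example appears in exactly one test fold; the reported score is
# the average over the 10 held-out folds.
```

In practice the indices would first be shuffled, and for class-imbalanced PPI data a stratified split is preferable so each fold preserves the positive/negative ratio.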
Feature Assisted bi-directional LSTM Model for Protein-Protein Interaction Identification from Biomedical Texts | 1807.02162 | Table 5: Comparison of different activation functions with the same hyperparameter values | ['[BOLD] Activation Function', '[BOLD] F-Score (AiMed)', '[BOLD] F-Score (BioInfer)'] | [['Sigmoid', '86.45', '77.35'], ['ReLU', '85.77', '76.92'], ['tanh', '84.40', '75.79']] | We set up all the experiments by varying the hyper-parameter values and analyzing the behavior of our model. For the AIMed dataset, we observed that adding LSTM units improves model performance up to a point; thereafter, it decreases gradually. We fix its optimal value at 64 via a cross-validation experiment. A deep MLP layer helps improve the overall performance of the model compared to a shallow MLP layer, as shown in Figure 3. However, this improvement depends on the size of the output layer: performance increases initially as the size grows from 20 to 30 and then starts decreasing as the size is increased to 50. We also notice that stacking another MLP layer makes our model more complex, reducing performance in the cross-validation setting. On the BioInfer dataset, we observe quite similar trends with the addition of LSTM units, the size of the MLP output layer, and the stacking of another MLP layer. We also analyze model performance as a function of the number of training epochs on both datasets. On the AIMed dataset, the F1-score initially dips as the number of epochs increases from 1 to 2, then grows steadily from 2 to 130 epochs, and finally drops again as the number of epochs is further increased to 135 and 140. This can be attributed to the fact that training for a very large number of epochs over-fits our model, so cross-validation accuracy decreases. 
There is also an initial decline in F1-score on the BioInfer dataset, followed by a steady increase as the number of epochs grows. We achieve optimal performance with the same number of epochs (130) for both datasets. To further compare ReLU with sigmoid, we conducted experiments with both activation functions on both datasets; on both, the sigmoid function was found to be superior to ReLU. (Figures: effect of varying MLP layers and their output sizes; effect of varying epochs on F1-score.) |
When and Why is Document-level Context Useful in Neural Machine Translation? | 1910.00294 | Table 3: Comparison of document-level model architectures and complexity. | ['Approach', 'Context Encoder Architecture', 'Context Encoder #layers', '[BOLD] en-it Bleu [%]', '[BOLD] en-it Ter [%]', '[BOLD] en-de Bleu [%]', '[BOLD] en-de Ter [%]'] | [['Baseline', '⋅', '⋅', '31.4', '56.1', '28.9', '61.8'], ['Single-Encoder', 'Transformer', '6', '31.5', '57.2', '28.9', '61.4'], ['Multi-Encoder (Out.)', 'Transformer', '6', '31.3', '56.1', '29.1', '61.4'], ['Multi-Encoder (Seq.)', 'Transformer', '6', '32.6', '55.2', '29.9', '60.7'], ['Multi-Encoder (Para.)', 'Transformer', '6', '[BOLD] 32.7', '[BOLD] 54.7', '30.1', '60.3'], ['Multi-Encoder (Para.)', 'Transformer', '2', '32.6', '55.2', '30.2', '60.5'], ['Multi-Encoder (Para.)', 'Transformer', '1', '32.2', '55.8', '30.0', '60.4'], ['Multi-Encoder (Para.)', 'Word Embedding', '⋅', '32.5', '54.8', '[BOLD] 30.3', '[BOLD] 59.9']] | Model Architecture. The tested methods are equal or closest to the following. Single-Encoder: agrawal2018contextual. Integration outside the decoder: voita2018context, without sharing the encoder hidden layers over current/context sentences. Integration inside the decoder with sequential attention: decoder integration of zhang2018improving with the order of attentions (current/context) switched. Parallel attention: gating version of bawden2018evaluating. Context encoding without any sequential modeling (the last row) indeed shows performance comparable to using a full 6-layer encoder. This simplified encoding eases the memory-intensive document-level training by having 22% fewer model parameters, which allows us to adopt a larger batch size without accumulating gradients. For the remainder of this paper, we stick to the multi-encoder approach with parallel attention components in the decoder, restricting the context encoding to word embeddings only. |
When and Why is Document-level Context Useful in Neural Machine Translation? | 1910.00294 | Table 4: Comparison of context word filtering methods. | ['Context sentence', '[BOLD] en-it Bleu [%]', '[BOLD] en-it Ter [%]', '[BOLD] en-de Bleu [%]', '[BOLD] en-de Ter [%]', '#tokens'] | [['None', '31.4', '56.1', '28.9', '61.8', '-'], ['Full sentence', '32.5', '54.8', '30.3', '59.9', '100%'], ['Remove stopwords', '32.2', '55.2', '30.3', '59.9', '63%'], ['Remove most frequent words', '32.1', '55.6', '30.2', '60.2', '51%'], ['Retain only named entities', '32.3', '55.4', '30.3', '60.3', '[BOLD] 13%'], ['Retain specific POS', '[BOLD] 32.5', '[BOLD] 55.2', '[BOLD] 30.4', '[BOLD] 60.0', '59%']] | All filtering methods shrink the context input drastically without a significant loss of performance. Each method has its own motivation to retain only useful tokens in the context; the results show that they are all reasonable in practice. In particular, using only named entities as context input, we achieve the same level of improvement with only 13% of the tokens in the full context sentences. By filtering words in the context sentences, we can use more examples in each batch for robust training. |
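The filtering methods compared above all reduce the context to a subset of its tokens and track the retained-token ratio. A minimal sketch of the stopword-removal variant; the stopword set and example sentence below are hypothetical, and the paper's actual filters (stopword lists, frequency cutoffs, NER, POS tags) rely on real linguistic resources:

```python
# Hypothetical stopword list, for illustration only
STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "in", "on", "it"}

def filter_context(tokens, stopwords=STOPWORDS):
    """Drop stopwords from a context sentence; also report what fraction
    of the original tokens is retained (cf. the #tokens column)."""
    kept = [t for t in tokens if t.lower() not in stopwords]
    return kept, len(kept) / len(tokens)

kept, ratio = filter_context("the committee voted on the proposal in May".split())
```

The same skeleton covers the other filters by swapping the membership test, e.g. keeping only tokens tagged as named entities or as specific POS categories.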
Suicidal Ideation and Mental Disorder Detection with Attentive Relation Networks | 2004.07601 | Table 4: Statistical information of SuicideWatch and mental health related subreddits, i.e., SWMH dataset | ['Subreddit', '#/% of train', '#/% of valid.', '#/% of test'] | [['depression', '11,940/34.29', '3,032/34.83', '3,774/34.68'], ['SuicideWatch', '6,550/18.81', '1,614/18.54', '2,018/18.54'], ['Anxiety', '6,136/17.62', '1,508/17.32', '1,911/17.56'], ['offmychest', '5,265/15.12', '1,332/15.30', '1,687/15.50'], ['bipolar', '4,932/14.16', '1,220/14.01', '1,493/13.72']] | As severe mental health issues are very likely to lead to suicidal ideation, we also collect another dataset from mental-health-related subreddits on Reddit.com to further the study of mental disorders and suicidal ideation. We name this dataset the Reddit SuicideWatch and Mental Health collection, or SWMH for short; its discussions comprise suicide-related intentions and mental disorders such as depression, anxiety, and bipolar disorder. This collection contains a total of 54,412 posts. |
Suicidal Ideation and Mental Disorder Detectionwith Attentive Relation Networks | 2004.07601 | Table 5: Selected linguistic statistical information of UMD dataset extracted by LIWC | ['Linguistic clues', 'Label -1', 'Label 0', 'Label 1'] | [['positive emotion', '3.30', '3.12', '2.96'], ['negative emotion', '1.56', '2.30', '2.74'], ['anxiety', '0.17', '0.33', '0.41'], ['sadness', '0.28', '0.50', '0.68'], ['family', '0.29', '0.39', '0.47'], ['friend', '0.43', '0.56', '0.54'], ['work', '2.54', '1.92', '1.80'], ['money', '1.13', '0.71', '0.61'], ['death', '0.22', '0.29', '0.36'], ['swear words', '0.23', '0.33', '0.40']] | We have a brief exploratory analysis on the data. Some selected linguistic statistical information of UMD dataset extracted by Linguistic Inquiry and Word Count software (LIWC) The risk of suicide increases among labels of -1, 0, and 1. The linguistic inquiry results show that negative emotion, anxiety, and sadness are expressed more in posts with high-level suicide risk. The same trend exists in family issues, death-related mentions and swear words. Naturally, positive emotions are less presented in posts with high suicide risk. |
Suicidal Ideation and Mental Disorder Detectionwith Attentive Relation Networks | 2004.07601 | Table 7: Comparison of different models on Reddit SWMH collection, where precision, recall, and F1 score are weighted average. | ['Model', 'Accuracy', 'Precision', 'Recall', 'F1'] | [['fastText', '0.5722', '0.5760', '0.5722', '0.5721'], ['CNN', '0.5657', '0.5925', '0.5657', '0.5556'], ['LSTM', '0.5934', '0.6032', '0.5934', '0.5917'], ['BiLSTM', '0.6196', '0.6204', '0.6196', '0.6190'], ['RCNN', '0.6096', '0.6161', '0.6096', '0.6063'], ['SSA', '0.6214', '0.6249', '0.6214', '0.6226'], ['RN', '[BOLD] 0.6474', '[BOLD] 0.6510', '[BOLD] 0.6474', '[BOLD] 0.6478']] | 5.2.2 Reddit SWMH Then, we perform experiments on the Reddit SWMH dataset, which contains both suicidal ideation and mental health issues to study the predictive performance of our model. It is a larger dataset with more instances when compared with the UMD dataset. Experiments on this dataset show the relational encoding capacity of RN for mental health related texts with similar characteristics. |
Suicidal Ideation and Mental Disorder Detectionwith Attentive Relation Networks | 2004.07601 | Table 8: Performance comparison on Twitter dataset, where precision, recall, and F1 score are weighted average. | ['Model', 'Accuracy', 'Precision', 'Recall', 'F1'] | [['fastText', '0.7927', '0.7924', '0.7927', '0.7918'], ['CNN', '0.7885', '0.7896', '0.7885', '0.7887'], ['LSTM', '0.8021', '0.8094', '0.8021', '0.8039'], ['BiLSTM', '0.8208', '0.8207', '0.8208', '0.8195'], ['RCNN', '0.8094', '0.8089', '0.8094', '0.8090'], ['SSA', '0.8156', '0.8149', '0.8156', '0.8152'], ['RN', '[BOLD] 0.8385', '[BOLD] 0.8381', '[BOLD] 0.8385', '[BOLD] 0.8377']] | Twitter Collection Lastly, we conduct experiments on the Twitter dataset with similar settings of previous experiments. Unlike posts in Reddit, tweets in this dataset are short sequences due to the tweet’s length limit of 280 characters. Among these competitive methods, our model gains the best performance on these four metrics, with 1.77% and 1.82% improvement than the second best BiLSTM model in terms of accuracy and F1-score respectively. Our proposed method introduces auxiliary information of lexicon-based sentiment and topics features learnt from corpus, and utilizes RNs for modeling the interaction between LSTM-based text encodings and risk indicators. Richer auxiliary information and efficient relational encoding help our model boost performance in short tweet classification. |
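The weighted-average precision, recall, and F1 reported in the tables above can be computed from per-class counts, with each class's metric weighted by its support. This is a generic sketch (equivalent to scikit-learn's `average='weighted'`), with invented counts for illustration:

```python
def weighted_prf(tp, fp, fn):
    """tp/fp/fn: per-class true-positive, false-positive, false-negative
    counts. Returns support-weighted precision, recall, and F1."""
    support = [t + n for t, n in zip(tp, fn)]  # true instances per class
    total = sum(support)
    P = R = F = 0.0
    for t, p, n, s in zip(tp, fp, fn, support):
        prec = t / (t + p) if t + p else 0.0
        rec = t / (t + n) if t + n else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        w = s / total
        P += w * prec
        R += w * rec
        F += w * f1
    return P, R, F

P, R, F = weighted_prf(tp=[8, 6], fp=[2, 4], fn=[2, 4])
```

Note that support-weighted recall equals plain accuracy, which is why the Accuracy and Recall columns in the tables coincide.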
Suicidal Ideation and Mental Disorder Detection with Attentive Relation Networks | 2004.07601 | Table 9: Performance on each class of UMD suicidality dataset | ['Label', 'Metrics', 'BiLSTM', 'SSA', 'RN'] | [['-1', 'Precision', '0.62', '0.57', '0.69'], ['-1', 'Recall', '0.77', '[BOLD] 0.92', '0.70'], ['-1', 'F1-score', '0.69', '0.70', '0.69'], ['[EMPTY]', 'Precision', '0.51', '0.57', '0.48'], ['1', 'Recall', '0.55', '0.31', '[BOLD] 0.62'], ['[EMPTY]', 'F1-score', '0.53', '0.41', '0.54'], ['[EMPTY]', 'Precision', '0.15', '0.00', '0.24'], ['0', 'Recall', '0.02', '0.00', '0.09'], ['[EMPTY]', 'F1-score', '0.04', '0.00', '0.13']] | This section studies performance on each class of the UMD dataset. We select the two better-performing baselines for comparison. The results are shown in Fig. The proposed RN-based model is poor at predicting posts without suicidality, but good at predicting posts with high suicide risk. Unfortunately, all three models perform very poorly at predicting posts with low suicide risk. On the UMD dataset, which has a small number of instances, these models tend to assign posts to the classes with more instances, even though we apply a penalty in the objective function. |
Suicidal Ideation and Mental Disorder Detection with Attentive Relation Networks | 2004.07601 | Table 10: Performance of different variants of risk indicator injection | ['Model', 'Accuracy', 'Precision', 'Recall', 'F1'] | [['BiLSTM', '0.8208', '0.8207', '0.8208', '0.8195'], ['BiLSTM+concat', '0.8167', '0.8262', '0.8167', '0.8190'], ['BiLSTM+RN+sentiment', '0.8240', '0.8246', '0.8240', '0.8239'], ['BiLSTM+RN+topic', '0.8198', '0.8177', '0.8198', '0.8183'], ['BiLSTM+RN+sent.+topic', '[BOLD] 0.8385', '[BOLD] 0.8381', '[BOLD] 0.8385', '[BOLD] 0.8377']] | We then conduct an ablation study to explore several variants and compare their performance. We compare our complete framework with three different settings of injecting risk indicators. The BiLSTM+concat model concatenates the final hidden state with sentiment and topic features. The BiLSTM+RN+sentiment and BiLSTM+RN+topic variants use a single-channel relation network. Relation networks are generally better than BiLSTM with simple feature concatenation, whose performance decreased compared to vanilla BiLSTM. The BiLSTM+RN+sentiment model is better than vanilla BiLSTM, while BiLSTM+RN+topic is slightly worse. But the two channels of sentiment and topic couple well, achieving the best performance. This study shows the effectiveness of the proposed model. |
Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes | 1909.08700 | Table 3: Comparison between state-of-the-art models Merity et al. (2017); Yang et al. (2017) and a Simple LSTM, and the same models with Alleviated TOI. The comparison highlights how the addition of Alleviated TOI is able to improve state-of-the-art models, as well as a simple model that does not benefit from extensive hyper-parameter optimization. | ['Model', 'test ppl'] | [['AWD-LSTM Merity et al. ( 2017 )', '58.8'], ['AWD-LSTM + Alleviated TOI', '56.46'], ['AWD-LSTM-MoS Yang et al. ( 2017 )', '55.97'], ['AWD-LSTM-MoS + Alleviated TOI', '54.58'], ['Simple-LSTM', '75.36'], ['Simple-LSTM + Alleviated TOI', '74.44']] | Comparison with State of the Art and Simple LSTM. With the MoS model (AWD-LSTM-MoS, Yang et al.) and an Alleviated TOI, we improve the current state of the art without fine-tuning on the PTB dataset, reaching 54.58 perplexity on the test set. We compare the results with the same hyper-parameters used in the original papers, with the only exception of the batch size, which must be prime. To ensure fairness, we allocate the same computational resources to the base model and the model with Alleviated TOI, i.e., we train for the equivalent number of epochs. |
Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes | 1909.08700 | Table 1: Data point repetition with period q for batch size K=20 and Alleviated TOI (P). | ['[ITALIC] P', '[BOLD] Period [ITALIC] q', '[BOLD] Repetitions'] | [['2', '10', '2'], ['5', '4', '5'], ['7', '20', '1'], ['10', '2', '10']] | The worst case is P=10 with 10 repetitions of the same data point within the same batch and the best case is P=7, which avoids any redundancy because the GCD of P and K is 1. |
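The repetition pattern in the table follows directly from number theory: with batch size K and overlap factor P, each data point repeats gcd(P, K) times within a batch, with period q = K / gcd(P, K). A quick check against the table (an inference from the reported values, not the authors' code):

```python
from math import gcd

def repetition_pattern(P, K=20):
    """Period q and number of within-batch repetitions for overlap factor P
    and batch size K."""
    g = gcd(P, K)
    return K // g, g  # (period q, repetitions)

patterns = {P: repetition_pattern(P) for P in (2, 5, 7, 10)}
```

This is exactly why the paper advocates prime batch sizes: when K is prime (and P < K), gcd(P, K) = 1 for every P, so no data point is ever repeated within a batch.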
Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes | 1909.08700 | Table 2: Perplexity score (PPL) comparison of the AWD model, on the three datasets, with batch sizes K=20 (PTB), K=80 (WT2) and K=60 (WT103), with different levels of Token Order Imbalance (TOI). With Alleviated TOI (P), we use a prime batch size of K=19 (PTB), K=79 (WT2) and K=59 (WT103). | ['Experiment', 'PTB', 'WT2', 'WT103'] | [['Extreme TOI', '63.49', '73.52', '36.19'], ['Inter-batch TOI', '64.20', '72.61', '36.39'], ['Standard TOI', '[BOLD] 58.94', '[BOLD] 65.86', '[BOLD] 32.94'], ['Alleviated TOI 2', '57.97', '65.14', '32.98'], ['Alleviated TOI 5', '57.14', '65.11', '33.07'], ['Alleviated TOI 7', '57.16', '64.79', '32.89'], ['Alleviated TOI 10', '[BOLD] 56.46', '[BOLD] 64.73', '[BOLD] 32.85']] | We use the test perplexity after the same equivalent number of epochs. The different Alleviated TOI (P) experiments use a different number of overlapped sequence: An Alleviated TOI (P) means building and concatenating P overlapped sequences. Our results indicate that an Alleviated TOI (P) is better than the Standard TOI, which is better than an Extreme or Inter-batch TOI. We note a tendency that higher values of P lead to better results, which is in accordance with our hypothesis that a higher TOI ratio (P−1)/P improves the results. |
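Alleviated TOI (P) builds and concatenates P overlapped copies of the token stream. One plausible construction, offered purely as an assumption for illustration (shifting each copy by seq_len/P tokens so that sequence boundaries fall at different token positions in each copy), is:

```python
def alleviated_toi(tokens, P, seq_len):
    """Concatenate P copies of the token stream, copy i starting at
    offset i * seq_len // P, so that after chunking into fixed-length
    sequences every token appears at varied positions within a sequence."""
    out = []
    for i in range(P):
        offset = i * seq_len // P  # hypothetical shift per copy
        out.extend(tokens[offset:])
    return out

stream = alleviated_toi(list(range(100)), P=4, seq_len=20)
```

Under this construction, the fraction of tokens that ever appear away from their original within-sequence position grows with P, matching the paper's TOI ratio (P−1)/P.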
Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes | 1909.08700 | Table 4: Perplexity score (PPL) comparison on the PTB dataset and the AWD model. We use two different values for the batch size K --- the original one with K=20, and a prime one with K=19. The results directly corroborate the observation portrayed in Figure 4, where the obtained score is related to the diversity of grayscale values in each row. | ['Experiment', 'K=20', 'K=19'] | [['Alleviated TOI 2', '59.37', '57.97'], ['Alleviated TOI 5', '60.50', '57.14'], ['Alleviated TOI 7', '[BOLD] 56.70', '57.16'], ['Alleviated TOI 10', '65.88', '[BOLD] 56.46']] | We compare the scores of a prime batch size K=19 with the scores of the original batch size K=20 for the AWD model with Alleviated TOI (P). When using a prime batch size, we observe consistent and increasing results as P increases. With the original batch size K=20, we observe a strong performance for P=7, but a low performance for P=10. The matrix with P=7 shows a good distribution---corresponding to the strong performance---and the matrix with P=10 shows that each row contains a low diversity of data points. |
Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes | 1909.08700 | Table 5: Token order imbalance (TOI) comparison for the IEMOCAP dataset on a SER task using angry, happy, neutral and sad classes with a simple LSTM model. | ['Experiment', 'WA', 'UA'] | [['Extreme TOI (15k steps)', '0.475', '0.377'], ['Inter-batch TOI (15k steps)', '0.478', '0.386'], ['Standard TOI (15k steps)', '0.486', '0.404'], ['Alleviated TOI (15k steps)', '[BOLD] 0.553', '[BOLD] 0.489'], ['Alleviated TOI (60 epochs)', '[BOLD] 0.591', '[BOLD] 0.523']] | When choosing Extreme TOI instead of the Standard TOI approach, we observe a smaller effect than in the text-related tasks: this is due to the different nature of the text datasets (large "continuous" corpora) and the IEMOCAP dataset (composed of shorter utterances). The fact that we still observe improvements on a dataset with short utterances is evidence of the robustness of the method. |
Alleviating Sequence Information Loss with Data Overlapping and Prime Batch Sizes | 1909.08700 | Table 6: Token order imbalance (TOI) comparison for the IEMOCAP dataset on a SER task using angry, happy, neutral and sad classes for 60 epochs using the Transformer model. | ['Experiment', 'WA (60 epochs)', 'UA (60 epochs)'] | [['Alleviated TOI 1', '0.591±0.012', '0.543±0.021'], ['Alleviated TOI 2', '0.594±0.007', '0.549±0.016'], ['Alleviated TOI 3', '0.605±0.018', '0.563±0.024'], ['Alleviated TOI 5', '0.608±0.015', '0.562±0.028'], ['Alleviated TOI 10', '[BOLD] 0.617±0.015', '[BOLD] 0.571±0.024'], ['Local attention', '[BOLD] 0.635', '[BOLD] 0.588']] | The previously described Transformer model is used in these experiments; the standard deviation is computed for different P-values of Alleviated TOI (P). We want to highlight the fact that the goal of these experiments is to show the direct contribution of the Alleviated TOI technique for a different model. For this reason we use a smaller version of the Transformer in order to reduce the computational cost. We believe that with a more expressive model and more repetitions, the proposed method may further improve the results. This is in accordance with our hypothesis that a higher TOI ratio (P−1)/P improves the results. |
BP-Transformer: Modelling Long-Range Context via Binary Partitioning | 1911.04070 | Table 5: BLEU score on IWSLT 2015 Zh-En | ['Model', 'BLEU'] | [['Transformer vaswani2017attention', '16.87'], ['Transformer+cache tu2018learning', '17.32'], ['HAN-NMT miculicich-etal-2018-document', '17.78'], ['Transformer (ours, single sentence)', '18.91'], ['BPT (k=4, single sentence)', '19.19'], ['BPT (k=4,l=64)', '19.84']] | BPT with k=4 and context length of 32 could further improve the baseline result by 0.93 in terms of BLEU score, which is a significant margin. |
BP-Transformer: Modelling Long-Range Context via Binary Partitioning | 1911.04070 | Table 1: Test accuracy on SST-5 and IMDB. In BPT, k=2 and k=4 for SST and IMDB respectively. The last model used word embeddings pretrained with translation and additional character-level embeddings. | ['Model', 'SST-5', 'IMDB'] | [['[BOLD] BPT', '52.71(0.32)', '[BOLD] 92.12(0.11)'], ['Star Transformer', '52.9', '90.50'], ['Transformer', '50.4', '89.24'], ['Bi-LSTM (li2015tree)', '49.8', '-'], ['Tree-LSTM (socher2013recursive)', '51.0', '-'], ['QRNN DBLP:conf/iclr/0002MXS17', '-', '91.4'], ['BCN+Char+CoVe mccann2017learned', '53.7', '91.8']] | On SST-5, our model outperforms Transformer and LSTM based models. On IMDB, our proposed model outperforms a bidirectional LSTM initialized with pre-trained character embedding and CoVe embedding mccann2017learned. |
BP-Transformer: Modelling Long-Range Context via Binary Partitioning | 1911.04070 | Table 6: BLEU score vs context length on different models | ['Context length', '0', '32', '64', '128'] | [['Transformer', '18.85', '18.66', '17.59', '15.55'], ['BPT (k=4)', '19.19', '19.84', '19.71', '19.84'], ['BPT (k=8)', '19.13', '19.59', '19.78', '19.60']] | Similar to tu2018learning and miculicich-etal-2018-document, we found that a small context length is enough for achieving good performance on IWSLT for document-level translation. However, as we increase the context size, the performance of BPT does not degrade as it does for these models and the Transformer, suggesting that the inductive bias encoded by BPT makes the model less likely to overfit. |
BP-Transformer: Modelling Long-Range Context via Binary Partitioning | 1911.04070 | Table 7: BLEU score on newstest 2014 | ['Model', 'BLEU'] | [['ByteNet\xa0kalchbrenner2016neural', '23.75'], ['GNMT+RL\xa0wu2016google', '24.6'], ['ConvS2S\xa0GehringAGYD17', '25.16'], ['Transformer\xa0vaswani2017attention', '27.3'], ['Transformer (our implementation)', '27.2'], ['BPT (k=1)', '26.9'], ['BPT (k=2)', '27.4'], ['BPT (k=4)', '[BOLD] 27.6'], ['BPT (k=8)', '26.7']] | In the setting of k=2 and k=4, BPT outperforms Vanilla Transformer with the same number of parameters and a sparse attention pattern. |
Toward Computation and Memory Efficient Neural Network Acoustic Models with Binary Weights and Activations | 1706.09453 | Table 1: WERs (%) of binary weight networks on WSJ1. The number of hidden units is 1024 for experiments in this table. | ['Model', 'Input', 'Softmax', 'dev93', 'eval92'] | [['Baseline', '—', '—', '6.8', '3.8'], ['Binary weights ( [ITALIC] p=0)', 'fixed', 'fixed', '7.7', '4.8'], ['Binary weights ( [ITALIC] p=.001)', 'fixed', 'fixed', '8.0', '4.5'], ['Binary weights ( [ITALIC] p=.01)', 'fixed', 'fixed', '8.0', '4.4'], ['Binary weights ( [ITALIC] p=0)', 'real', 'real', '10.4', '6.7'], ['Binary weights ( [ITALIC] p=0)', 'binary', 'fixed', '12.0', '7.3'], ['Binary weights ( [ITALIC] p=0)', 'binary', 'binary', '19.0', '12.0']] | In our experiments, training neural networks with binary weights from random initialization usually did not converge, or converged to very poor models. We addressed this problem by initializing our models from a well-trained neural network model with real-valued weights. This approach worked well, and was used in all our experiments. We trained the baseline neural network with an initial learning rate of 0.008, following the nnet1 recipe in Kaldi. This initial learning rate was found to be a good tradeoff between convergence speed and model accuracy. Here, we explored several settings: the weights in the input layer and Softmax layer were binary, real-valued or fixed from initialization. We also experimented with updating those real-valued weights jointly with the binary weights using the same learning rate, but obtained much worse results. The reason may be that the gradients of the real-valued weights and binary weights are in very different ranges, and updating them using the same learning rate is not appropriate. In order to have a complete picture, we have also tried using binary weights for the input layer and the Softmax layer. 
As expected, we achieved much lower accuracy, confirming that reducing the resolution of the input features and activations for the Softmax classifier is harmful to classification. Applying the binarization step very frequently is harmful to the SGD optimization as it can counteract the SGD update. In our experiments, we set a probability p to control the frequency of this operation. More precisely, after each SGD update, we draw a sample from a uniform distribution between 0 and 1, and if its value is smaller than p, then the semi-stochastic binarization approach will be applied. Therefore, a larger p means more frequent operations and vice versa. Note that we used the dev93 set to choose the language model score for the eval92 set, and from our observations, the results of the development and evaluation sets are usually not well aligned, suggesting that there may be a slight mismatch between the two evaluation sets. The semi-stochastic binarization approach will be revisited on the AMI dataset. |
Toward Computation and Memory Efficient Neural Network Acoustic Models with Binary Weights and Activations | 1706.09453 | Table 3: WERs (%) of neural networks with binary weights and activations on the WSJ1 dataset. We set p=0 for the system with binary weights, and k=1 for the system with binary activations. | ['Model', 'Size', '[ITALIC] b', 'dev93', 'eval92'] | [['Baseline', '1024', '–', '6.8', '3.8'], ['Baseline', '2048', '–', '6.5', '3.5'], ['Binary weights', '1024', '(−1,+1)', '7.7', '4.8'], ['Binary weights', '1024', '(0,1)', 'Not Converged', 'Not Converged'], ['Binary activations', '1024', '(−1,+1)', '8.2', '4.4'], ['Binary activations', '1024', '(0,1)', '7.2', '4.1'], ['Binary neural network', '1024', '(−1,+1)', '15.6', '10.7'], ['Binary activations', '2048', '(−1,+1)', '7.3', '4.4'], ['Binary weights', '2048', '(−1,+1)', '7.5', '4.4'], ['Binary neural network', '2048', '(−1,+1)', 'Not Converged', 'Not Converged']] | For all the experiments in this table, the weights in the input and Softmax layer were fixed, and the first and last hidden layers used Sigmoid activations. Using a larger number of hidden units works slightly better for both binary weight and binary activation systems. However, for the fully binary network we only managed to train the model with 1024 hidden units, and it achieved much worse accuracies. In our study, training fully binary neural networks remains a challenge. With Sigmoid activations, using (0,1) for binary weights can cause training divergence as the elements of the activation vector ^hl are always positive. Using (0,1) for binary activations, however, we achieved lower WER. The reason may be that the network was initialized from Sigmoid activations, and (0,1) is much closer to Sigmoid compared to (−1,+1). In fact, the (0,1) binary function can be viewed as a hard version of Sigmoid. Using (−1,+1) for binary activations may work better with networks initialized from Tanh activations, and that will be investigated in future work. |
Toward Computation and Memory Efficient Neural Network Acoustic Models with Binary Weights and Activations | 1706.09453 | Table 4: WERs (%) of neural network with binary weights and activations on the AMI dataset. The number of hidden units is 2048, and b denotes binarization. | ['Model', '[ITALIC] b', 'dev', 'eval'] | [['Baseline', '–', '26.1', '27.5'], ['Binary weights ( [ITALIC] p=0)', '(−1,+1)', '30.3', '32.7'], ['Binary weights ( [ITALIC] p=.001)', '(−1,+1)', '30.0', '32.2'], ['Binary weights ( [ITALIC] p=.01)', '(−1,+1)', '29.6', '31.7'], ['Binary weights ( [ITALIC] p=.05)', '(−1,+1)', '29.6', '31.9'], ['Binary activations ( [ITALIC] k=1)', '(−1,+1)', '30.1', '32.5'], ['Binary activations ( [ITALIC] k=2)', '(−1,+1)', '29.9', '32.3'], ['Binary activations ( [ITALIC] k=3)', '(−1,+1)', '30.2', '32.4'], ['Binary activations ( [ITALIC] k=4)', '(−1,+1)', '29.8', '32.0'], ['Binary activations ( [ITALIC] k=1)', '(0,1)', '27.5', '29.5'], ['Binary activations ( [ITALIC] k=2)', '(0,1)', '28.0', '30.2'], ['Binary activations ( [ITALIC] k=3)', '(0,1)', '29.8', '32.2'], ['Binary neural network', '(−1,+1)', 'Not Converged', 'Not Converged']] | For the binary weight systems, we revisited the semi-stochastic binarization approach, and the convergence curves were similar. Since the model was initialized from a Sigmoid network, the binary activation system with (0,1) worked much better than its counterpart with (−1,+1) when k=1. Again, we did experiments by tuning the threshold k for binary activation systems. Unlike the experiments on WSJ1, the models are relatively tolerant to changes in k with (−1,+1) for binarization, and we only observed divergence when k=7. This may be because we used different features in these experiments, causing differences in the distributions of ^hl. However, this is not the case for binarization with (0,1), as the system degrades rapidly when k increases. 
The reason may be that the binarization function with (0,1) is not symmetric, and the approximation error using an identity function is significant for large k when inputs are negative. Again, we failed to train binary neural networks with 2048 hidden units due to divergence in training. |
Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs | 2001.02332 | Table 3: Quantitative analysis of the generated relation embeddings by our generator. These presented relations are from the NELL test relation set. “# Can. Num.” denotes the number of candidates of test relations. For one relation, “# Cos. Sim.” denotes the mean cosine similarity between the corresponding generated relation embedding and the cluster center xrc of the relation triples. | ['Relations', '# Can. Num.', '# Cos. Sim.', 'MRR ZSGAN [ITALIC] KG', 'MRR ZS-DistMult', 'Hits@10 ZSGAN [ITALIC] KG', 'Hits@10 ZS-DistMult'] | [['animalThatFeedOnInsect', '293', '0.8580', '[BOLD] 0.347', '0.302', '[BOLD] 63.4', '61.8'], ['automobileMakerDealersInState', '600', '0.1714', '[BOLD] 0.066', '0.039', '[BOLD] 15.4', '5.1'], ['animalSuchAsInvertebrate', '786', '0.7716', '[BOLD] 0.419', '0.401', '[BOLD] 59.8', '57.6'], ['sportFansInCountry', '2100', '0.1931', '[BOLD] 0.066', '0.007', '[BOLD] 15.4', '1.3'], ['produceBy', '3174', '0.6992', '[BOLD] 0.467', '0.375', '[BOLD] 65.3', '51.2'], ['politicalGroupOfPoliticianus', '6006', '0.2211', '0.018', '[BOLD] 0.039', '[BOLD] 5.3', '3.9'], ['parentOfPerson', '9506', '0.5836', '0.343', '[BOLD] 0.381', '56.2', '[BOLD] 60.4'], ['teamCoach', '10569', '0.6764', '[BOLD] 0.393', '0.258', '[BOLD] 53.7', '39.9']] | Unlike images, our generated data cannot be inspected visually. Instead, we calculate the cosine similarity between the generated relation embeddings and the cluster center xrc of their corresponding relations. It can be seen that our method indeed generates plausible relation embeddings for many relations and that the link prediction performance is positively correlated with the quality of the relation embeddings. |
Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs | 2001.02332 | Table 2: Zero-shot link prediction results on the unseen relations. The proposed baselines are shown at the top of the table; our generative adversarial model is denoted as ZSGANKG and the results are shown at the bottom. Bold numbers denote the best results, and underlined numbers denote the best ones among our ZSGANKG methods. | ['Model', '[BOLD] NELL-ZS MRR', '[BOLD] NELL-ZS Hits@10', '[BOLD] NELL-ZS Hits@5', '[BOLD] NELL-ZS Hits@1', '[BOLD] Wiki-ZS MRR', '[BOLD] Wiki-ZS Hits@10', '[BOLD] Wiki-ZS Hits@5', '[BOLD] Wiki-ZS Hits@1'] | [['ZS-TransE', '0.097', '20.3', '14.7', '4.3', '0.053', '11.9', '8.1', '1.8'], ['ZS-DistMult', '0.235', '32.6', '28.4', '18.5', '0.189', '23.6', '21.0', '16.1'], ['ZS-ComplEx', '0.216', '31.6', '26.7', '16.0', '0.118', '18.0', '14.4', '8.3'], ['ZSGAN [ITALIC] KG (TransE)', '0.240', '[BOLD] 37.6', '[BOLD] 31.6', '17.1', '0.185', '26.1', '21.3', '14.1'], ['ZSGAN [ITALIC] KG (DistMult)', '[BOLD] 0.253', '37.1', '30.5', '[BOLD] 19.4', '[BOLD] 0.208', '[BOLD] 29.4', '[BOLD] 24.1', '[BOLD] 16.5'], ['ZSGAN [ITALIC] KG (ComplEx-re)', '0.231', '36.1', '29.3', '16.1', '0.186', '25.7', '21.5', '14.5'], ['ZSGAN [ITALIC] KG (ComplEx-im)', '0.228', '32.1', '27.0', '17.4', '0.185', '24.8', '20.9', '14.7']] | For the NELL-ZS dataset, we set the embedding size to 100. For Wiki-ZS, we set the embedding size to 50 for faster training. The proposed generative method uses the pre-trained KG embeddings as input, which are trained on the triples in the training set. For TransE and DistMult, we directly use their 1-D vectors. For the feature encoder, the upper limit of the neighbor number is 50, the number of reference triples k in one training step is 30, and the learning rate is 5e−4. For the generative model, the learning rate is 1e−4, and β1, β2 are set as 0.5 and 0.9, respectively. For each single update of the generator, the discriminator is iterated nd=5 times. 
The dimension of the random vector z is 15, and the number of generated relation embeddings Ntest is 20. Spectral normalization is applied for both the generator and the discriminator. These hyperparameters are also tuned on the validation set Dvalid. As for word embeddings, we directly use the released word embedding set GoogleNews- Even though NELL-ZS and Wiki-ZS have different scales of triples and relation sets, the proposed generative method still achieves consistent improvements over various baselines on both zero-shot datasets. It demonstrates that the generator successfully finds the intermediate semantic representation to bridge the gap between seen and unseen relations and generates reasonable relation embeddings for unseen relations merely from their text descriptions. Therefore, once trained, our model can be used to predict arbitrary newly-added relations without fine-tuning, which is significant for real-world knowledge graph completion. Unlike images, our generated data cannot be inspected visually. Instead, we calculate the cosine similarity between the generated relation embeddings and the cluster center xrc of their corresponding relations. It can be seen that our method indeed generates plausible relation embeddings for many relations and that the link prediction performance is positively correlated with the quality of the relation embeddings. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | (i) verb-past-tense | ['[EMPTY]', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['En', '42.2', '13.1', '18.6', '29.7', '58.0', '41.8'], ['De', '45.1', '33.4', '20.3', '35.1', '69.0', '46.8'], ['Es', '27.0', '0.8', '31.8', '48.6', '35.2', '22.4'], ['It', '32.2', '1.4', '39.3', '48.7', '48.8', '29.8'], ['Cs', '42.5', '13.1', '22.4', '40.0', '80.8', '55.6'], ['Hr', '43.6', '8.7', '33.1', '48.9', '73.4', '63.6']] | Again, rows represent the source language a and columns the target language b. The results were achieved using B-CCA-cu transformation with dictionaries of size n=20,000. Each language seems to have strengths and weaknesses. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | Table 1: Number of word pairs for each language and each analogy type. | ['Type', 'Analogy type', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['Semantic', 'family', '24', '24', '20', '20', '26', '41'], ['Semantic', 'state-currency', '29', '29', '28', '29', '29', '21'], ['Semantic', 'capital-common-countries', '23', '23', '21', '23', '23', '23'], ['Syntactic', 'state-adjective', '41', '41', '40', '41', '41', '41'], ['Syntactic', 'adjective-comparative', '23', '37', '5', '10', '40', '77'], ['Syntactic', 'adjective-superlative', '20', '34', '40', '29', '40', '77'], ['Syntactic', 'adjective-opposite', '29', '29', '20', '24', '27', '29'], ['Syntactic', 'noun-plural', '112', '111', '37', '36', '74', '46'], ['Syntactic', 'verb-past-tense', '38', '40', '39', '33', '95', '40']] | We combine and extend available corpora for monolingual word analogies in English (En) Mikolov et al., Italian (It) Berardi et al., Czech (Cs) Svoboda and Brychcín, and Croatian (Hr) Svoboda and Beliga. We consider only those analogy types that exist across all six languages (three semantically oriented and six syntactically oriented analogy types). For all languages, questions composed of single words are taken into account (i.e., no phrases). In the following list we briefly introduce each analogy type and describe the changes and extensions we have made compared with the original corpora. For each analogy type we process all combinations of pairs between languages a and b (e.g., for the category family and the transformation from Czech to German, we have 26×24=624 questions). In the case a=b (i.e., monolingual experiments), we omit the questions composed of two identical pairs (e.g., for the category family in Italian, we have 20×19=380 questions). The final accuracy is an average over the accuracies for individual categories. By averaging the accuracies, each analogy type contributes equally to the final score and the results are comparable across languages. 
In the following text, Acc@1 denotes the accuracy considering only the most similar word as a correct answer. Acc@5 assumes that the correct answer is in the list of five most similar words. All accuracies are expressed in percentages. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | Table 2: The average accuracies across all combinations of language pairs for different linear transformations and post-processing techniques. The size of the bilingual dictionary was set to n=20,000. No trans. denotes the monolingual experiments without transforming the spaces. | ['[EMPTY]', '[EMPTY]', '[BOLD] - [BOLD] Acc@1', '[BOLD] - [BOLD] Acc@5', '[BOLD] -c [BOLD] Acc@1', '[BOLD] -c [BOLD] Acc@5', '[BOLD] -u [BOLD] Acc@1', '[BOLD] -u [BOLD] Acc@5', '[BOLD] -cu [BOLD] Acc@1', '[BOLD] -cu [BOLD] Acc@5'] | [['Monoling.', 'No trans.', '49.6', '63.7', '50.1', '64.6', '50.6', '64.6', '[BOLD] 51.1', '[BOLD] 65.2'], ['Monoling.', 'M-LS', '40.2', '55.3', '40.3', '55.6', '41.3', '56.5', '41.3', '56.6'], ['Monoling.', 'M-OT', '49.6', '63.7', '50.1', '64.6', '50.6', '64.6', '[BOLD] 51.1', '[BOLD] 65.2'], ['Monoling.', 'M-CCA', '46.8', '61.8', '47.6', '62.5', '47.5', '62.4', '48.1', '63.0'], ['Cross-lingual', 'B-LS', '33.7', '51.4', '34.3', '52.3', '33.5', '51.1', '34.0', '52.0'], ['Cross-lingual', 'B-OT', '40.1', '55.9', '40.6', '56.6', '40.7', '56.5', '41.2', '57.3'], ['Cross-lingual', 'B-CCA', '42.3', '57.5', '42.7', '58.2', '42.6', '57.8', '[BOLD] 43.1', '[BOLD] 58.5'], ['Cross-lingual', 'M-LS', '32.2', '48.8', '32.7', '49.3', '32.9', '49.6', '32.5', '49.3'], ['Cross-lingual', 'M-OT', '37.3', '53.7', '37.6', '54.3', '37.8', '54.4', '[BOLD] 38.2', '[BOLD] 55.0'], ['Cross-lingual', 'M-CCA', '35.3', '52.7', '36.2', '53.8', '35.5', '52.9', '36.0', '53.5']] | The columns represent different post-processing techniques and the rows different transformations. The upper part of the table shows the monolingual experiments with the original spaces without transformation (No trans.) compared with the unified multilingual space for all six languages. The orthogonal transformation provides the same results as the original semantic space. 
Canonical correlation analysis leads to slightly lower accuracies, and the least squares method is worst. The most interesting part is the lower part of the table, i.e., the cross-lingual experiments, showing the average accuracies over all language pairs where the source language a and the target language b differ (a≠b). We can see that canonical correlation analysis performs best for bilingual cases, while the orthogonal transformation yields better accuracies in multilingual spaces. In all cases, mean centering followed by vector normalization led to the best results. |
Cross-lingual Word Analogies using Linear Transformations between Semantic Spaces | 1807.04175 | (a) family | ['[EMPTY]', 'En', 'De', 'Es', 'It', 'Cs', 'Hr'] | [['En', '68.8', '52.4', '85.4', '76.0', '41.2', '47.2'], ['De', '65.5', '48.0', '76.3', '66.5', '35.9', '40.9'], ['Es', '70.6', '49.0', '86.8', '74.5', '43.1', '45.2'], ['It', '65.4', '45.6', '81.8', '72.9', '39.2', '45.2'], ['Cs', '61.5', '38.6', '74.0', '65.0', '35.6', '42.0'], ['Hr', '57.4', '33.3', '62.8', '60.0', '32.6', '37.1']] | Again, rows represent the source language a and columns the target language b. The results were achieved using B-CCA-cu transformation with dictionaries of size n=20,000. Each language seems to have strengths and weaknesses. |