Dataset schema (column: type, string-length range):
paper: string, 0–839
paper_id: string, 1–12
table_caption: string, 3–2.35k
table_column_names: large_string, 13–1.76k
table_content_values: large_string, 2–11.9k
text: large_string, 69–2.82k
Design and Optimization of a Speech Recognition Front-End for Distant-Talking Control of a Music Playback Device
1405.1379
Table 2: Command Accuracy (%) for different commands at different SERs.
['SER (dB) params.', '−35∼−30 ASR', '−35∼−30 POLQA', '−30∼−25 ASR', '−30∼−25 POLQA', '−25∼−20 ASR', '−25∼−20 POLQA']
[['BACK', '73', '47', '83', '50', '90', '53'], ['NEXT', '70', '50', '90', '57', '90', '63'], ['PLAY', '80', '67', '94', '80', '96', '83'], ['PAUSE', '76', '50', '87', '57', '87', '60']]
4.2.2 Recording of Real-World Commands We used eight subjects (male/female, native/non-native) who uttered the command list at a distance of around 1 m from the microphone of the Beats Pill™ portable speaker while music was playing. We used four different music tracks in the echo path, with the starting point of each track chosen randomly. The playback level for the experiments was set to three different levels: 95 dB SPL, 90 dB SPL, and 85 dB SPL. We estimated the range of SER for the three setups to be approximately −35 to −30 dB, −30 to −25 dB, and −25 to −20 dB, respectively. Estimating the SERs was made possible by a lavalier microphone that recorded the near-end speech. Note that the SERs in the experiments are lower than the SERs used in the simulation, which supports the generalization of the tuning methodology. The results clearly show that our proposed tuning based on ASR maximization outperforms the POLQA-based tuning. The difference in performance seems to derive from the POLQA optimization being less aggressive on noise in order to preserve speech quality. More noise in the processed files translates into worse performance of the speech recognizer and the VAD. As a reference, when our speech enhancement front end was not used, the average recognition rate was 25% over all commands (i.e., chance level over the four commands) in the lowest SER setup.
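The SER estimate described above reduces to a power ratio in dB between the near-end (lavalier) recording and the echo; a minimal sketch, with toy signals and a function name of our own choosing:

```python
import math

def signal_to_echo_ratio_db(near_end, echo):
    """SER in dB: average power of near-end speech over average echo power."""
    p_speech = sum(x * x for x in near_end) / len(near_end)
    p_echo = sum(x * x for x in echo) / len(echo)
    return 10.0 * math.log10(p_speech / p_echo)

# Echo roughly 1000x the speech power -> SER near -30 dB
speech = [0.01, -0.01, 0.01, -0.01]
echo = [0.32, -0.32, 0.31, -0.31]
ser = signal_to_echo_ratio_db(speech, echo)
```

In practice the two power estimates would be computed over matching, voice-active segments of the recordings.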
Design and Optimization of a Speech Recognition Front-End for Distant-Talking Control of a Music Playback Device
1405.1379
Table 1: Phone Accuracy (%) for the noisy TIMIT database.
['noise model', 'mix uni.', 'mix bi.', 'babble uni.', 'babble bi.', 'music uni.', 'music bi.', 'factory uni.', 'factory bi.']
[['ASR', '[BOLD] 22.7', '[BOLD] 37.4', '[BOLD] 22.4', '[BOLD] 37.0', '[BOLD] 22.2', '[BOLD] 36.3', '[BOLD] 21.6', '[BOLD] 36.5'], ['POLQA', '21.7', '35.7', '21.6', '35.6', '21.1', '35.3', '21.4', '35.5']]
In order to provide a fair comparison, we also tuned the parameters to maximize the mean opinion score (MOS) using the Perceptual Objective Listening Quality Assessment (POLQA), through the same GA setup and the same noisy TIMIT database. To assess the performance of our tuning method, we tested on data not used in training by creating a second simulated noisy TIMIT database with different conditions. The SER and SNR were again chosen from uniform distributions ranging from −15 dB to −10 dB and from −10 dB to 10 dB, respectively. The “mix” noise was picked randomly from the babble, music, or factory noise. In the case of music, noisy files were generated from a set of tracks from different genres at different start points. When the front-end speech enhancer was not used, the PAR dropped to 10.1% (unigram) and 15.7% (bigram) for the noisy signal.
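The GA-based tuning can be sketched as a simple maximize-the-score loop; everything below (population size, one-point crossover, the mutation scale, the toy quadratic standing in for the ASR or POLQA objective, and the [0, 1] parameter range) is our own illustrative assumption, not the paper's setup:

```python
import random

def tune(fitness, n_params, pop_size=20, generations=30, seed=0):
    """Minimal elitist GA: keep the top half, refill with crossover + mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # maximize the score
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params) if n_params > 1 else 0
            child = a[:cut] + b[cut:]                # one-point crossover
            i = rng.randrange(n_params)              # mutate one parameter,
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)                   # clamped to [0, 1]
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for the ASR-accuracy objective, peaked at (0.3, 0.7)
score = lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2)
best = tune(score, n_params=2)
```

In the paper's setting, evaluating `fitness` would mean running the full enhancement front end and scoring the output with the recognizer (or POLQA), which is why each generation is expensive.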
Scaling through abstractions – high-performance vectorial wave simulations for seismic inversion with Devito
2004.10519
TABLE I: Per-grid-point FLOPs of the finite-difference TTI wave-equation stencil with different spatial discretization orders.
['spatial order', 'w/o optimizations', 'w/ optimizations']
[['4', '501', '95'], ['8', '539', '102'], ['12', '1613', '160'], ['16', '5489', '276']]
Owing to the very high number of floating-point operations (FLOPs) needed per grid point for the weighted rotated Laplacian, this anisotropic wave equation is extremely challenging to implement. The version without FLOP-reducing optimizations is a direct translation of the discretized operators into stencil expressions (see tti-so8-unoptimized.c). The version with optimizations employs transformations such as common sub-expression elimination, factorization, and cross-iteration redundancy elimination – the latter being key to removing redundancies introduced by mixed derivatives. Implementing all of these techniques manually is inherently difficult and laborious. Further, to obtain the desired performance improvements it is necessary to orchestrate them with aggressive loop fusion (for data locality), tiling (for data locality and tensor temporaries), and potentially ad-hoc vectorization strategies (if rotating registers are used). What we emphasize here is that users can take full advantage of these optimizations without needing to concern themselves with the details.
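To see why these rewrites cut the per-point FLOP counts in the table, here is a toy Python sketch (ours, vastly simpler than Devito's compiler passes) of common sub-expression elimination plus factorization on a three-term stencil expression:

```python
def naive(u, a, b):
    # a*b recomputed in every term, as in a direct translation of the stencil
    return a * b * u[0] + a * b * u[1] + a * b * u[2]   # 6 multiplications

def optimized(u, a, b):
    t = a * b                        # common sub-expression hoisted once
    return t * (u[0] + u[1] + u[2])  # factorized: 2 multiplications
```

Both functions compute the same value; the optimized form simply does fewer multiplications per grid point, which is the effect the "w/ optimizations" column quantifies at much larger scale.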
Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation
2002.10260
Table 6: Accuracy scores of the German–English models on the ContraWSD and MuCoW test suites.
['Encoder heads', 'ContraWSD 6+1', 'ContraWSD 6+6', 'MuCoW 6+1', 'MuCoW 6+6']
[['8L', '[BOLD] 0.804', '0.831', '[BOLD] 0.741', '0.761'], ['7Ftoken+1L', '0.793', '[BOLD] 0.834', '0.734', '[BOLD] 0.772 '], ['7Ftoken (H8 disabled)', '0.761', '0.816', '0.721', '0.757']]
Overall, the model with 6 decoder layers and fixed attentive patterns (7Ftoken+1L) achieves higher accuracy than the model with all learnable attention heads (8L), while the 1-layer decoder models show the opposite effect. It appears that having 6 decoder layers can effectively cope with WSD despite having only one learnable attention head. Interestingly enough, when we disable the learnable attention head (7Ftoken H8 disabled), performance drops consistently in both test suites, showing that the learnable head plays a key role for WSD, specializing in semantic feature extraction.
Memeify: A Large-Scale Meme Generation System
1910.12279
Table 3. Average ratings of volunteers for original and generated memes (baseline and ours) across themes. ARO represents average ratings for original memes, ARG represents average ratings for generated memes (our model), and ARB represents average ratings for generated memes (baseline model).
['[BOLD] Theme', '[BOLD] ARO', '[BOLD] ARG', '[BOLD] ARB']
[['Normie', '3.1', '2.9', '2.7'], ['Savage', '3.6', '3.6', '3.4'], ['Depressing', '3.2', '3.1', '3'], ['Unexpected', '3.5', '3.3', '3.3'], ['Frustrated', '3.2', '3', '2.9'], ['Wholesome', '2.8', '2.7', '2.6'], ['Overall', '3.23', '3.1', '2.98']]
To evaluate user satisfaction levels, we conducted a rating study. We generated a batch of 100 memes for each theme by randomly picking classes within each theme, individually for the baseline model and our model. For each theme, we showed a set of 5 generated memes from our model, 5 generated memes from the baseline model, and 5 original memes, in a mixed order (without revealing the identity of the original and generated memes), to each volunteer. We asked them to rate the generated and original memes on a scale from 1 (lowest rating) to 5 (highest rating) based on caption-content, humour, and originality. We observe that the quality of our generated memes is almost on par with the original memes. We also notice that our model outperforms the baseline model across all themes. On the whole, then, volunteers were satisfied with the quality of the generated memes.
Factor Graph Attention
1904.05880
Table 3: Performance of discriminative models on VisDial v1.0 test-std. Higher is better for MRR and recall@k, while lower is better for mean rank and NDCG. (*) denotes use of external knowledge.
['Model', 'MRR', 'R@1', 'R@5', 'R@10', 'Mean', 'NDCG']
[['LF\xa0', '0.554', '40.95', '72.45', '82.83', '5.95', '0.453'], ['HRE\xa0', '0.542', '39.93', '70.45', '81.50', '6.41', '0.455'], ['MN\xa0', '0.555', '40.98', '72.30', '83.30', '5.92', '0.475'], ['CorefNMN (ResNet-152)\xa0*', '0.615', '47.55', '78.10', '88.80', '4.40', '0.547'], ['NMN (ResNet-152)\xa0*', '0.588', '44.15', '76.88', '86.88', '4.81', '[BOLD] 0.581'], ['FGA (VGG)', '0.637', '49.58', '80.97', '88.55', '4.51', '0.521'], ['FGA (F-RCNNx101)', '0.662', '52.75', '82.92', '91.07', '3.8', '0.569'], ['5×FGA (VGG)', '0.673', '53.40', '85.28', '92.70', '3.54', '0.545'], ['5×FGA (F-RCNNx101)', '[BOLD] 0.693', '[BOLD] 55.65', '[BOLD] 86.73', '[BOLD] 94.05', '[BOLD] 3.14', '0.572']]
Our submission to the challenge significantly improved all metrics except for NDCG. We report our results in Tab. 3. While the challenge allowed the use of any external resources to improve the model, we only changed our approach to use an ensemble of 5 trained Factor Graph Attention models that were initialized randomly. The F-RCNN features are expensive to extract and use external detector information.
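A plausible reading of the 5-model ensemble is score averaging over the candidate answers before ranking; a minimal sketch under that assumption (the function and toy inputs are ours, not the paper's code):

```python
def ensemble_rank(per_model_scores):
    """Average each candidate's score across models, then rank (best first)."""
    n_models = len(per_model_scores)
    n_cands = len(per_model_scores[0])
    avg = [sum(m[i] for m in per_model_scores) / n_models for i in range(n_cands)]
    return sorted(range(n_cands), key=lambda i: avg[i], reverse=True)

# Two randomly initialized models disagree on the top candidate;
# averaging their scores arbitrates the final ranking
ranking = ensemble_rank([[0.9, 0.5, 0.1], [0.3, 0.8, 0.2]])
```

Because the models differ only in their random initialization, averaging their scores mainly smooths out seed-dependent errors, which is consistent with the across-the-board metric gains of the 5×FGA rows.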
Factor Graph Attention
1904.05880
Table 1: Performance of discriminative models on VisDial v0.9. Higher is better for MRR and recall@k, while lower is better for mean rank. (*) denotes use of external knowledge.
['Model', 'MRR', 'R@1', 'R@5', 'R@10', 'Mean']
[['LF\xa0', '0.5807', '43.82', '74.68', '84.07', '5.78'], ['HRE\xa0', '0.5846', '44.67', '74.50', '84.22', '5.72'], ['HREA\xa0', '0.5868', '44.82', '74.81', '84.36', '5.66'], ['MN\xa0', '0.5965', '45.55', '76.22', '85.37', '5.46'], ['HieCoAtt-QI\xa0', '0.5788', '43.51', '74.49', '83.96', '5.84'], ['AMEM\xa0', '0.6160', '47.74', '78.04', '86.84', '4.99'], ['HCIAE-NP-ATT\xa0', '0.6222', '48.48', '78.75', '87.59', '4.81'], ['SF-QIH-se-2\xa0', '0.6242', '48.55', '78.96', '87.75', '4.70'], ['CorefNMN\xa0*', '0.636', '50.24', '79.81', '88.51', '4.53'], ['CoAtt-GAN-w/ [BOLD] R [ITALIC] inte-TF\xa0', '0.6398', '50.29', '80.71', '88.81', '4.47'], ['CorefNMN (ResNet-152)\xa0*', '0.641', '50.92', '80.18', '88.81', '4.45'], ['FGA (VGG)', '0.6525', '51.43', '82.08', '89.56', '4.35'], ['FGA (F-RCNNx101)', '0.6712', '54.02', '83.21', '90.47', '4.08'], ['9×FGA (VGG)', '[BOLD] 0.6892', '[BOLD] 55.16', '[BOLD] 86.26', '[BOLD] 92.95', '[BOLD] 3.39']]
Visual question answering comparison: We first compare against a variety of baselines (see Tab. 1). Note that almost all of the baselines (except LF, HRE, MN, and SF-QIH-se-2) use attention, i.e., attention is an important element in any model. Note that our model uses the entire set of answers to predict each answer's score, i.e., we use p(ui|A,I,Q,C,H). This is in contrast to SF-QIH-se-2, which doesn't use attention and models p(ui|ûi,I,Q,C,H). Because CoAtt-GAN uses a hierarchical approach, further improving its reasoning system is challenging and requires manual work. In contrast, our general attention mechanism allows attending to the entire set of cues in the dataset, letting the model automatically choose the most relevant cues. We refer the readers to the appendix for an analysis of utility importance via importance scores. As can be seen from Tab. 1, we also report an ensemble of 9 models which differ only in the initial seed. We emphasize that our approach only uses VGG16. Lastly, some baselines report using GloVe to initialize the word embeddings, while we did not use any pre-trained embedding weights.
Factor Graph Attention
1904.05880
Table 2: Performance on the question generation task. Higher is better for MRR and recall@k, while lower is better for mean rank.
['Model', 'MRR', 'R@1', 'R@5', 'R@10', 'Mean']
[['SF-QIH-se-2\xa0', '0.4060', '26.76', '55.17', '70.39', '9.32'], ['FGA', '[BOLD] 0.4138', '[BOLD] 27.42', '[BOLD] 56.33', '[BOLD] 71.32', '[BOLD] 9.1']]
We adapted to this task by changing the input utilities to the previous interaction (Q+A)t−1 instead of the current question Qt. Our model also improves previous state-of-the-art results (see Tab. 2).
Source Dependency-Aware Transformer with Supervised Self-Attention
1909.02273
Table 1: Case-insensitive BLEU scores (%) for Chinese-to-English translation on NIST datasets. “+CSH” denotes model only trained under the supervision of child attentional adjacency matrix (β = 0). “+PSH” denotes model only trained under the supervision of parent attentional adjacency matrix (α = 0). “+CSH+PSH” is trained under the supervision of both.
['System', 'NIST2005', 'NIST2008', 'NIST2012', 'Average']
[['RNNsearch', '38.41', '30.01', '28.48', '32.30'], ['Tree2Seq ', '39.44', '31.03', '29.22', '33.23'], ['SE-NMT (Wu et al. 2017)', '40.01', '31.44', '29.45', '33.63'], ['Transformer', '43.89', '34.83', '32.59', '37.10'], ['+CSH', '44.21', '36.63', '33.57', '38.14'], ['+PSH', '44.24', '36.17', '33.86', '38.09'], ['+CSH+PSH', '[BOLD] 44.87', '[BOLD] 36.73', '[BOLD] 34.28', '[BOLD] 38.63']]
We report on case-insensitive BLEU here since English words are lowercased. From the table we can see that syntax-aware RNN models always outperform the RNNsearch baseline. However, the performance of the Transformer is much higher than that of all RNN-based methods. In Transformer+CSH, we use only the child attentional adjacency matrix to guide the encoder. Being aware of child dependencies, Transformer+CSH gains a 1.0 BLEU point improvement over the Transformer baseline on average. Transformer+PSH, in which only the parent attentional adjacency matrix is used as supervision, also achieves about a 1.0 BLEU point improvement. After combination, the new Transformer+CSH+PSH further improves BLEU by about 0.5 points on average, significantly outperforming the baseline and other source-syntax-based methods on all test sets. This demonstrates that both child dependencies and parent dependencies benefit the Transformer model and that their effects can be accumulated.
Source Dependency-Aware Transformer with Supervised Self-Attention
1909.02273
Table 2: Evaluation results on the English-to-Japanese translation task.
['System', 'BLEU', 'RIBES']
[['RNNsearch', '34.83', '80.92'], ['Eriguchi et al. (2016)', '34.91', '81.66'], ['Transformer', '36.24', '81.90'], ['+CSH', '36.83', '82.15'], ['+PSH', '36.75', '82.09'], ['+CSH+PSH', '[BOLD] 37.22', '[BOLD] 82.37']]
We conduct experiments on the WAT2016 English-to-Japanese translation task in this section. Our baseline systems include RNNsearch, a tree2seq attentional NMT model using tree-LSTM proposed by Eriguchi et al. (2016) and Transformer. According to the table, our Transformer+CSH and Transformer+PSH outperform Transformer and the other existing NMT models in terms of both BLEU and RIBES.
Source Dependency-Aware Transformer with Supervised Self-Attention
1909.02273
Table 3: BLEU scores (%) for Chinese-to-English (Zh-En), English-to-Chinese (En-Zh) translation on WMT2017 datasets and English-to-German (En-De) task. Both char-level BLEU (CBLEU) and word-level BLEU (WBLEU) are used as metrics for the En-Zh task.
['System', 'Zh-En', 'En-Zh CBLEU', 'En-Zh WBLEU', 'En-De']
[['Transformer', '21.29', '32.12', '19.14', '25.71'], ['+CSH', '21.60', '32.46', '19.54', '26.01'], ['+PSH', '21.67', '32.37', '19.53', '25.87'], ['+CSH+PSH', '[BOLD] 22.15', '[BOLD] 33.03', '[BOLD] 20.19', '[BOLD] 26.31']]
To verify the effect of syntax knowledge on large-scale translation tasks, we further conduct three experiments on the WMT2017 bidirectional English-Chinese tasks and WMT2014 English-to-German. For the Chinese-to-English task, our proposed method outperforms the baseline by 0.86 BLEU. For the English-to-Chinese task, Transformer+CSH+PSH gains improvements of 0.91 and 1.05 on char-level and word-level BLEU, respectively. For the En-De task, the improvement is 0.6, which is not as large as for the other two. We speculate that because the grammars of English and German are very similar, the original model can already capture the syntactic knowledge well. Even so, the improvement still illustrates the effectiveness of our method.
ERNIE: Enhanced Language Representation with Informative Entities
1905.07129
Table 5: Results of various models on FewRel and TACRED (%).
['Model', 'FewRel P', 'FewRel R', 'FewRel F1', 'TACRED P', 'TACRED R', 'TACRED F1']
[['CNN', '69.51', '69.64', '69.35', '70.30', '54.20', '61.20'], ['PA-LSTM', '-', '-', '-', '65.70', '64.50', '65.10'], ['C-GCN', '-', '-', '-', '69.90', '63.30', '66.40'], ['BERT', '85.05', '85.11', '84.89', '67.23', '64.81', '66.00'], ['ERNIE', '88.49', '88.44', '[BOLD] 88.32', '69.97', '66.08', '[BOLD] 67.97']]
As FewRel does not have any null instance where there is no relation between the entities, we adopt macro-averaged metrics to present the model performance. Since FewRel is built by checking whether the sentences contain facts in Wikidata, we drop the related facts from the KGs before pre-training for a fair comparison. (1) As the training data does not have enough instances to train the CNN encoder from scratch, CNN achieves an F1 score of only 69.35%. However, the pre-trained models, including BERT and ERNIE, increase the F1 score by at least 15%. (2) ERNIE achieves an absolute F1 increase of 3.4% over BERT, which means fusing external knowledge is very effective. In TACRED, there are nearly 80% null instances, so we follow the previous work of Zhang et al. The results of CNN, PA-LSTM, and C-GCN come from the paper by Zhang et al. (2018), and are the best results of CNN, RNN, and GCN respectively. The entity mask strategy refers to replacing each subject (and similarly each object) entity with a special NER token, which is similar to our proposed pre-training task dEA. (2) ERNIE achieves the best recall and F1 scores, and increases the F1 of BERT by nearly 2.0%, which proves the effectiveness of the knowledgeable module for relation classification.
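The macro-averaged metrics used for FewRel weight every relation class equally, regardless of how many instances it has; a minimal stdlib sketch (ours) of macro precision/recall/F1:

```python
def macro_prf(gold, pred):
    """Macro-averaged P/R/F1 over the classes that appear in the gold labels."""
    classes = sorted(set(gold))
    ps, rs, fs = [], [], []
    for c in classes:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec); rs.append(rec); fs.append(f1)
    n = len(classes)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n
```

Macro averaging is appropriate here precisely because FewRel has no dominant null class; with TACRED's ~80% null instances, a per-instance (micro-style) average over the non-null classes behaves very differently.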
ERNIE: Enhanced Language Representation with Informative Entities
1905.07129
Table 2: Results of various models on FIGER (%).
['Model', 'Acc.', 'Macro', 'Micro']
[['NFGEC (Attentive)', '54.53', '74.76', '71.58'], ['NFGEC (LSTM)', '55.60', '75.15', '71.73'], ['BERT', '52.04', '75.16', '71.63'], ['ERNIE', '[BOLD] 57.19', '[BOLD] 76.51', '[BOLD] 73.39']]
From the results, we observe that: (1) BERT achieves comparable results with NFGEC on the macro and micro metrics. However, BERT has lower accuracy than the best NFGEC model. As strict accuracy is the ratio of instances whose predictions are identical to the human annotations, this illustrates that some wrong labels from distant supervision are learned by BERT due to its powerful fitting ability. (2) Compared with BERT, ERNIE significantly improves the strict accuracy, indicating that the external knowledge regularizes ERNIE to avoid fitting the noisy labels and accordingly benefits entity typing.
ERNIE: Enhanced Language Representation with Informative Entities
1905.07129
Table 3: Results of various models on Open Entity (%).
['Model', 'P', 'R', 'F1']
[['NFGEC (LSTM)', '68.80', '53.30', '60.10'], ['UFET', '77.40', '60.60', '68.00'], ['BERT', '76.37', '70.96', '73.56'], ['ERNIE', '[BOLD] 78.42', '[BOLD] 72.90', '[BOLD] 75.56']]
From the table, we observe that: (1) BERT and ERNIE achieve much higher recall scores than the previous entity typing models, which means pre-trained language models make full use of both the unsupervised pre-training and the manually annotated training data for better entity typing. (2) Compared to BERT, ERNIE improves the precision by 2% and the recall by 2%, which means the informative entities help ERNIE predict the labels more precisely.
ERNIE: Enhanced Language Representation with Informative Entities
1905.07129
Table 6: Results of BERT and ERNIE on different tasks of GLUE (%).
['Model', 'MNLI-(m/mm)', 'QQP', 'QNLI', 'SST-2']
[['[EMPTY]', '392k', '363k', '104k', '67k'], ['BERTBASE', '84.6/83.4', '71.2', '-', '93.5'], ['ERNIE', '84.0/83.2', '71.2', '91.3', '93.5'], ['Model', 'CoLA', 'STS-B', 'MRPC', 'RTE'], ['[EMPTY]', '8.5k', '5.7k', '3.5k', '2.5k'], ['BERTBASE', '52.1', '85.8', '88.9', '66.4'], ['ERNIE', '52.3', '83.2', '88.2', '68.8']]
We notice that ERNIE is consistent with BERTBASE on big datasets like MNLI, QQP, QNLI, and SST-2. The results become more unstable on small datasets, that is, ERNIE is better on CoLA and RTE, but worse on STS-B and MRPC.
ERNIE: Enhanced Language Representation with Informative Entities
1905.07129
Table 7: Ablation study on FewRel (%).
['Model', 'P', 'R', 'F1']
[['BERT', '85.05', '85.11', '84.89'], ['ERNIE', '88.49', '88.44', '[BOLD] 88.32'], ['w/o entities', '85.89', '85.89', '85.79'], ['w/o dEA', '85.85', '85.75', '85.62']]
In this subsection, we explore the effects of the informative entities and the knowledgeable pre-training task (dEA) on ERNIE using the FewRel dataset. w/o entities and w/o dEA refer to fine-tuning ERNIE without the entity sequence input and without the pre-training task dEA, respectively. (1) Even without the entity sequence input during fine-tuning, dEA still injects knowledge information into the language representation during pre-training, increasing the F1 score of BERT by 0.9%. (2) Although the informative entities bring much knowledge information that intuitively benefits relation classification, ERNIE without dEA takes little advantage of this, leading to an F1 increase of only 0.7%.
Generating Sentiment-Preserving Fake Online Reviews Using Neural Language Models and Their Human- and Machine-based Detection
1907.09177
Table 5: Fluency of reviews (in MOS). Bold font indicates highest score.
['Model', 'Amazon Native', 'Amazon Non-native', 'Amazon Overall', 'Yelp Native', 'Yelp Non-native', 'Yelp Overall']
[['Original review', '2.85', '3.09', '2.95', '[BOLD] 3.43', '[BOLD] 3.56', '[BOLD] 3.49'], ['Pretrained GPT-2', '2.93', '3.16', '3.06', '2.68', '2.72', '2.70'], ['Fine-tuned GPT-2', '3.24', '3.22', '3.23', '3.35', '3.25', '3.30'], ['mLSTM', '3.06', '[BOLD] 3.37', '3.21', '3.12', '2.96', '3.04'], ['Sentiment modeling', '[BOLD] 3.61', '3.35', '[BOLD] 3.47', '2.90', '2.86', '2.88']]
The fine-tuning improved the fluency compared with that of the reviews generated by the pre-trained GPT-2. This suggests that an attack can be made more effective by simply fine-tuning existing models. For the Amazon dataset, the reviews generated by explicitly modeling the sentiment (sentiment modeling) had the highest overall score, followed by those generated by the fine-tuned GPT-2 model. Interestingly, the scores for all fake reviews were higher than that for the original reviews. This observation is similar to that of Yao et al. It does not hold for the Yelp dataset — the score for the original reviews is higher than those for the fake ones. Among the fake-review generation models, the fine-tuned GPT-2 model had the highest score (3.30).
Generating Sentiment-Preserving Fake Online Reviews Using Neural Language Models and Their Human- and Machine-based Detection
1907.09177
Table 3: Rate (in %) and standard error of fake reviews preserving sentiment of original review.
['LM', 'Amazon', 'Yelp']
[['Pretrained GPT-2', '62.1±0.9', '64.3±1.4'], ['Fine-tuned GPT-2', '67.0±1.4', '67.7±1.2'], ['mLSTM', '63.2±0.7', '71.0±1.3'], ['Sentiment modeling', '70.7±1.3', '70.1±1.2']]
This means that a large number of fake reviews can be efficiently generated with a desired sentiment by just fine-tuning an LM. The sentiment modeling method had the highest rate for the Amazon dataset. This was because explicitly modeling sentiment benefits from the additional sentiment information given before the fake reviews are generated, and it indicates that explicit sentiment modeling could be a more efficient way to generate reviews with a desired sentiment. For the Yelp reviews, the fine-tuned GPT-2 was again clearly better than the pretrained GPT-2, and the mLSTM had the highest rate. Further analysis revealed that the mLSTM model performs very well only on food and restaurant reviews; it does not generalize well to other domains, or it generates reviews completely outside the context of the original review. This suggests that we need to further explicitly preserve context. (We leave this for future work.)
Multilingual bottleneck features for subword modelingin zero-resource languages
1803.08863
Table 3: Word error rates of the monolingual SGMM and 10-lingual TDNN ASR systems evaluated on the development sets.
['[BOLD] Language', '[BOLD] Mono', '[BOLD] Multi']
[['BG', '17.5', '16.9'], ['CS', '17.1', '15.7'], ['DE', '9.6', '9.3'], ['FR', '24.5', '24.0'], ['KO', '20.3', '19.3']]
The multilingual model shows small but consistent improvements for all languages except Vietnamese. Ultimately though, we are not so much interested in the performance on typical ASR tasks, but in whether BNFs from this model also generalize to zero-resource applications on unseen languages.
Multilingual bottleneck features for subword modelingin zero-resource languages
1803.08863
Table 2: Average precision scores on the same-different task (dev sets), showing the effects of applying VTLN to the input features for the UTD and/or cAE systems. cAE input is either MFCC or MFCC+VTLN. Topline results (rows 5-6) train the cAE on gold-standard pairs, rather than UTD output. Baseline results (final rows) directly evaluate acoustic features without UTD/cAE training. Best unsupervised result in bold.
['[BOLD] UTD input', '[BOLD] cAE input', '[BOLD] ES', '[BOLD] HA', '[BOLD] HR', '[BOLD] SV', '[BOLD] TR', '[BOLD] ZH']
[['PLP', '[EMPTY]', '28.6', '39.9', '26.9', '22.2', '25.2', '20.4'], ['PLP', '+VTLN', '46.2', '48.2', '36.3', '37.9', '31.4', '35.7'], ['PLP+VTLN', '[EMPTY]', '40.4', '45.7', '35.8', '25.8', '25.9', '26.9'], ['PLP+VTLN', '+VTLN', '[BOLD] 51.5', '[BOLD] 52.9', '[BOLD] 39.6', '[BOLD] 42.9', '[BOLD] 33.4', '[BOLD] 44.4'], ['[ITALIC] Gold pairs', '[EMPTY]', '65.3', '65.2', '55.6', '52.9', '50.6', '60.5'], ['[ITALIC] Gold pairs', '+VTLN', '68.9', '70.1', '57.8', '56.9', '56.3', '69.5'], ['[ITALIC] Baseline: MFCC', '[ITALIC] Baseline: MFCC', '18.3', '19.6', '17.6', '12.3', '16.8', '18.3'], ['[ITALIC] Baseline: MFCC+VTLN', '[ITALIC] Baseline: MFCC+VTLN', '27.4', '28.4', '23.2', '20.4', '21.3', '27.7']]
We find that cAE features trained as previously are slightly better than MFCC+VTLN, but can be improved considerably by applying VTLN to the input of both UTD and cAE training — indeed, even when using gold pairs as cAE input, applying VTLN is beneficial. This suggests that cAE training and VTLN abstract over different aspects of the speech signal, and that both should be used when only target-language data is available.
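The same-different evaluation behind these AP scores ranks all word pairs by feature distance and computes average precision over the same-word pairs; a simplified sketch (ours, ignoring the DTW alignment used on real variable-length features):

```python
def average_precision(pairs):
    """pairs: (distance, is_same_word). Rank by distance (ascending) and
    average the precision at the rank of every same-word pair."""
    ranked = sorted(pairs, key=lambda x: x[0])
    hits, precisions = 0, []
    for rank, (_, same) in enumerate(ranked, start=1):
        if same:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions)

# Same-word pairs mostly closer than different-word pairs -> high AP
ap = average_precision([(0.1, True), (0.2, True), (0.3, False),
                        (0.4, True), (0.5, False)])
```

Better features push same-word pairs toward the top of the ranking, which is what the AP gains from VTLN and cAE training in the table reflect.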
Multilingual bottleneck features for subword modelingin zero-resource languages
1803.08863
Table 3: Word error rates of the monolingual SGMM and 10-lingual TDNN ASR systems evaluated on the development sets.
['[EMPTY]', '[BOLD] Mono', '[BOLD] Multi']
[['PL', '16.5', '15.1'], ['PT', '20.5', '19.9'], ['RU', '27.5', '26.9'], ['TH', '34.3', '33.3'], ['VI', '11.3', '11.6']]
The multilingual model shows small but consistent improvements for all languages except Vietnamese. Ultimately though, we are not so much interested in the performance on typical ASR tasks, but in whether BNFs from this model also generalize to zero-resource applications on unseen languages.
Multilingual bottleneck features for subword modelingin zero-resource languages
1803.08863
Table 4: AP on the same-different task when training the cAE on the 10-lingual BNFs from above (cAE-BNF) with UTD and gold-standard word pairs (test-set results). Baselines are MFCC+VTLN and the cAE models from rows 4 and 6 of Table 2 that use MFCC+VTLN as input features. Best result without target-language supervision in bold.
['[BOLD] Features', '[BOLD] ES', '[BOLD] HA', '[BOLD] HR', '[BOLD] SV', '[BOLD] TR', '[BOLD] ZH']
[['MFCC+VTLN', '44.1', '22.3', '25.0', '34.3', '17.9', '33.4'], ['cAE UTD', '72.1', '41.6', '41.6', '53.2', '29.3', '52.8'], ['cAE gold', '85.1', '66.3', '58.9', '67.1', '47.9', '70.8'], ['10-lingual BNFs', '[BOLD] 85.3', '[BOLD] 71.0', '[BOLD] 56.8', '72.0', '[BOLD] 65.3', '77.5'], ['cAE-BNF UTD', '85.0', '67.4', '40.3', '[BOLD] 74.3', '64.6', '[BOLD] 78.8'], ['cAE-BNF gold', '89.2', '79.0', '60.8', '79.9', '69.5', '81.6']]
We trained the cAE with the same sets of same-word pairs as before, but replaced the VTLN-adapted MFCCs with the 10-lingual BNFs as input features, without any other changes to the training procedure. The limiting factor appears to be the quality of the UTD pairs. With gold-standard pairs, the cAE features improve in all languages.
Incremental Parsing with Minimal Features Using Bi-Directional LSTM
1606.06406
Table 2: Development and test set results for shift-reduce dependency parser on Penn Treebank using only (s1, s0, q0) positional features.
['Parser', 'Dev UAS', 'Dev LAS', 'Test UAS', 'Test LAS']
[['C & M 2014', '92.0', '89.7', '91.8', '89.6'], ['Dyer et al.\xa02015', '93.2', '90.9', '93.1', '90.9'], ['Weiss et al.\xa02015', '-', '-', '93.19', '91.18'], ['+ Percept./Beam', '-', '-', '93.99', '92.05'], ['Bi-LSTM', '93.31', '91.01', '93.21', '91.16'], ['2-Layer Bi-LSTM', '93.67', '91.48', '93.42', '91.36']]
Despite the minimally designed feature representation, relatively few training iterations, and lack of pre-computed embeddings, the parser performed on par with state-of-the-art incremental dependency parsers, and slightly outperformed the state-of-the-art greedy parser.
Incremental Parsing with Minimal Features Using Bi-Directional LSTM
1606.06406
Table 3: Ablation studies on the PTB dev set (WSJ 22). Forward and backward context, and part-of-speech input, were all critical to strong performance.
['Parser', 'UAS', 'LAS']
[['Bi-LSTM Hierarchical†', '93.31', '91.01'], ['† - Hierarchical Actions', '92.94', '90.96'], ['† - Backward-LSTM', '91.12', '88.72'], ['† - Forward-LSTM', '91.85', '88.39'], ['† - tag embeddings', '92.46', '89.81']]
We found that performance could be improved, however, by factoring out the decision over structural actions (i.e., shift, left-reduce, or right-reduce) and the decision of which arc label to assign upon a reduce. We therefore use separate classifiers for those decisions, each with its own fully-connected hidden and output layers but sharing the underlying recurrent architecture. Using only word forms and no part-of-speech input similarly degraded performance.
Incremental Parsing with Minimal Features Using Bi-Directional LSTM
1606.06406
Table 5: Test F-scores for constituency parsing on Penn Treebank and CTB-5.
['Parser', '[ITALIC] b', 'English greedy', 'English beam', 'Chinese greedy', 'Chinese beam']
[['zhu+:2013', '16', '86.08', '90.4', '75.99', '85.6'], ['Mi & Huang (05)', '32', '84.95', '90.8', '75.61', '83.9'], ['Vinyals et al.\xa0(05)', '10', '-', '90.5', '-', '-'], ['Bi-LSTM', '-', '89.75', '-', '79.44', '-'], ['2-Layer Bi-LSTM', '-', '[BOLD] 89.95', '-', '[BOLD] 80.13', '-']]
Although our parsers are certainly less accurate than the beam-search parsers, we achieve the highest accuracy among greedy parsers, for both English and Chinese.
Incremental Parsing with Minimal Features Using Bi-Directional LSTM
1606.06406
Table 6: Hyperparameters and training settings.
['[EMPTY]', 'Dependency', 'Constituency']
[['[BOLD] Embeddings', '[BOLD] Embeddings', '[BOLD] Embeddings'], ['Word (dims)', '50', '100'], ['Tags (dims)', '20', '100'], ['Nonterminals (dims)', '-', '100'], ['Pretrained', 'No', 'No'], ['[BOLD] Network details', '[BOLD] Network details', '[BOLD] Network details'], ['LSTM units (each direction)', '200', '200'], ['ReLU hidden units', '200 / decision', '1000'], ['[BOLD] Training', '[BOLD] Training', '[BOLD] Training'], ['Training epochs', '10', '10'], ['Minibatch size (sentences)', '10', '10'], ['Dropout (LSTM output only)', '0.5', '0.5'], ['L2 penalty (all weights)', 'none', '1×10−8'], ['ADADELTA [ITALIC] ρ', '0.99', '0.99'], ['ADADELTA [ITALIC] ϵ', '1×10−7', '1×10−7']]
All experiments were conducted with minimal hyperparameter tuning. We also applied dropout (with p=0.5) to the output of each LSTM layer (separately for each connection in the case of the two-layer network).
Neural Recovery Machine for Chinese Dropped Pronoun
1605.02134
Table 6: Experimental results on dropped pronoun recovery. ∗ indicates that our approach is statistically significant over all the baselines (within a 0.95 confidence interval, using the t-test).
['[BOLD] Models', '[BOLD] Accuracy [BOLD] DPI', '[BOLD] Accuracy [BOLD] DPI', '[BOLD] Accuracy [BOLD] DPG', '[BOLD] Accuracy [BOLD] DPG']
[['[BOLD] Models', '[BOLD] OntoNotes 4.0', '[BOLD] Baidu Zhidao', '[BOLD] OntoNotes 4.0', '[BOLD] Baidu Zhidao'], ['[ITALIC] SVMSL', '0.59', '0.755', '0.2', '0.456'], ['[ITALIC] SVMSS', '0.5', '0.5', '0.073', '0.315'], ['[ITALIC] SVMDL', '0.562', '0.739', '[BOLD] 0.21', '0.47'], ['[ITALIC] SVMDS', '0.5', '0.5', '0.073', '0.315'], ['SOTA', '0.524', '0.58', '0.191', '0.246'], ['NRM', '[BOLD] 0.592∗', '[BOLD] 0.879∗', '0.18', '[BOLD] 0.584∗']]
We compare the performance of the proposed NRM and the baselines on both the OntoNotes 4.0 and Baidu Zhidao datasets. The proposed NRM achieves the best results in most settings, except for the DPG task on the OntoNotes 4.0 dataset. Comparing the results of SVMSL and SVMDL, we see that the dense representation of the dropped-pronoun hypothesis has a greater impact on the fine-grained DPG task than on the DPI task. Across the two datasets, the results on Baidu Zhidao are better than those on OntoNotes 4.0, which may be caused by the different scales of the two datasets and the different numbers of categories. We also note that SOTA, a feature-engineering approach, performs well when the data scale is small, as in the DPG experiment on the OntoNotes 4.0 dataset. This may indicate that feature engineering can fit the data better at small scale. However, feature engineering is a highly empirical process and a complicated task.
Neural Recovery Machine for Chinese Dropped Pronoun
1605.02134
Table 7: Experimental results on zero pronoun resolution. ∗ indicates that our approach is statistically significantly better than all the baselines (at the 0.95 confidence level using the t-test).
['[EMPTY]', '[BOLD] Automatic Parsing & Automatic AZP [BOLD] SOTA', '[BOLD] Automatic Parsing & Automatic AZP [BOLD] SOTA', '[BOLD] Automatic Parsing & Automatic AZP [BOLD] SOTA', '[BOLD] Automatic Parsing & Automatic AZP [BOLD] ZPSNN', '[BOLD] Automatic Parsing & Automatic AZP [BOLD] ZPSNN', '[BOLD] Automatic Parsing & Automatic AZP [BOLD] ZPSNN', '[BOLD] Automatic Parsing & Automatic AZP [BOLD] ZPSNN+NRM', '[BOLD] Automatic Parsing & Automatic AZP [BOLD] ZPSNN+NRM', '[BOLD] Automatic Parsing & Automatic AZP [BOLD] ZPSNN+NRM']
[['[EMPTY]', '[BOLD] R', '[BOLD] P', '[BOLD] F', '[BOLD] R', '[BOLD] P', '[BOLD] F', '[BOLD] R', '[BOLD] P', '[BOLD] F'], ['overall', '19.6', '15.5', '17.3', '31.6', '24.5', '27.6', '[BOLD] 34.4∗', '[BOLD] 24.8∗', '[BOLD] 28.8∗'], ['NW', '11.9', '14.3', '13.0', '[BOLD] 20.0', '[BOLD] 19.4', '[BOLD] 19.7', '[BOLD] 20.0', '17.9', '18.9'], ['MZ', '4.9', '4.7', '4.8', '12.5', '[BOLD] 11.9', '12.2', '[BOLD] 14.3', '11.5', '[BOLD] 12.8'], ['WB', '20.1', '14.3', '16.7', '34.4', '26.3', '29.8', '[BOLD] 38.6', '[BOLD] 27.8', '[BOLD] 32.3'], ['BN', '18.2', '[BOLD] 22.3', '20.0', '24.6', '19.0', '21.4', '[BOLD] 26.4', '20.3', '[BOLD] 23.0'], ['BC', '19.4', '14.6', '16.7', '35.4', '[BOLD] 27.6', '31.0', '[BOLD] 41.6', '27.4', '[BOLD] 33.1'], ['TC', '31.8', '17.0', '22.2', '50.7', '32.0', '39.2', '[BOLD] 52.1', '[BOLD] 32.4', '[BOLD] 39.9']]
The ZPSNN+NRM significantly outperforms the SOTA approach on the zero pronoun resolution task, with statistical significance. For further analysis, we present a case study in the following section.
Natural Language Generation for Non-Expert Users
1909.08250
Table 7: BLEU and ROUGE scores
['[EMPTY]', 'People', 'Mathematics', 'Food & drink']
[['[BOLD] BLEU', '39.1', '33.4', '52.2'], ['[BOLD] ROUGE-1', '20', '17.9', '14.6'], ['[BOLD] ROUGE-2', '6.7', '6.7', '5.5'], ['[BOLD] ROUGE-L', '11.4', '10.5', '8.13']]
Note that the average BLEU score is calculated only on BLEU-assessable sentences, while the average ROUGE score is calculated on the sentences whose structure can be recognized and encoded by our system. We note that the BLEU and ROUGE scores might not be sufficiently high for a good-quality translation. We believe two factors contribute to these low scores. First, the present system uses fairly simple sentence structures. Second, it does not use relative clauses to enrich the sentences. This feature will be added to the next version of the system.
Natural Language Generation for Non-Expert Users
1909.08250
Table 6: BLEU assessable sentences
['[EMPTY]', 'People', 'Mathematics', 'Food & drink']
[['[BOLD] #sentences', '15', '24', '23'], ['[BOLD] #sentences recognized', '15', '22', '23'], ['[BOLD] BLEU assessable', '10', '15', '11']]
With the small set of rules introduced in this paper to recognize sentence structure, very few 4-grams in the generated text would appear in the original Wikipedia corpus. Therefore, we use BLEU-3 with an equal weight distribution instead of BLEU-4 to assess the generated content. Out of 62 sentences from 3 portals, the system cannot determine the structure of 2 sentences in Mathematics due to their complexity. This low number of failures shows that our 5 proposed sentence structures effectively act as a lower bound for the sentence recognition module.
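To make the BLEU-3 scoring described above concrete, here is a minimal sketch of BLEU-3 with equal weights over 1- to 3-grams and no smoothing. The sentence pair is hypothetical and this is our own illustration, not the authors' evaluation code (which may smooth differently or score at the corpus level):

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu3(candidate, reference):
    """BLEU-3 with equal weights (1/3 each) and no smoothing."""
    precisions = []
    for n in (1, 2, 3):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        if overlap == 0:
            return 0.0
        precisions.append(overlap / total)
    # Brevity penalty for candidates shorter than the reference.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / 3)

cand = "the city lies in northern france".split()
ref = "the city is located in northern france".split()
print(round(bleu3(cand, ref), 3))  # → 0.423
```

Equal weights over three n-gram orders (rather than four) avoid the zero-score collapse when no 4-gram matches exist, which is exactly the situation the text describes.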
[
2002.01207
Table 9: Comparison to other systems for full diacritization
['Setup', 'WER%']
[['MSA', 'MSA'], ['[BOLD] Our System', '[BOLD] 6.0'], ['Microsoft ATKS', '12.2'], ['Farasa', '12.8'], ['RDI (rashwan2015deep)', '16.0'], ['MADAMIRA (pasha2014madamira)', '19.0'], ['MIT (belinkov2015arabic)', '30.5'], ['CA', 'CA'], ['Our system', '[BOLD] 4.3'], ['Our best MSA system on CA', '14.7']]
As the results show for MSA, our overall diacritization WER is 6.0% while the state of the art system has a WER of 12.2%. As for CA, our best system produced an error rate of 4.3%, which is significantly better than using our best MSA system to diacritize CA.
[
2002.01207
Table 3: Error analysis: Core word error types for MSA
['Error', 'Freq.', '%', 'Explanation', 'Examples']
[['Wrong selection', '215', '40.8', 'Homographs with different diacritized forms', '“qaSor” (قَصْر¿ – palace) vs. “qaSar” (قَصَر¿ – he limited)'], ['Foreign word', '124', '23.5', 'transliterated words including 96 foreign named entities', 'wiykiymaAnoyaA (وِيكِيمَانْيَا¿ – Wikimania)'], ['Invalid diacritized form', '57', '10.8', 'invalid form', 'ya*okur (يّذْكُر¿ – he mentions) vs. ya*okar (يّذْكَر¿)'], ['Named entity', '56', '10.6', 'Arabic named entities', '“EabÃdiy” (عَبَّادِي¿ – name) vs. “EibAdiy” (عِبَادِي¿ – my servants)'], ['both correct', '48', '9.1', 'Some words have multiple valid diacritized forms', '“wikAlap” (وِكَالَة¿) and “wakAlap” (وَكَالَة¿ – agency)'], ['Affix diacritization error', '16', '3.0', 'Some sufixes are erroneously diacritized', 'b [ITALIC] [BOLD] aAkt$Afihim (بَاكتشافِهِم¿ – with their discovery)'], ['Reference is wrong', '10', '1.9', 'the truth diacritics were incorrect', 'AlofiyfaA (الْفِيفَا¿ – FIFA) vs. AlofayofaA (الْفَيْفَا¿)'], ['dialectal word', '1', '0.2', 'dialectal word', 'mawaAyiliy (مَوَايِلِي¿ – my chant)']]
For error analysis, we analyzed all the errors (527 errors). The most prominent error type arises from the selection of a valid diacritized form that does not match the context (40.8%). Perhaps including POS tags as a feature, or augmenting the PRIOR feature with POS tag information and a bigram language model, may reduce the error rate further. The second most common error is due to transliterated foreign words, including foreign named entities (23.5%); such words were not observed during training. Further, Arabic named entities account for 10.6% of the errors, where they were either not seen in training or share identical non-diacritized forms with other words. Perhaps building larger gazetteers of diacritized named entities may resolve NE-related errors. In 10.8% of the cases, the diacritizer produced completely invalid diacritized forms. In some cases (9.1%), though the diacritizer produced a form that differs from the reference, both forms were in fact correct. Most of these cases were due to variations in diacritization conventions (e.g., whether a “bare alef” (A) at the start of a word receives a diacritic). Other cases include foreign words and words where both diacritized forms are equally valid.
[
2002.01207
Table 4: MSA case errors accounting from more than 1% of errors
['Error', 'Count', '%', 'Most Common Causes']
[['a ⇔ u', '133', '19.3', '[ITALIC] POS error: ex. “ka$afa” (كَشَفَ¿ – he exposed) vs. “ka$ofu” (كَشْفُ¿ – exposure) & [ITALIC] Subject vs. object: ex. “tuwHy [BOLD] mivolu” (تُوحِي مِثْلُ¿ – [BOLD] such indicates) vs. “tuwHy [BOLD] mivola” (تُوحِي مِثْلَ¿ – she indicates [BOLD] such)'], ['i ⇔ a', '130', '18.9', '[ITALIC] Incorrect attachment (due to coordinating conjunction or distant attachment): ex. “Alogaza Alomusay~ili lilidumuEi – [BOLD] wa+AlraSaSi vs. [BOLD] wa+AlraSaSa (الغَازَ الْمُسَيِّلَ لِلدُمُوعِ والرَصَاص¿ – tear gas and bullets) where bullets were attached incorrectly to tear instead of gas & [ITALIC] indeclinability such as foreign words and feminine names: ex. “kaAnuwni” (كَانُونِ¿ – Cyrillic month name) vs. “kaAuwna” (كَانُونَ¿)'], ['i ⇔ u', '95', '13.8', '[ITALIC] POS error of previous word: ex. “tadahowuru [BOLD] waDoEihi” (تَدَهْؤُرُ وَضْعِهِ¿ – deterioration of his situation – situtation is part of idafa construct) vs. “tadahowara [BOLD] waDoEihu” (تَذَهْوَرَ وَضْعُهُ¿ – his situation deteriorated – situation is subject) & [ITALIC] Incorrect attachment (due to coordinating conjunction or distant attachment): (as example for i ⇔ a)'], ['a ⇔ o', '60', '8.7', '[ITALIC] Foreign named entities: ex. “siyraAloyuna” (سِيرَالْيُونَ¿ – Siera Leon) vs. “siyraAloyuno” (سِيرَالْيُونْ¿)'], ['i ⇔ K', '27', '4.0', '[ITALIC] Incorrect Idafa: “ [BOLD] liAt~ifaqi ha\u2062aA Alo>usobuwE” (لِاتِفَاقِ هَذَا الأُسْبُوع¿ – this week’s agreement) vs. “ [BOLD] liAt~ifaqK ha\u2062aA Alo>usobuwE” (لِاتِّفَاقٍ هَذَا الْأُسْبُوع¿ – to an agreement this week)'], ['K ⇔ N', '29', '4.2', '[ITALIC] Subject vs. object (as in a ⇔ u) and Incorrect attachment (as in i ⇔ a)'], ['F ⇔ N', '25', '3.7', '[ITALIC] Words ending with feminine marker “p” or “At”: ex. 
“muHaADarap” (مُحَاضَرَة¿ – lecture)'], ['i ⇔ o', '22', '3.2', '[ITALIC] Foreign named entities (as in a ⇔ o)'], ['F ⇔ a', '16', '2.3', '[ITALIC] Incorrect Idafa (as in i ⇔ K)'], ['u ⇔ o', '14', '2.0', '[ITALIC] Foreign named entities (as in a ⇔ o)'], ['F ⇔ K', '9', '1.3', '[ITALIC] Words ending with feminine marker (as in F ⇔ N)'], ['K ⇔ a', '8', '1.2', '[ITALIC] Incorrect Idafa (as in i ⇔ K)']]
For example, the most common error type involves guessing a fatHa (a) instead of a damma (u) or vice versa (19.3%). Based on inspecting the errors, the most common causes of this error type were POS errors (e.g., a word tagged as a verb instead of a noun) and a noun being treated as a subject instead of an object or vice versa. The table details the rest of the error types. Overall, some of the errors are potentially fixable using better POS tagging, improved detection of non-Arabized foreign names, and detection of indeclinability. However, some errors are more difficult and require a greater understanding of semantics, such as improper attachment, incorrect idafa, and confusion between subject and object. Perhaps such semantic errors can be resolved using parsing.
[
2002.01207
Table 5: CA case errors accounting from more than 1% of errors
['Error', 'Count', '%', 'Most Common Causes']
[['a ⇔ u', '2,907', '28.4', '[ITALIC] Subject vs. object: ex. “wafaqa [BOLD] yawoma” (وَفَقَ يَوْمَ¿ – he matches the day) vs. ex. “wafaqa [BOLD] yawomu” (وَفَقَ يَوْمُ¿ – the day matches) & [ITALIC] False subject (object behaves like subject in passive tense): ex. “yufar~iqu [BOLD] qaDaA’a” (يُفَرِّقُ الْقَضَاءَ¿ – he separates the make up) vs. “yufar~aqu [BOLD] qaDaA’u” (يُفَرَّقٌ الْقَضَاءُ¿ – the make up is separated) & [ITALIC] Incorrect attachment (due to coordinating conjunction): ex. “f+a>aEohadu” (فَأَعْهَدَ¿ – so I entrust) vs. “f+a>aEohadu” (فَأَعْهِدُ¿)'], ['i ⇔ u', '1,316', '12.9', '[ITALIC] Incorrect attachment (due to coordinating conjunctions or distant attachment): (as in a ⇔ u)'], ['i ⇔ a', '1,019', '10.0', '[ITALIC] Incorrect attachment (as in a ⇔ u) & [ITALIC] Indeclinability such as foreign words and feminine names: ex. “>ajoyaAdiyni” (أَجْيَادِينِ¿ – Ajyadeen (city name)) vs. “>ajoyaAiyna” (أَجْيَادِينَ¿)'], ['a ⇔ #', '480', '4.7', '[ITALIC] Problem with reference where the case for some words, particularly non-Arabic names, is not provided in the reference: ex. “'], ['u ⇔ #', '426', '4.2', 'same problems as in a ⇔ #'], ['K ⇔ i', '371', '3.6', '[ITALIC] Incorrect Idafa: ex. “ [BOLD] EaTaA’i Alofaqiyh” (عَطَاءِ الْفَقِيه¿ – the providence of the jurist) vs. “ [BOLD] EaTaA’K Alofaqiyh” (عَطَاءٍ الْفَقِيه¿ – Ataa the jurist)'], ['K ⇔ a', '328', '3.2', '[ITALIC] words ending with feminine marker: ex. “tayomiyap” (تَيْمِيَة¿ –Taymiya) & [ITALIC] Indeclinability: ex. “bi'], ['u ⇔ o', '300', '2.9', '[ITALIC] confusion between past, present, and imperative moods of verbs and preceding markers (imperative “laA” vs. negation “laA): ex. “laA tano$ariHu” (لا تَنْشَرِحُ¿ – does not open up) vs. 
“laA tano$ariHo” (لا تَنْشَرِحْ¿ – do not open up)'], ['a ⇔ o', '278', '2.7', '[ITALIC] confusion between past, present, and imperative moods of verbs (as in u ⇔ o)'], ['K ⇔ N', '253', '2.5', '[ITALIC] Incorrect attachment (as in i ⇒ u)'], ['N ⇔ u', '254', '2.5', '[ITALIC] Incorrect Idafa (as in K ⇒ i)'], ['F ⇔ N', '235', '2.3', '[ITALIC] words ending with feminine marker (as in K ⇒ a)'], ['i ⇔ o', '195', '1.9', '[ITALIC] Differing conventions concerning handling two consecutive letters with sukun: ex. “ [BOLD] Eano Aboni” (عَنْ ابْنِ¿ – on the authority of the son of)\xa0vs. “ [BOLD] Eani Aboni” (عَنِ ابْنِ¿)'], ['i ⇔ #', '178', '1.7', 'same errors as for a ⇒ #'], ['o ⇔ #', '143', '1.4', 'same errors as for a ⇒ #']]
The error types are similar to those observed for MSA. Some errors are more syntactic and morphological in nature and can be addressed using better POS tagging and identification of indeclinability, particularly as they relate to named entities and nouns with feminine markers. Other errors such as incorrect attachment, incorrect idafa, false subject, and confusion between subject and object can perhaps benefit from the use of parsing. As with the core-word errors for CA, the reference has some errors (ex. {a,i,o} ⇒ #), and extra rounds of reviews of the reference are in order.
[
2002.01207
Table 6: Error analysis: Core word error types for CA
['Error', 'Freq.', '%', 'Explanation', 'Examples']
[['Invalid diacritized form', '195', '38.8', 'invalid form', '“>aqosaAm” (أِقْسَام¿ – portions) vs. “>aqasaAm” (أَقَسَام¿)'], ['Wrong selection', '157', '31.4', 'Homographs with different diacritized forms', '“raAfoE” (رَقْع¿ – lifting) vs. “rafaE” (رَفَع¿ – he lifted)'], ['Affix diacritization error', '66', '13.2', 'Some affixes are erroneously diacritized', '“baladhu” (بَلَدهُ¿ – his country, where country is subject of verb) vs. “baladhi” (بَلَدهِ¿ – his country, where country is subject or object of preposition)'], ['Named entities', '44', '8.8', 'Named entities', '“Alr~ayob” (الرَّيْب¿ – Arrayb) vs. “Alr~iyab” (الرِّيَب¿))'], ['Problems with reference', '22', '4.4', 'Some words in the reference were partially diacritized', '“nuEoTaY” (نُعْطَى¿ – we are given) vs. “nETY” (نعطى¿))'], ['Guess has no diacritics', '9', '1.8', 'system did not produce any diacritics', '“mhnd” (مهند¿ – sword) vs. “muhan~ad” (مُهَنَّد¿))'], ['Different valid forms', '7', '1.4', 'Some words have multiple valid diacritized forms', '“maA}op” (مَائْة¿ – hundred) and “miA}op” (مِائَة¿)'], ['Misspelled word', '1', '0.2', '[EMPTY]', '“lbAlmsjd” (لبالمسجد¿) vs. “lbAlmsjd” (بالمسجد¿ – in the mosque))']]
We randomly selected and analyzed 500 errors (5.2% of the errors). The two most common error types involve the system producing completely invalid diacritized forms (38.8%) or valid forms that do not match the context (31.4%). The relatively higher percentage of completely invalid guesses, compared to MSA, may point to the higher lexical diversity of classical Arabic. As for MSA, we suspect that adding additional POS information and employing a word bigram to constrain the PRIOR feature may help reduce selection errors. Another prominent error relates to the diacritics that appear on attached suffixes, particularly pronouns, which depend on the choice of case ending (13.2%). Errors due to named entities are slightly fewer than those seen for MSA (8.8%). A noticeable number of mismatches between the guess and the reference are due to partial diacritization of the reference (4.4%). We plan to conduct an extra round of checks on the test set.
[
2002.01207
Table 7: Comparing our system to state-of-the-art systems – Core word diacritics
['System', 'Error Rate WER', 'Error Rate DER']
[['MSA', 'MSA', 'MSA'], ['[BOLD] Our system', '[BOLD] 2.9', '[BOLD] 0.9'], ['(rashwan2015deep)', '3.0', '1.0'], ['Farasa', '3.3', '1.1'], ['Microsoft ATKS', '5.7', '2.0'], ['MADAMIRA', '6.7', '1.9'], ['(belinkov2015arabic)', '14.9', '3.9'], ['CA', 'CA', 'CA'], ['Our system', '2.2', '0.9'], ['Our best MSA system on CA', '8.5', '3.7']]
Moreover, post-correction improved results overall. We compare our results to five other systems, namely Farasa (darwish2017arabic), MADAMIRA (pasha2014madamira), RDI (rashwan2015deep), MIT (belinkov2015arabic), and Microsoft ATKS (microsoft2013diac). As the results show, our results beat the current state of the art.
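As a rough sketch of how the word error rate (WER) and diacritic error rate (DER) reported in Table 7 can be computed for diacritization, the snippet below assumes each word is represented as a list of per-letter diacritic labels and that the reference and hypothesis word lists are aligned; the authors' actual scoring scripts may differ:

```python
def error_rates(ref_words, hyp_words):
    """WER: fraction of words with any wrong diacritic.
    DER: fraction of individual diacritic positions that are wrong.
    Assumes aligned, equal-length word lists (no insertions/deletions)."""
    wer = sum(r != h for r, h in zip(ref_words, hyp_words)) / len(ref_words)
    pairs = [(rd, hd) for r, h in zip(ref_words, hyp_words)
             for rd, hd in zip(r, h)]
    der = sum(rd != hd for rd, hd in pairs) / len(pairs)
    return wer, der

# Hypothetical two-word example: one diacritic wrong in the first word.
ref = [["a", "o", "u"], ["i", "a"]]
hyp = [["a", "o", "a"], ["i", "a"]]
print(error_rates(ref, hyp))  # → (0.5, 0.2)
```

A single wrong diacritic flips the whole word for WER but only one of five positions for DER, which is why DER in the table is consistently much lower than WER.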
Multimodal Analytics for Real-world News using Measures of Cross-modal Entity Consistency
2003.10421
Table 1. Number of test documents |D|, unique entities T∗ in all articles, and mean amount of unique entities ¯¯¯¯T in articles containing a given entity type (for context this is the mean amount of nouns as explained in Section 3.1.2) for TamperedNews (left) and News400 (right). Valid image-text relations for News400 were first manually verified according to Section 4.1.3.
['[BOLD] TamperedNews dataset [BOLD] Documents', '[BOLD] TamperedNews dataset | [ITALIC] D|', '[BOLD] TamperedNews dataset [ITALIC] T∗', '[BOLD] TamperedNews dataset ¯¯¯¯ [ITALIC] T']
[['All\xa0(context)', '72,561', '—', '121.40'], ['With persons', '34,051', '4,784', '4.03'], ['With locations', '67,148', '3,455', '4.90'], ['With events', '16,786', '897', '1.33']]
Both datasets were manipulated to perform experiments for cross-modal consistency verification. Experiments and comparisons to related work (jaiswal2017multimedia; sabir2018deep) on datasets such as MEIR (sabir2018deep) are not reasonable since 1) they do not contain public persons or events, and 2) they rely on pre-defined reference or training data for given entities. These restrictions severely limit practical applicability. We propose an automated solution for real-world scenarios that works for public personalities and entities represented in a knowledge base.
Multimodal Analytics for Real-world News using Measures of Cross-modal Entity Consistency
2003.10421
Table 1. Number of test documents |D|, unique entities T∗ in all articles, and mean amount of unique entities ¯¯¯¯T in articles containing a given entity type (for context this is the mean amount of nouns as explained in Section 3.1.2) for TamperedNews (left) and News400 (right). Valid image-text relations for News400 were first manually verified according to Section 4.1.3.
['[BOLD] News400 dataset [BOLD] Documents', '[BOLD] News400 dataset | [ITALIC] D|', '[BOLD] News400 dataset [ITALIC] T∗', '[BOLD] News400 dataset ¯¯¯¯ [ITALIC] T']
[['All (verified context)', '400 (91)', '—', '137.35'], ['With persons (verified)', '322 (116)', '424', '5.41'], ['With locations (verified)', '389 (69)', '451', '9.22'], ['With events (verified)', '170 (31)', '39', '1.84']]
Both datasets were manipulated to perform experiments for cross-modal consistency verification. Experiments and comparisons to related work (jaiswal2017multimedia; sabir2018deep) on datasets such as MEIR (sabir2018deep) are not reasonable since 1) they do not contain public persons or events, and 2) they rely on pre-defined reference or training data for given entities. These restrictions severely limit practical applicability. We propose an automated solution for real-world scenarios that works for public personalities and entities represented in a knowledge base.
Multimodal Analytics for Real-world News using Measures of Cross-modal Entity Consistency
2003.10421
Table 5. Results for document verification (DV) and collection retrieval for the News400 dataset. Results are reported for all available and verified documents |D|.
['[BOLD] Test set (| [ITALIC] D|)', '[BOLD] DV [BOLD] VA', '[BOLD] Collection Retrieval [BOLD] AUC', '[BOLD] Collection Retrieval [BOLD] AP-clean', '[BOLD] Collection Retrieval [BOLD] AP-clean', '[BOLD] Collection Retrieval [BOLD] AP-clean', '[BOLD] Collection Retrieval [BOLD] AP-tampered', '[BOLD] Collection Retrieval [BOLD] AP-tampered', '[BOLD] Collection Retrieval [BOLD] AP-tampered']
[['[BOLD] Test set (| [ITALIC] D|)', '[BOLD] VA', '[BOLD] AUC', '@25%', '@50%', '@100%', '@25%', '@50%', '@100%'], ['[BOLD] Persons (116)', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Random', '0.94', '0.92', '100.0', '100.0', '93.79', '87.68', '87.57', '86.77'], ['PsC', '0.93', '0.90', '100.0', '99.49', '92.30', '83.77', '84.91', '84.48'], ['PsG', '0.91', '0.91', '98.95', '98.24', '92.29', '82.80', '84.86', '84.93'], ['PsCG', '0.93', '0.91', '100.0', '99.82', '93.62', '86.63', '86.66', '86.15'], ['[BOLD] Location (69)', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['• [BOLD] Outdoor (54)', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Random', '0.89', '0.85', '100.0', '98.44', '88.21', '82.91', '81.36', '79.91'], ['GCD(750, 2500)', '0.81', '0.80', '92.61', '88.51', '81.03', '68.77', '70.84', '72.64'], ['GCD(200, 750)', '0.80', '0.75', '87.55', '81.95', '74.77', '65.07', '68.38', '68.34'], ['GCD(25, 200)', '0.81', '0.73', '87.55', '81.72', '73.33', '66.17', '70.16', '68.08'], ['• [BOLD] Indoor (15)', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Random', '0.80', '0.76', '91.67', '81.98', '76.07', '88.75', '86.44', '77.94'], ['GCD(750, 2500)', '0.67', '0.64', '62.20', '59.98', '60.52', '80.42', '81.06', '69.09'], ['GCD(200, 750)', '0.87', '0.70', '73.33', '69.10', '67.56', '67.92', '73.47', '69.10'], ['GCD(25, 200)', '0.73', '0.66', '74.70', '68.95', '65.66', '67.92', '70.32', '65.29'], ['[BOLD] Events (31)', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Random', '0.87', '0.85', '92.94', '89.10', '83.37', '100.0', '92.44', '85.28'], ['EsP', '0.65', '0.64', '52.76', '56.43', '58.38', '85.10', '80.71', '68.98'], ['[BOLD] Context (91)', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Random', '0.70', 
'0.70', '87.03', '87.50', '73.62', '61.11', '63.09', '63.19'], ['Similar\xa0(top-25%)', '0.70', '0.68', '92.19', '88.43', '72.96', '53.60', '57.77', '59.69'], ['Similar\xa0(top-10%)', '0.64', '0.66', '70.54', '74.12', '65.58', '56.15', '59.72', '59.75'], ['Similar\xa0(top-5%)', '0.66', '0.63', '74.48', '73.09', '64.18', '50.77', '55.99', '56.98']]
Since the number of documents is rather limited and the cross-modal mutual presence of entities was manually verified, results for News400 are reported for all documents with verified relations. However, results for retrieving tampered documents are noticeably worse. This is mainly because some untampered entities that are depicted in both image and text can be either unspecific (e.g., the mention of a country), or the images retrieved for visual verification do not fit the document's image content; this problem was bypassed in the earlier experiments, and we have verified the same behavior for News400 when experimenting on these subsets. In addition, performance for context verification is worse compared to TamperedNews. We assume that this is due to the less powerful word embedding for the German language.
MHSAN: Multi-Head Self-Attention Network for Visual Semantic Embedding
2001.03712
Table 5: Impact of diversity regularization on performance of retrieval. The lower the diversity loss, the more diverse the local regions focused by the multiple attention maps.
['Diversity Regularization', 'Sentence Retrieval', 'Image Retrieval', 'Diversity Loss']
[['W/', '[BOLD] 73.5', '[BOLD] 59.1', '[BOLD] 5.59'], ['W/O', '72.3', '58.4', '7.36']]
In this subsection, we examine the effect of the diversity regularization term in the loss. We conjecture that diversity regularization encourages the encoded vectors to capture various semantic aspects of an image, reducing the redundancy of the encoded vectors and the attention weight vectors. Qualitatively, the multiple attention maps focus on more distinct local regions when diversity regularization is introduced. Therefore, the image embedding encodes more distinct components of an image, resulting in an improvement in performance on the image-text retrieval tasks.
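The excerpt does not give the exact form of the diversity regularizer, so the sketch below uses a common Lin et al.-style choice, ||AA^T − I||_F², which penalizes overlap between the attention distributions of different heads; the paper's actual formulation may differ:

```python
import numpy as np

def diversity_loss(A):
    """Assumed diversity penalty on multi-head attention weights.
    A has shape (heads, positions); each row is one head's attention
    distribution. ||A A^T - I||_F^2 is small when heads attend to
    disjoint regions and large when they are redundant."""
    gram = A @ A.T                       # pairwise head overlaps
    eye = np.eye(A.shape[0])
    return float(np.sum((gram - eye) ** 2))

# Identical heads incur a larger penalty than heads attending to
# disjoint regions (both examples are hypothetical weight vectors).
same = np.array([[0.5, 0.5, 0.0, 0.0],
                 [0.5, 0.5, 0.0, 0.0]])
distinct = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])
print(diversity_loss(same) > diversity_loss(distinct))  # → True
```

Minimizing this term alongside the retrieval loss pushes the heads apart, matching Table 5, where the regularized model has both lower diversity loss and better retrieval scores.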
MHSAN: Multi-Head Self-Attention Network for Visual Semantic Embedding
2001.03712
Table 3: Effect of self-attention (1-head) in each encoder. We denote the image encoder using selective spatial pooling and the text encoder using the last hidden state which are used in Engilberg [3] as W/O attention. We report only R@1 for effective comparison.
['Image Encoder', 'Text Encoder', 'Sentence Retrieval(R@1)', 'Image Retrieval(R@1)']
[['W/O attention ', 'W/O attention ', '69.8', '55.9'], ['Our image encoder(1-head)', 'W/O attention ', '[BOLD] 70.1', '55.8'], ['W/O attention ', 'Our text encoder(1-head)', '69.9', '[BOLD] 56.2']]
For this experiment, we use the MS-COCO dataset. To see the effect of the single-head self-attention mechanism at each encoder, we apply single-head attention to each encoder separately. We observe that self-attention with a single head captures parts missed by the selective spatial pooling. Through the visualization, we demonstrate that our model is more suitable for visual semantic embedding, where the encoding of rich details is necessary.
Improvements to Deep Convolutional Neural Networks for LVCSR
1309.1501
Table 13: WER on Broadcast News, 400 hrs
['model', 'dev04f', 'rt04']
[['Hybrid DNN', '15.1', '13.4'], ['DNN-based Features', '15.3', '13.5'], ['Old CNN-based Features ', '13.4', '12.2'], ['Proposed CNN-based Features', '13.6', '12.5'], ['Proposed Hybrid CNN', '[BOLD] 12.7', '[BOLD] 11.7']]
While the proposed 512-hybrid CNN-based feature system did improve (14.1 WER) over the old CNN (14.8 WER), performance slightly deteriorates after CNN-based features are extracted from the network. However, the 5,999-hybrid CNN offers between a 13-16% relative improvement over the DNN hybrid system, and between a 4-5% relative improvement over the old CNN-based features systems. This helps to strengthen the hypothesis that hybrid CNNs have more potential for improvement, and the proposed fMLLR and ReLU+dropout techniques provide substantial improvements over DNNs and CNNs with a sigmoid non-linearity and VTLN-warped log-mel features.
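The relative improvements quoted above follow directly from the WERs in Table 13; a quick check of the hybrid-CNN-vs-DNN numbers:

```python
def relative_improvement(baseline_wer, new_wer):
    """Relative WER reduction, in percent."""
    return 100.0 * (baseline_wer - new_wer) / baseline_wer

# Table 13 numbers: proposed hybrid CNN (12.7 / 11.7) vs. hybrid DNN (15.1 / 13.4).
print(round(relative_improvement(15.1, 12.7), 1))  # dev04f → 15.9
print(round(relative_improvement(13.4, 11.7), 1))  # rt04   → 12.7
```

The dev04f gain of ~16% and the rt04 gain of ~13% match the "13-16% relative improvement" range stated in the text.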
Improvements to Deep Convolutional Neural Networks for LVCSR
1309.1501
Table 2: WER as a Function of # of Convolutional Layers
['# of Convolutional vs. Fully Connected Layers', 'WER']
[['No conv, 6 full (DNN)', '24.8'], ['1 conv, 5 full', '23.5'], ['2 conv, 4 full', '22.1'], ['3 conv, 3 full', '22.4']]
Note that for each experiment, the number of parameters in the network is kept the same. The table shows that increasing the number of convolutional layers up to 2 helps, and then performance starts to deteriorate. Furthermore, we can see from the table that CNNs offer improvements over DNNs for the same input feature set.
Improvements to Deep Convolutional Neural Networks for LVCSR
1309.1501
Table 3: WER as a function of # of hidden units
['Number of Hidden Units', 'WER']
[['64', '24.1'], ['128', '23.0'], ['220', '22.1'], ['128/256', '21.9']]
Again the total number of parameters in the network is kept constant for all experiments. We can observe that as we increase the number of hidden units up to 220, the WER steadily decreases. We do not increase the number of hidden units past 220 as this would require us to reduce the number of hidden units in the fully connected layers to be less than 1,024 in order to keep the total number of network parameters constant. We have observed that reducing the number of hidden units from 1,024 results in an increase in WER. We were able to obtain a slight improvement by using 128 hidden units for the first convolutional layer, and 256 for the second layer. This is more hidden units in the convolutional layers than are typically used for vision tasks as many hidden units are needed to capture the locality differences between different frequency regions in speech.
Improvements to Deep Convolutional Neural Networks for LVCSR
1309.1501
Table 6: Results with Different Pooling Types
['Method', 'WER']
[['Max Pooling', '18.9'], ['Stochastic Pooling', '18.8'], ['[ITALIC] lp pooling', '18.9']]
Given the success of lp and stochastic pooling, we compare both of these strategies to max pooling on an LVCSR task. Stochastic pooling seems to provide improvements over max and lp pooling, though the gains are slight. Unlike vision tasks, it appears that in tasks such as speech recognition, which have much more data and thus better model estimates, generalization methods such as lp and stochastic pooling do not offer large improvements over max pooling.
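The three pooling strategies compared in Table 6 can be sketched on a single 1-D window of non-negative activations; this is an illustration of the operators themselves (real CNN pooling acts over 2-D feature-map regions), with a hypothetical window:

```python
import random

def max_pool(window):
    return max(window)

def lp_pool(window, p=2):
    # lp pooling: the p-th root of the mean of |x|^p over the window
    # (p -> infinity recovers max pooling).
    return (sum(abs(x) ** p for x in window) / len(window)) ** (1.0 / p)

def stochastic_pool(window, rng):
    # Stochastic pooling: sample one activation with probability
    # proportional to its (assumed non-negative) magnitude.
    r = rng.random() * sum(window)
    acc = 0.0
    for x in window:
        acc += x
        if acc >= r:
            return x
    return window[-1]

w = [0.1, 0.4, 0.3, 0.2]  # hypothetical pooling window
print(max_pool(w))                                 # → 0.4
print(round(lp_pool(w), 3))                        # → 0.274
print(stochastic_pool(w, random.Random(0)) in w)   # → True
```

At test time, stochastic pooling is typically replaced by the probability-weighted average of the window, which makes it behave like a regularizer during training only.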
Improvements to Deep Convolutional Neural Networks for LVCSR
1309.1501
Table 8: Pooling in Time
['Method', 'WER']
[['Baseline', '18.9'], ['Pooling in Time, Max', '18.9'], ['Pooling in Time, Stochastic', '18.8'], ['Pooling in Time, [ITALIC] lp', '18.8']]
We see that pooling in time helps slightly with stochastic and lp pooling. However, the gains are not large, and are likely to be diminished after sequence training. It appears that for large tasks with more data, regularizations such as pooling in time are not helpful, similar to other regularization schemes such as lp/stochastic pooling and pooling with overlap in frequency.
Improvements to Deep Convolutional Neural Networks for LVCSR
1309.1501
Table 9: WER With Improved fMLLR Features
['Feature', 'WER']
[['VTLN-warped log-mel+d+dd', '18.8'], ['proposed fMLLR + VTLN-warped log-mel+d+dd', '[BOLD] 18.3']]
Notice that by applying fMLLR in a decorrelated space, we can achieve a 0.5% improvement over the baseline VTLN-warped log-mel system.
Improvements to Deep Convolutional Neural Networks for LVCSR
1309.1501
Table 10: WER of HF Sequence Training + Dropout
['Non-Linearity', 'WER']
[['Sigmoid', '15.7'], ['ReLU, No Dropout', '15.6'], ['ReLU, Dropout Fixed for CG Iterations', '[BOLD] 15.0'], ['ReLU, Dropout Per CG Iteration', '15.3']]
By using dropout but fixing the dropout mask per utterance across all CG iterations, we can achieve a 0.6% improvement in WER. Finally, if we compare this to varying the dropout mask per CG training iteration, the WER increases. This shows experimental evidence that if the dropout mask is not fixed, we cannot guarantee that CG iterations produce conjugate search directions for the loss function.
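The key detail above, fixing one dropout mask per utterance and reusing it across all CG iterations, can be sketched as follows; this is a toy illustration of the masking logic, not the paper's HF trainer:

```python
import random

def make_dropout_mask(size, p, rng):
    # Inverted dropout: surviving units are scaled by 1/(1-p) so the
    # expected activation is unchanged.
    scale = 1.0 / (1.0 - p)
    return [scale if rng.random() >= p else 0.0 for _ in range(size)]

def apply_mask(activations, mask):
    return [a * m for a, m in zip(activations, mask)]

rng = random.Random(0)
hidden = [1.0] * 8  # hypothetical hidden-layer activations for one utterance

# Fix the mask once per utterance and reuse it for every CG iteration,
# so successive gradient/curvature evaluations see the same subnetwork.
mask = make_dropout_mask(len(hidden), 0.5, rng)
assert apply_mask(hidden, mask) == apply_mask(hidden, mask)

# The worse-performing variant in the table would instead call
# make_dropout_mask inside the CG loop, changing the subnetwork (and
# hence the loss surface) between supposedly conjugate steps.
```

Reusing the mask keeps the objective fixed within one CG solve, which is exactly the condition the text identifies for the search directions to remain conjugate.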
Improvements to Deep Convolutional Neural Networks for LVCSR
1309.1501
Table 11: HF Seq. Training WER Per CE Iteration
['CE Iter', '# Times Annealed', 'CE WER', 'HF WER']
[['4', '1', '20.8', '15.3'], ['6', '2', '19.8', '15.0'], ['8', '3', '19.4', '15.0'], ['13', '7', '18.8', '15.0']]
Finally, we explore whether we can reduce the number of CE iterations before moving to sequence training. A main advantage of sequence training is that it is more closely linked to the speech recognition objective function than cross-entropy. Using this fact, we explore how many iterations of CE are actually necessary before moving to HF training. Note that HF training is started, and lattices are dumped, from the CE weights at the stopping point. Notice that just by annealing twice, we can achieve the same WER after HF training as we do by letting the CE weights converge. This points to the fact that spending too much time in CE is unnecessary: once the weights are in a relatively good space, it is better to jump to HF sequence training, which is more closely matched to the speech objective function.
Improvements to Deep Convolutional Neural Networks for LVCSR
1309.1501
Table 12: WER on Broadcast News, 50 hours
['model', 'dev04f', 'rt04']
[['Hybrid DNN', '16.3', '15.8'], ['Old Hybrid CNN ', '15.8', '15.0'], ['Proposed Hybrid CNN', '[BOLD] 15.4', '[BOLD] 14.7'], ['DNN-based Features', '17.4', '16.6'], ['Old CNN-based Features ', '15.5', '15.2'], ['Proposed CNN-based Features', '15.3', '15.1']]
The proposed CNN hybrid system offers a 6-7% relative improvement over the DNN hybrid, and a 2-3% relative improvement over the old CNN hybrid system. While the proposed CNN-based feature system offers only a modest 1% improvement over the old CNN-based feature system, this slight improvement with the feature-based system is not surprising at all. We have observed huge relative improvements in WER (10-12%) on a hybrid sequence-trained DNN with 512 output targets, compared to a hybrid CE-trained DNN. Feature-based systems use the neural network to learn a feature transformation, and seem to saturate in performance even when the hybrid system used to extract the features improves. Thus, as the table shows, there is more potential to improve a hybrid system as opposed to a feature-based system.
Balancing Training for Multilingual Neural Machine Translation
2004.06748
Table 4: Mean and variance of the average BLEU score for the Diverse group. The models trained with MultiDDS-S perform better and have less variance.
['[BOLD] Method', '[BOLD] M2O [BOLD] Mean', '[BOLD] M2O [BOLD] Var.', '[BOLD] O2M [BOLD] Mean', '[BOLD] O2M [BOLD] Var.']
[['MultiDDS', '26.85', '0.04', '18.20', '0.05'], ['MultiDDS-S', '26.94', '0.02', '18.24', '0.02']]
MultiDDS-S also results in smaller variance in the final model performance. We run MultiDDS and MultiDDS-S with 4 different random seeds, and record the mean and variance of the average BLEU score.
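The reported statistics are straightforward to reproduce from per-seed scores; the sketch below uses hypothetical per-seed BLEU values (the actual per-seed results are not reported) chosen so the population mean and variance come out near the reported 26.85 and 0.04. Note the paper does not state whether the population or sample variance estimator was used.

```python
from statistics import mean, pvariance

# Hypothetical average-BLEU scores from 4 random seeds (illustrative
# values only; chosen to land near the reported mean/variance).
seed_scores = [26.6, 27.1, 26.7, 27.0]

m = mean(seed_scores)        # ~26.85
v = pvariance(seed_scores)   # ~0.04 (population variance)
```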
Balancing Training for Multilingual Neural Machine Translation
2004.06748
Table 1: Average BLEU for the baselines and our methods. Bold indicates the highest value.
['[EMPTY]', '[BOLD] Method', '[BOLD] M2O [BOLD] Related', '[BOLD] M2O [BOLD] Diverse', '[BOLD] O2M [BOLD] Related', '[BOLD] O2M [BOLD] Diverse']
[['Baseline', 'Uni. ( [ITALIC] τ=∞)', '22.63', '24.81', '15.54', '16.86'], ['Baseline', 'Temp. ( [ITALIC] τ=5)', '24.00', '26.01', '16.61', '17.94'], ['Baseline', 'Prop. ( [ITALIC] τ=1)', '24.88', '26.68', '15.49', '16.79'], ['Ours', 'MultiDDS', '25.26', '26.65', '17.17', '[BOLD] 18.40'], ['Ours', 'MultiDDS-S', '[BOLD] 25.52', '[BOLD] 27.00', '[BOLD] 17.32', '18.24']]
First, comparing the baselines, we can see that there is no consistently strong strategy for setting the sampling ratio, with proportional sampling being best in the M2O setting, but worst in the O2M setting. Next, we can see that MultiDDS outperforms the best baseline in three of the four settings and is comparable to proportional sampling in the last M2O-Diverse setting. With the stabilized reward, MultiDDS-S consistently delivers better overall performance than the best baseline, and outperforms MultiDDS in three settings. From these results, we can conclude that MultiDDS-S provides a stable strategy to train multilingual systems over a variety of settings.
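The baselines differ only in the temperature τ used to map dataset proportions to sampling probabilities, p_i ∝ q_i^(1/τ). A minimal sketch (with hypothetical dataset sizes) shows how τ=1 recovers proportional sampling while a large τ approaches uniform sampling.

```python
def temperature_sampling(sizes, tau):
    """p_i proportional to q_i^(1/tau): tau=1 is proportional sampling,
    large tau approaches uniform."""
    total = sum(sizes)
    q = [s / total for s in sizes]
    w = [qi ** (1.0 / tau) for qi in q]
    z = sum(w)
    return [wi / z for wi in w]

# Hypothetical per-language dataset sizes (illustrative only).
sizes = [800_000, 200_000, 50_000, 10_000]

prop = temperature_sampling(sizes, tau=1)        # proportional baseline
temp = temperature_sampling(sizes, tau=5)        # smoothed (tau=5) baseline
near_uniform = temperature_sampling(sizes, tau=1e9)
```

Raising τ boosts the sampling probability of the smallest language, which is exactly the trade-off the temperature baseline tunes.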
Balancing Training for Multilingual Neural Machine Translation
2004.06748
Table 3: Average BLEU of the best baseline and three MultiDDS-S settings for the Diverse group. MultiDDS-S always outperform the baseline.
['[BOLD] Setting', '[BOLD] Baseline', '[BOLD] MultiDDS-S [BOLD] Regular', '[BOLD] MultiDDS-S [BOLD] Low', '[BOLD] MultiDDS-S [BOLD] High']
[['M2O', '26.68', '27.00', '26.97', '27.08'], ['O2M', '17.94', '18.24', '17.95', '18.55']]
We performed experiments with these aggregation methods on the Diverse group, mainly because there is more performance trade-off among these languages. The languages are ordered on the x-axis from left to right in decreasing perplexity. Low generally performs better on the low-performing languages on the left, while High generally achieves the best performance on the high-performing languages on the right, with results most consistent in the O2M setting. This indicates that MultiDDS is able to prioritize different predefined objectives.
Inducing Multilingual Text Analysis Tools Using Bidirectional Recurrent Neural Networks
1609.09382
Table 2: Super Sense Tagging (SST) accuracy for Simple Projection, RNN and their combination.
['[BOLD] Model Baseline', '[BOLD] Model', '[BOLD] Italian [BOLD] MSC-IT-1', '[BOLD] Italian [BOLD] MSC-IT-2', '[BOLD] French [BOLD] MSC-FR-1', '[BOLD] French [BOLD] MSC-FR-2']
[['Baseline', '[EMPTY]', '[BOLD] trans man.', '[BOLD] trans. auto', '[BOLD] trans. auto', '[BOLD] trans auto.'], ['Baseline', 'Simple Projection', '61.3', '45.6', '42.6', '44.5'], ['SST Based RNN', 'SRNN', '59.4', '46.2', '46.2', '47.0'], ['SST Based RNN', 'BRNN', '59.7', '46.2', '46.0', '47.2'], ['SST Based RNN', 'SRNN-POS-In', '61.0', '47.0', '46.5', '47.3'], ['SST Based RNN', 'SRNN-POS-H1', '59.8', '46.5', '46.8', '47.4'], ['SST Based RNN', 'SRNN-POS-H2', '63.1', '48.7', '47.7', '49.8'], ['SST Based RNN', 'BRNN-POS-In', '61.2', '47.0', '46.4', '47.3'], ['SST Based RNN', 'BRNN-POS-H1', '60.1', '46.5', '46.8', '47.5'], ['SST Based RNN', 'BRNN-POS-H2', '63.2', '48.8', '47.7', '50'], ['SST Based RNN', 'BRNN-POS-H2 - OOV', '64.6', '49.5', '48.4', '50.7'], ['Combination', 'Projection + SRNN', '62.0', '46.7', '46.5', '47.4'], ['Combination', 'Projection + BRNN', '62.2', '46.8', '46.4', '47.5'], ['Combination', 'Projection + SRNN-POS-In', '62.9', '47.4', '46.9', '47.7'], ['Combination', 'Projection + SRNN-POS-H1', '62.5', '47.0', '47.1', '48.0'], ['Combination', 'Projection + SRNN-POS-H2', '63.5', '49.2', '48.0', '50.1'], ['Combination', 'Projection + BRNN-POS-In', '62.9', '47.5', '46.9', '47.8'], ['Combination', 'Projection + BRNN-POS-H1', '62.7', '47.0', '47.0', '48.0'], ['Combination', 'Projection + BRNN-POS-H2', '63.6', '49.3', '48.0', '50.3'], ['Combination', 'Projection + BRNN-POS-H2 - OOV', '[BOLD] 64.7', '49.8', '48.6', '51.0'], ['S-E', 'MFS Semeval 2013', '60.7', '60.7', '[BOLD] 52.4', '[BOLD] 52.4'], ['S-E', 'GETALP ', '40.2', '40.2', '34.6', '34.6']]
SRNN-POS-X and BRNN-POS-X refer to our RNN variants: In means input layer, H1 means first hidden layer, and H2 means second hidden layer. We achieve the best performance on Italian using the clean MSC-IT-1 corpus, while the noisy training corpus degrades SST performance. The best results are obtained with the combination of simple projection and RNN, which confirms (as for POS tagging) that both approaches are complementary.
Inducing Multilingual Text Analysis Tools Using Bidirectional Recurrent Neural Networks
1609.09382
Table 1: Token-level POS tagging accuracy for Simple Projection, SRNN using MultiVec bilingual word embeddings as input, RNN444For RNN models, only one (same) system is used to tag German, Greek and Spanish, Projection+RNN and methods of Das & Petrov (2011), Duong et al (2013) and Gouws & Søgaard (2015).
['[25mm] [BOLD] ModelLang.', '[BOLD] French All words', '[BOLD] French OOV', '[BOLD] German All words', '[BOLD] German OOV', '[BOLD] Greek All words', '[BOLD] Greek OOV', '[BOLD] Spanish All words', '[BOLD] Spanish OOV']
[['Simple Projection', '80.3', '77.1', '78.9', '73.0', '77.5', '72.8', '80.0', '79.7'], ['SRNN MultiVec', '75.0', '65.4', '70.3', '68.8', '71.1', '65.4', '73.4', '62.4'], ['SRNN', '78.5', '70.0', '76.1', '76.4', '75.7', '70.7', '78.8', '72.6'], ['BRNN', '80.6', '70.9', '77.5', '76.6', '77.2', '71.0', '80.5', '73.1'], ['BRNN - OOV', '81.4', '77.8', '77.6', '77.8', '77.9', '75.3', '80.6', '74.7'], ['Projection + SRNN', '84.5', '78.8', '81.5', '77.0', '78.3', '74.6', '83.6', '81.2'], ['Projection + BRNN', '85.2', '79.0', '81.9', '77.1', '79.2', '75.0', '84.4', '81.7'], ['Projection + BRNN - OOV', '[BOLD] 85.6', '[BOLD] 80.4', '82.1', '[BOLD] 78.7', '79.9', '[BOLD] 78.5', '[BOLD] 84.4', '[BOLD] 81.9'], ['(Das, 2011)', '—', '—', '82.8', '—', '[BOLD] 82.5', '—', '84.2', '—'], ['(Duong, 2013)', '—', '—', '[BOLD] 85.4', '—', '80.4', '—', '83.3', '—'], ['(Gouws, 2015a)', '—', '—', '84.8', '—', '—', '—', '82.6', '—']]
We note that the POS tagger based on the bidirectional RNN (BRNN) performs better than the simple RNN (SRNN), which means that both past and future contexts help select the correct tag. After replacing OOVs with the closest in-vocabulary words using CBOW, the tagging accuracy increases significantly.
Piecewise Latent Variables for Neural Variational Text Processing
1612.00377
Table 1: Test perplexities on three document modeling tasks: 20-NewGroup (20-NG), Reuters corpus (RCV1) and CADE12 (CADE). Perplexities were calculated using 10 samples to estimate the variational lower-bound. The H-NVDM models perform best across all three datasets.
['[BOLD] Model', '[BOLD] 20-NG', '[BOLD] RCV1', '[BOLD] CADE']
[['[ITALIC] LDA', '1058', '−−', '−−'], ['[ITALIC] docNADE', '896', '−−', '−−'], ['[ITALIC] NVDM', '836', '−−', '−−'], ['[ITALIC] G-NVDM', '651', '905', '339'], ['[ITALIC] H-NVDM-3', '607', '865', '[BOLD] 258'], ['[ITALIC] H-NVDM-5', '[BOLD] 566', '[BOLD] 833', '294']]
Next, we observe that integrating our proposed piecewise variables yields even better results in our document modeling experiments, substantially improving over the baselines. More importantly, on the 20-NG and Reuters datasets, increasing the number of pieces from 3 to 5 further reduces perplexity. Thus, we have achieved a new state-of-the-art perplexity on the 20 News-Groups task and, to the best of our knowledge, better perplexities on the CADE12 and RCV1 tasks compared to a state-of-the-art model like the G-NVDM. We also evaluated the converged models using a non-parametric inference procedure, where a separate approximate posterior is learned for each test example in order to tighten the variational lower bound. H-NVDM also performed best in this evaluation across all three datasets, which confirms that the performance improvement is due to the piecewise components. See appendix for details.
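A hedged sketch of how such perplexities are typically computed from a sampled variational lower bound (the exact normalization used in the paper may differ; `doc_bound` below is a random stand-in for a real multi-sample ELBO estimate):

```python
import math
import random

random.seed(0)

def doc_bound(n_samples=10):
    """Stand-in for a per-document variational lower bound on the
    log-likelihood (in nats), averaged over n_samples posterior samples."""
    return sum(random.uniform(-700, -600) for _ in range(n_samples)) / n_samples

# Hypothetical corpus: (document length in words, estimated bound).
docs = [(100, doc_bound()), (120, doc_bound()), (90, doc_bound())]

# NVDM-style perplexity: exponentiate the average per-word negative bound.
perplexity = math.exp(-sum(b / n for n, b in docs) / len(docs))
```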
Piecewise Latent Variables for Neural Variational Text Processing
1612.00377
Table 3: Ubuntu evaluation using F1 metrics w.r.t. activities and entities. G-VHRED, P-VHRED and H-VHRED all outperform the baseline HRED. G-VHRED performs best w.r.t. activities and H-VHRED performs best w.r.t. entities.
['[BOLD] Model', '[BOLD] Activity', '[BOLD] Entity']
[['[ITALIC] HRED', '4.77', '2.43'], ['[ITALIC] G-VHRED', '[BOLD] 9.24', '2.49'], ['[ITALIC] P-VHRED', '5', '2.49'], ['[ITALIC] H-VHRED', '8.41', '[BOLD] 3.72']]
All latent variable models outperform HRED w.r.t. both activities and entities. This strongly suggests that the high-level concepts represented by the latent variables help generate meaningful, goal-directed responses. Furthermore, each type of latent variable appears to help with a different aspect of the generation task. G-VHRED performs best w.r.t. activities (e.g. download, install and so on), which occur frequently in the dataset. This suggests that the Gaussian latent variables learn useful latent representations for frequent actions. On the other hand, H-VHRED performs best w.r.t. entities (e.g. Firefox, GNOME), which are often much rarer and mutually exclusive in the dataset. This suggests that the combination of Gaussian and piecewise latent variables helps learn useful representations for entities, which could not be learned by Gaussian latent variables alone. We further conducted a qualitative analysis of the model responses, which supports these conclusions.
Piecewise Latent Variables for Neural Variational Text Processing
1612.00377
Table 6: Approximate posterior word encodings (20-NG). For P-KL, we bold every case where piecewise variables showed greater word sensitivity than Gaussian variables w/in the same hybrid model.
['[BOLD] Word', '[BOLD] G-NVDM', '[BOLD] H-NVDM-5', '[EMPTY]']
[['[BOLD] Time-related', '[BOLD] G-KL', '[BOLD] G-KL', '[BOLD] P-KL'], ['months', '23', '33', '[BOLD] 40'], ['day', '28', '32', '[BOLD] 35'], ['time', '[BOLD] 55', '22', '[BOLD] 40'], ['century', '[BOLD] 28', '13', '[BOLD] 19'], ['past', '[BOLD] 30', '18', '[BOLD] 28'], ['days', '[BOLD] 37', '14', '[BOLD] 19'], ['ahead', '[BOLD] 33', '20', '[BOLD] 33'], ['years', '[BOLD] 44', '16', '[BOLD] 38'], ['today', '46', '27', '[BOLD] 71'], ['back', '31', '30', '[BOLD] 47'], ['future', '[BOLD] 20', '15', '[BOLD] 20'], ['order', '[BOLD] 42', '14', '[BOLD] 26'], ['minute', '15', '34', '[BOLD] 40'], ['began', '[BOLD] 16', '5', '[BOLD] 13'], ['night', '[BOLD] 49', '12', '[BOLD] 18'], ['hour', '[BOLD] 18', '[BOLD] 17', '16'], ['early', '42', '42', '[BOLD] 69'], ['yesterday', '25', '26', '[BOLD] 36'], ['year', '[BOLD] 60', '17', '[BOLD] 21'], ['week', '28', '54', '[BOLD] 58'], ['hours', '20', '26', '[BOLD] 31'], ['minutes', '[BOLD] 40', '34', '[BOLD] 38'], ['months', '23', '33', '[BOLD] 40'], ['history', '[BOLD] 32', '18', '[BOLD] 28'], ['late', '41', '[BOLD] 45', '31'], ['moment', '[BOLD] 23', '[BOLD] 17', '16'], ['season', '[BOLD] 45', '29', '[BOLD] 37'], ['summer', '29', '28', '[BOLD] 31'], ['start', '30', '14', '[BOLD] 38'], ['continue', '21', '32', '[BOLD] 34'], ['happened', '22', '27', '[BOLD] 35']]
The Gaussian variables were originally sensitive to some of the words in the table. However, in the hybrid model, nearly all of the temporal words that the Gaussian variables were once more sensitive to now more strongly affect the piecewise variables, which themselves also capture all of the words that were originally missed. This shift in responsibility indicates that the piecewise constant variables are better equipped to handle certain latent factors. This effect appears to be particularly strong in the case of certain nationality-based adjectives (e.g., “american”, “israeli”, etc.). While the G-NVDM could model multi-modality in the data to some degree, this work would be primarily done in the model’s decoder. In the H-NVDM, the piecewise variables provide an explicit mechanism for capturing modes in the unknown target distribution, so it makes sense that the model would learn to use the piecewise variables instead, thus freeing up the Gaussian variables to capture other aspects of the data, as we found was the case with names (e.g., “jesus”, “kent”, etc.).
Piecewise Latent Variables for Neural Variational Text Processing
1612.00377
Table 6: Approximate posterior word encodings (20-NG). For P-KL, we bold every case where piecewise variables showed greater word sensitivity than Gaussian variables w/in the same hybrid model.
['[BOLD] Word', '[BOLD] G-NVDM', '[BOLD] H-NVDM-5', '[EMPTY]']
[['[BOLD] Names', '[BOLD] G-KL', '[BOLD] G-KL', '[BOLD] P-KL'], ['henry', '33', '[BOLD] 47', '39'], ['tim', '[BOLD] 32', '27', '11'], ['mary', '26', '[BOLD] 51', '30'], ['james', '40', '[BOLD] 72', '30'], ['jesus', '28', '[BOLD] 87', '39'], ['george', '26', '[BOLD] 56', '29'], ['keith', '65', '[BOLD] 94', '61'], ['kent', '51', '[BOLD] 56', '15'], ['chris', '38', '[BOLD] 55', '28'], ['thomas', '19', '[BOLD] 35', '19'], ['hitler', '10', '[BOLD] 14', '9'], ['paul', '25', '[BOLD] 52', '18'], ['mike', '38', '[BOLD] 76', '40'], ['bush', '[BOLD] 21', '20', '14'], ['[BOLD] Adjectives', '[BOLD] G-KL', '[BOLD] G-KL', '[BOLD] P-KL'], ['american', '[BOLD] 50', '12', '[BOLD] 40'], ['german', '[BOLD] 25', '21', '[BOLD] 22'], ['european', '20', '17', '[BOLD] 27'], ['muslim', '19', '7', '[BOLD] 23'], ['french', '11', '[BOLD] 17', '[BOLD] 17'], ['canadian', '[BOLD] 18', '10', '[BOLD] 16'], ['japanese', '16', '9', '[BOLD] 24'], ['jewish', '[BOLD] 56', '37', '[BOLD] 54'], ['english', '19', '16', '[BOLD] 26'], ['islamic', '14', '18', '[BOLD] 28'], ['israeli', '[BOLD] 24', '14', '[BOLD] 18'], ['british', '[BOLD] 35', '15', '[BOLD] 17'], ['russian', '14', '19', '[BOLD] 20']]
The Gaussian variables were originally sensitive to some of the words in the table. However, in the hybrid model, nearly all of the temporal words that the Gaussian variables were once more sensitive to now more strongly affect the piecewise variables, which themselves also capture all of the words that were originally missed. This shift in responsibility indicates that the piecewise constant variables are better equipped to handle certain latent factors. This effect appears to be particularly strong in the case of certain nationality-based adjectives (e.g., “american”, “israeli”, etc.). While the G-NVDM could model multi-modality in the data to some degree, this work would be primarily done in the model’s decoder. In the H-NVDM, the piecewise variables provide an explicit mechanism for capturing modes in the unknown target distribution, so it makes sense that the model would learn to use the piecewise variables instead, thus freeing up the Gaussian variables to capture other aspects of the data, as we found was the case with names (e.g., “jesus”, “kent”, etc.).
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
1909.04761
Table 6: Bootstrapping results on MLDoc with and without pretraining, trained on 1k/10k LASER labels.
['[EMPTY]', 'de', 'es', 'fr', 'it', 'ja', 'ru', 'zh']
[['LASER, code', '87.65', '75.48', '84.00', '71.18', '64.58', '66.58', '76.65'], ['Random init. (1k)', '77.80', '70.50', '75.65', '68.52', '68.50', '61.37', '79.19'], ['Random init. (10k)', '90.53', '69.75', '87.40', '72.72', '67.55', '63.67', '81.44'], ['MultiFiT, pseudo (1k)', '[BOLD] 91.34', '[BOLD] 78.92', '[BOLD] 89.45', '[BOLD] 76.00', '[BOLD] 69.57', '[BOLD] 68.19', '[BOLD] 82.45']]
Robustness to noise: We suspect that MultiFiT is able to outperform its teacher because the information from pretraining makes it robust to label noise. To test this hypothesis, we train MultiFiT and a randomly initialized model with the same architecture on 1k and 10k examples of the Spanish MLDoc. The pretrained MultiFiT is able to partially ignore the noise, even with up to 65% noisy training examples. Without pretraining, the model does not exceed the theoretical baseline (the percentage of correct examples). Pretraining enables MultiFiT to achieve much better performance compared to a randomly initialized model. Both results together suggest a) that pretraining increases robustness to noise and b) that information from monolingual and cross-lingual language models is complementary.
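The "theoretical baseline" mentioned above can be illustrated with a small stdlib-only simulation (hypothetical numbers): after corrupting a fraction of labels uniformly at random, a model that simply fits the noisy labels can be correct at most on the fraction of labels that survived corruption.

```python
import random

random.seed(0)

def corrupt_labels(labels, noise_frac, n_classes=4):
    """Replace a fraction of labels with random (possibly wrong) classes,
    mimicking noisy teacher pseudo-labels."""
    out = list(labels)
    for i in random.sample(range(len(out)), int(noise_frac * len(out))):
        out[i] = random.randrange(n_classes)
    return out

labels = [random.randrange(4) for _ in range(1000)]
noisy = corrupt_labels(labels, noise_frac=0.65)

# Theoretical baseline: accuracy of a model that memorizes the noisy
# labels equals the fraction of labels that are still correct.
baseline = sum(int(a == b) for a, b in zip(labels, noisy)) / len(labels)
```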
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
1909.04761
Table 2: Comparison of zero-shot and supervised methods on MLDoc.
['[EMPTY]', 'de', 'es', 'fr', 'it', 'ja', 'ru', 'zh']
[['[ITALIC] Zero-shot (1,000 source language examples)', '[ITALIC] Zero-shot (1,000 source language examples)', '[ITALIC] Zero-shot (1,000 source language examples)', '[ITALIC] Zero-shot (1,000 source language examples)', '[ITALIC] Zero-shot (1,000 source language examples)', '[ITALIC] Zero-shot (1,000 source language examples)', '[ITALIC] Zero-shot (1,000 source language examples)', '[ITALIC] Zero-shot (1,000 source language examples)'], ['MultiCCA', '81.20', '72.50', '72.38', '69.38', '67.63', '60.80', '74.73'], ['LASER, paper', '86.25', '[BOLD] 79.30', '78.30', '70.20', '60.95', '67.25', '70.98'], ['LASER, code', '87.65', '75.48', '84.00', '71.18', '64.58', '66.58', '76.65'], ['MultiBERT', '82.35', '74.98', '83.03', '68.27', '64.58', '[BOLD] 71.58', '66.17'], ['MultiFiT, pseudo', '[BOLD] 91.62', '79.10', '[BOLD] 89.42', '[BOLD] 76.02', '[BOLD] 69.57', '67.83', '[BOLD] 82.48'], ['[ITALIC] Supervised (100 target language examples)', '[ITALIC] Supervised (100 target language examples)', '[ITALIC] Supervised (100 target language examples)', '[ITALIC] Supervised (100 target language examples)', '[ITALIC] Supervised (100 target language examples)', '[ITALIC] Supervised (100 target language examples)', '[ITALIC] Supervised (100 target language examples)', '[ITALIC] Supervised (100 target language examples)'], ['MultiFiT', '90.90', '89.00', '85.03', '80.12', '80.55', '73.55', '88.02'], ['[ITALIC] Supervised (1,000 target language examples)', '[ITALIC] Supervised (1,000 target language examples)', '[ITALIC] Supervised (1,000 target language examples)', '[ITALIC] Supervised (1,000 target language examples)', '[ITALIC] Supervised (1,000 target language examples)', '[ITALIC] Supervised (1,000 target language examples)', '[ITALIC] Supervised (1,000 target language examples)', '[ITALIC] Supervised (1,000 target language examples)'], ['MultiCCA', '93.70', '94.45', '92.05', '85.55', '85.35', '85.65', '87.30'], ['LASER, paper', '92.70', '88.75', '90.80', '85.93', '85.15', '84.65', '88.98'], ['MultiBERT', '94.00', '95.15', '93.20', '85.82', '87.48', '86.85', '90.72'], ['Monolingual BERT', '94.93', '-', '-', '-', '-', '-', '92.17'], ['MultiFiT, no wiki', '95.23', '95.07', '94.65', '89.30', '88.63', '87.52', '90.03'], ['MultiFiT', '[BOLD] 95.90', '[BOLD] 96.07', '[BOLD] 94.75', '[BOLD] 90.25', '[BOLD] 90.03', '[BOLD] 87.65', '[BOLD] 92.52']]
In the zero-shot setting, MultiBERT underperforms the comparison methods as the shared embedding space between many languages is overly restrictive. Our monolingual LMs outperform their cross-lingual teacher LASER in almost every setting. When fine-tuned with only 100 target language examples, they are able to outperform all zero-shot approaches except MultiFiT on de and fr. This calls into question the need for zero-shot approaches, as fine-tuning with even a small number of target examples is able to yield superior performance. When fine-tuning with 1,000 target examples, MultiFiT—even without pretraining—outperforms all comparison methods, including monolingual BERT.
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
1909.04761
Table 3: Comparison of zero-shot, translation-based and supervised methods (with 2k training examples) on all domains of CLS. MT-BOW and CL-SCL results are from Zhou et al. (2016).
['[EMPTY]', '[EMPTY]', 'de Books', 'de DVD', 'de Music', 'fr Books', 'fr DVD', 'fr Music', 'ja Books', 'ja DVD', 'ja Music']
[['[ITALIC] Zero-shot', 'LASER, code', '84.15', '78.00', '79.15', '83.90', '83.40', '80.75', '74.99', '74.55', '76.30'], ['[ITALIC] Zero-shot', 'MultiBERT', '72.15', '70.05', '73.80', '75.50', '74.70', '76.05', '65.41', '64.90', '70.33'], ['[ITALIC] Zero-shot', 'MultiFiT, pseudo', '[BOLD] 89.60', '[BOLD] 81.80', '[BOLD] 84.40', '[BOLD] 87.84', '[BOLD] 83.50', '[BOLD] 85.60', '[BOLD] 80.45', '[BOLD] 77.65', '[BOLD] 81.50'], ['[ITALIC] Translat.', 'MT-BOW', '79.68', '77.92', '77.22', '80.76', '78.83', '75.78', '70.22', '71.30', '72.02'], ['[ITALIC] Translat.', 'CL-SCL', '79.50', '76.92', '77.79', '78.49', '78.80', '77.92', '73.09', '71.07', '75.11'], ['[ITALIC] Translat.', 'BiDRL', '84.14', '84.05', '84.67', '84.39', '83.60', '82.52', '73.15', '76.78', '78.77'], ['[ITALIC] Super.', 'MultiBERT', '86.05', '84.90', '82.00', '86.15', '86.90', '86.65', '80.87', '82.83', '79.95'], ['[ITALIC] Super.', 'MultiFiT', '[BOLD] 93.19', '[BOLD] 90.54', '[BOLD] 93.00', '[BOLD] 91.25', '[BOLD] 89.55', '[BOLD] 93.40', '[BOLD] 86.29', '[BOLD] 85.75', '[BOLD] 86.59']]
On CLS, MultiFiT is able to outperform its zero-shot teacher LASER across all domains. Importantly, the bootstrapped monolingual model also outperforms more sophisticated models trained on translations across almost all domains. In the supervised setting, MultiFiT similarly outperforms multilingual BERT. For both datasets, our methods, which have been pretrained on 100 million tokens, outperform both multilingual BERT and LASER, models that have been trained with orders of magnitude more data and compute.
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
1909.04761
Table 5: Comparison of MultiFiT results with different pretraining corpora and ULMFiT, fine-tuned with 1k labels on MLDoc.
['[EMPTY]', 'de', 'es', 'zh']
[['ULMFiT', '94.19', '95.23', '66.82'], ['MultiFiT, no wiki', '95.23', '95.07', '90.03'], ['MultiFiT, small Wiki', '95.37', '95.30', '89.80'], ['MultiFiT', '[BOLD] 95.90', '[BOLD] 96.07', '[BOLD] 92.52']]
Pretraining on more data generally helps. MultiFiT outperforms ULMFiT significantly; the performance improvement is particularly pronounced in Chinese, where ULMFiT's word-based tokenization underperforms.
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
1909.04761
Table 7: Comparison of different tokenization strategies for different languages on MLDoc.
['[EMPTY]', 'de', 'es', 'fr', 'it', 'ru']
[['Word-based', '95.28', '95.97', '94.72', '89.97', '[BOLD] 88.02'], ['Subword', '[BOLD] 96.10', '[BOLD] 96.07', '[BOLD] 94.75', '[BOLD] 94.75', '87.65']]
Tokenization: Subword tokenization has been found useful for language modeling with morphologically rich languages (Czapla et al.). We train models with the best-performing vocabulary sizes for subword tokenization (15k) and regular word-based tokenization (60k) with the Moses tokenizer (Koehn et al.). Subword tokenization outperforms word-based tokenization on most languages, while being faster to train due to the smaller vocabulary size.
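As a rough illustration of why subword vocabularies can stay small, the sketch below implements a minimal byte-pair-encoding merge loop on a toy word-frequency table; this is a simplification for illustration, not necessarily the tokenizer used in the paper.

```python
from collections import Counter

def bpe_merges(words, n_merges):
    """Learn BPE merges from a {word: frequency} dict. Minimal sketch of
    the subword idea; real tokenizers operate on raw text with many
    refinements (special tokens, byte fallback, etc.)."""
    vocab = {tuple(w) + ("</w>",): f for w, f in words.items()}
    merges = []
    for _ in range(n_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the winning merge everywhere it occurs.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab

merges, vocab = bpe_merges({"lower": 5, "lowest": 2, "newer": 6}, 4)
```

The most frequent pair here is ("w", "e"), shared by all three words, which is why frequent substrings become single vocabulary items.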
SpeedRead: A Fast Named Entity Recognition Pipeline
1301.2857
Table 7: Confusion Matrix of the POS tags assigned by SpeedRead over the words of sections 22-24 of PTB. O represents all the other not mentioned tags.
['[0pt][l]RefTest', 'DT', 'IN', 'JJ', 'NN', 'NNP', 'NNPS', 'NNS', 'RB', 'VBD', 'VBG', 'O']
[['DT', '11094', '62', '3', '7', '3', '0', '0', '1', '0', '0', '13'], ['IN', '15', '13329', '9', '1', '0', '0', '0', '88', '0', '0', '50'], ['JJ', '1', '11', '7461', '[BOLD] 257', '130', '2', '10', '65', '38', '81', '159'], ['NN', '1', '5', '[BOLD] 288', '17196', '111', '0', '18', '11', '2', '109', '93'], ['NNP', '8', '13', '118', '109', '12585', '264', '31', '8', '0', '2', '39'], ['NNPS', '0', '0', '0', '0', '70', '81', '16', '0', '0', '0', '0'], ['NNS', '0', '0', '1', '23', '20', '42', '7922', '0', '0', '0', '53'], ['RB', '17', '[BOLD] 281', '103', '23', '8', '0', '0', '3892', '0', '1', '80'], ['VBD', '0', '0', '8', '5', '4', '0', '0', '0', '4311', '1', '232'], ['VBG', '0', '0', '25', '104', '5', '0', '0', '0', '0', '1799', '0'], ['O', '26', '163', '154', '172', '47', '4', '107', '67', '174', '2', '45707']]
Proper nouns are the second source of errors, as most capitalized words will be mistakenly tagged as proper nouns even when they are actually adjectives or nouns. Such errors are the result of the weak logic implemented in SpeedRead's backoff tagger, where regular expressions are applied in sequence and the first match is returned. Other types of errors involve adverbs (RB) and prepositions (IN). These errors are mainly due to the ambiguity of function words, which require a deeper understanding of the discourse, semantic, and syntactic nature of the text. Taking the contexts around words into consideration improves tagging accuracy. However, trigrams are still too small to provide sufficient context for resolving all the ambiguities.
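The failure mode described above can be sketched with a hypothetical regex backoff tagger (illustrative patterns, not SpeedRead's actual rules): patterns are tried in order and the first match wins, so any capitalized word is tagged NNP even when it is really an adjective or noun.

```python
import re

# Hypothetical first-match regex backoff rules.
PATTERNS = [
    (re.compile(r"^[A-Z][a-z]+$"), "NNP"),   # capitalized -> proper noun
    (re.compile(r"^\d+(\.\d+)?$"), "CD"),    # numbers
    (re.compile(r".*ing$"), "VBG"),          # gerunds
    (re.compile(r".*ly$"), "RB"),            # -ly adverbs
    (re.compile(r".*s$"), "NNS"),            # plural nouns
]
DEFAULT = "NN"

def backoff_tag(word):
    for pattern, tag in PATTERNS:
        if pattern.match(word):
            return tag
    return DEFAULT

# "American" (an adjective here) is mis-tagged NNP, illustrating the
# capitalization error mode discussed in the text.
tags = [backoff_tag(w) for w in ["American", "running", "quickly", "dog"]]
```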
SpeedRead: A Fast Named Entity Recognition Pipeline
1301.2857
Table 8: F1 scores of the chunking phase using different POS tags. F1 score is calculated over tokens and not entities.
['[100pt][l]PhaseDataset', 'Train', 'Dev', 'Test']
[['SR+SR POS', '94.24', '94.49', '[BOLD] 93.12'], ['SR+Stanford POS L3W', '92.98', '93.37', '92.05'], ['SR+CONLL POS', '90.88', '90.82', '89.43'], ['SR+SENNA POS', '94.73', '95.07', '[BOLD] 93.80']]
This score is calculated over the chunking tags of the words. I and B tags are considered as one class, while O is left as-is. Note that using better POS taggers does not necessarily produce better results: the quality of SpeedRead's POS tagging is sufficient for the chunking stage. SENNA and SpeedRead POS taggers work better for the detection phase because they are more aggressive, assigning the NNP tag to any capitalized word. On the other hand, the Stanford tagger prefers to assign the tag of the lowercased form of the word if it is a common word.
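A minimal sketch of the token-level scoring described here, collapsing B and I into a single entity class while keeping O; this is an illustration of the stated metric, not the official conlleval script.

```python
def token_f1(gold, pred):
    """Token-level F1 with B/I chunk tags collapsed into one class 'E'
    and O kept as-is."""
    g = ["E" if t != "O" else "O" for t in gold]
    p = ["E" if t != "O" else "O" for t in pred]
    tp = sum(1 for a, b in zip(g, p) if a == b == "E")
    fp = sum(1 for a, b in zip(g, p) if a == "O" and b == "E")
    fn = sum(1 for a, b in zip(g, p) if a == "E" and b == "O")
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

# Toy sequences: one missed entity token, one spurious one.
f1 = token_f1(["B", "I", "O", "O", "B"], ["B", "O", "O", "B", "B"])
```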
SpeedRead: A Fast Named Entity Recognition Pipeline
1301.2857
Table 10: F1 scores calculated using the conlleval.pl script for NER taggers. The table shows that SpeedRead's F1 score is 10% below the state-of-the-art achieved by SENNA.
['[80pt][l]PhaseDataset', 'Training', 'Dev', 'Test']
[['SR+Gold Chunks', '90.80', '91.98', '87.87'], ['SpeedRead', '82.05', '83.35', '78.28'], ['Stanford', '99.28', '92.98', '89.03'], ['SENNA', '96.75', '97.24', '89.58']]
The first row shows that, given chunked input, the classification phase achieves scores close to the state-of-the-art classifiers. However, given the chunks generated by SpeedRead, the scores drop by around 9.5% in F1.
SpeedRead: A Fast Named Entity Recognition Pipeline
1301.2857
Table 12: Speed of different NER taggers. SpeedRead is 13.9 times faster than Stanford while using half the memory.
['[BOLD] NER Tagger', '[BOLD] Token/Sec', '[BOLD] Relative', '[BOLD] Memory']
[['[EMPTY]', '[EMPTY]', '[BOLD] Speed', 'MiB'], ['Stanford', '11,612', '1.00', '1900'], ['SENNA', '18,579', '2.13', '150'], ['SpeedRead', '153,194', '[BOLD] 13.9', '950']]
SENNA achieves accuracy close to Stanford's with twice the speed and less memory usage. SpeedRead takes another approach by focusing on speed: we are able to speed up the pipeline by a factor of 13.9. SpeedRead's memory footprint is half the memory consumed by the Stanford pipeline. Even though SpeedRead's accuracy is not close to the state-of-the-art, it still achieves an 18% increase over the CONLL 2003 baseline. Moreover, adapting the pipeline to new domains could easily be done by integrating other knowledge-base sources such as Freebase or Wikipedia. SENNA and SpeedRead are able to compute POS tags at the end of the NER phase without extra computation, while that is not true of the standalone Stanford NER application. Using the Stanford CoreNLP pipeline does not guarantee better execution time.
Deep Speech Inpainting of Time-frequency Masks
1910.09058
Table 3: Blind speech inpainting (section 3.2). STOI & PESQ were computed between the recovered and the actual speech segments from validation set and averaged. The best scores were denoted in bold. PC - informed (partial conv.), FC - blind (full conv.) inpainting.
['Intrusion', 'Size', 'Gaps STOI', 'Gaps PESQ', 'Noise STOI', 'Noise PESQ', 'PC - gaps STOI', 'PC - gaps PESQ', 'FC - gaps STOI', 'FC - gaps PESQ', 'FC - noise STOI', 'FC - noise PESQ', 'FC - additive STOI', 'FC - additive PESQ']
[['Time', '10%', '0.893', '2.561', '0.901', '2.802', '[BOLD] 0.938', '[BOLD] 3.240', '0.930', '3.191', '0.919', '3.066', '0.933', '3.216'], ['Time', '20%', '0.772', '1.872', '0.800', '2.260', '0.887', '2.809', '0.875', '2.725', '0.863', '2.677', '[BOLD] 0.906', '[BOLD] 2.971'], ['Time', '30%', '0.641', '1.476', '0.688', '1.919', '0.811', '2.450', '0.798', '2.384', '0.792', '2.374', '[BOLD] 0.882', '[BOLD] 2.788'], ['Time', '40%', '0.536', '1.154', '0.598', '1.665', '0.730', '2.179', '0.714', '2.086', '0.707', '2.072', '[BOLD] 0.854', '[BOLD] 2.617'], ['Time + Freq.', '10%', '0.869', '2.423', '0.873', '2.575', '[BOLD] 0.920', '[BOLD] 3.034', '0.911', '3.000', '0.896', '2.859', '0.912', '3.023'], ['Time + Freq.', '20%', '0.729', '1.790', '0.746', '2.010', '0.853', '2.566', '0.840', '2.490', '0.828', '2.411', '[BOLD] 0.885', '[BOLD] 2.759'], ['Time + Freq.', '30%', '0.598', '1.391', '0.629', '1.653', '0.772', '2.178', '0.757', '2.108', '0.743', '2.041', '[BOLD] 0.854', '[BOLD] 2.562'], ['Time + Freq.', '40%', '0.484', '1.053', '0.520', '1.329', '0.680', '1.845', '0.665', '1.772', '0.659', '1.787', '[BOLD] 0.828', '[BOLD] 2.413'], ['Random', '10%', '0.880', '2.842', '0.892', '3.063', '[BOLD] 0.944', '[BOLD] 3.496', '0.932', '3.442', '0.917', '3.272', '0.932', '3.435'], ['Random', '20%', '0.809', '2.233', '0.830', '2.543', '[BOLD] 0.918', '3.114', '0.904', '3.061', '0.887', '2.897', '0.910', '[BOLD] 3.117'], ['Random', '30%', '0.713', '1.690', '0.745', '2.085', '0.878', '2.735', '0.869', '2.701', '0.853', '2.596', '[BOLD] 0.891', '[BOLD] 2.893'], ['Random', '40%', '0.644', '1.355', '0.682', '1.802', '0.846', '2.479', '0.832', '2.412', '0.813', '2.307', '[BOLD] 0.868', '[BOLD] 2.664']]
All of the considered framework configurations successfully recovered missing or degraded parts of the input speech, resulting in improved STOI and PESQ scores. The framework for informed inpainting (PC-gaps) yielded better results than its blind counterpart when the masked parts of the input were set to either zeros or random noise (FC-gaps and FC-noise, respectively). Notably, when input speech was mixed with, rather than replaced by, high-amplitude noise, the framework operating in the blind setup (FC-additive) led to the best results for larger intrusions of all the considered shapes. These results suggest that the framework for blind speech inpainting takes advantage of the original information underlying the noisy intrusions, even when the transient SNR is very low (below -10 dB).
Experiments with Universal CEFR Classification
1804.06636
Table 1: Composition of MERLIN Corpus
['[BOLD] CEFR level', '[BOLD] DE', '[BOLD] IT', '[BOLD] CZ']
[['A1', '57', '29', '0'], ['A2', '306', '381', '188'], ['B1', '331', '393', '165'], ['B2', '293', '0', '81'], ['C1', '42', '0', '0'], ['Total', '1029', '803', '434']]
To test our hypotheses, we need corpora graded on the CEFR scale for multiple languages. One such multi-lingual corpus is the freely available MERLIN corpus Boyd et al. It consists of 2286 manually graded texts written by second-language learners of German (DE), Italian (IT), and Czech (CZ) as part of written examinations at authorized test institutions. The aim of these examinations is to assess learners on the CEFR scale, which consists of six categories – A1, A2, B1, B2, C1, C2 – indicating increasing language proficiency. The writing tasks primarily consisted of formal/informal letters/emails and essays. The MERLIN corpus carries a multi-dimensional annotation of language proficiency covering aspects such as grammatical accuracy, vocabulary range, and socio-linguistic awareness; we used the “Overall CEFR rating” as the label for the experiments in this paper. Other information provided about the authors included age, gender, and native language, along with task information such as topic and the CEFR level of the test itself. We did not use this information in the experiments reported here. Further, we removed all Language-CEFR Category combinations with fewer than 10 examples in the corpus (German had 5 examples for level C2 and Italian had 2 examples for B2, which were removed from the data). We also removed all unrated texts from the original corpus.
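The corpus-cleaning step above (dropping language–CEFR label combinations with fewer than 10 examples) can be sketched in a few lines of Python; the helper name and the toy records are hypothetical, not the authors' code:

```python
from collections import Counter

def filter_rare_combinations(records, min_count=10):
    """Drop (language, CEFR label) combinations that occur fewer than
    min_count times, mirroring the cleaning step described above.
    Records are (language, label, text) tuples."""
    counts = Counter((lang, label) for lang, label, _ in records)
    return [r for r in records if counts[(r[0], r[1])] >= min_count]

# Toy data: "DE"/"B1" and "DE"/"C2" occur once each, so both are dropped.
data = [("DE", "B1", "t1"), ("DE", "C2", "t2")] + \
       [("IT", "A2", "t%d" % i) for i in range(12)]
kept = filter_rare_combinations(data)
```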
NRC-Canada at SMM4H Shared Task: Classifying Tweets Mentioning Adverse Drug Reactions and Medication Intake
1805.04558
Table 7: Task 2: Results of our best system (submission 1) on the test set when one of the feature groups is removed.
['[BOLD] Submission', '[ITALIC] Pclass1+ [ITALIC] class2', '[ITALIC] Rclass1+ [ITALIC] class2', '[ITALIC] Fclass1+ [ITALIC] class2']
[['a. submission 1 (all features)', '0.708', '0.642', '0.673'], ['b. all − general textual features', '0.697', '0.603', '0.647'], ['b.1. all − general [ITALIC] n-grams', '0.676', '0.673', '0.674'], ['b.2. all − general embeddings', '0.709', '0.638', '0.671'], ['b.3. all − general clusters', '0.685', '0.671', '0.678'], ['b.4. all − negation − Twitter-specific − punctuation', '0.683', '0.670', '0.676'], ['c. all − domain-specific features', '0.679', '0.653', '0.666'], ['c.1. all − domain generalized [ITALIC] n-grams', '0.680', '0.652', '0.665'], ['c.2. all − domain embeddings', '0.682', '0.671', '0.676'], ['d. all − sentiment lexicon features', '0.685', '0.673', '0.679'], ['e. all − class weights', '0.718', '0.645', '0.680']]
In this task, the general textual features (row b) played a bigger role in the overall performance than the domain-specific (row c) or sentiment lexicon (row d) features. Removing this group of features results in a drop of more than 2.5 percentage points in F-measure, affecting both precision and recall (row b). However, removing any single feature subgroup within this group (e.g., general n-grams, general clusters, general embeddings) results in only a slight drop, or even an increase, in performance (rows b.1–b.4). This indicates that the features in this group capture similar information. Among the domain-specific features, the n-grams generalized over domain terms are the most useful: the model trained without these n-gram features performs almost one percentage point worse than the model using all features (row c.1). The sentiment lexicon features were not helpful (row d).
NRC-Canada at SMM4H Shared Task: Classifying Tweets Mentioning Adverse Drug Reactions and Medication Intake
1805.04558
Table 4: Task 1: Results for our three official submissions, baselines, and top three teams. Evaluation measures for Task 1 are precision (P), recall (R), and F1-measure (F) for class 1 (ADR).
['[BOLD] Submission', '[ITALIC] Pclass1', '[ITALIC] Rclass1', '[ITALIC] Fclass1']
[['[ITALIC] a. Baselines', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['a.1. Assigning class 1 (ADR) to all instances', '0.077', '1.000', '0.143'], ['a.2. SVM-unigrams', '0.391', '0.298', '0.339'], ['[ITALIC] b. Top 3 teams in the shared task', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['b.1. NRC-Canada', '0.392', '0.488', '0.435'], ['b.2. AASU', '0.437', '0.393', '0.414'], ['b.3. NorthEasternNLP', '0.395', '0.431', '0.412'], ['[ITALIC] c. NRC-Canada official submissions', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['c.1. submission 1', '0.392', '0.488', '0.435'], ['c.2. submission 2', '0.386', '0.413', '0.399'], ['c.3. submission 3', '0.464', '0.396', '0.427'], ['[ITALIC] d. Our best result', '0.398', '0.508', '0.446']]
The best results in Fclass1 were obtained with submission 1 (row c.1). The results for submission 2 are the lowest, with an F-measure 3.5 percentage points below that of submission 1 (row c.2). The ensemble classifier (submission 3) performs slightly worse than the best result. However, in post-competition experiments, we found that larger ensembles (with 7–11 classifiers, each trained on a random sub-sample of the majority class to reduce the class imbalance to 1:2) outperform our best single-classifier model by over one percentage point, with Fclass1 reaching up to 0.446 (row d). Our best submission is ranked first among the nine teams that participated in this task (rows b.1–b.3). The first baseline is a classifier that assigns class 1 (ADR) to all instances (row a.1). The performance of this baseline is very low (Fclass1=0.143) due to the small proportion of class 1 instances in the test set. The second baseline is an SVM classifier trained only on the unigram features (row a.2). Its performance is much higher than that of the first baseline, but substantially lower than that of our system. By adding a variety of textual and domain-specific features, as well as applying under-sampling, we are able to improve the classification performance by almost ten percentage points in F-measure.
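The post-competition ensemble described above relies on under-sampling the majority class before training each member. A minimal sketch of that sampling step, with hypothetical helper names and no actual classifier training:

```python
import random

def subsample_majority(majority, minority, ratio=2, seed=0):
    """Randomly sub-sample the majority class down to at most
    ratio * |minority| examples, i.e. a 1:2 minority:majority
    imbalance as described above, then append the minority class."""
    rng = random.Random(seed)
    k = min(len(majority), ratio * len(minority))
    return rng.sample(majority, k) + list(minority)

def make_ensemble_samples(majority, minority, n_models=7):
    """One balanced sample per ensemble member; each member would be
    trained on its own sample and predictions combined by voting."""
    return [subsample_majority(majority, minority, seed=i)
            for i in range(n_models)]

majority = list(range(100))                  # e.g. 100 non-ADR tweet ids
minority = ["adr%d" % i for i in range(10)]  # 10 ADR tweet ids
samples = make_ensemble_samples(majority, minority)
```

Each of the seven samples here contains 20 majority and 10 minority examples, giving the 1:2 imbalance mentioned in the text.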
NRC-Canada at SMM4H Shared Task: Classifying Tweets Mentioning Adverse Drug Reactions and Medication Intake
1805.04558
Table 5: Task 1: Results of our best system (submission 1) on the test set when one of the feature groups is removed.
['[BOLD] Submission', '[ITALIC] Pclass1', '[ITALIC] Rclass1', '[ITALIC] Fclass1']
[['a. submission 1 (all features)', '0.392', '0.488', '0.435'], ['b. all − general textual features', '0.390', '0.444', '0.415'], ['b.1. all − general [ITALIC] n-grams', '0.397', '0.484', '0.436'], ['b.2. all − general embeddings', '0.365', '0.480', '0.414'], ['b.3. all − general clusters', '0.383', '0.498', '0.433'], ['b.4. all − Twitter-specific − punctuation', '0.382', '0.494', '0.431'], ['c. all − domain-specific features', '0.341', '0.523', '0.413'], ['c.1. all − domain generalized [ITALIC] n-grams', '0.366', '0.514', '0.427'], ['c.2. all − Pronoun lexicon', '0.385', '0.496', '0.433'], ['c.3. all − domain embeddings', '0.365', '0.515', '0.427'], ['c.4. all − domain clusters', '0.386', '0.492', '0.432'], ['d. all − under-sampling', '0.628', '0.217', '0.322']]
To investigate the impact of each feature group on the overall performance, we conduct ablation experiments in which we repeat the same classification process but remove one feature group at a time. Comparing the two major groups of features, general textual features (row b) and domain-specific features (row c), we observe that both have a substantial impact on performance: removing either group leads to a two-percentage-point drop in Fclass1. The general textual features mostly affect recall of the ADR class (row b), while the domain-specific features impact precision (row c). Among the general textual features, the most influential are the general-domain word embeddings (row b.2). Among the domain-specific features, n-grams generalized over domain terms (row c.1) and domain word embeddings (row c.3) make noticeable contributions to the overall performance. In the Appendix, we provide a list of the top 25 n-gram features (including n-grams generalized over domain terms) ranked by their importance in separating the two classes.
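The leave-one-group-out protocol behind these rows can be expressed generically; `run_model` below is a hypothetical stand-in for the full train-and-evaluate pipeline, mapping a list of feature groups to an F-score:

```python
def ablation(run_model, feature_groups):
    """Leave-one-group-out ablation: score once with all feature
    groups, then once per group with that group removed, and report
    the performance drop attributable to each group."""
    full = run_model(feature_groups)
    drops = {}
    for g in feature_groups:
        drops[g] = full - run_model([x for x in feature_groups if x != g])
    return full, drops

# A fake pipeline whose score is just 0.1 per feature group, for illustration.
full, drops = ablation(lambda feats: 0.1 * len(feats),
                       ["general", "domain", "lexicon"])
```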
NRC-Canada at SMM4H Shared Task: Classifying Tweets Mentioning Adverse Drug Reactions and Medication Intake
1805.04558
Table 6: Task 2: Results for our three official submissions, baselines, and top three teams. Evaluation measures for Task 2 are micro-averaged P, R, and F1-score for class 1 (intake) and class 2 (possible intake).
['[BOLD] Submission', '[ITALIC] Pclass1+ [ITALIC] class2', '[ITALIC] Rclass1+ [ITALIC] class2', '[ITALIC] Fclass1+ [ITALIC] class2']
[['[ITALIC] a. Baselines', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['a.1. Assigning class 2 to all instances', '0.359', '0.609', '0.452'], ['a.2. SVM-unigrams', '0.680', '0.616', '0.646'], ['[ITALIC] b. Top 3 teams in the shared task', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['b.1. InfyNLP', '0.725', '0.664', '0.693'], ['b.2. UKNLP', '0.701', '0.677', '0.689'], ['b.3. NRC-Canada', '0.708', '0.642', '0.673'], ['[ITALIC] c. NRC-Canada official submissions', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['c.1. submission 1', '0.708', '0.642', '0.673'], ['c.2. submission 2', '0.705', '0.639', '0.671'], ['c.3. submission 3', '0.704', '0.635', '0.668']]
The best results in Fclass1+class2 are achieved with submission 1 (row c.1). The results for the other two submissions, submission 2 and submission 3, are quite similar to those of submission 1 in both precision and recall (rows c.2–c.3). Adding the features from the ADR lexicon and the Pronoun lexicon did not improve performance on the test set. Our best system is ranked third among the nine teams that participated in this task (rows b.1–b.3). The first baseline is a classifier that assigns class 2 (possible medication intake) to all instances (row a.1); class 2 is the majority class among the two positive classes, class 1 and class 2, in the training set. The performance of this baseline is quite low (Fclass1+class2=0.452) since class 2 covers only 36% of the instances in the test set. The second baseline is an SVM classifier trained only on the unigram features (row a.2). The performance of such a simple model is surprisingly high (Fclass1+class2=0.646), only 4.7 percentage points below the top result in the competition.
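The shared-task metric is micro-averaged precision/recall/F1 computed over the two positive classes only. A small sketch of that computation (the function name and toy labels are illustrative):

```python
def micro_prf(gold, pred, positive=("1", "2")):
    """Micro-averaged precision/recall/F1 restricted to the positive
    classes (intake and possible intake); class "0" is ignored."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g in positive)
    pred_pos = sum(1 for p in pred if p in positive)
    gold_pos = sum(1 for g in gold if g in positive)
    prec = tp / pred_pos if pred_pos else 0.0
    rec = tp / gold_pos if gold_pos else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = ["1", "2", "0", "1", "0"]
pred = ["1", "0", "2", "1", "0"]
p, r, f = micro_prf(gold, pred)
```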
Evaluating Semantic Parsing against a Simple Web-based Question Answering Model
1707.04412
Table 4: Feature ablation results. The five features that lead to largest drop in performance are displayed.
['[BOLD] Feature Template', '[BOLD] F1', 'Δ']
[['WebQA', '53.6', '[EMPTY]'], ['- Max-NE', '51.8', '-1.8'], ['- Ne+Common', '51.8', '-1.8'], ['- Google Rank', '51.4', '-2.2'], ['- In Quest', '50.1', '-3.5'], ['- TF-IDF', '41.5', '-12']]
Note that TF-IDF is by far the most impactful feature: removing it leads to a large drop of 12 F1 points. This shows the importance of exploiting the redundancy of the web for our QA system.
Evaluating Semantic Parsing against a Simple Web-based Question Answering Model
1707.04412
Table 3: Results on development (average over random splits) and test set. Middle: results on all examples. Bottom: results on the subset where candidate extraction succeeded.
['[BOLD] System', '[BOLD] Dev [BOLD] F1', '[BOLD] Dev [BOLD] p@1', '[BOLD] Test [BOLD] F1', '[BOLD] Test [BOLD] p@1', '[BOLD] Test [BOLD] MRR']
[['STAGG', '-', '-', '37.7', '-', '-'], ['CompQ', '-', '-', '[BOLD] 40.9', '-', '-'], ['WebQA', '35.3', '36.4', '32.6', '33.5', '42.4'], ['WebQA-extrapol', '-', '-', '34.4', '-', '-'], ['CompQ-Subset', '-', '-', '48.5', '-', '-'], ['WebQA-Subset', '53.6', '55.1', '51.9', '53.4', '67.5']]
WebQA obtained 32.6 F1 (33.5 p@1, 42.4 MRR), compared to 40.9 F1 for CompQ. Our candidate extraction step finds the correct answer among the top-K candidates in 65.9% of development examples and 62.7% of test examples. Thus, our test F1 on examples for which candidate extraction succeeded (WebQA-Subset) is 51.9 (53.4 p@1, 67.5 MRR). Not using a KB results in a considerable disadvantage for WebQA: KB entities have normalized descriptions, and the answers have been annotated according to those descriptions. We, conversely, find answers on the web and often predict a correct answer but get penalized due to small string differences. E.g., for “what is the longest river in China?” we answer “yangtze river”, while the gold answer is “yangtze”. To quantify this effect, we manually annotated all 258 examples in the first random development-set split and determined whether string matching failed even though we actually returned the gold answer. This improved performance from 53.6 F1 to 56.6 F1 (on examples that passed candidate extraction). Further normalizing gold and predicted entities, such that “Hillary Clinton” and “Hillary Rodham Clinton” are unified, improved the score to 57.3 F1. Extrapolating this to the test set would result in an F1 of 34.4.
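The string-matching penalty discussed above can be illustrated with a toy answer-set F1 plus a crude normalizer; the drop-word list is an invented stand-in, not the normalization the authors actually applied:

```python
def normalize(answer):
    """Crude illustration of entity normalization: lower-case and
    drop generic type words, so "yangtze river" matches "yangtze".
    The drop list is invented for this example."""
    drop = {"the", "river"}
    return " ".join(t for t in answer.lower().split() if t not in drop)

def answer_f1(predicted, gold):
    """Set-based F1 between predicted and gold answer sets after
    normalizing each answer string."""
    p = {normalize(a) for a in predicted}
    g = {normalize(a) for a in gold}
    overlap = len(p & g)
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(g)
    return 2 * prec * rec / (prec + rec)
```

Without `normalize`, the "yangtze river" vs. "yangtze" pair would score 0 despite being the same entity.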
CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases
1909.05378
Table 5: Performance of various methods over all questions (question match) and all interactions (interaction match).
['Model', 'Question Match', 'Question Match', 'Interaction Match', 'Interaction Match']
[['[EMPTY]', 'Dev', 'Test', 'Dev', 'Test'], ['CD-Seq2Seq', '13.8', '13.9', '2.1', '2.6'], ['SyntaxSQL-con', '15.1', '14.1', '2.7', '2.2']]
We use the same evaluation metrics used by the SParC dataset Yu et al. The two models achieve less than 16% question-level accuracy and less than 3% interaction-level accuracy. Since the two models have been benchmarked on both CoSQL and SParC, we cross-compare their performance on these two datasets. Both models perform significantly worse on CoSQL DST than on SParC, indicating that CoSQL DST is more difficult. Possible reasons are that the questions in CoSQL are generated by a more diverse pool of users (crowd workers instead of SQL experts), the task includes ambiguous questions, and the context contains more complex intent switches.
CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases
1909.05378
Table 6: BLEU scores on the development and test sets, and human evaluations of logic correctness rate (LCR) and grammar check on the 100 examples randomly sampled from the test set.
['Model', 'BLEU', 'BLEU', 'LCR (%)', 'Grammar']
[['[EMPTY]', 'Dev', 'Test', 'Test', 'Test'], ['Template', '9.5', '9.3', '41.0', '4.0'], ['Seq2Seq', '15.3', '14.1', '27.0', '3.5'], ['Pointer-generator', '16.4', '15.1', '35.0', '3.6']]
To compute LCR and the grammar score, we randomly sampled 100 descriptions generated by each model. Three students proficient in English participated in the evaluation. They were asked to assign a score of 0 or 1 for LCR, and 1 to 5 for the grammar check (the larger, the better). For LCR, the final score was decided by majority vote; for grammar, we computed the average score.
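The aggregation just described (majority vote for LCR, plain averaging for grammar) can be sketched as follows; the helper name and toy judgments are hypothetical:

```python
from statistics import mean

def aggregate_judgments(lcr_votes, grammar_scores):
    """Per example: LCR decided by majority vote over three 0/1
    annotator votes; grammar is the plain average of the 1-5 scores.
    Returns (LCR as a percentage, mean grammar score)."""
    lcr = [1 if sum(v) >= 2 else 0 for v in lcr_votes]
    grammar = mean(mean(s) for s in grammar_scores)
    return 100.0 * sum(lcr) / len(lcr), grammar

lcr_votes = [(1, 1, 0), (0, 0, 1)]      # two sampled descriptions
grammar_scores = [(4, 4, 4), (2, 3, 4)]
lcr_pct, grammar_avg = aggregate_judgments(lcr_votes, grammar_scores)
```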
CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases
1909.05378
Table 7: Accuracy of user dialog act prediction on the development and test sets.
['Model', 'Dev', 'Test']
[['Majority', '63.3', '62.8'], ['TBCNN-pair', '84.2', '83.9']]
The result of Majority indicates that about 40% of user questions cannot be directly converted into SQL queries. This confirms the necessity of considering a larger set of dialogue acts when building a practical NLIDB system. Even though TBCNN predicts around 85% of user intents correctly, most of the correct predictions are for simple classes such as INFORM_SQL, THANK_YOU, and GOODBYE. The F-scores for more interesting and important dialog acts such as INFER_SQL and AMBIGUOUS are around 10%, which indicates that improving the accuracy of user DAT prediction remains important.
End-to-End Abstractive Summarization for Meetings
2004.02016
Table 3: Ablation results of HMNet on AMI’s test set. “+role text” means the role vector is not used, but the role name is prepended to each turn’s transcript.
['Model', 'ROUGE-1', 'R-2', 'R-SU4']
[['HMNet', '[BOLD] 52.1', '[BOLD] 19.7', '[BOLD] 24.1'], ['−POS&ENT', '49.3', '18.8', '23.5'], ['−role vector', '47.8', '17.2', '21.7'], ['+role text', '47.4', '18.8', '23.7'], ['−hierarchy', '45.1', '15.9', '20.5']]
Ablation Study. We conduct an ablation study of HMNet to verify the effectiveness of its various components. When the role vector is removed, the ROUGE-1 score drops by 4.3 points. The “+role text” setting also removes the role vector, but instead prepends the role name to each turn’s transcript. Its performance is higher than that of the “−role vector” setting in ROUGE-2 and ROUGE-SU4, but still falls behind HMNet, which indicates the effectiveness of the vectorized representation of speaker roles. When HMNet is trained without the hierarchical structure, i.e., the turn-level transformer is removed and role vectors are appended to word-level embeddings, the ROUGE-1 score drops by as much as 7.0 points. Thus, all of the components we propose play an important role in the summarization capability of HMNet.
End-to-End Abstractive Summarization for Meetings
2004.02016
Table 2: ROUGE-1, ROUGE-2, ROUGE-SU4 scores of generated summary in AMI and ICSI datasets. Numbers in bold are the overall best result. Numbers with underscore are the best result from previous literature. ∗ The two baseline MM models require additional human annotations of topic segmentation and visual signals from cameras. ∗∗ Results are statistically significant at level 0.05.
['Model', 'AMI ROUGE-1', 'AMI R-2', 'AMI R-SU4', 'ICSI ROUGE-1', 'ICSI R-2', 'ICSI R-SU4']
[['Random', '35.13', '6.26', '13.17', '29.28', '3.78', '10.29'], ['Longest Greedy', '33.35', '5.11', '12.15', '30.23', '4.27', '10.90'], ['Template', '31.50', '6.80', '11.40', '/', '/', '/'], ['CoreRank Submodular', '36.13', '7.33', '14.18', '29.82', '4.00', '10.61'], ['PageRank Submodular', '36.1', '7.42', '14.32', '30.4', '4.42', '11.14'], ['TextRank', '35.25', '6.9', '13.62', '29.7', '4.09', '10.64'], ['ClusterRank', '35.14', '6.46', '13.35', '27.64', '3.68', '9.77'], ['UNS', '37.86', '7.84', '14.71', '31.60', '4.83', '11.35'], ['Extractive Oracle', '39.49', '9.65', '13.20', '34.66', '8.00', '10.49'], ['PGNet', '40.77', '14.87', '18.68', '32.00', '7.70', '12.46'], ['MM (TopicSeg+VFOA)∗', '[BOLD] 53.29', '13.51', '/', '/', '/', '/'], ['MM (TopicSeg)∗', '51.53', '12.23', '/', '/', '/', '/'], ['HMNet (ours)', '52.09', '[BOLD] 19.69∗∗', '[BOLD] 24.11∗∗', '[BOLD] 39.51∗∗', '[BOLD] 10.76∗∗', '[BOLD] 17.90∗∗']]
As shown, except for ROUGE-1 on AMI, HMNet outperforms all baseline models on all metrics, and the result is statistically significant at level 0.05 under a paired t-test against the best baseline results. On the ICSI dataset, HMNet achieves ROUGE scores 7.51, 3.06, and 5.44 points higher than the previous best results.
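The significance claim rests on a paired t-test against the best baseline. As a reminder of what that statistic is, here is a minimal sketch over made-up per-example score pairs; a real test would compare |t| against the t distribution with n−1 degrees of freedom:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(a, b):
    """t statistic for a paired t-test: the mean per-item score
    difference divided by its standard error."""
    d = [x - y for x, y in zip(a, b)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Made-up per-meeting scores for two systems, for illustration only.
t = paired_t_statistic([2.0, 4.0, 6.0, 8.0], [1.0, 3.0, 5.0, 9.0])
```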
Uncertainty in Neural Network Word Embedding Exploration of Threshold for Similarity
1606.06086
Table 3: In each cell, the top value shows the result of the potential threshold and the bottom reports the optimal value (shown as - when equal to our threshold value). † indicates a significant difference to the baseline. There is no significance difference between the results of the optimal value and our threshold.
['[BOLD] Collection', '[BOLD] 100 (0.81) [BOLD] MAP', '[BOLD] 100 (0.81) [BOLD] NDCG', '[BOLD] 200 (0.74) [BOLD] MAP', '[BOLD] 200 (0.74) [BOLD] NDCG', '[BOLD] 300 (0.69) [BOLD] MAP', '[BOLD] 300 (0.69) [BOLD] NDCG', '[BOLD] 400 (0.65) [BOLD] MAP', '[BOLD] 400 (0.65) [BOLD] NDCG']
[['TREC-6', '0.273†', '0.432', '0.274†', '0.441', '0.277†', '0.439', '0.275†', '0.442'], ['TREC-6', '-', '0.436', '-', '-', '-', '0.442', '0.278†', '0.447'], ['TREC-7', '0.211', '0.377', '0.217†', '0.390', '0.214', '0.386', '0.215', '0.386'], ['TREC-7', '0.212', '0.395', '-', '0.399', '-', '0.395', '-', '0.400'], ['TREC-8', '0.269', '0.446', '0.276†', '0.458', '0.277†', '0.461', '0.272', '0.451'], ['TREC-8', '-', '0.447', '-', '0.459', '-', '-', '0.277', '0.454'], ['HARD', '0.257†', '0.366†', '0.259†', '0.368†', '0.260†', '0.368†', '0.260†', '0.370†'], ['HARD', '-', '-', '-', '-', '-', '-', '0.261†', '0.371†']]
For each dimension, our threshold and its confidence interval are shown with vertical lines. Significant differences of the results from the baseline are marked on the plots with the † symbol.
Uncertainty in Neural Network Word Embedding Exploration of Threshold for Similarity
1606.06086
Table 1: Potential thresholds
['Dimensionality', 'Threshold Boundaries Lower', 'Threshold Boundaries [BOLD] Main', 'Threshold Boundaries Upper']
[['100', '0.802', '[BOLD] 0.818', '0.829'], ['200', '0.737', '[BOLD] 0.756', '0.767'], ['300', '0.692', '[BOLD] 0.708', '0.726'], ['400', '0.655', '[BOLD] 0.675', '0.693']]
We also consider an upper and a lower bound for this threshold, based on the points where the confidence intervals cross the approximated mean.
Generative Pre-training for Speech with Autoregressive Predictive Coding
1910.12607
Table 2: ASR WER results with varying amounts of training data randomly sampled from si284. Feature extractors pre-trained with just 10 hours of LibriSpeech audio are denoted with a subscript 10.
['Features', 'Proportion of si284 1', 'Proportion of si284 1/2', 'Proportion of si284 1/4', 'Proportion of si284 1/8', 'Proportion of si284 1/16', 'Proportion of si284 1/32']
[['log Mel', '18.3', '24.1', '33.4', '44.6', '66.4', '87.7'], ['CPC', '20.7', '28.3', '38.8', '50.9', '69.7', '88.1'], ['R-APC', '15.2', '18.3', '24.6', '35.8', '49.0', '66.8'], ['T-APC', '13.7', '16.4', '21.3', '31.4', '43.0', '63.2'], ['PASE10', '20.8', '26.6', '32.8', '42.1', '58.8', '78.6'], ['CPC10', '23.4', '30.0', '40.1', '53.5', '71.3', '89.3'], ['R-APC10', '17.6', '22.7', '28.9', '38.6', '55.3', '73.7'], ['T-APC10', '18.0', '23.8', '31.6', '43.4', '61.2', '80.4']]
For example, 1/16 means that we take only 72×1/16=4.5 hours from si284 for training. We find that, for all input features, there is a significant increase in WER whenever the training size is halved. Comparing R-APC and T-APC with log Mel, the former two always outperform the latter across all proportions, and the gap widens as the training size decreases. Note that when using only half of si284 for training, R-APC already matches the performance of log Mel trained on the full set (18.3), and T-APC even outperforms it (16.4 vs. 18.3). In particular, we observe that T-APC consistently outperforms log Mel while using only half as much training data. This observation aligns with findings in recent NLP literature. Finally, we see that most of the time APC outperforms CPC and PASE. In some cases PASE is slightly better than T-APC10 (e.g., when only 1/8 or less of si284 is available), but it is still worse than R-APC10.
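WER, the metric in these tables, is word-level Levenshtein distance normalized by the reference length. A minimal dynamic-programming sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance between the
    hypothesis and the reference, divided by the reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            d[i][j] = min(d[i - 1][j - 1] + (r[i - 1] != h[j - 1]),  # substitution
                          d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1)                           # insertion
    return d[len(r)][len(h)] / len(r)
```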
Generative Pre-training for Speech with Autoregressive Predictive Coding
1910.12607
Table 1: ASR results (WER ↓) of APC with varying n during pre-training and different transfer learning approaches (Frozen vs. Finetuned). log Mel is the baseline that uses log Mel spectrograms as input features. The best transfer learning result is marked in bold.
['Features', '[ITALIC] n 1', '[ITALIC] n 2', '[ITALIC] n 3', '[ITALIC] n 5', '[ITALIC] n 10', '[ITALIC] n 20']
[['log Mel', '18.3', '18.3', '18.3', '18.3', '18.3', '18.3'], ['R-APC Scratch', '23.2', '23.2', '23.2', '23.2', '23.2', '23.2'], ['R-APC Frozen', '17.2', '15.8', '15.2', '16.3', '17.8', '20.9'], ['R-APC Finetuned', '18.2', '17.6', '16.9', '18.2', '19.7', '21.7'], ['T-APC Scratch', '25.0', '25.0', '25.0', '25.0', '25.0', '25.0'], ['T-APC Frozen', '19.0', '16.1', '14.1', '[BOLD] 13.7', '15.4', '21.3'], ['T-APC Finetuned', '22.4', '17.0', '15.5', '14.6', '16.9', '23.3']]
We also include the case where APC is randomly initialized and trained from scratch along with the seq2seq model. We think this is because, for a small n, APC can exploit local smoothness in the spectrograms to predict the target future frame (since xk can be very similar to xk+n when n is small) and thus does not need to learn to encode information useful for inferring more global structure; an overly large n, on the other hand, makes the prediction task so challenging that APC is unable to generalize across the training set. The best n is 3 for R-APC and 5 for T-APC. We also find that, for all n, keeping the pre-trained APC weights fixed (*-APC Frozen) surprisingly works better than fine-tuning them (*-APC Finetuned), although the latter still outperforms the baseline. Furthermore, training APC from scratch along with the seq2seq model (*-APC Scratch) always performs the worst, even worse than the baseline. With APC transfer learning, WER is reduced by more than 25%, from 18.3 to 13.7.
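The role of n above is simply how far ahead the APC objective looks: at position k the model is trained to predict frame k+n. A toy sketch of the target construction (real APC predicts spectrogram frame vectors with a regression loss; here frames are plain numbers):

```python
def apc_targets(frames, n):
    """Build (input, target) sequences for APC-style pre-training:
    the model at position k predicts frame k + n, so inputs are
    frames[:-n] and targets are frames[n:]."""
    assert 0 < n < len(frames)
    return frames[:-n], frames[n:]

inputs, targets = apc_targets([0, 1, 2, 3, 4], n=2)
```

Small n makes each target nearly identical to the current frame (local smoothness), which is why the text argues an intermediate n works best.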
Generative Pre-training for Speech with Autoregressive Predictive Coding
1910.12607
Table 3: ASR WER results using different numbers of GRU layers for the encoder in the ASR seq2seq model.
['Features', 'Number of encoder layers 1', 'Number of encoder layers 2', 'Number of encoder layers 3', 'Number of encoder layers 4']
[['log Mel', '28.8', '23.5', '20.8', '18.3'], ['CPC', '34.3', '29.8', '25.2', '23.7'], ['R-APC', '26.2', '20.3', '17.6', '15.2'], ['T-APC', '25.2', '18.6', '15.8', '13.7'], ['PASE10', '29.4', '25.7', '22.5', '20.8'], ['CPC10', '35.8', '31.3', '26.0', '24.4'], ['R-APC10', '27.6', '22.3', '19.6', '17.6'], ['T-APC10', '28.1', '23.2', '20.6', '18.0']]
The next aspect we examine is to what extent we can reduce the downstream model size with transfer learning. We see that, at the same number of layers, *-APC and *-APC10 always outperform the other features. It is noteworthy that T-APC with just 2 layers performs similarly to log Mel with 4 layers (18.6 vs. 18.3), which demonstrates the effectiveness of APC transfer learning for reducing downstream model size.
Generative Pre-training for Speech with Autoregressive Predictive Coding
1910.12607
Table 4: Speech translation results. BLEU scores (↑) are reported. We also include the results of the cascaded system (ASR + MT) reported in [28] and the S-Transformer model reported in [29]. Only the results on the test set are available for these two approaches.
['Methods', 'dev', 'test']
[['Cascaded', '-', '14.6'], ['S-Transformer', '-', '13.8'], ['log Mel', '12.5', '12.9'], ['CPC', '12.1', '12.5'], ['R-APC', '13.5', '13.8'], ['T-APC', '13.7', '14.3'], ['PASE10', '12.0', '12.4'], ['CPC10', '11.8', '12.3'], ['R-APC10', '13.2', '13.7'], ['T-APC10', '12.8', '13.4']]
Besides, our RNN-based model with T-APC features (14.3) outperforms S-Transformer (13.8) and is comparable with the cascaded system (14.6).
This before That:Causal Precedence in the Biomedical Domain
1606.08089
Table 4: Results of all proposed causal models, using stratified 10-fold cross-validation. The combined system is a sieve-based architecture that applies the models in decreasing order of their precision. The combined system significantly outperforms the best single model, SVM with L1 regularization, according to a bootstrap resampling test (p = 0.022).
['[ITALIC] Model', '[ITALIC] p', '[ITALIC] r', '[ITALIC] f1']
[['Intra-sentence', '0.5', '0.01', '0.01'], ['Inter-sentence', '0.5', '0.01', '0.01'], ['Reichenbach', '0', '0', '0'], ['LR+L1', '0.58', '0.32', '0.41'], ['LR+L2', '0.65', '0.26', '0.37'], ['SVM+L1', '0.54', '0.35', '[BOLD] 0.43'], ['SVM+L2', '0.54', '0.29', '0.38'], ['RF', '0.62', '0.25', '0.36'], ['LSTM', '0.40', '0.25', '0.31'], ['LSTM+P', '0.39', '0.20', '0.26'], ['FLSTM', '0.43', '0.15', '0.22'], ['FLSTM+P', '0.38', '0.22', '0.28'], ['Combined', '0.38', '0.58', '[BOLD] 0.46*']]
We report micro precision, recall, and F1 scores for each model. With fewer than 200 instances of causal precedence occurring in the 1000 annotations, training and testing for both the feature-based classifiers and the latent-feature models were performed using stratified 10-fold cross-validation. Weight updates were made on batches of 32 examples, and all folds completed in fewer than 50 epochs.
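Stratified k-fold cross-validation, as used here, keeps the label distribution roughly constant across folds even when one class is rare. A minimal sketch (deal each label group out round-robin; a hypothetical helper, not the authors' code):

```python
from collections import defaultdict

def stratified_folds(labels, k=10):
    """Assign example indices to k folds with roughly equal label
    distributions: group indices by label, then deal each group out
    round-robin across the folds."""
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_label.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    return folds

# ~200 positives among 1000 annotations -> each of 10 folds gets ~20.
labels = ["causal"] * 200 + ["other"] * 800
folds = stratified_folds(labels)
```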
Sentiment Tagging with Partial Labels using Modular Architectures
1906.00534
Table 5: Comparing our models with several state-of-the-art systems on the CoNLL 2003 English NER dataset.
['Model', 'English']
[['LSTM-CRF\xa0Lample et\xa0al. ( 2016 )', '90.94'], ['LSTM-CNN-CRF\xa0Ma and Hovy ( 2016 )', '91.21'], ['LM-LSTM-CRF\xa0Liu et\xa0al. ( 2018 )', '91.06'], ['LSTM-CRF-T', '90.8'], ['LSTM-CRF-TI', '91.16'], ['LSTM-CRF-TI(g)', '[BOLD] 91.68']]
Results on NER. For Dutch and Spanish, we used cross-lingual embeddings as a way to exploit lexical information. The results are shown in Tab. Our best-performing model outperforms all the competing systems.
Sentiment Tagging with Partial Labels using Modular Architectures
1906.00534
Table 2: Comparing our models with recent results on the Aspect Sentiment datasets.
['Models', 'English E+A', 'English E+A+S', 'Spanish E+A', 'Spanish E+A+S', 'Dutch E+A', 'Dutch E+A+S', 'Russian E+A', 'Russian E+A+S']
[['LSTM-CNN-CRFMa and Hovy ( 2016 )', '58.73', '44.20', '64.32', '50.34', '51.62', '36.88', '58.88', '38.13'], ['LSTM-CRF-LMLiu et\xa0al. ( 2018 )', '62.27', '45.04', '63.63', '50.15', '51.78', '34.77', '62.18', '38.80'], ['LSTM-CRF', '59.11', '48.67', '62.98', '52.10', '51.35', '37.30', '63.41', '42.47'], ['LSTM-CRF-T', '60.87', '49.59', '64.24', '52.33', '52.79', '37.61', '64.72', '43.01'], ['LSTM-CRF-TI', '63.11', '50.19', '64.40', '52.85', '53.05', '38.07', '64.98', '44.03'], ['LSTM-CRF-TI(g)', '[BOLD] 64.74', '[BOLD] 51.24', '[BOLD] 66.13', '[BOLD] 53.47', '[BOLD] 53.63', '[BOLD] 38.65', '[BOLD] 65.64', '[BOLD] 45.65']]
Aspect-Based Sentiment. We evaluated our models on two tasks. The first uses two modules, one identifying the position of the aspect in the text (i.e., chunking) and one predicting the aspect category (denoted E+A). The second adds a third module that predicts the sentiment polarity associated with the aspect (denoted E+A+S), i.e., for a given sentence, it labels the entity span, the aspect category of the entity, and the sentiment polarity of the entity at the same time. The results over four languages are summarized in Tab. In all cases, our modular approach outperforms all monolithic approaches.
Sentiment Tagging with Partial Labels using Modular Architectures
1906.00534
Table 4: Performance on the target sentiment task
['System', 'Architecture', 'English Pre', 'English Rec', 'English F1', 'Spanish Pre', 'Spanish Rec', 'Spanish F1']
[['Zhang, Zhang and Vo (2015)', 'Pipeline', '43.71', '37.12', '40.06', '45.99', '40.57', '43.04'], ['Zhang, Zhang and Vo (2015)', 'Joint', '44.62', '35.84', '39.67', '46.67', '39.99', '43.02'], ['Zhang, Zhang and Vo (2015)', 'Collapsed', '46.32', '32.84', '38.36', '47.69', '34.53', '40.00'], ['Li and Lu (2017)', 'SS', '44.57', '36.48', '40.11', '46.06', '39.89', '42.75'], ['Li and Lu (2017)', '+embeddings', '47.30', '40.36', '43.55', '47.14', '41.48', '44.13'], ['Li and Lu (2017)', '+POS tags', '45.96', '39.04', '42.21', '45.92', '40.25', '42.89'], ['Li and Lu (2017)', '+semiMarkov', '44.49', '37.93', '40.94', '44.12', '40.34', '42.14'], ['Base Line', 'LSTM-CRF', '53.29', '46.90', '49.89', '51.17', '46.71', '48.84'], ['[ITALIC] This work', 'LSTM-CRF-T', '54.21', '48.77', '51.34', '51.77', '47.37', '49.47'], ['[ITALIC] This work', 'LSTM-CRF-Ti', '54.58', '49.01', '51.64', '52.14', '47.56', '49.74'], ['[ITALIC] This work', 'LSTM-CRF-Ti(g)', '[BOLD] 55.31', '[BOLD] 49.36', '[BOLD] 52.15', '[BOLD] 52.82', '[BOLD] 48.41', '[BOLD] 50.50']]
The complete results of our experiments on the target sentiment task are summarized in Tab. Our LSTM-CRF-TI(g) model outperforms all competing models in precision, recall, and F1 score.
Sentiment Tagging with Partial Labels using Modular Architectures
1906.00534
Table 6: Comparing our models with recent results on the 2002 CoNLL Dutch and Spanish NER datasets.
['Model', 'Dutch', 'Spanish']
[['Carreras et\xa0al. Carreras et\xa0al. ( 2002 )', '77.05', '81.39'], ['Nothman et\xa0al. Nothman et\xa0al. ( 2013 )', '78.60', '[EMPTY]'], ['dos Santos and Guimarães dos Santos and Guimarães ( 2015 )', '[EMPTY]', '82.21'], ['Gillick et\xa0al. Gillick et\xa0al. ( 2015 )', '82.84', '82.95'], ['Lample et\xa0al. Lample et\xa0al. ( 2016 )', '81.74', '85.75'], ['LSTM-CRF-T', '83.91', '84.89'], ['LSTM-CRF-TI', '84.12', '85.28'], ['LSTM-CRF-TI(g)', '[BOLD] 84.51', '[BOLD] 85.92']]
Results on NER For Dutch and Spanish, we used cross-lingual embeddings as a way to exploit lexical information. The results are shown in Tab. Our best-performing model outperforms all the competing systems.
Argumentation Mining in User-Generated Web Discourse
1601.02403
Table 11: Results of classification of argument components in the cross-domain scenario. Macro-F1 scores reported, bold numbers denote the best results. HS – homeschooling, MS – mainstreaming, PIS – prayer in schools, PPS – private vs. public schools, RS – redshirting, SSE – single sex education. Results in the aggregated row are computed from an aggregated confusion matrix over all domains. The differences between the best feature set combination (4) and others are statistically significant (p<0.001; paired exact Liddell’s test).
['Domain', '[BOLD] Feature set combinations 0', '[BOLD] Feature set combinations 01', '[BOLD] Feature set combinations 012', '[BOLD] Feature set combinations 0123', '[BOLD] Feature set combinations 01234', '[BOLD] Feature set combinations 1234', '[BOLD] Feature set combinations 234', '[BOLD] Feature set combinations 34', '[BOLD] Feature set combinations 4']
[['[BOLD] HS', '0.087', '0.063', '0.044', '0.106', '0.072', '0.075', '0.065', '0.063', '[BOLD] 0.197'], ['[BOLD] MS', '0.072', '0.060', '0.070', '0.058', '0.038', '0.062', '0.045', '0.060', '[BOLD] 0.188'], ['[BOLD] PIS', '0.078', '0.073', '0.083', '0.074', '0.086', '0.073', '0.096', '0.081', '[BOLD] 0.166'], ['[BOLD] PPS', '0.070', '0.059', '0.070', '0.132', '0.059', '0.062', '0.071', '0.067', '[BOLD] 0.203'], ['[BOLD] RS', '0.067', '0.067', '0.082', '0.110', '0.097', '0.092', '0.075', '0.075', '[BOLD] 0.257'], ['[BOLD] SSE', '0.092', '0.089', '0.066', '0.036', '0.120', '0.091', '0.071', '0.066', '[BOLD] 0.194'], ['[BOLD] Aggregated', '0.079', '0.086', '0.072', '0.122', '0.094', '0.088', '0.089', '0.076', '[BOLD] 0.209']]
However, using only feature set 4 (embeddings), system performance increases rapidly, becoming comparable to the numbers achieved in the in-domain scenario. These results indicate that embedding features generalize well across domains in our task of argument component identification.
Argumentation Mining in User-Generated Web Discourse
1601.02403
Table 5: Correlations between αU and various measures on different data sub-sets. SC – full sentence coverage; DL – document length; APL – average paragraph length; ASL = average sentence length; ARI, C-L (Coleman-Liau), Flesch, LIX – readability measures. Bold numbers denote statistically significant correlation (p<0.05).
['[EMPTY]', '[BOLD] SC', '[BOLD] DL', '[BOLD] APL', '[BOLD] ASL', '[BOLD] ARI', '[BOLD] C-L', '[BOLD] Flesch', '[BOLD] LIX']
[['all data', '-0.14', '-0.14', '0.01', '0.04', '0.07', '0.08', '-0.11', '0.07'], ['comments', '-0.17', '[BOLD] -0.64', '0.13', '0.01', '0.01', '0.01', '-0.11', '0.01'], ['forum posts', '-0.08', '-0.03', '-0.08', '-0.03', '0.08', '0.24', '-0.17', '0.20'], ['blog posts', '-0.50', '0.21', '[BOLD] -0.81', '-0.61', '-0.39', '0.47', '0.04', '-0.07'], ['articles', '0.00', '-0.64', '-0.43', '-0.65', '-0.25', '0.39', '-0.27', '-0.07'], ['homeschooling', '-0.10', '-0.29', '-0.18', '0.34', '0.35', '0.31', '-0.38', '0.46'], ['redshirting', '-0.16', '0.07', '-0.26', '-0.07', '0.02', '0.14', '-0.06', '-0.09'], ['prayer-in-school', '-0.24', '[BOLD] -0.85', '0.30', '0.07', '0.14', '0.11', '-0.25', '0.24'], ['single sex', '-0.08', '-0.36', '-0.28', '-0.16', '-0.17', '0.05', '0.06', '0.06'], ['mainstreaming', '-0.27', '-0.00', '-0.03', '0.06', '0.20', '0.29', '-0.19', '0.03'], ['public private', '0.18', '0.19', '0.30', '-0.26', '[BOLD] -0.58', '[BOLD] -0.51', '[BOLD] 0.51', '[BOLD] -0.56']]
We observed the following statistically significant (p<0.05) correlations. First, document length correlates negatively with agreement in comments: the longer the comment, the lower the agreement. Second, average paragraph length correlates negatively with agreement in blog posts: the longer the paragraphs in a blog post, the lower the agreement reached. Third, all readability scores correlate negatively with agreement in the public vs. private school domain, meaning that the more complicated the text is in terms of readability, the lower the agreement reached. We observed no significant correlations for the sentence coverage and average sentence length measures. We cannot draw any general conclusion from these results, but we can state that some registers and topics, given their properties, are more challenging to annotate than others.
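The correlations reported in the table are standard Pearson coefficients; a minimal sketch of the computation (pure Python, not the analysis scripts used for the paper):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Significance (p < 0.05) would be assessed via t = r * sqrt((n-2) / (1 - r^2))
# against a t-distribution with n-2 degrees of freedom.
r = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])  # perfectly anti-correlated
```

With the small per-subset sample sizes here (e.g., a handful of blog posts), even large |r| values need the significance test before being reported as bold in the table.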
Argumentation Mining in User-Generated Web Discourse
1601.02403
Table 8: Class distribution of the gold data Toulmin corpus approximated to the sentence level boundaries.
['[BOLD] Class', '[BOLD] Sentences in data [BOLD] Relative (%)', '[BOLD] Sentences in data [BOLD] Absolute', '[BOLD] Class', '[BOLD] Sentences in data [BOLD] Relative (%)', '[BOLD] Sentences in data [BOLD] Absolute']
[['Backing-B', '5.6', '220', 'Premise-I', '8.6', '336'], ['Backing-I', '7.2', '281', 'Rebuttal-B', '1.6', '61'], ['Claim-B', '4.4', '171', 'Rebuttal-I', '0.9', '37'], ['Claim-I', '0.4', '16', 'Refutation-B', '0.5', '18'], ['O', '56.8', '2214', 'Refutation-I', '0.4', '15'], ['Premise-B', '13.6', '530', '[BOLD] Total', '[EMPTY]', '3899']]
The scarce presence of rebuttal and refutation (these 4 classes account for only 3.4% of the data) makes this dataset very unbalanced. The overall best performance (Macro-F1 0.251) was achieved using the rich feature sets (01234 and 234) and significantly outperformed the baseline as well as the other feature sets. Classification of non-argumentative text (the "O" class) yields about 0.7 F1 score even in the baseline setting. The boundaries of claims (Cla-B), premises (Pre-B), and backing (Bac-B) reach on average lower scores than their respective inside tags (Cla-I, Pre-I, Bac-I). This suggests that the system is able to recognize that a certain sentence belongs to a certain argument component, but deciding whether it is the beginning of that component is harder. The very low numbers for rebuttal and refutation have two reasons.
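Macro-F1, the metric used here, averages per-class F1 scores, so rare classes such as rebuttal and refutation weigh as much as frequent ones; a sketch of the computation from a confusion matrix (illustrative toy labels, not the evaluation code used in the paper):

```python
def macro_f1(conf):
    """Macro-averaged F1 from a confusion matrix conf[gold][predicted] -> count."""
    labels = list(conf)
    f1s = []
    for lab in labels:
        tp = conf[lab].get(lab, 0)
        fp = sum(conf[g].get(lab, 0) for g in labels) - tp
        fn = sum(conf[lab].values()) - tp
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy 2-class confusion matrix: rows are gold labels, columns are predictions.
toy = {"Claim": {"Claim": 2, "O": 0}, "O": {"Claim": 1, "O": 1}}
score = macro_f1(toy)  # averages F1("Claim") = 0.8 and F1("O") = 2/3
```

Because every class contributes 1/|classes| regardless of support, a classifier that never predicts rebuttal or refutation is penalized heavily, which is why the Macro-F1 scores in the cross-domain table stay so low.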
Latent Alignment and Variational Attention
1807.03756
Table 3: (Left) Comparison against the best prior work for NMT on the IWSLT 2014 German-English test set. (Upper Right) Comparison of inference alternatives of variational attention on IWSLT 2014. (Lower Right) Comparison of different models in terms of implied discrete entropy (lower = more certain alignment).
['Inference Method', '#Samples', 'PPL', 'BLEU']
[['REINFORCE', '1', '6.17', '33.30'], ['RWS', '5', '6.41', '32.96'], ['Gumbel-Softmax', '1', '6.51', '33.08']]
For NMT, on the IWSLT 2014 German-English task, variational attention with enumeration and with sampling performs comparably to optimizing the log marginal likelihood, despite optimizing a lower bound. We believe this is due to the use of q(z), which conditions on the entire source/target and therefore potentially provides a better training signal to p(z|x,~x) through the KL term. Note that it is also possible to have q(z) [...]. Even with sampling, our system improves on the state of the art. On the larger WMT 2017 English-German task, the superior performance of variational attention persists: our baseline soft attention reaches 24.10 BLEU, while variational attention reaches 24.98. Note that this reflects a reasonable setting without exhaustive tuning, yet it shows that we can train variational attention at scale. For VQA the trend is largely similar: the NLL of variational attention improves on both soft and hard attention, although the task-specific evaluation metrics are slightly worse. Note that hard attention has very low entropy (high certainty), whereas soft attention's is quite high; the variational attention models fall in between. RWS reaches performance comparable to REINFORCE, but at a higher memory cost as it requires multiple samples. Gumbel-Softmax reaches nearly the same performance and seems a viable alternative, although we found its performance to be sensitive to the temperature parameter. We also trained a non-amortized SVI model, but found that at similar runtime it did not produce satisfactory results, likely due to insufficient updates of the local variational parameters. Despite extensive experiments, we found that variational relaxed attention performed worse than the other methods. In particular, when training with a Dirichlet KL, it is hard to reach low-entropy regions of the simplex, and the attentions are more uniform than either soft or variational categorical attention.
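The Gumbel-Softmax alternative compared in the table draws a differentiable, temperature-controlled relaxation of a categorical sample; a minimal sketch of the sampling step (illustrative only, not the experimental code):

```python
import math
import random

def gumbel_softmax(logits, tau, rng=None):
    """Sample a relaxed one-hot vector: softmax((logits + Gumbel noise) / tau)."""
    rng = rng or random.Random(0)
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    scaled = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

sample = gumbel_softmax([2.0, 0.5, -1.0], tau=0.5)
# 'sample' lies on the simplex; lower tau pushes it toward a one-hot vector.
```

The temperature sensitivity noted above is visible here: as tau shrinks the sample approaches a true categorical draw but the gradients (in a real autodiff setting) become high-variance, while large tau yields smooth but biased samples.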
Latent Alignment and Variational Attention
1807.03756
Table 1: Evaluation on NMT and VQA for the various models. E column indicates whether the expectation is calculated via enumeration (Enum) or a single sample (Sample) during training. For NMT we evaluate intrinsically on perplexity (PPL) (lower is better) and extrinsically on BLEU (higher is better), where for BLEU we perform beam search with beam size 10 and length penalty (see Appendix B for further details). For VQA we evaluate intrinsically on negative log-likelihood (NLL) (lower is better) and extrinsically on VQA evaluation metric (higher is better). All results except for relaxed attention use enumeration at test time.
['Model', 'Objective', 'E', 'NMT PPL', 'NMT BLEU', 'VQA NLL', 'VQA Eval']
[['Soft Attention', 'log [ITALIC] p( [ITALIC] y|E[ [ITALIC] z])', '-', '7.17', '32.77', '1.76', '58.93'], ['Marginal Likelihood', 'logE[ [ITALIC] p]', 'Enum', '6.34', '33.29', '1.69', '60.33'], ['Hard Attention', 'E [ITALIC] p[log [ITALIC] p]', 'Enum', '7.37', '31.40', '1.78', '57.60'], ['Hard Attention', 'E [ITALIC] p[log [ITALIC] p]', 'Sample', '7.38', '31.00', '1.82', '56.30'], ['Variational Relaxed Attention', 'E [ITALIC] q[log [ITALIC] p]−KL', 'Sample', '7.58', '30.05', '-', '-'], ['Variational Attention', 'E [ITALIC] q[log [ITALIC] p]−KL', 'Enum', '6.08', '33.68', '1.69', '58.44'], ['Variational Attention', 'E [ITALIC] q[log [ITALIC] p]−KL', 'Sample', '6.17', '33.30', '1.75', '57.52']]
We first note that hard attention underperforms soft attention, even when its expectation is enumerated. This indicates that Jensen's inequality alone gives a poor bound. On the other hand, in both experiments, exact marginal likelihood outperforms soft attention, indicating that when possible it is better to model latent alignments.
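The gap between hard attention's objective E_p[log p] and the marginal likelihood log E_p[p] is exactly Jensen's inequality; a small numeric illustration with made-up alignment probabilities:

```python
import math

p_align = [0.7, 0.2, 0.1]    # p(z | x): toy prior over three alignments
p_word  = [0.5, 0.1, 0.05]   # p(y | z): likelihood of the target word per alignment

log_marginal = math.log(sum(a * w for a, w in zip(p_align, p_word)))  # log E_p[p]
jensen_bound = sum(a * math.log(w) for a, w in zip(p_align, p_word))  # E_p[log p]

assert log_marginal >= jensen_bound  # the hard-attention objective is a lower bound
```

With these toy numbers the bound is loose by roughly 0.26 nats, which mirrors the PPL gap between hard attention and marginal likelihood in the table (7.37 vs. 6.34).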
Latent Alignment and Variational Attention
1807.03756
Table 2: (Left) Performance change on NMT from exact decoding to K-Max decoding with K=5. (see section 5 for definition of K-max decoding). (Right) Test perplexity of different approaches while varying K to estimate Ez[p(y|x,~x)]. Dotted lines compare soft baseline and variational with full enumeration.
['Model', 'PPL Exact', 'PPL [ITALIC] K-Max', 'BLEU Exact', 'BLEU [ITALIC] K-Max']
[['Marginal Likelihood', '6.34', '6.90', '33.29', '33.31'], ['Hard + Enum', '7.37', '7.37', '31.40', '31.37'], ['Hard + Sample', '7.38', '7.38', '31.00', '31.04'], ['Variational + Enum', '6.08', '6.42', '33.68', '33.69'], ['Variational + Sample', '6.17', '6.51', '33.30', '33.27']]
For all methods, exact enumeration is better; however, K-max is a reasonable approximation. This possibly indicates that soft attention models approximate latent alignment models. On the other hand, training with latent alignments and testing with soft attention performed badly.
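K-max decoding, as described in the caption, approximates the enumerated expectation by keeping only the K most probable alignments and renormalizing their mass; a toy sketch (illustrative values, not the paper's decoder):

```python
def k_max_expectation(p_align, p_word, k):
    """Approximate E_z[p(y|z)] using the k most probable alignments, renormalized."""
    top = sorted(range(len(p_align)), key=lambda i: p_align[i], reverse=True)[:k]
    mass = sum(p_align[i] for i in top)
    return sum(p_align[i] / mass * p_word[i] for i in top)

approx = k_max_expectation([0.5, 0.3, 0.2], [0.9, 0.5, 0.1], k=2)  # truncated
exact  = k_max_expectation([0.5, 0.3, 0.2], [0.9, 0.5, 0.1], k=3)  # full enumeration
```

Because the dropped alignments carry little prior mass, the truncated expectation stays close to the exact one, which is consistent with the small PPL/BLEU shifts between the Exact and K-Max columns.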
Latent Alignment and Variational Attention
1807.03756
Table 3: (Left) Comparison against the best prior work for NMT on the IWSLT 2014 German-English test set. (Upper Right) Comparison of inference alternatives of variational attention on IWSLT 2014. (Lower Right) Comparison of different models in terms of implied discrete entropy (lower = more certain alignment).
['[EMPTY]', 'IWSLT']
[['Model', 'BLEU'], ['Beam Search Optimization Wiseman2016 ', '26.36'], ['Actor-Critic Bahdanau2017 ', '28.53'], ['Neural PBMT + LM Huang2018 ', '30.08'], ['Minimum Risk Training Edunov2017 ', '32.84'], ['Soft Attention', '32.77'], ['Marginal Likelihood', '33.29'], ['Hard Attention + Enum', '31.40'], ['Hard Attention + Sample', '30.42'], ['Variational Relaxed Attention', '30.05'], ['Variational Attention + Enum', '33.69'], ['Variational Attention + Sample', '33.30']]
For NMT, on the IWSLT 2014 German-English task, variational attention with enumeration and with sampling performs comparably to optimizing the log marginal likelihood, despite optimizing a lower bound. We believe this is due to the use of q(z), which conditions on the entire source/target and therefore potentially provides a better training signal to p(z|x,~x) through the KL term. Note that it is also possible to have q(z) [...]. Even with sampling, our system improves on the state of the art. On the larger WMT 2017 English-German task, the superior performance of variational attention persists: our baseline soft attention reaches 24.10 BLEU, while variational attention reaches 24.98. Note that this reflects a reasonable setting without exhaustive tuning, yet it shows that we can train variational attention at scale. For VQA the trend is largely similar: the NLL of variational attention improves on both soft and hard attention, although the task-specific evaluation metrics are slightly worse. Note that hard attention has very low entropy (high certainty), whereas soft attention's is quite high; the variational attention models fall in between. RWS reaches performance comparable to REINFORCE, but at a higher memory cost as it requires multiple samples. Gumbel-Softmax reaches nearly the same performance and seems a viable alternative, although we found its performance to be sensitive to the temperature parameter. We also trained a non-amortized SVI model, but found that at similar runtime it did not produce satisfactory results, likely due to insufficient updates of the local variational parameters. Despite extensive experiments, we found that variational relaxed attention performed worse than the other methods. In particular, when training with a Dirichlet KL, it is hard to reach low-entropy regions of the simplex, and the attentions are more uniform than either soft or variational categorical attention.
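The variational objective E_q[log p] − KL listed for these models can be computed exactly when the alignment variable is enumerable; a toy illustration with made-up distributions (not the paper's training code):

```python
import math

def enumerated_elbo(q, prior, lik):
    """ELBO = E_q[log p(y|z)] - KL(q(z) || p(z|x)), enumerated over alignments z."""
    recon = sum(qi * math.log(li) for qi, li in zip(q, lik))
    kl = sum(qi * math.log(qi / pi) for qi, pi in zip(q, prior))
    return recon - kl

prior = [0.7, 0.3]   # p(z | x): toy alignment prior
lik = [0.5, 0.1]     # p(y | z): toy likelihood per alignment
bound = enumerated_elbo([0.5, 0.5], prior, lik)
log_marginal = math.log(sum(p, ) if False else sum(p * l for p, l in zip(prior, lik)))
log_marginal = math.log(sum(p * l for p, l in zip(prior, lik)))
# bound <= log_marginal for any q; equality holds when q is the true posterior.
```

The fact that the ELBO is tight exactly when q matches the posterior is what the paragraph above appeals to: conditioning q(z) on the full source/target lets it approach the posterior and thus close the gap to the log marginal likelihood.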
Latent Alignment and Variational Attention
1807.03756
Table 3: (Left) Comparison against the best prior work for NMT on the IWSLT 2014 German-English test set. (Upper Right) Comparison of inference alternatives of variational attention on IWSLT 2014. (Lower Right) Comparison of different models in terms of implied discrete entropy (lower = more certain alignment).
['Model', 'Entropy NMT', 'Entropy VQA']
[['Soft Attention', '1.24', '2.70'], ['Marginal Likelihood', '0.82', '2.66'], ['Hard Attention + Enum', '0.05', '0.73'], ['Hard Attention + Sample', '0.07', '0.58'], ['Variational Relaxed Attention', '2.02', '-'], ['Variational Attention + Enum', '0.54', '2.07'], ['Variational Attention + Sample', '0.52', '2.44']]
For NMT, on the IWSLT 2014 German-English task, variational attention with enumeration and with sampling performs comparably to optimizing the log marginal likelihood, despite optimizing a lower bound. We believe this is due to the use of q(z), which conditions on the entire source/target and therefore potentially provides a better training signal to p(z|x,~x) through the KL term. Note that it is also possible to have q(z) [...]. Even with sampling, our system improves on the state of the art. On the larger WMT 2017 English-German task, the superior performance of variational attention persists: our baseline soft attention reaches 24.10 BLEU, while variational attention reaches 24.98. Note that this reflects a reasonable setting without exhaustive tuning, yet it shows that we can train variational attention at scale. For VQA the trend is largely similar: the NLL of variational attention improves on both soft and hard attention, although the task-specific evaluation metrics are slightly worse. Note that hard attention has very low entropy (high certainty), whereas soft attention's is quite high; the variational attention models fall in between. RWS reaches performance comparable to REINFORCE, but at a higher memory cost as it requires multiple samples. Gumbel-Softmax reaches nearly the same performance and seems a viable alternative, although we found its performance to be sensitive to the temperature parameter. We also trained a non-amortized SVI model, but found that at similar runtime it did not produce satisfactory results, likely due to insufficient updates of the local variational parameters. Despite extensive experiments, we found that variational relaxed attention performed worse than the other methods. In particular, when training with a Dirichlet KL, it is hard to reach low-entropy regions of the simplex, and the attentions are more uniform than either soft or variational categorical attention.
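The alignment entropies in this table are Shannon entropies of the attention distribution; for reference, the computation is simply (a sketch, assuming entropy in nats):

```python
import math

def attention_entropy(attn):
    """Shannon entropy (nats) of an attention distribution; 0 = fully certain."""
    return -sum(p * math.log(p) for p in attn if p > 0)

h_hard = attention_entropy([1.0, 0.0, 0.0])  # one-hot alignment: entropy 0
h_soft = attention_entropy([0.25] * 4)       # uniform over 4 positions: log(4)
```

Read against the table, hard attention's near-zero entropies mean it commits to a single source position, soft attention spreads mass widely, and the variational models sit in between, matching the "lower = more certain alignment" note in the caption.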